[ { "data": "VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and time series database. See case studies for VictoriaMetrics. VictoriaMetrics is available in binary releases, Docker images, Snap packages and source code. Documentation for the cluster version of VictoriaMetrics is available here. Learn more about key concepts of VictoriaMetrics and follow the quick start guide for a better experience. If you have questions about VictoriaMetrics, then feel free asking them in the VictoriaMetrics community Slack chat, you can join it via Slack Inviter. Contact us if you need enterprise support for VictoriaMetrics. See features available in enterprise package. Enterprise binaries can be downloaded and evaluated for free from the releases page. You can also request a free trial license. VictoriaMetrics is developed at a fast pace, so it is recommended to check the CHANGELOG periodically, and to perform regular upgrades. VictoriaMetrics enterprise provides long-term support lines of releases (LTS releases) - see these docs. VictoriaMetrics has achieved security certifications for Database Software Development and Software-Based Monitoring Services. We apply strict security measures in everything we do. See Security page for more details. VictoriaMetrics has the following prominent features: See case studies for VictoriaMetrics and various Articles about VictoriaMetrics. VictoriaMetrics ecosystem contains the following components additionally to single-node VictoriaMetrics: To quickly try VictoriaMetrics, just download the VictoriaMetrics executable or Docker image and start it with the desired command-line flags. See also QuickStart guide for additional information. VictoriaMetrics can also be installed via these installation methods: The following command-line flags are used the most: Other flags have good enough default values, so set them only if you really need to. Pass -help to see all the available flags with description and default values. The following docs may be useful during initial VictoriaMetrics setup: VictoriaMetrics accepts Prometheus querying API requests on port 8428 by default. It is recommended setting up monitoring for VictoriaMetrics. All the VictoriaMetrics components allow referring environment variables in yaml configuration files (such as -promscrape.config) and in command-line flags via %{ENV_VAR} syntax. For example, -metricsAuthKey=%{METRICSAUTHKEY} is automatically expanded to -metricsAuthKey=top-secret if METRICSAUTHKEY=top-secret environment variable exists at VictoriaMetrics startup. This expansion is performed by VictoriaMetrics itself. VictoriaMetrics recursively expands %{ENV_VAR} references in environment variables on startup. For example, FOO=%{BAR} environment variable is expanded to FOO=abc if BAR=a%{BAZ} and BAZ=bc. Additionally, all the VictoriaMetrics components allow setting flag values via environment variables according to these rules: Snap package for VictoriaMetrics is available here. Command-line flags for Snap package can be set with following command: ``` echo 'FLAGS=\"-selfScrapeInterval=10s -search.logSlowQueryDuration=20s\"' > $SNAPDATA/var/snap/victoriametrics/current/extraflags snap restart victoriametrics ``` Do not change value for -storageDataPath flag, because snap package has limited access to host filesystem. 
Changing the scrape configuration is possible with a text editor:

```
vi $SNAP_DATA/var/snap/victoriametrics/current/etc/victoriametrics-scrape-config.yaml
```

After changes were made, trigger a config re-read with the command curl 127.0.0.1:8428/-/reload.

In order to run VictoriaMetrics as a Windows service it is required to create a service configuration for WinSW and then install it as a service according to the following guide. Create a service configuration:

```
<service>
  <id>VictoriaMetrics</id>
  <name>VictoriaMetrics</name>
  <description>VictoriaMetrics</description>
  <executable>%BASE%\victoria-metrics-windows-amd64-prod.exe</executable>
  <onfailure action="restart" delay="10 sec"/>
  <onfailure action="restart" delay="20 sec"/>
  <resetfailure>1 hour</resetfailure>
  <arguments>-envflag.enable</arguments>
  <priority>Normal</priority>
  <stoptimeout>15 sec</stoptimeout>
  <stopparentprocessfirst>true</stopparentprocessfirst>
  <startmode>Automatic</startmode>
  <waithint>15 sec</waithint>
  <sleeptime>1 sec</sleeptime>
  <logpath>%BASE%\logs</logpath>
  <log mode="roll">
    <sizeThreshold>10240</sizeThreshold>
    <keepFiles>8</keepFiles>
  </log>
  <env name="loggerFormat" value="json" />
  <env name="loggerOutput" value="stderr" />
  <env name="promscrape_config" value="C:\Program Files\victoria-metrics\promscrape.yml" />
</service>
```

Install WinSW by following this documentation. Install VictoriaMetrics as a service by running the following from elevated PowerShell:

```
winsw install VictoriaMetrics.xml
Get-Service VictoriaMetrics | Start-Service
```

See this issue for more details.

Add the following lines to the Prometheus config file in order to send data to VictoriaMetrics:

```
remote_write:
  - url: http://<victoriametrics-addr>:8428/api/v1/write
```

Substitute <victoriametrics-addr> with the hostname or IP address of VictoriaMetrics. Then apply the new config via the following command:

```
kill -HUP `pidof prometheus`
```

Prometheus writes incoming data to local storage and replicates it to remote storage in parallel. This means that data remains available in local storage for the --storage.tsdb.retention.time duration even if remote storage is unavailable.

If you plan to send data to VictoriaMetrics from multiple Prometheus instances, then add the following lines into the global section of the Prometheus config:

```
global:
  external_labels:
    datacenter: dc-123
```

This instructs Prometheus to add the datacenter=dc-123 label to each sample before sending it to remote storage. The label name can be arbitrary - datacenter is just an example. The label value must be unique across Prometheus instances, so time series can be filtered and grouped by this label.

For highly loaded Prometheus instances (200k+ samples per second) the following tuning may be applied:

```
remote_write:
  - url: http://<victoriametrics-addr>:8428/api/v1/write
    queue_config:
      max_samples_per_send: 10000
      capacity: 20000
      max_shards: 30
```

Using remote write increases memory usage for Prometheus by up to ~25%. If you are experiencing issues with too high memory consumption of Prometheus, then try to lower the max_samples_per_send and capacity params. Keep in mind that these two params are tightly connected. Read more about tuning remote write for Prometheus here.

It is recommended to upgrade Prometheus to v2.12.0 or newer, since previous versions may have issues with remote_write.

Take a look also at vmagent and vmalert, which can be used as a faster and less resource-hungry alternative to Prometheus.
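Putting the Prometheus settings above together, a minimal prometheus.yml sketch could look like the following (the VictoriaMetrics address, the datacenter label value and the queue_config numbers are placeholders taken from the examples above; tune them for your workload):

```
global:
  external_labels:
    datacenter: dc-123                  # must be unique per Prometheus instance

remote_write:
  - url: http://<victoriametrics-addr>:8428/api/v1/write
    queue_config:                       # optional tuning for highly loaded instances
      max_samples_per_send: 10000
      capacity: 20000
      max_shards: 30
```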
Create a Prometheus datasource in Grafana with the following URL:

```
http://<victoriametrics-addr>:8428
```

Substitute <victoriametrics-addr> with the hostname or IP address of VictoriaMetrics. In the Type and version section it is recommended to set the type to Prometheus and the version to at least 2.24.x. This allows Grafana to use a more efficient API to get label values. Then build graphs and dashboards for the created datasource using PromQL or MetricsQL. Alternatively, use the VictoriaMetrics datasource plugin with support for extra features. See more in its description. Creating a datasource may require specific permissions. If you don't see an option to create a data source, try contacting your system administrator. A Grafana playground is available for viewing at our sandbox.

VictoriaMetrics is developed at a fast pace, so it is recommended to periodically check the CHANGELOG page and perform regular upgrades. It is safe to upgrade VictoriaMetrics to new versions unless release notes say otherwise. It is safe to skip multiple versions during the upgrade unless release notes say otherwise. It is recommended to perform regular upgrades to the latest version, since it may contain important bug fixes, performance optimizations or new features. It is also safe to downgrade to older versions unless release notes say otherwise. The following steps must be performed during the upgrade / downgrade procedure: Prometheus doesn't drop data during VictoriaMetrics restart. See this article for details. The same also applies to vmagent.

VictoriaMetrics provides a UI for query troubleshooting and exploration. The UI is available at http://victoriametrics:8428/vmui (or at http://<vmselect>:8481/select/<accountID>/vmui/ in the cluster version of VictoriaMetrics). The UI allows exploring query results via graphs and tables. It also provides the following features: VMUI provides auto-completion for MetricsQL functions, metric names, label names and label values. The auto-completion can be enabled by checking the Autocomplete toggle. When the auto-completion is disabled, it can still be triggered for the current cursor position by pressing ctrl+space. VMUI automatically switches from graph view to heatmap view when the query returns histogram buckets (both Prometheus histograms and VictoriaMetrics histograms are supported). Try, for example, this query. Graphs in vmui support scrolling and zooming. Query history can be navigated by holding Ctrl (or Cmd on MacOS) and pressing the up or down arrows on the keyboard while the cursor is located in the query input field. Multi-line queries can be entered by pressing Shift-Enter in the query input field. When querying backfilled data or during query troubleshooting, it may be useful to disable the response cache by clicking the Disable cache checkbox. VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range. The step value can be customized by changing the Step value input. VMUI allows investigating correlations between multiple queries on the same graph. Just click the Add Query button, enter an additional query in the newly appeared input field and press Enter. Results for all the queries are displayed simultaneously on the same graph. Graphs for a particular query can be temporarily hidden by clicking the eye icon on the right side of the input field. When the eye icon is clicked while holding the ctrl key, query results for the rest of the queries become hidden except for the current query results.
VMUI allows sharing query and trace results by clicking the Export query button in the top right corner of the graph area. The query and trace will be exported as a file that can later be loaded into VMUI via the Query Analyzer tool. See the example VMUI at the VictoriaMetrics playground.

VMUI provides a top queries tab, which can help determine the following query types: This information is obtained from the /api/v1/status/top_queries HTTP endpoint.

VMUI provides an active queries tab, which shows currently executing queries. It provides the following information for each query: This information is obtained from the /api/v1/status/active_queries HTTP endpoint.

VMUI provides an ability to explore metrics exported by a particular job / instance in the following way: It is possible to change the selected time range for the graphs in the top right corner.

VictoriaMetrics provides an ability to explore time series cardinality at the Explore cardinality tab in vmui in the following ways: By default, the cardinality explorer analyzes time series for the current date. It provides the ability to select a different day at the top right corner. By default, all the time series for the selected date are analyzed. It is possible to narrow down the analysis to series matching the specified series selector. The cardinality explorer is built on top of /api/v1/status/tsdb. See the cardinality explorer playground. See the example of using the cardinality explorer here. In the cluster version of VictoriaMetrics each vmstorage tracks the stored time series individually. vmselect requests stats via the /api/v1/status/tsdb API from each vmstorage node and merges the results by summing per-series stats. This may lead to inflated values when samples for the same time series are spread across multiple vmstorage nodes due to replication or rerouting.

VictoriaMetrics is configured via command-line flags, so it must be restarted when new command-line flags should be applied: Prometheus doesn't drop data during VictoriaMetrics restart. See this article for details. The same also applies to vmagent.

VictoriaMetrics can be used as a drop-in replacement for Prometheus for scraping targets configured in a prometheus.yml config file according to the specification. Just set the -promscrape.config command-line flag to the path of the prometheus.yml config - and VictoriaMetrics should start scraping the configured targets. If the provided configuration file contains unsupported options, then either delete them from the file or just pass the -promscrape.config.strictParse=false command-line flag to VictoriaMetrics, so it will ignore unsupported options. The file pointed to by -promscrape.config may contain %{ENV_VAR} placeholders, which are substituted by the corresponding ENV_VAR environment variable values. See also: VictoriaMetrics also supports importing data in Prometheus exposition format. See also vmagent, which can be used as a drop-in replacement for Prometheus.

VictoriaMetrics accepts data from DataDog agent, DogStatsD and DataDog Lambda Extension via the submit metrics API at /datadog/api/v2/series or via the sketches API at /datadog/api/beta/sketches. The DataDog agent allows configuring the destination for metrics sending via the ENV variable DD_DD_URL or via the dd_url section of the configuration file. To configure the DataDog agent via ENV variable, set the following:

```
DD_DD_URL=http://victoriametrics:8428/datadog
```

Choose the correct URL for VictoriaMetrics here.
To configure the DataDog agent via the configuration file, add the following line:

```
dd_url: http://victoriametrics:8428/datadog
```

vmagent can also accept the DataDog metrics format. Depending on where vmagent will forward data, pick the single-node or cluster URL format.

DataDog allows configuring Dual Shipping for metrics sending via the ENV variable DD_ADDITIONAL_ENDPOINTS or via the additional_endpoints section of the configuration file. Run DataDog using the following ENV variable with VictoriaMetrics as an additional metrics receiver:

```
DD_ADDITIONAL_ENDPOINTS='{\"http://victoriametrics:8428/datadog\": [\"apikey\"]}'
```

Choose the correct URL for VictoriaMetrics here. To configure DataDog Dual Shipping via the configuration file, add the following lines:

```
additional_endpoints:
  "http://victoriametrics:8428/datadog":
  - apikey
```

Disable logs (logs ingestion is not supported by VictoriaMetrics) and set a custom endpoint in serverless.yaml:

```
custom:
  datadog:
    enableDDLogs: false             # Disabled not supported DD logs
    apiKey: fakekey                 # Set any key, otherwise plugin fails
provider:
  environment:
    DD_DD_URL: <<vm-url>>/datadog   # VictoriaMetrics endpoint for DataDog
```

See how to send data to VictoriaMetrics via the DataDog submit metrics API here. The imported data can be read via the export API. VictoriaMetrics automatically sanitizes metric names for the data ingested via the DataDog protocol according to DataDog metric naming recommendations. If you need to accept metric names as is without sanitizing, then pass the -datadog.sanitizeMetricName=false command-line flag to VictoriaMetrics.

Extra labels may be added to all the written time series by passing extra_label=name=value query args. For example, /datadog/api/v2/series?extra_label=foo=bar would add the {foo="bar"} label to all the ingested metrics.

The DataDog agent sends the configured tags to an undocumented endpoint - /datadog/intake. This endpoint isn't supported by VictoriaMetrics yet. This prevents the configured tags from being added to DataDog agent data sent into VictoriaMetrics. The workaround is to run a sidecar vmagent alongside every DataDog agent, which must run with the DD_DD_URL=http://localhost:8429/datadog environment variable. The sidecar vmagent must be configured with the needed tags via the -remoteWrite.label command-line flag and must forward incoming data with the added tags to a centralized VictoriaMetrics specified via the -remoteWrite.url command-line flag. See these docs for details on how to add labels to metrics at vmagent.

Use the http://<victoriametrics-addr>:8428 URL instead of the InfluxDB URL in agent configs. For instance, put the following lines into the Telegraf config, so it sends data to VictoriaMetrics instead of InfluxDB:

```
[[outputs.influxdb]]
  urls = ["http://<victoriametrics-addr>:8428"]
```

Another option is to enable the TCP and UDP receiver for InfluxDB line protocol via the -influxListenAddr command-line flag and stream plain InfluxDB line protocol data to the configured TCP and/or UDP addresses.
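As a sketch of that second option (the port number is an arbitrary example, not a default, and the measurement name is hypothetical):

```
# enable the InfluxDB line protocol receiver on TCP and UDP port 8089
/path/to/victoria-metrics-prod -influxListenAddr=:8089

# send a single line over TCP
echo 'measurement,tag1=value1 field1=123' | nc -N localhost 8089
```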
VictoriaMetrics performs the following transformations to the ingested InfluxDB data: For example, the following InfluxDB line: ``` foo,tag1=value1,tag2=value2 field1=12,field2=40 ``` is converted into the following Prometheus data points: ``` foo_field1{tag1=\"value1\", tag2=\"value2\"} 12 foo_field2{tag1=\"value1\", tag2=\"value2\"} 40 ``` Example for writing data with InfluxDB line protocol to local VictoriaMetrics using curl: ``` curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write' ``` An arbitrary number of lines delimited by \\n (aka newline char) can be sent in a single request. After that the data may be read via /api/v1/export endpoint: ``` curl -G 'http://localhost:8428/api/v1/export' -d 'match={name=~\"measurement_.*\"}' ``` The /api/v1/export endpoint should return the following response: ``` {\"metric\":{\"name\":\"measurement_field1\",\"tag1\":\"value1\",\"tag2\":\"value2\"},\"values\":[123],\"timestamps\":[1560272508147]}" }, { "data": "``` Note that InfluxDB line protocol expects timestamps in nanoseconds by default, while VictoriaMetrics stores them with milliseconds precision. It is allowed to ingest timestamps with seconds, microseconds or nanoseconds precision - VictoriaMetrics will automatically convert them to milliseconds. Extra labels may be added to all the written time series by passing extra_label=name=value query args. For example, /write?extra_label=foo=bar would add {foo=\"bar\"} label to all the ingested metrics. Some plugins for Telegraf such as fluentd, Juniper/open-nti or Juniper/jitmon send SHOW DATABASES query to /query and expect a particular database name in the response. Comma-separated list of expected databases can be passed to VictoriaMetrics via -influx.databaseNames command-line flag. VictoriaMetrics exposes endpoint for InfluxDB v2 HTTP API at /influx/api/v2/write and /api/v2/write. In order to write data with InfluxDB line protocol to local VictoriaMetrics using curl: ``` curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/api/v2/write' ``` The /api/v1/export endpoint should return the following response: ``` {\"metric\":{\"name\":\"measurement_field1\",\"tag1\":\"value1\",\"tag2\":\"value2\"},\"values\":[123],\"timestamps\":[1695902762311]} {\"metric\":{\"name\":\"measurement_field2\",\"tag1\":\"value1\",\"tag2\":\"value2\"},\"values\":[1.23],\"timestamps\":[1695902762311]} ``` VictoriaMetrics supports extended StatsD protocol. Currently, it supports tags and value packing extensions provided by dogstatsd. During parsing, metrics <TYPE> is added as a special label statsd_metric_type. It is strongly advisable to configure streaming aggregation for each metric type. This process serves two primary objectives: VictoriaMetrics supports the following metric types: The Not Assigned type is not supported due to the ambiguity surrounding its aggregation method. The correct aggregation method cannot be determined for the undefined metric. Enable Statsd receiver in VictoriaMetrics by setting -statsdListenAddr command line flag and configure stream aggregation. 
For instance, the following command will enable StatsD receiver in VictoriaMetrics on TCP and UDP port 8125: ``` /path/to/victoria-metrics-prod -statsdListenAddr=:8125 -streamAggr.config=statsd_aggr.yaml ``` Example of stream aggregation config: ``` match: '{statsd_metric_type=\"g\"}' outputs: [last] interval: 1m ``` Example for writing data with StatsD plaintext protocol to local VictoriaMetrics using nc: ``` echo \"foo.bar:123|g|#tag1:baz\" | nc -N localhost 8125 ``` An arbitrary number of lines delimited by \\n (aka newline char) can be sent in one go. Explicit setting of timestamps is not supported for StatsD protocol. Timestamp is set to the current time when VictoriaMetrics or vmagent receives it. Once ingested, the data can be read via /api/v1/export endpoint: ``` curl -G 'http://localhost:8428/api/v1/export' -d 'match={name=~\"foo.*\"}' ``` Please note, with stream aggregation enabled data will become available only after specified aggregation interval. The /api/v1/export endpoint should return the following response: ``` {\"metric\":{\"name\":\"foo.bar:1mlast\",\"statsdmetrictype_\":\"g\",\"tag1\":\"baz\"},\"values\":[123],\"timestamps\":[1715843939000]} ``` Some examples of compatible statsd clients: Enable Graphite receiver in VictoriaMetrics by setting -graphiteListenAddr command line flag. For instance, the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port 2003: ``` /path/to/victoria-metrics-prod -graphiteListenAddr=:2003 ``` Use the configured address in Graphite-compatible agents. For instance, set graphiteHost to the VictoriaMetrics host in StatsD configs. Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using nc: ``` echo \"foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`\" | nc -N localhost 2003 ``` VictoriaMetrics sets the current time if the timestamp is omitted. An arbitrary number of lines delimited by \\n (aka newline char) can be sent in one go. After that the data may be read via /api/v1/export endpoint: ``` curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` The /api/v1/export endpoint should return the following response: ``` {\"metric\":{\"name\":\"foo.bar.baz\",\"tag1\":\"value1\",\"tag2\":\"value2\"},\"values\":[123],\"timestamps\":[1560277406000]} ``` Graphite relabeling can be used if the imported Graphite data is going to be queried via MetricsQL. Data sent to VictoriaMetrics via Graphite plaintext protocol may be read via the following APIs: VictoriaMetrics supports graphite pseudo-label for selecting time series with Graphite-compatible filters in MetricsQL. For example, {graphite=\"foo..bar\"} is equivalent to {__name__=~\"foo[.]bar\"}, but it works faster and it is easier to use when migrating from Graphite to VictoriaMetrics. See docs for Graphite paths and" }, { "data": "VictoriaMetrics also supports labelgraphitegroup function for extracting the given groups from Graphite metric name. The graphite pseudo-label supports e.g. alternate regexp filters such as (value1|...|valueN). They are transparently converted to {value1,...,valueN} syntax used in Graphite. This allows using multi-value template variables in Grafana inside graphite pseudo-label. For example, Grafana expands {graphite=~\"foo.($bar).baz\"} into {graphite=~\"foo.(x|y).baz\"} if $bar template variable contains x and y values. In this case the query is automatically converted into {graphite=~\"foo.{x,y}.baz\"} before execution. 
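For instance, a series selector with the pseudo-label can be sent directly to the Prometheus querying API (a sketch assuming a local single-node instance and the pseudo-label spelled __graphite__; the metric path is hypothetical):

```
curl http://localhost:8428/api/v1/query -d 'query={__graphite__="foo.*.bar"}'
```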
VictoriaMetrics also supports Graphite query language - see these docs. VictoriaMetrics supports telnet put protocol and HTTP /api/put requests for ingesting OpenTSDB data. The same protocol is used for ingesting data in KairosDB. Enable OpenTSDB receiver in VictoriaMetrics by setting -opentsdbListenAddr command line flag. For instance, the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port 4242: ``` /path/to/victoria-metrics-prod -opentsdbListenAddr=:4242 ``` Send data to the given address from OpenTSDB-compatible agents. Example for writing data with OpenTSDB protocol to local VictoriaMetrics using nc: ``` echo \"put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2\" | nc -N localhost 4242 ``` An arbitrary number of lines delimited by \\n (aka newline char) can be sent in one go. After that the data may be read via /api/v1/export endpoint: ``` curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` The /api/v1/export endpoint should return the following response: ``` {\"metric\":{\"name\":\"foo.bar.baz\",\"tag1\":\"value1\",\"tag2\":\"value2\"},\"values\":[123],\"timestamps\":[1560277292000]} ``` Enable HTTP server for OpenTSDB /api/put requests by setting -opentsdbHTTPListenAddr command line flag. For instance, the following command enables OpenTSDB HTTP server on port 4242: ``` /path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242 ``` Send data to the given address from OpenTSDB-compatible agents. Example for writing a single data point: ``` curl -H 'Content-Type: application/json' -d '{\"metric\":\"x.y.z\",\"value\":45.34,\"tags\":{\"t1\":\"v1\",\"t2\":\"v2\"}}' http://localhost:4242/api/put ``` Example for writing multiple data points in a single request: ``` curl -H 'Content-Type: application/json' -d '[{\"metric\":\"foo\",\"value\":45.34},{\"metric\":\"bar\",\"value\":43}]' http://localhost:4242/api/put ``` After that the data may be read via /api/v1/export endpoint: ``` curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar' ``` The /api/v1/export endpoint should return the following response: ``` {\"metric\":{\"name\":\"foo\"},\"values\":[45.34],\"timestamps\":[1566464846000]} {\"metric\":{\"name\":\"bar\"},\"values\":[43],\"timestamps\":[1566464846000]} {\"metric\":{\"name\":\"x.y.z\",\"t1\":\"v1\",\"t2\":\"v2\"},\"values\":[45.34],\"timestamps\":[1566464763000]} ``` Extra labels may be added to all the imported time series by passing extra_label=name=value query args. For example, /api/put?extra_label=foo=bar would add {foo=\"bar\"} label to all the ingested metrics. VictoriaMetrics accepts data from NewRelic infrastructure agent at /newrelic/infra/v2/metrics/events/bulk HTTP path. VictoriaMetrics receives Events from NewRelic agent at the given path, transforms them to raw samples according to these docs before storing the raw samples to the database. You need passing COLLECTORURL and NRIALICENSE_KEY environment variables to NewRelic infrastructure agent in order to send the collected metrics to VictoriaMetrics. The COLLECTORURL must point to /newrelic HTTP endpoint at VictoriaMetrics, while the NRIALICENSE_KEY must contain NewRelic license key, which can be obtained here. 
For example, if VictoriaMetrics runs at localhost:8428, then the following command can be used for running NewRelic infrastructure agent: ``` COLLECTORURL=\"http://localhost:8428/newrelic\" NRIALICENSEKEY=\"NEWRELICLICENSE_KEY\" ./newrelic-infra ``` VictoriaMetrics maps NewRelic Events to raw samples in the following way: For example, lets import the following NewRelic Events request to VictoriaMetrics: ``` [ { \"Events\":[ { \"eventType\":\"SystemSample\", \"entityKey\":\"macbook-pro.local\", \"cpuPercent\":25.056660790748904, \"cpuUserPercent\":8.687987912389374, \"cpuSystemPercent\":16.36867287835953, \"cpuIOWaitPercent\":0, \"cpuIdlePercent\":74.94333920925109, \"cpuStealPercent\":0, \"loadAverageOneMinute\":5.42333984375, \"loadAverageFiveMinute\":4.099609375, \"loadAverageFifteenMinute\":3.58203125 } ] } ] ``` Save this JSON into newrelic.json file and then use the following command in order to import it into VictoriaMetrics: ``` curl -X POST -H 'Content-Type: application/json' --data-binary @newrelic.json http://localhost:8428/newrelic/infra/v2/metrics/events/bulk ``` Lets fetch the ingested data via data export API: ``` curl http://localhost:8428/api/v1/export -d 'match={eventType=\"SystemSample\"}' {\"metric\":{\"name\":\"cpuStealPercent\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[0],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"loadAverageFiveMinute\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[4.099609375],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"cpuIOWaitPercent\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[0],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"cpuSystemPercent\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[16.368672878359],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"loadAverageOneMinute\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[5.42333984375],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"cpuUserPercent\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[8.687987912389],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"cpuIdlePercent\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[74.9433392092],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"loadAverageFifteenMinute\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[3.58203125],\"timestamps\":[1697407970000]} {\"metric\":{\"name\":\"cpuPercent\",\"entityKey\":\"macbook-pro.local\",\"eventType\":\"SystemSample\"},\"values\":[25.056660790748],\"timestamps\":[1697407970000]} ``` VictoriaMetrics supports the following handlers from Prometheus querying API: These handlers can be queried from Prometheus-compatible clients such as Grafana or curl. All the Prometheus querying API handlers can be prepended with /prometheus" }, { "data": "For example, both /prometheus/api/v1/query and /api/v1/query should work. VictoriaMetrics accepts optional extralabel=<labelname>=<label_value> query arg, which can be used for enforcing additional label filters for queries. For example, /api/v1/queryrange?extralabel=userid=123&extralabel=group_id=456&query=<query> would automatically add {userid=\"123\",groupid=\"456\"} label filters to the given <query>. This functionality can be used for limiting the scope of time series visible to the given tenant. 
It is expected that the extra_label query args are automatically set by auth proxy sitting in front of VictoriaMetrics. See vmauth and vmgateway as examples of such proxies. VictoriaMetrics accepts optional extrafilters[]=seriesselector query arg, which can be used for enforcing arbitrary label filters for queries. For example, /api/v1/queryrange?extrafilters[]={env=~\"prod|staging\",user=\"xyz\"}&query=<query> would automatically add {env=~\"prod|staging\",user=\"xyz\"} label filters to the given <query>. This functionality can be used for limiting the scope of time series visible to the given tenant. It is expected that the extra_filters[] query args are automatically set by auth proxy sitting in front of VictoriaMetrics. See vmauth and vmgateway as examples of such proxies. VictoriaMetrics accepts multiple formats for time, start and end query args - see these docs. VictoriaMetrics accepts round_digits query arg for /api/v1/query and /api/v1/query_range handlers. It can be used for rounding response values to the given number of digits after the decimal point. For example, /api/v1/query?query=avgovertime(temperature[1h])&round_digits=2 would round response values to up to two digits after the decimal point. VictoriaMetrics accepts limit query arg for /api/v1/labels and /api/v1/label/<labelName>/values handlers for limiting the number of returned entries. For example, the query to /api/v1/labels?limit=5 returns a sample of up to 5 unique labels, while ignoring the rest of labels. If the provided limit value exceeds the corresponding -search.maxTagKeys / -search.maxTagValues command-line flag values, then limits specified in the command-line flags are used. By default, VictoriaMetrics returns time series for the last day starting at 00:00 UTC from /api/v1/series, /api/v1/labels and /api/v1/label/<labelName>/values, while the Prometheus API defaults to all time. Explicitly set start and end to select the desired time range. VictoriaMetrics rounds the specified start..end time range to day granularity because of performance optimization concerns. If you need the exact set of label names and label values on the given time range, then send queries to /api/v1/query or to /api/v1/query_range. VictoriaMetrics accepts limit query arg at /api/v1/series for limiting the number of returned entries. For example, the query to /api/v1/series?limit=5 returns a sample of up to 5 series, while ignoring the rest of series. If the provided limit value exceeds the corresponding -search.maxSeries command-line flag values, then limits specified in the command-line flags are used. Additionally, VictoriaMetrics provides the following handlers: /vmui - Basic Web UI. See these docs. /api/v1/series/count - returns the total number of time series in the database. Some notes: /api/v1/status/active_queries - returns the list of currently running queries. This list is also available at active queries page at VMUI. /api/v1/status/top_queries - returns the following query lists: The number of returned queries can be limited via topN query arg. Old queries can be filtered out with maxLifetime query arg. For example, request to /api/v1/status/top_queries?topN=5&maxLifetime=30s would return up to 5 queries per list, which were executed during the last 30 seconds. VictoriaMetrics tracks the last -search.queryStats.lastQueriesCount queries with durations at least -search.queryStats.minQueryDuration. See also top queries page at VMUI. 
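The same stats can be fetched with curl (a sketch assuming a local single-node instance on port 8428):

```
# up to 5 queries per list, executed during the last 30 seconds
curl 'http://localhost:8428/api/v1/status/top_queries?topN=5&maxLifetime=30s'

# currently running queries
curl 'http://localhost:8428/api/v1/status/active_queries'
```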
VictoriaMetrics accepts the following formats for time, start and end query args in query APIs and in export APIs. VictoriaMetrics supports data ingestion in Graphite protocol - see these docs for details. VictoriaMetrics supports the following Graphite querying APIs, which are needed for Graphite datasource in Grafana: All the Graphite handlers can be pre-pended with /graphite prefix. For example, both /graphite/metrics/find and /metrics/find should" }, { "data": "VictoriaMetrics accepts optional query args: extralabel=<labelname>=<labelvalue> and extrafilters[]=seriesselector query args for all the Graphite APIs. These args can be used for limiting the scope of time series visible to the given tenant. It is expected that the extralabel query arg is automatically set by auth proxy sitting in front of VictoriaMetrics. See vmauth and vmgateway as examples of such proxies. Contact us if you need assistance with such a proxy. VictoriaMetrics supports graphite pseudo-label for filtering time series with Graphite-compatible filters in MetricsQL. See these docs. VictoriaMetrics supports Graphite Render API subset at /render endpoint, which is used by Graphite datasource in Grafana. When configuring Graphite datasource in Grafana, the Storage-Step http request header must be set to a step between Graphite data points stored in VictoriaMetrics. For example, Storage-Step: 10s would mean 10 seconds distance between Graphite datapoints stored in VictoriaMetrics. VictoriaMetrics supports the following handlers from Graphite Metrics API: VictoriaMetrics accepts the following additional query args at /metrics/find and /metrics/expand: VictoriaMetrics supports the following handlers from Graphite Tags API: We recommend using either binary releases or docker images instead of building VictoriaMetrics from sources. Building from sources is reasonable when developing additional features specific to your needs or when testing bugfixes. ARM build may run on Raspberry Pi or on energy-efficient ARM servers. Pure Go mode builds only Go code without cgo dependencies. Run make package-victoria-metrics. It builds victoriametrics/victoria-metrics:<PKG_TAG> docker image locally. <PKG_TAG> is auto-generated image tag, which depends on source code in the repository. The <PKGTAG> may be manually set via PKGTAG=foobar make package-victoria-metrics. The base docker image is alpine but it is possible to use any other base image by setting it via <ROOT_IMAGE> environment variable. For example, the following command builds the image on top of scratch image: ``` ROOT_IMAGE=scratch make package-victoria-metrics ``` VictoriaMetrics can be built with Podman in either rootful or rootless mode. When building via rootful Podman, simply add DOCKER=podman to the relevant make commandline. To build via rootless Podman, add DOCKER=podman DOCKER_RUN=\"podman run --userns=keep-id\" to the make commandline. For example: make victoria-metrics-pure DOCKER=podman DOCKER_RUN=\"podman run --userns=keep-id\" Note that production builds are not supported via Podman because Podman does not support buildx. Docker-compose helps to spin up VictoriaMetrics, vmagent and Grafana with one command. More details may be found here. Read instructions on how to set up VictoriaMetrics as a service for your OS. A snap package is available for Ubuntu. Send a request to http://<victoriametrics-addr>:8428/snapshot/create endpoint in order to create an instant snapshot. 
The page returns the following JSON response on successful creation of snapshot: ``` {\"status\":\"ok\",\"snapshot\":\"<snapshot-name>\"} ``` Snapshots are created under <-storageDataPath>/snapshots directory, where <-storageDataPath> is the corresponding command-line flag value. Snapshots can be archived to backup storage at any time with vmbackup. Snapshots consist of a mix of hard-links and soft-links to various files and directories inside -storageDataPath. See this article for more details. This adds some restrictions on what can be done with the contents of <-storageDataPath>/snapshots directory: See also snapshot troubleshooting. The http://<victoriametrics-addr>:8428/snapshot/list endpoint returns the list of available snapshots. Send a query to http://<victoriametrics-addr>:8428/snapshot/delete?snapshot=<snapshot-name> in order to delete the snapshot with <snapshot-name> name. Navigate to http://<victoriametrics-addr>:8428/snapshot/delete_all in order to delete all the snapshots. Snapshot doesnt occupy disk space just after its creation thanks to the used approach. Old snapshots may start occupying additional disk space if they refer to old parts, which were already deleted during background merge. Thats why it is recommended deleting old snapshots after they are no longer needed in order to free up disk space used by old" }, { "data": "This can be done either manually or automatically if the -snapshotsMaxAge command-line flag is set. Make sure that the backup process has enough time to complete when setting -snapshotsMaxAge command-line flag. VictoriaMetrics exposes the current number of available snapshots via vm_snapshots metric at /metrics page. Send a request to http://<victoriametrics-addr>:8428/api/v1/admin/tsdb/deleteseries?match[]=<timeseriesselectorfordelete>, where <timeseriesselectorfor_delete> may contain any time series selector for metrics to delete. Delete API doesnt support the deletion of specific time ranges, the series can only be deleted completely. Storage space for the deleted time series isnt freed instantly - it is freed during subsequent background merges of data files. Note that background merges may never occur for data from previous months, so storage space wont be freed for historical data. In this case forced merge may help freeing up storage space. It is recommended verifying which metrics will be deleted with the call to http://<victoria-metrics-addr>:8428/api/v1/series?match[]=<timeseriesselectorfor_delete> before actually deleting the metrics. By default, this query will only scan series in the past 5 minutes, so you may need to adjust start and end to a suitable range to achieve match hits. The /api/v1/admin/tsdb/delete_series handler may be protected with authKey if -deleteAuthKey command-line flag is set. Note that handler accepts any HTTP method, so sending a GET request to /api/v1/admin/tsdb/delete_series will result in deletion of time series. The delete API is intended mainly for the following cases: Using the delete API is not recommended in the following cases, since it brings a non-zero overhead: Its better to use the -retentionPeriod command-line flag for efficient pruning of old data. VictoriaMetrics performs data compactions in background in order to keep good performance characteristics when accepting new data. These compactions (merges) are performed independently on per-month partitions. This means that compactions are stopped for per-month partitions if no new data is ingested into these partitions. 
Sometimes it is necessary to trigger compactions for old partitions. For instance, in order to free up disk space occupied by deleted time series. In this case forced compaction may be initiated on the specified per-month partition by sending request to /internal/forcemerge?partitionprefix=YYYY_MM, where YYYYMM is per-month partition name. For example, http://victoriametrics:8428/internal/forcemerge?partitionprefix=202008 would initiate forced merge for August 2020 partition. The call to /internal/force_merge returns immediately, while the corresponding forced merge continues running in background. Forced merges may require additional CPU, disk IO and storage space resources. It is unnecessary to run forced merge under normal conditions, since VictoriaMetrics automatically performs optimal merges in background when new data is ingested into it. VictoriaMetrics provides the following handlers for exporting data: Send a request to http://<victoriametrics-addr>:8428/api/v1/export?match[]=<timeseriesselectorfor_export>, where <timeseriesselectorfor_export> may contain any time series selector for metrics to export. Use {name!=\"\"} selector for fetching all the time series. The response would contain all the data for the selected time series in JSON line format - see these docs for details on this format. Each JSON line contains samples for a single time series. An example output: ``` {\"metric\":{\"name\":\"up\",\"job\":\"node_exporter\",\"instance\":\"localhost:9100\"},\"values\":[0,0,0],\"timestamps\":[1549891472010,1549891487724,1549891503438]} {\"metric\":{\"name\":\"up\",\"job\":\"prometheus\",\"instance\":\"localhost:9090\"},\"values\":[1,1,1],\"timestamps\":[1549891461511,1549891476511,1549891491511]} ``` Optional start and end args may be added to the request in order to limit the time frame for the exported data. See allowed formats for these args. For example: ``` curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseriesselectorfor_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseriesselectorfor_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07' ``` Optional maxrowsper_line arg may be added to the request for limiting the maximum number of rows exported per each JSON line. Optional reducememusage=1 arg may be added to the request for reducing memory usage when exporting big number of time series. In this case the output may contain multiple lines with samples for the same time" }, { "data": "Pass Accept-Encoding: gzip HTTP header in the request to /api/v1/export in order to reduce network bandwidth during exporting big amounts of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data: ``` curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={name!=\"\"}' > data.jsonl.gz ``` The maximum duration for each request to /api/v1/export is limited by -search.maxExportDuration command-line flag. Exported data can be imported via POSTing it to /api/v1/import. The deduplication is applied to the data exported via /api/v1/export by default. The deduplication isnt applied if reducememusage=1 query arg is passed to the request. Send a request to http://<victoriametrics-addr>:8428/api/v1/export/csv?format=<format>&match=<timeseriesselectorfor_export>, where: <format> must contain comma-delimited label names for the exported CSV. 
The following special label names are supported: <timeseriesselectorfor_export> may contain any time series selector for metrics to export. Optional start and end args may be added to the request in order to limit the time frame for the exported data. See allowed formats for these args. For example: ``` curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseriesselectorfor_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseriesselectorfor_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07' ``` The exported CSV data can be imported to VictoriaMetrics via /api/v1/import/csv. The deduplication is applied for the data exported in CSV by default. It is possible to export raw data without de-duplication by passing reducememusage=1 query arg to /api/v1/export/csv. Send a request to http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=<timeseriesselectorfor_export>, where <timeseriesselectorfor_export> may contain any time series selector for metrics to export. Use {name=~\".*\"} selector for fetching all the time series. On large databases you may experience problems with limit on the number of time series, which can be exported. In this case you need to adjust -search.maxExportSeries command-line flag: ``` wget -O- -q 'http://yourvictoriametricsinstance:8428/api/v1/series/count' | jq '.data[0]' ``` Optional start and end args may be added to the request in order to limit the time frame for the exported data. See allowed formats for these args. For example: ``` curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseriesselectorfor_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseriesselectorfor_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07' ``` The exported data can be imported to VictoriaMetrics via /api/v1/import/native. The native export format may change in incompatible way between VictoriaMetrics releases, so the data exported from the release X can fail to be imported into VictoriaMetrics release Y. The deduplication isnt applied for the data exported in native format. It is expected that the de-duplication is performed during data import. VictoriaMetrics can discover and scrape metrics from Prometheus-compatible targets (aka pull protocol) - see these docs. Additionally, VictoriaMetrics can accept metrics via the following popular data ingestion protocols (aka push protocols): Please note, most of the ingestion APIs (except Prometheus remote_write API) are optimized for performance and processes data in a streaming fashion. It means that client can transfer unlimited amount of data through the open connection. Because of this, import APIs may not return parsing errors to the client, as it is expected for data stream to be not interrupted. Instead, look for parsing errors on the server side (VictoriaMetrics single-node or vminsert) or check for changes in vmrowsinvalid_total (exported by server side) metric. VictoriaMetrics accepts metrics data in JSON line format at /api/v1/import endpoint. See these docs for details on this format. 
Example for importing data obtained via /api/v1/export: ``` curl http://source-victoriametrics:8428/api/v1/export -d 'match={name!=\"\"}' > exported_data.jsonl curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_data.jsonl ``` Pass Content-Encoding: gzip HTTP request header to /api/v1/import for importing gzipped data: ``` curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={name!=\"\"}' > exported_data.jsonl.gz curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import -T" }, { "data": "``` Extra labels may be added to all the imported time series by passing extra_label=name=value query args. For example, /api/v1/import?extra_label=foo=bar would add \"foo\":\"bar\" label to all the imported time series. Note that it could be required to flush response cache after importing historical data. See these docs for detail. VictoriaMetrics parses input JSON lines one-by-one. It loads the whole JSON line in memory, then parses it and then saves the parsed samples into persistent storage. This means that VictoriaMetrics can occupy big amounts of RAM when importing too long JSON lines. The solution is to split too long JSON lines into shorter lines. It is OK if samples for a single time series are split among multiple JSON lines. JSON line length can be limited via maxrowsper_line query arg when exporting via /api/v1/export. The maximum JSON line length, which can be parsed by VictoriaMetrics, is limited by -import.maxLineLen command-line flag value. The specification of VictoriaMetrics native format may yet change and is not formally documented yet. So currently we do not recommend that external clients attempt to pack their own metrics in native format file. If you have a native format file obtained via /api/v1/export/native however this is the most efficient protocol for importing data in. ``` curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={name!=\"\"}' > exported_data.bin curl -X POST http://destination-victoriametrics:8428/api/v1/import/native -T exported_data.bin ``` Extra labels may be added to all the imported time series by passing extra_label=name=value query args. For example, /api/v1/import/native?extra_label=foo=bar would add \"foo\":\"bar\" label to all the imported time series. Note that it could be required to flush response cache after importing historical data. See these docs for detail. Arbitrary CSV data can be imported via /api/v1/import/csv. The CSV data is imported according to the provided format query arg. The format query arg must contain comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon: ``` <column_pos>:<type>:<context> ``` Each request to /api/v1/import/csv may contain arbitrary number of CSV lines. 
Example for importing CSV data via /api/v1/import/csv: ``` curl -d \"GOOG,1.23,4.56,NYSE\" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market' curl -d \"MSFT,3.21,1.67,NASDAQ\" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market' ``` After that the data may be read via /api/v1/export endpoint: ``` curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=\"\"}' ``` The following response should be returned: ``` {\"metric\":{\"name\":\"bid\",\"market\":\"NASDAQ\",\"ticker\":\"MSFT\"},\"values\":[1.67],\"timestamps\":[1583865146520]} {\"metric\":{\"name\":\"bid\",\"market\":\"NYSE\",\"ticker\":\"GOOG\"},\"values\":[4.56],\"timestamps\":[1583865146495]} {\"metric\":{\"name\":\"ask\",\"market\":\"NASDAQ\",\"ticker\":\"MSFT\"},\"values\":[3.21],\"timestamps\":[1583865146520]} {\"metric\":{\"name\":\"ask\",\"market\":\"NYSE\",\"ticker\":\"GOOG\"},\"values\":[1.23],\"timestamps\":[1583865146495]} ``` Extra labels may be added to all the imported lines by passing extra_label=name=value query args. For example, /api/v1/import/csv?extra_label=foo=bar would add \"foo\":\"bar\" label to all the imported lines. Note that it could be required to flush response cache after importing historical data. See these docs for detail. VictoriaMetrics accepts data in Prometheus exposition format, in OpenMetrics format and in Pushgateway format via /api/v1/import/prometheus path. For example, the following command imports a single line in Prometheus exposition format into VictoriaMetrics: ``` curl -d 'foo{bar=\"baz\"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus' ``` The following command may be used for verifying the imported data: ``` curl -G 'http://localhost:8428/api/v1/export' -d 'match={name=~\"foo\"}' ``` It should return something like the following: ``` {\"metric\":{\"name\":\"foo\",\"bar\":\"baz\"},\"values\":[123],\"timestamps\":[1594370496905]} ``` The following command imports a single metric via Pushgateway format with {job=\"my_app\",instance=\"host123\"} labels: ``` curl -d 'metric{label=\"abc\"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus/metrics/job/my_app/instance/host123' ``` Pass Content-Encoding: gzip HTTP request header to /api/v1/import/prometheus for importing gzipped data: ``` curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz ``` Extra labels may be added to all the imported metrics either via Pushgateway format or by passing extralabel=name=value query args. For example, /api/v1/import/prometheus?extralabel=foo=bar would add {foo=\"bar\"} label to all the imported metrics. If timestamp is missing in <metric> <value> <timestamp> Prometheus exposition format line, then the current timestamp is used during data ingestion. It can be overridden by passing unix timestamp in milliseconds via timestamp query arg. For example," }, { "data": "VictoriaMetrics accepts arbitrary number of lines in a single request to /api/v1/import/prometheus, i.e. it supports data streaming. Note that it could be required to flush response cache after importing historical data. See these docs for detail. VictoriaMetrics also may scrape Prometheus targets - see these docs. VictoriaMetrics supports data ingestion via OpenTelemetry protocol for metrics at /opentelemetry/v1/metrics path. VictoriaMetrics expects protobuf-encoded requests at /opentelemetry/v1/metrics. 
Set HTTP request header Content-Encoding: gzip when sending gzip-compressed data to /opentelemetry/v1/metrics. VictoriaMetrics stores the ingested OpenTelemetry raw samples as is without any transformations. Pass -opentelemetry.usePrometheusNaming command-line flag to VictoriaMetrics for automatic conversion of metric names and labels into Prometheus-compatible format. See How to use OpenTelemetry metrics with VictoriaMetrics. VictoriaMetrics accepts data in JSON line format at /api/v1/import and exports data in this format at /api/v1/export. The format follows JSON streaming concept, e.g. each line contains JSON object with metrics data in the following format: ``` { // metric contans metric name plus labels for a particular time series \"metric\":{ \"name\": \"metric_name\", // <- this is metric name // Other labels for the time series \"label1\": \"value1\", \"label2\": \"value2\", ... \"labelN\": \"valueN\" }, // values contains raw sample values for the given time series \"values\": [1, 2.345, -678], // timestamps contains raw sample UNIX timestamps in milliseconds for the given time series // every timestamp is associated with the value at the corresponding position \"timestamps\": [1549891472010,1549891487724,1549891503438] } ``` Note that every JSON object must be written in a single line, e.g. all the newline chars must be removed from it. /api/v1/import handler doesnt accept JSON lines longer than the value passed to -import.maxLineLen command-line flag (by default this is 10MB). It is recommended passing 1K-10K samples per line for achieving the maximum data ingestion performance at /api/v1/import. Too long JSON lines may increase RAM usage at VictoriaMetrics side. /api/v1/export handler accepts maxrowsper_line query arg, which allows limiting the number of samples per each exported line. It is OK to split raw samples for the same time series across multiple lines. The number of lines in the request to /api/v1/import can be arbitrary - they are imported in streaming manner. VictoriaMetrics supports Prometheus-compatible relabeling for all the ingested metrics if -relabelConfig command-line flag points to a file containing a list of relabel_config entries. The -relabelConfig also can point to http or https url. For example, -relabelConfig=https://config-server/relabel_config.yml. The following docs can be useful in understanding the relabeling: The -relabelConfig files can contain special placeholders in the form %{ENV_VAR}, which are replaced by the corresponding environment variable values. Example contents for -relabelConfig file: ``` target_label: cluster replacement: dev action: drop sourcelabels: [metakubernetespodcontainer_init] regex: true ``` VictoriaMetrics provides additional relabeling features such as Graphite-style relabeling. See these docs for more details. The relabeling can be debugged at http://victoriametrics:8428/metric-relabel-debug page or at our public playground. See these docs for more details. VictoriaMetrics exports Prometheus-compatible federation data at http://<victoriametrics-addr>:8428/federate?match[]=<timeseriesselectorfor_federation>. Optional start and end args may be added to the request in order to scrape the last point for each selected time series on the [start ... end] interval. See allowed formats for these args. 
For example: ``` curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseriesselectorfor_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseriesselectorfor_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07' ``` By default, the last point on the interval [now - maxlookback ... now] is scraped for each time series. The default value for maxlookback is 5m (5 minutes), but it can be overridden with max_lookback query arg. For instance, /federate?match[]=up&max_lookback=1h would return last points on the [now - 1h ... now]" }, { "data": "This may be useful for time series federation with scrape intervals exceeding 5m. VictoriaMetrics uses lower amounts of CPU, RAM and storage space on production workloads compared to competing solutions (Prometheus, Thanos, Cortex, TimescaleDB, InfluxDB, QuestDB, M3DB) according to our case studies. VictoriaMetrics capacity scales linearly with the available resources. The needed amounts of CPU and RAM highly depends on the workload - the number of active time series, series churn rate, query types, query qps, etc. It is recommended setting up a test VictoriaMetrics for your production workload and iteratively scaling CPU and RAM resources until it becomes stable according to troubleshooting docs. A single-node VictoriaMetrics works perfectly with the following production workload according to our case studies: The needed storage space for the given retention (the retention is set via -retentionPeriod command-line flag) can be extrapolated from disk space usage in a test run. For example, if -storageDataPath directory size becomes 10GB after a day-long test run on a production workload, then it will need at least 10GB*100=1TB of disk space for -retentionPeriod=100d (100-days retention period). It is recommended leaving the following amounts of spare resources: See also resource usage limits docs. By default, VictoriaMetrics is tuned for an optimal resource usage under typical workloads. Some workloads may need fine-grained resource usage limits. In these cases the following command-line flags may be useful: See also resource usage limits at VictoriaMetrics cluster, cardinality limiter and capacity planning docs. The general approach for achieving high availability is the following: Such a setup guarantees that the collected data isnt lost when one of VictoriaMetrics instance becomes unavailable. The collected data continues to be written to the available VictoriaMetrics instance, so it should be available for querying. Both vmagent and Prometheus buffer the collected data locally if they cannot send it to the configured remote storage. So the collected data will be written to the temporarily unavailable VictoriaMetrics instance after it becomes available. If you use vmagent for storing the data into VictoriaMetrics, then it can be configured with multiple -remoteWrite.url command-line flags, where every flag points to the VictoriaMetrics instance in a particular availability zone, in order to replicate the collected data to all the VictoriaMetrics instances. 
For example, the following command instructs vmagent to replicate data to vm-az1 and vm-az2 instances of VictoriaMetrics: ``` /path/to/vmagent \\ -remoteWrite.url=http://<vm-az1>:8428/api/v1/write \\ -remoteWrite.url=http://<vm-az2>:8428/api/v1/write ``` If you use Prometheus for collecting and writing the data to VictoriaMetrics, then the following remote_write section in Prometheus config can be used for replicating the collected data to vm-az1 and vm-az2 VictoriaMetrics instances: ``` remote_write: url: http://<vm-az1>:8428/api/v1/write url: http://<vm-az2>:8428/api/v1/write ``` It is recommended to use vmagent instead of Prometheus for highly loaded setups, since it uses lower amounts of RAM, CPU and network bandwidth than Prometheus. If you use identically configured vmagent instances for collecting the same data and sending it to VictoriaMetrics, then do not forget enabling deduplication at VictoriaMetrics side. VictoriaMetrics leaves a single raw sample with the biggest timestamp for each time series per each -dedup.minScrapeInterval discrete interval if -dedup.minScrapeInterval is set to positive duration. For example, -dedup.minScrapeInterval=60s would leave a single raw sample with the biggest timestamp per each discrete 60s interval. This aligns with the staleness rules in Prometheus. If multiple raw samples have the same timestamp on the given -dedup.minScrapeInterval discrete interval, then the sample with the biggest value is kept. Prometheus staleness markers are processed as any other value during de-duplication. If raw sample with the biggest timestamp on -dedup.minScrapeInterval contains a stale marker, then it is kept after the deduplication. This allows properly preserving staleness markers during the" }, { "data": "Please note, labels of raw samples should be identical in order to be deduplicated. For example, this is why HA pair of vmagents needs to be identically configured. The -dedup.minScrapeInterval=D is equivalent to -downsampling.period=0s:D if downsampling is enabled. So it is safe to use deduplication and downsampling simultaneously. The recommended value for -dedup.minScrapeInterval must equal to scrape_interval config from Prometheus configs. It is recommended to have a single scrape_interval across all the scrape targets. See this article for details. The de-duplication reduces disk space usage if multiple identically configured vmagent or Prometheus instances in HA pair write data to the same VictoriaMetrics instance. These vmagent or Prometheus instances must have identical external_labels section in their configs, so they write data to the same time series. See also how to set up multiple vmagent instances for scraping the same targets. It is recommended passing different -promscrape.cluster.name values to each distinct HA pair of vmagent instances, so the de-duplication consistently leaves samples for one vmagent instance and removes duplicate samples from other vmagent instances. See these docs for details. VictoriaMetrics stores all the ingested samples to disk even if -dedup.minScrapeInterval command-line flag is set. The ingested samples are de-duplicated during background merges and during query execution. VictoriaMetrics also supports de-duplication during data ingestion before the data is stored to disk, via -streamAggr.dedupInterval command-line flag - see these docs. VictoriaMetrics buffers the ingested data in memory for up to a second. Then the buffered data is written to in-memory parts, which can be searched during queries. 
The in-memory parts are periodically persisted to disk, so they could survive unclean shutdown such as out of memory crash, hardware power loss or SIGKILL signal. The interval for flushing the in-memory data to disk can be configured with the -inmemoryDataFlushInterval command-line flag (note that too short flush interval may significantly increase disk IO). In-memory parts are persisted to disk into part directories under the <-storageDataPath>/data/small/YYYY_MM/ folder, where YYYYMM is the month partition for the stored data. For example, 202211 is the partition for parts with raw samples from November 2022. Each partition directory contains parts.json file with the actual list of parts in the partition. Every part directory contains metadata.json file with the following fields: Each part consists of blocks sorted by internal time series id (aka TSID). Each block contains up to 8K raw samples, which belong to a single time series. Raw samples in each block are sorted by timestamp. Blocks for the same time series are sorted by the timestamp of the first sample. Timestamps and values for all the blocks are stored in compressed form in separate files under part directory - timestamps.bin and values.bin. The part directory also contains index.bin and metaindex.bin files - these files contain index for fast block lookups, which belong to the given TSID and cover the given time range. Parts are periodically merged into bigger parts in background. The background merge provides the following benefits: Newly added parts either successfully appear in the storage or fail to appear. The newly added part is atomically registered in the parts.json file under the corresponding partition after it is fully written and fsynced to the storage. Thanks to this algorithm, storage never contains partially created parts, even if hardware power off occurs in the middle of writing the part to disk - such incompletely written parts are automatically deleted on the next VictoriaMetrics start. The same applies to merge process parts are either fully merged into a new part or fail to merge, leaving the source parts" }, { "data": "However, due to hardware issues data on disk may be corrupted regardless of VictoriaMetrics process. VictoriaMetrics can detect corruption during decompressing, decoding or sanity checking of the data blocks. But it cannot fix the corrupted data. Data parts that fail to load on startup need to be deleted or restored from backups. This is why it is recommended performing regular backups. VictoriaMetrics doesnt use checksums for stored data blocks. See why here. VictoriaMetrics doesnt merge parts if their summary size exceeds free disk space. This prevents from potential out of disk space errors during merge. The number of parts may significantly increase over time under free disk space shortage. This increases overhead during data querying, since VictoriaMetrics needs to read data from bigger number of parts per each request. Thats why it is recommended to have at least 20% of free disk space under directory pointed by -storageDataPath command-line flag. Information about merging process is available in the dashboard for single-node VictoriaMetrics and the dashboard for VictoriaMetrics cluster. See more details in monitoring docs. See this article for more details. See also how to work with snapshots. Retention is configured with the -retentionPeriod command-line flag, which takes a number followed by a time unit character - h(ours), d(ays), w(eeks), y(ears). 
If the time unit is not specified, a month (31 days) is assumed. For instance, -retentionPeriod=3 means that the data will be stored for 3 months (93 days) and then deleted. The default retention period is one month. The minimum retention period is 24h or 1d. Data is split in per-month partitions inside <-storageDataPath>/data/{small,big} folders. Data partitions outside the configured retention are deleted on the first day of the new month. Each partition consists of one or more data parts. Data parts outside the configured retention are eventually deleted during background merge. The time range covered by data part is not limited by retention period unit. One data part can cover hours or days of data. Hence, a data part can be deleted only when fully outside the configured retention. See more about partitions and parts here. The maximum disk space usage for a given -retentionPeriod is going to be (-retentionPeriod + 1) months. For example, if -retentionPeriod is set to 1, data for January is deleted on March 1st. It is safe to extend -retentionPeriod on existing data. If -retentionPeriod is set to a lower value than before, then data outside the configured period will be eventually deleted. VictoriaMetrics does not support indefinite retention, but you can specify an arbitrarily high duration, e.g. -retentionPeriod=100y. Distinct retentions for distinct time series can be configured via retention filters in VictoriaMetrics enterprise. Community version of VictoriaMetrics supports only a single retention, which can be configured via -retentionPeriod command-line flag. If you need multiple retentions in community version of VictoriaMetrics, then you may start multiple VictoriaMetrics instances with distinct values for the following flags: Then set up vmauth in front of VictoriaMetrics instances, so it could route requests from particular user to VictoriaMetrics with the desired retention. Similar scheme can be applied for multiple tenants in VictoriaMetrics cluster. See these docs for multi-retention setup details. Enterprise version of VictoriaMetrics supports e.g. retention filters, which allow configuring multiple retentions for distinct sets of time series matching the configured series filters via -retentionFilter command-line flag. This flag accepts filter:duration options, where filter must be a valid series filter, while the duration must contain valid retention for time series matching the given filter. The duration of the -retentionFilter must be lower or equal to -retentionPeriod flag" }, { "data": "If series doesnt match any configured -retentionFilter, then the retention configured via -retentionPeriod command-line flag is applied to it. If series matches multiple configured retention filters, then the smallest retention is applied. For example, the following config sets 3 days retention for time series with team=\"juniors\" label, 30 days retention for time series with env=\"dev\" or env=\"staging\" label and 1 year retention for the remaining time series: ``` -retentionFilter='{team=\"juniors\"}:3d' -retentionFilter='{env=~\"dev|staging\"}:30d' -retentionPeriod=1y ``` Important notes: It is safe updating -retentionFilter during VictoriaMetrics restarts - the updated retention filters are applied eventually to historical data. See how to configure multiple retentions in VictoriaMetrics cluster. See also downsampling. Retention filters can be evaluated for free by downloading and using enterprise binaries from the releases page. See how to request a free trial license here. 
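For the community-edition approach described above (multiple single-node instances behind vmauth, one instance per retention), a minimal sketch could look like the following. The ports, data paths, retention values and usernames here are illustrative assumptions, not recommendations:

```
# two single-node instances with different retentions
/path/to/victoria-metrics -retentionPeriod=30d -storageDataPath=/var/lib/vm-short -httpListenAddr=:8428
/path/to/victoria-metrics -retentionPeriod=2y -storageDataPath=/var/lib/vm-long -httpListenAddr=:8429
```

A vmauth config can then route each user to the instance with the desired retention:

```
users:
  - username: "short-retention"
    url_prefix: "http://localhost:8428"
  - username: "long-retention"
    url_prefix: "http://localhost:8429"
```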
VictoriaMetrics Enterprise supports multi-level downsampling via the -downsampling.period=offset:interval command-line flag. This command-line flag instructs VictoriaMetrics to leave the last sample per each interval for time series samples older than the offset. For example, -downsampling.period=30d:5m instructs leaving the last sample per each 5-minute interval for samples older than 30 days, while the rest of the samples are dropped.

The -downsampling.period command-line flag can be specified multiple times in order to apply different downsampling levels for different time ranges (aka multi-level downsampling). For example, -downsampling.period=30d:5m,180d:1h instructs leaving the last sample per each 5-minute interval for samples older than 30 days, while leaving the last sample per each 1-hour interval for samples older than 180 days.

VictoriaMetrics supports configuring independent downsampling per different sets of time series via the -downsampling.period=filter:offset:interval syntax. In this case the given offset:interval downsampling is applied only to time series matching the given filter. The filter can be an arbitrary series filter. For example, -downsampling.period='{__name__=~"(node|process)_.*"}:1d:1m' instructs VictoriaMetrics to deduplicate samples older than one day with a one-minute interval only for time series with names starting with node or process prefixes. The de-duplication for other time series can be configured independently via additional -downsampling.period command-line flags.

If the time series doesn't match any filter, then it isn't downsampled. If the time series matches multiple filters, then the downsampling for the first matching filter is applied. For example, -downsampling.period='{env="prod"}:1d:30s,{__name__=~"node_.*"}:1d:5m' de-duplicates samples older than one day with a 30-second interval across all the time series with the env="prod" label, even if their names start with the node prefix. All the other time series with names starting with the node prefix are de-duplicated with a 5-minute interval.

If downsampling shouldn't be applied to some time series matching the given filter, then pass the -downsampling.period=filter:0s:0s command-line flag to VictoriaMetrics. For example, if series with the env="prod" label shouldn't be downsampled, then pass the -downsampling.period='{env="prod"}:0s:0s' command-line flag in front of other -downsampling.period flags.

Downsampling is applied independently per each time series and leaves a single raw sample with the biggest timestamp on the configured interval, in the same way as deduplication does. It works best for counters and histograms, as their values are always increasing. Downsampling gauges and summaries loses some changes within the downsampling interval, since only the last sample on the given interval is left and the rest of the samples are dropped. You can use recording rules or streaming aggregation to apply custom aggregation functions, like min/max/avg etc., in order to make gauges more resilient to downsampling.

Downsampling can reduce disk space usage and improve query performance if it is applied to time series with a big number of samples per series. The downsampling doesn't improve query performance and doesn't reduce disk space if the database contains a big number of time series with a small number of samples per series, since downsampling doesn't reduce the number of time series. So there is little sense in applying downsampling to time series with high churn rate.
In this case the majority of query time is spent on searching for the matching time series instead of processing the found samples. It is possible to use stream aggregation in vmagent or recording rules in vmalert in order to reduce the number of time series. Downsampling is performed during background merges. It cannot be performed if there is not enough of free disk space or if vmstorage is in read-only mode. Please, note that intervals of -downsampling.period must be multiples of each other. In case deduplication is enabled, value of -dedup.minScrapeInterval command-line flag must also be multiple of -downsampling.period intervals. This is required to ensure consistency of deduplication and downsampling results. It is safe updating -downsampling.period during VictoriaMetrics restarts - the updated downsampling configuration will be applied eventually to historical data during background merges. See how to configure downsampling in VictoriaMetrics cluster. See also retention filters. The downsampling can be evaluated for free by downloading and using enterprise binaries from the releases page. See how to request a free trial license. Single-node VictoriaMetrics doesnt support multi-tenancy. Use the cluster version instead. Though single-node VictoriaMetrics cannot scale to multiple nodes, it is optimized for resource usage - storage size / bandwidth / IOPS, RAM, CPU. This means that a single-node VictoriaMetrics may scale vertically and substitute a moderately sized cluster built with competing solutions such as Thanos, Uber M3, InfluxDB or TimescaleDB. See vertical scalability benchmarks. So try single-node VictoriaMetrics at first and then switch to the cluster version if you still need horizontally scalable long-term remote storage for really large Prometheus deployments. Contact us for enterprise support. It is recommended using vmalert for alerting. Additionally, alerting can be set up with the following tools: By default VictoriaMetrics accepts http requests at 8428 port (this port can be changed via -httpListenAddr command-line flags). Enterprise version of VictoriaMetrics supports the ability to accept mTLS requests at this port, by specifying -tls and -mtls command-line flags. For example, the following command runs VictoriaMetrics, which accepts only mTLS requests at port 8428: ``` ./victoria-metrics -tls -mtls ``` By default system-wide TLS Root CA is used for verifying client certificates if -mtls command-line flag is specified. It is possible to specify custom TLS Root CA via -mtlsCAFile command-line flag. See also security docs. General security recommendations: VictoriaMetrics provides the following security-related command-line flags: Explicitly set internal network interface for TCP and UDP ports for data ingestion with Graphite and OpenTSDB formats. For example, substitute -graphiteListenAddr=:2003 with -graphiteListenAddr=<internalifaceip>:2003. This protects from unexpected requests from untrusted network interfaces. See also security recommendation for VictoriaMetrics cluster and the general security page at VictoriaMetrics website. All the VictoriaMetrics Enterprise components support automatic issuing of TLS certificates for public HTTPS server running at -httpListenAddr via Lets Encrypt service. The following command-line flags must be set in order to enable automatic issuing of TLS certificates: This functionality can be evaluated for free according to these docs. See also security recommendations. ``` mkfs.ext4 ... 
-O 64bit,huge_file,extent -T huge ``` VictoriaMetrics exports internal metrics in Prometheus exposition format at /metrics page. These metrics can be scraped via vmagent or any other Prometheus-compatible scraper. If you use Google Cloud Managed Prometheus for scraping metrics from VictoriaMetrics components, then pass -metrics.exposeMetadata command-line to them, so they add TYPE and HELP comments per each exposed metric at /metrics page. See these docs for details. Alternatively, single-node VictoriaMetrics can self-scrape the metrics when -selfScrapeInterval command-line flag is set to duration greater than" }, { "data": "For example, -selfScrapeInterval=10s would enable self-scraping of /metrics page with 10 seconds interval. Please note, never use loadbalancer address for scraping metrics. All the monitored components should be scraped directly by their address. Official Grafana dashboards available for single-node and clustered VictoriaMetrics. See an alternative dashboard for clustered VictoriaMetrics created by community. Graphs on the dashboards contain useful hints - hover the i icon in the top left corner of each graph to read it. We recommend setting up alerts via vmalert or via Prometheus. VictoriaMetrics exposes currently running queries and their execution times at active queries page. VictoriaMetrics exposes queries, which take the most time to execute, at top queries page. See also VictoriaMetrics Monitoring and troubleshooting docs. VictoriaMetrics returns TSDB stats at /api/v1/status/tsdb page in the way similar to Prometheus - see these Prometheus docs. VictoriaMetrics accepts the following optional query args at /api/v1/status/tsdb page: In cluster version of VictoriaMetrics each vmstorage tracks the stored time series individually. vmselect requests stats via /api/v1/status/tsdb API from each vmstorage node and merges the results by summing per-series stats. This may lead to inflated values when samples for the same time series are spread across multiple vmstorage nodes due to replication or rerouting. VictoriaMetrics provides an UI on top of /api/v1/status/tsdb - see cardinality explorer docs. VictoriaMetrics supports query tracing, which can be used for determining bottlenecks during query processing. This is like EXPLAIN ANALYZE from Postgresql. Query tracing can be enabled for a specific query by passing trace=1 query arg. In this case VictoriaMetrics puts query trace into trace field in the output JSON. 
For example, the following command:

```
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```

would return the following trace:

```
{
  "duration_msec": 0.099,
  "message": "/api/v1/query_range: start=1654034340000, end=1654037880000, step=60000, query=\"2*rand()\": series=1",
  "children": [
    {
      "duration_msec": 0.034,
      "message": "eval: query=2 * rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
      "children": [
        {
          "duration_msec": 0.032,
          "message": "binary op \"*\": series=1",
          "children": [
            {
              "duration_msec": 0.009,
              "message": "eval: query=2, timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60"
            },
            {
              "duration_msec": 0.017,
              "message": "eval: query=rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
              "children": [
                {
                  "duration_msec": 0.015,
                  "message": "transform rand(): series=1"
                }
              ]
            }
          ]
        }
      ]
    },
    {
      "duration_msec": 0.004,
      "message": "sort series by metric name and labels"
    },
    {
      "duration_msec": 0.044,
      "message": "generate /api/v1/query_range response for series=1, points=60"
    }
  ]
}
```

All the durations and timestamps in traces are in milliseconds.

Query tracing is allowed by default. It can be denied by passing the -denyQueryTracing command-line flag to VictoriaMetrics.

VMUI provides a UI:

By default VictoriaMetrics doesn't limit the number of stored time series. The limit can be enforced by setting the following command-line flags: -storage.maxHourlySeries and -storage.maxDailySeries. Both limits can be set simultaneously. If any of these limits is reached, then incoming samples for new time series are dropped. A sample of dropped series is put in the log with WARNING level.

The exceeded limits can be monitored with the following metrics:

vm_hourly_series_limit_rows_dropped_total - the number of metrics dropped due to the exceeded hourly limit on the number of unique time series.

vm_hourly_series_limit_max_series - the hourly series limit set via the -storage.maxHourlySeries command-line flag.

vm_hourly_series_limit_current_series - the current number of unique series during the last hour.

The following query can be useful for alerting when the number of unique series during the last hour exceeds 90% of the -storage.maxHourlySeries:

```
vm_hourly_series_limit_current_series / vm_hourly_series_limit_max_series > 0.9
```

vm_daily_series_limit_rows_dropped_total - the number of metrics dropped due to the exceeded daily limit on the number of unique time series.

vm_daily_series_limit_max_series - the daily series limit set via the -storage.maxDailySeries command-line flag.

vm_daily_series_limit_current_series - the current number of unique series during the last day.

The following query can be useful for alerting when the number of unique series during the last day exceeds 90% of the -storage.maxDailySeries:

```
vm_daily_series_limit_current_series / vm_daily_series_limit_max_series > 0.9
```

These limits are approximate, so VictoriaMetrics can underflow/overflow the limit by a small percentage (usually less than 1%).

See also the more advanced cardinality limiter in vmagent and the cardinality explorer docs.

It is recommended to use default command-line flag values (i.e. don't set them explicitly) until the need of tweaking these flag values arises.

It is recommended inspecting logs during troubleshooting, since they may contain useful information.
It is recommended upgrading to the latest available release from this page, since the encountered issue could be already fixed there.

It is recommended to have at least 50% of spare resources for CPU, disk IO and RAM, so VictoriaMetrics could handle short spikes in the workload without performance issues.

VictoriaMetrics requires free disk space for merging data files to bigger ones. It may slow down when there is not enough free space left. So make sure the -storageDataPath directory has at least 20% of free space. The remaining amount of free space can be monitored via the vm_free_disk_space_bytes metric. The total size of data stored on the disk can be monitored via the sum of vm_data_size_bytes metrics.

If you run VictoriaMetrics on a host with 16 or more CPU cores, then it may be needed to tune the -search.maxWorkersPerQuery command-line flag in order to improve query performance. If VictoriaMetrics serves a big number of concurrent select queries, then try reducing the value for this flag. If VictoriaMetrics serves heavy queries, which select >10K of time series and/or process >100M of raw samples per query, then try setting the value for this flag to the number of available CPU cores.

VictoriaMetrics buffers incoming data in memory for up to a few seconds before flushing it to persistent storage. This may lead to the following issues:

If VictoriaMetrics works slowly and eats more than a CPU core per 100K ingested data points per second, then it is likely you have too many active time series for the current amount of RAM. VictoriaMetrics exposes vm_slow_* metrics such as vm_slow_row_inserts_total and vm_slow_metric_name_loads_total, which could be used as an indicator of low amounts of RAM. It is recommended increasing the amount of RAM on the node with VictoriaMetrics in order to improve ingestion and query performance in this case.

If the order of labels for the same metrics can change over time (e.g. if metric{k1="v1",k2="v2"} may become metric{k2="v2",k1="v1"}), then it is recommended running VictoriaMetrics with the -sortLabels command-line flag in order to reduce memory usage and CPU usage.

VictoriaMetrics prioritizes data ingestion over data querying. So if it doesn't have enough resources for data ingestion, then data querying may slow down significantly.

If VictoriaMetrics doesn't work because certain parts are corrupted due to disk errors, then just remove the directories with broken parts. It is safe removing subdirectories under <-storageDataPath>/data/{big,small}/YYYY_MM directories when VictoriaMetrics isn't running. This recovers VictoriaMetrics at the cost of losing the data stored in the deleted broken parts. In the future, a vmrecover tool will be created for automatic recovering from such errors.

If you see gaps on the graphs, try resetting the cache by sending a request to /internal/resetRollupResultCache. If this removes gaps on the graphs, then it is likely data with timestamps older than -search.cacheTimestampOffset is ingested into VictoriaMetrics. Make sure that data sources have synchronized time with VictoriaMetrics. If the gaps are related to irregular intervals between samples, then try adjusting the -search.minStalenessInterval command-line flag to a value close to the maximum interval between samples.

If you are switching from InfluxDB or TimescaleDB, then it may be needed to set the -search.setLookbackToStep command-line flag.
This suppresses default gap filling algorithm used by VictoriaMetrics - by default it assumes each time series is continuous instead of discrete, so it fills gaps between real samples with regular intervals. Metrics and labels leading to high cardinality or high churn rate can be determined via cardinality explorer and via /api/v1/status/tsdb endpoint. New time series can be logged if -logNewSeries command-line flag is passed to VictoriaMetrics. VictoriaMetrics limits the number of labels per each metric with -maxLabelsPerTimeseries command-line flag and drops superfluous labels. This prevents from ingesting metrics with too many labels. It is recommended monitoring vmmetricswithdroppedlabels_total metric in order to determine whether -maxLabelsPerTimeseries must be adjusted for your workload. If you store Graphite metrics like foo.bar.baz in VictoriaMetrics, then {graphite=\"foo.*.baz\"} filter can be used for selecting such metrics. See these docs for details. You can also query Graphite metrics with Graphite querying API. VictoriaMetrics ignores NaN values during data ingestion. See also: All the VictoriaMetrics components support pushing their metrics exposed at /metrics page to remote storage in Prometheus text exposition format. This functionality may be used instead of classic Prometheus-like metrics scraping if VictoriaMetrics components are located in isolated networks, so they cannot be scraped by local vmagent. The following command-line flags are related to pushing metrics from VictoriaMetrics components: For example, the following command instructs VictoriaMetrics to push metrics from /metrics page to https://maas.victoriametrics.com/api/v1/import/prometheus with user:pass Basic auth. The instance=\"foobar\" and job=\"vm\" labels are added to all the metrics before sending them to the remote storage: ``` /path/to/victoria-metrics \\ -pushmetrics.url=https://user:pass@maas.victoriametrics.com/api/v1/import/prometheus \\ -pushmetrics.extraLabel='instance=\"foobar\"' \\ -pushmetrics.extraLabel='job=\"vm\"' ``` VictoriaMetrics uses various internal caches. These caches are stored to <-storageDataPath>/cache directory during graceful shutdown (e.g. when VictoriaMetrics is stopped by sending SIGINT signal). The caches are read on the next VictoriaMetrics startup. Sometimes it is needed to remove such caches on the next startup. This can be done in the following ways: It is also possible removing rollup result cache on startup by passing -search.resetRollupResultCacheOnStartup command-line flag to VictoriaMetrics. VictoriaMetrics caches query responses by default. This allows increasing performance for repeated queries to /api/v1/query and /api/v1/query_range with the increasing time, start and end query args. This cache may work incorrectly when ingesting historical data into VictoriaMetrics. See these docs for details. The rollup cache can be disabled either globally by running VictoriaMetrics with -search.disableCache command-line flag or on a per-query basis by passing nocache=1 query arg to /api/v1/query and /api/v1/query_range. See also cache removal docs. VictoriaMetrics uses various in-memory caches for faster data ingestion and query performance. The following metrics for each type of cache are exported at /metrics page: Both Grafana dashboards for single-node VictoriaMetrics and clustered VictoriaMetrics contain Caches section with cache metrics visualized. The panels show the current memory usage by each type of cache, and also a cache hit rate. 
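If you do not use these dashboards, a per-cache-type hit rate can be estimated with a query similar to the following sketch. It assumes the vm_cache_requests_total and vm_cache_misses_total counters exposed at the /metrics page (treat the exact metric names as assumptions for your release):

```
1 - (
  rate(vm_cache_misses_total[5m])
  /
  rate(vm_cache_requests_total[5m])
)
```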
If hit rate is close to 100% then cache efficiency is already very high and does not need any tuning. The panel Cache usage % in Troubleshooting section shows the percentage of used cache size from the allowed size by type. If the percentage is below 100%, then no further tuning needed. Please note, default cache sizes were carefully adjusted accordingly to the most practical scenarios and workloads. Change the defaults only if you understand the implications and vmstorage has enough free memory to accommodate new cache" }, { "data": "To override the default values see command-line flags with -storage.cacheSize prefix. See the full description of flags here. The simplest way to migrate data from one single-node (source) to another (destination), or from one vmstorage node to another do the following: Things to consider when copying data: For more complex scenarios like single-to-cluster, cluster-to-single, re-sharding or migrating only a fraction of data - see vmctl. Migrating data from VictoriaMetrics. Use vmctl for data migration. It supports the following data migration types: See vmctl docs for more details. VictoriaMetrics accepts historical data in arbitrary order of time via any supported ingestion method. See how to backfill data with recording rules in vmalert. Make sure that configured -retentionPeriod covers timestamps for the backfilled data. It is recommended disabling query cache with -search.disableCache command-line flag when writing historical data with timestamps from the past, since the cache assumes that the data is written with the current timestamps. Query cache can be enabled after the backfilling is complete. An alternative solution is to query /internal/resetRollupResultCache after the backfilling is complete. This will reset the query cache, which could contain incomplete data cached during the backfilling. Yet another solution is to increase -search.cacheTimestampOffset flag value in order to disable caching for data with timestamps close to the current time. Single-node VictoriaMetrics automatically resets response cache when samples with timestamps older than now - search.cacheTimestampOffset are ingested to it. VictoriaMetrics doesnt support updating already existing sample values to new ones. It stores all the ingested data points for the same time series with identical timestamps. While it is possible substituting old time series with new time series via removal of old time series and then writing new time series, this approach should be used only for one-off updates. It shouldnt be used for frequent updates because of non-zero overhead related to data removal. Single-node VictoriaMetrics doesnt support application-level replication. Use cluster version instead. See these docs for details. Storage-level replication may be offloaded to durable persistent storage such as Google Cloud disks. See also high availability docs and backup docs. VictoriaMetrics supports backups via vmbackup and vmrestore tools. We also provide vmbackupmanager tool for enterprise subscribers. Enterprise binaries can be downloaded and evaluated for free from the releases page. See how to request a free trial license here. A single-node VictoriaMetrics is capable of proxying requests to vmalert when -vmalert.proxyURL flag is set. Use this feature for the following cases: For accessing vmalerts UI through single-node VictoriaMetrics configure -vmalert.proxyURL flag and visit http://<victoriametrics-addr>:8428/vmalert/ link. 
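As an illustrative sketch, the following command runs single-node VictoriaMetrics with requests to /vmalert/ proxied to a vmalert instance; the address is an assumption (8880 is vmalert's default HTTP port):

```
/path/to/victoria-metrics \
  -vmalert.proxyURL=http://127.0.0.1:8880
```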
Note, that vendors (including VictoriaMetrics) are often biased when doing such tests. E.g. they try highlighting the best parts of their product, while highlighting the worst parts of competing products. So we encourage users and all independent third parties to conduct their benchmarks for various products they are evaluating in production and publish the results. As a reference, please see benchmarks conducted by VictoriaMetrics team. Please also see the helm chart for running ingestion benchmarks based on node_exporter metrics. VictoriaMetrics provides handlers for collecting the following Go profiles: ``` curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof ``` ``` curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof ``` The command for collecting CPU profile waits for 30 seconds before returning. The collected profiles may be analyzed with go tool pprof. It is safe sharing the collected profiles from security point of view, since they do not contain sensitive information. Contact us with any questions regarding VictoriaMetrics at info@victoriametrics.com. Feel free asking any questions regarding VictoriaMetrics: If you like VictoriaMetrics and want to contribute, then please read these docs. Report bugs and propose new features" }, { "data": "Please, keep image size and number of images per single page low. Keep the docs page as lightweight as possible. If the page needs to have many images, consider using WEB-optimized image format webp. When adding a new doc with many images use webp format right away. Or use a Makefile command below to convert already existing images at docs folder automatically to web format: ``` make docs-images-to-webp ``` Once conversion is done, update the path to images in your docs and verify everything is correct. Zip contains three folders with different image orientations (main color and inverted version). Files included in each folder: Pass -help to VictoriaMetrics in order to see the list of supported command-line flags with their description: ``` -bigMergeConcurrency int Deprecated: this flag does nothing -blockcache.missesBeforeCaching int The number of cache misses before putting the block into cache. Higher values may reduce indexdb/dataBlocks cache size at the cost of higher CPU and disk read usage (default 2) -cacheExpireDuration duration Items are removed from in-memory caches after they aren't accessed for this duration. Lower values may reduce memory usage at the cost of higher CPU usage. See also -prevCacheRemovalPercent (default 30m0s) -configAuthKey value Authorization key for accessing /config page. It must be passed via authKey query arg. It overrides httpAuth.* settings. Flag value can be read from the given file when using -configAuthKey=file:///abs/path/to/file or -configAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -configAuthKey=http://host/path or -configAuthKey=https://host/path -csvTrimTimestamp duration Trim timestamps when importing csv data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 
1s) may be used for reducing disk space usage for timestamp data (default 1ms) -datadog.maxInsertRequestSize size The maximum size in bytes of a single DataDog POST request to /datadog/api/v2/series Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864) -datadog.sanitizeMetricName Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true) -dedup.minScrapeInterval duration Leave only the last sample in every time series per each discrete interval equal to -dedup.minScrapeInterval > 0. See also -streamAggr.dedupInterval and https://docs.victoriametrics.com/#deduplication -deleteAuthKey value authKey for metrics' deletion via /api/v1/admin/tsdb/delete_series and /tags/delSeries Flag value can be read from the given file when using -deleteAuthKey=file:///abs/path/to/file or -deleteAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -deleteAuthKey=http://host/path or -deleteAuthKey=https://host/path -denyQueriesOutsideRetention Whether to deny queries outside the configured -retentionPeriod. When set, then /api/v1/query_range would return '503 Service Unavailable' error for queries with 'from' value outside -retentionPeriod. This may be useful when multiple data sources with distinct retentions are hidden behind query-tee -denyQueryTracing Whether to disable the ability to trace queries. See https://docs.victoriametrics.com/#query-tracing -downsampling.period array Comma-separated downsampling periods in the format 'offset:period'. For example, '30d:10m' instructs to leave a single sample per 10 minutes for samples older than 30 days. When setting multiple downsampling periods, it is necessary for the periods to be multiples of each other. See https://docs.victoriametrics.com/#downsampling for details. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise/ Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -dryRun Whether to check config files without running VictoriaMetrics. The following config files are checked: -promscrape.config, -relabelConfig and -streamAggr.config. Unknown config entries aren't allowed in -promscrape.config by default. This can be changed with -promscrape.config.strictParse=false command-line flag -enableTCP6 Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used" }, { "data": "Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See https://docs.victoriametrics.com/#environment-variables for more details -envflag.prefix string Prefix for environment variables if -envflag.enable is set -eula Deprecated, please use -license or -licenseFile flags instead. By specifying this flag, you confirm that you have an enterprise license and accept the ESA https://victoriametrics.com/legal/esa/ . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise/ -filestream.disableFadvise Whether to disable fadvise() syscall when reading large data files. 
The fadvise() syscall prevents from eviction of recently accessed data from OS page cache during background merges and backups. In some rare cases it is better to disable the syscall if it uses too much CPU -finalMergeDelay duration Deprecated: this flag does nothing -flagsAuthKey value Auth key for /flags endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings Flag value can be read from the given file when using -flagsAuthKey=file:///abs/path/to/file or -flagsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -flagsAuthKey=http://host/path or -flagsAuthKey=https://host/path -forceFlushAuthKey value authKey, which must be passed in query string to /internal/force_flush pages Flag value can be read from the given file when using -forceFlushAuthKey=file:///abs/path/to/file or -forceFlushAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -forceFlushAuthKey=http://host/path or -forceFlushAuthKey=https://host/path -forceMergeAuthKey value authKey, which must be passed in query string to /internal/force_merge pages Flag value can be read from the given file when using -forceMergeAuthKey=file:///abs/path/to/file or -forceMergeAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -forceMergeAuthKey=http://host/path or -forceMergeAuthKey=https://host/path -fs.disableMmap Whether to use pread() instead of mmap() for reading data files. By default, mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread() -graphiteListenAddr string TCP and UDP address to listen for Graphite plaintext data. Usually :2003 must be set. Doesn't work if empty. See also -graphiteListenAddr.useProxyProtocol -graphiteListenAddr.useProxyProtocol Whether to use proxy protocol for connections accepted at -graphiteListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt -graphiteTrimTimestamp duration Trim timestamps for Graphite data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s) -http.connTimeout duration Incoming connections to -httpListenAddr are closed after the configured timeout. This may help evenly spreading load among a cluster of services behind TCP-level load balancer. Zero value disables closing of incoming connections (default 2m0s) -http.disableResponseCompression Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth -http.header.csp string Value for 'Content-Security-Policy' header, recommended: \"default-src 'self'\" -http.header.frameOptions string Value for 'X-Frame-Options' header -http.header.hsts string Value for 'Strict-Transport-Security' header, recommended: 'max-age=31536000; includeSubDomains' -http.idleConnTimeout duration Timeout for incoming idle http connections (default 1m0s) -http.maxGracefulShutdownDuration duration The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s) -http.pathPrefix string An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. 
This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus -http.shutdownDelay duration Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers -httpAuth.password value Password for HTTP server's Basic" }, { "data": "The authentication is disabled if -httpAuth.username is empty Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path -httpAuth.username string Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password -httpListenAddr array TCP addresses to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -httpListenAddr.useProxyProtocol array Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing Supports array of values separated by comma or specified via multiple flags. Empty values are set to false. -import.maxLineLen size The maximum length in bytes of a single line accepted by /api/v1/import; the line length can be limited with 'maxrowsper_line' query arg passed to /api/v1/export Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10485760) -influx.databaseNames array Comma-separated list of database names to return from /query and /influx/query API. This can be needed for accepting data from Telegraf plugins such as https://github.com/fangli/fluent-plugin-influxdb Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -influx.maxLineSize size The maximum size in bytes for a single InfluxDB line during parsing Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 262144) -influxDBLabel string Default label for the DB name sent over '?db={db_name}' query parameter (default \"db\") -influxListenAddr string TCP and UDP address to listen for InfluxDB line protocol data. Usually :8089 must be set. Doesn't work if empty. This flag isn't needed when ingesting data over HTTP - just send it to http://<victoriametrics>:8428/write . See also -influxListenAddr.useProxyProtocol -influxListenAddr.useProxyProtocol Whether to use proxy protocol for connections accepted at -influxListenAddr . 
See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt -influxMeasurementFieldSeparator string Separator for '{measurement}{separator}{fieldname}' metric name when inserted via InfluxDB line protocol (default \"\") -influxSkipMeasurement Uses '{field_name}' as a metric name while ignoring '{measurement}' and '-influxMeasurementFieldSeparator' -influxSkipSingleField Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for metric name if InfluxDB line contains only a single field -influxTrimTimestamp duration Trim timestamps for InfluxDB line protocol data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms) -inmemoryDataFlushInterval duration The interval for guaranteed saving of in-memory data to disk. The saved data survives unclean shutdowns such as OOM crash, hardware reset, SIGKILL, etc. Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). Smaller intervals increase disk IO load. Minimum supported value is 1s (default 5s) -insert.maxQueueDuration duration The maximum duration to wait in the queue when -maxConcurrentInserts concurrent insert requests are executed (default 1m0s) -internStringCacheExpireDuration duration The expiry duration for caches for interned strings. See https://en.wikipedia.org/wiki/String_interning . See also -internStringMaxLen and -internStringDisableCache (default 6m0s) -internStringDisableCache Whether to disable caches for interned strings. This may reduce memory usage at the cost of higher CPU usage. See https://en.wikipedia.org/wiki/String_interning . See also -internStringCacheExpireDuration and -internStringMaxLen -internStringMaxLen int The maximum length for strings to intern. A lower limit may save memory at the cost of higher CPU usage. See https://en.wikipedia.org/wiki/String_interning . See also -internStringDisableCache and -internStringCacheExpireDuration (default 500) -license string License key for VictoriaMetrics Enterprise. See https://victoriametrics.com/products/enterprise/" }, { "data": "Trial Enterprise license can be obtained from https://victoriametrics.com/products/enterprise/trial/ . This flag is available only in Enterprise binaries. The license key can be also passed via file specified by -licenseFile command-line flag -license.forceOffline Whether to enable offline verification for VictoriaMetrics Enterprise license key, which has been passed either via -license or via -licenseFile command-line flag. The issued license key must support offline verification feature. Contact info@victoriametrics.com if you need offline license verification. This flag is available only in Enterprise binaries -licenseFile string Path to file with license key for VictoriaMetrics Enterprise. See https://victoriametrics.com/products/enterprise/ . Trial Enterprise license can be obtained from https://victoriametrics.com/products/enterprise/trial/ . This flag is available only in Enterprise binaries. The license key can be also passed inline via -license command-line flag -logNewSeries Whether to log new series. This option is for debug purposes only. It can lead to performance issues when big number of new series are ingested into VictoriaMetrics -loggerDisableTimestamps Whether to disable writing timestamps in logs -loggerErrorsPerSecondLimit int Per-second limit on the number of ERROR messages. 
If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit -loggerFormat string Format for logs. Possible values: default, json (default \"default\") -loggerJSONFields string Allows renaming fields in JSON formatted logs. Example: \"ts:timestamp,msg:message\" renames \"ts\" to \"timestamp\" and \"msg\" to \"message\". Supported fields: ts, level, caller, msg -loggerLevel string Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default \"INFO\") -loggerMaxArgLen int The maximum length of a single logged argument. Longer arguments are replaced with 'argstart..argend', where 'argstart' and 'argend' is prefix and suffix of the arg with the length not exceeding -loggerMaxArgLen / 2 (default 1000) -loggerOutput string Output for the logs. Supported values: stderr, stdout (default \"stderr\") -loggerTimezone string Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default \"UTC\") -loggerWarnsPerSecondLimit int Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit -maxConcurrentInserts int The maximum number of concurrent insert requests. Set higher value when clients send data over slow networks. Default value depends on the number of available CPU cores. It should work fine in most cases since it minimizes resource usage. See also -insert.maxQueueDuration (default 32) -maxInsertRequestSize size The maximum size in bytes of a single Prometheus remote_write API request Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 33554432) -maxLabelValueLen int The maximum length of label values in the accepted time series. Longer label values are truncated. In this case the vmtoolonglabelvalues_total metric at /metrics page is incremented (default 1024) -maxLabelsPerTimeseries int The maximum number of labels accepted per time series. Superfluous labels are dropped. In this case the vmmetricswithdroppedlabels_total metric at /metrics page is incremented (default 30) -memory.allowedBytes size Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from the OS page cache resulting in higher disk IO usage Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0) -memory.allowedPercent float Allowed percent of system memory VictoriaMetrics caches may occupy. See also" }, { "data": "Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from the OS page cache which will result in higher disk IO usage (default 60) -metrics.exposeMetadata Whether to expose TYPE and HELP metadata at the /metrics page, which is exposed at -httpListenAddr . The metadata may be needed when the /metrics page is consumed by systems, which require this information. For example, Managed Prometheus in Google Cloud - https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type -metricsAuthKey value Auth key for /metrics endpoint. It must be passed via authKey query arg. 
It overrides httpAuth.* settings Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path -mtls array Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise/ Supports array of values separated by comma or specified via multiple flags. Empty values are set to false. -mtlsCAFile array Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise/ Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -newrelic.maxInsertRequestSize size The maximum size in bytes of a single NewRelic request to /newrelic/infra/v2/metrics/events/bulk Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864) -opentelemetry.usePrometheusNaming Whether to convert metric names and labels into Prometheus-compatible format for the metrics ingested via OpenTelemetry protocol; see https://docs.victoriametrics.com/#sending-data-via-opentelemetry -opentsdbHTTPListenAddr string TCP address to listen for OpenTSDB HTTP put requests. Usually :4242 must be set. Doesn't work if empty. See also -opentsdbHTTPListenAddr.useProxyProtocol -opentsdbHTTPListenAddr.useProxyProtocol Whether to use proxy protocol for connections accepted at -opentsdbHTTPListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt -opentsdbListenAddr string TCP and UDP address to listen for OpenTSDB metrics. Telnet put messages and HTTP /api/put messages are simultaneously served on TCP port. Usually :4242 must be set. Doesn't work if empty. See also -opentsdbListenAddr.useProxyProtocol -opentsdbListenAddr.useProxyProtocol Whether to use proxy protocol for connections accepted at -opentsdbListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt -opentsdbTrimTimestamp duration Trim timestamps for OpenTSDB 'telnet put' data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s) -opentsdbhttp.maxInsertRequestSize size The maximum size of OpenTSDB HTTP put request Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 33554432) -opentsdbhttpTrimTimestamp duration Trim timestamps for OpenTSDB HTTP data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms) -pprofAuthKey value Auth key for /debug/pprof/ endpoints. It must be passed via authKey query arg. It overrides httpAuth. settings Flag value can be read from the given file when using -pprofAuthKey=file:///abs/path/to/file or -pprofAuthKey=file://./relative/path/to/file . 
Flag value can be read from the given http/https url when using -pprofAuthKey=http://host/path or -pprofAuthKey=https://host/path -precisionBits int The number of precision bits to store per each value. Lower precision bits improves data compression at the cost of precision loss (default 64) -prevCacheRemovalPercent float Items in the previous caches are removed when the percent of requests it serves becomes lower than this" }, { "data": "Higher values reduce memory usage at the cost of higher CPU usage. See also -cacheExpireDuration (default 0.1) -promscrape.azureSDCheckInterval duration Interval for checking for changes in Azure. This works only if azuresdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#azuresd_configs for details (default 1m0s) -promscrape.cluster.memberLabel string If non-empty, then the label with this name and the -promscrape.cluster.memberNum value is added to all the scraped metrics. See https://docs.victoriametrics.com/vmagent/#scraping-big-number-of-targets for more info -promscrape.cluster.memberNum string The number of vmagent instance in the cluster of scrapers. It must be a unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster. Can be specified as pod name of Kubernetes StatefulSet - pod-name-Num, where Num is a numeric part of pod name. See also -promscrape.cluster.memberLabel . See https://docs.victoriametrics.com/vmagent/#scraping-big-number-of-targets for more info (default \"0\") -promscrape.cluster.memberURLTemplate string An optional template for URL to access vmagent instance with the given -promscrape.cluster.memberNum value. Every %d occurrence in the template is substituted with -promscrape.cluster.memberNum at urls to vmagent instances responsible for scraping the given target at /service-discovery page. For example -promscrape.cluster.memberURLTemplate='http://vmagent-%d:8429/targets'. See https://docs.victoriametrics.com/vmagent/#scraping-big-number-of-targets for more details -promscrape.cluster.membersCount int The number of members in a cluster of scrapers. Each member must have a unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default, cluster scraping is disabled, i.e. a single scraper scrapes all the targets. See https://docs.victoriametrics.com/vmagent/#scraping-big-number-of-targets for more info (default 1) -promscrape.cluster.name string Optional name of the cluster. If multiple vmagent clusters scrape the same targets, then each cluster must have unique name in order to properly de-duplicate samples received from these clusters. See https://docs.victoriametrics.com/vmagent/#scraping-big-number-of-targets for more info -promscrape.cluster.replicationFactor int The number of members in the cluster, which scrape the same targets. If the replication factor is greater than 1, then the deduplication must be enabled at remote storage side. See https://docs.victoriametrics.com/vmagent/#scraping-big-number-of-targets for more info (default 1) -promscrape.config string Optional path to Prometheus config file with 'scrape_configs' section containing targets to scrape. The path can point to local file and to http url. See https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter for details -promscrape.config.dryRun Checks -promscrape.config file for errors and unsupported fields and then exits. 
Returns non-zero exit code on parsing errors and emits these errors to stderr. See also -promscrape.config.strictParse command-line flag. Pass -loggerLevel=ERROR if you don't need to see info messages in the output. -promscrape.config.strictParse Whether to deny unsupported fields in -promscrape.config . Set to false in order to silently skip unsupported fields (default true) -promscrape.configCheckInterval duration Interval for checking for changes in -promscrape.config file. By default, the checking is disabled. See how to reload -promscrape.config file at https://docs.victoriametrics.com/vmagent/#configuration-update -promscrape.consul.waitTime duration Wait time used by Consul service discovery. Default value is used if not set -promscrape.consulSDCheckInterval duration Interval for checking for changes in Consul. This works only if consulsdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#consulsd_configs for details (default 30s) -promscrape.consulagentSDCheckInterval duration Interval for checking for changes in Consul Agent. This works only if consulagentsdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#consulagentsd_configs for details (default 30s) -promscrape.digitaloceanSDCheckInterval duration Interval for checking for changes in digital ocean. This works only if digitaloceansdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#digitaloceansd_configs for details (default 1m0s) -promscrape.disableCompression Whether to disable sending 'Accept-Encoding: gzip' request headers to all the scrape targets. This may reduce CPU usage on scrape targets at the cost of higher network bandwidth utilization. It is possible to set 'disablecompression: true' individually per each 'scrapeconfig' section in '-promscrape.config' for fine-grained control -promscrape.disableKeepAlive Whether to disable HTTP keep-alive connections when scraping all the targets. This may be useful when targets has no support for HTTP keep-alive" }, { "data": "It is possible to set 'disablekeepalive: true' individually per each 'scrapeconfig' section in '-promscrape.config' for fine-grained control. Note that disabling HTTP keep-alive may increase load on both vmagent and scrape targets -promscrape.discovery.concurrency int The maximum number of concurrent requests to Prometheus autodiscovery API (Consul, Kubernetes, etc.) (default 100) -promscrape.discovery.concurrentWaitTime duration The maximum duration for waiting to perform API requests if more than -promscrape.discovery.concurrency requests are simultaneously performed (default 1m0s) -promscrape.dnsSDCheckInterval duration Interval for checking for changes in dns. This works only if dnssdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#dnssd_configs for details (default 30s) -promscrape.dockerSDCheckInterval duration Interval for checking for changes in docker. This works only if dockersdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#dockersd_configs for details (default 30s) -promscrape.dockerswarmSDCheckInterval duration Interval for checking for changes in dockerswarm. This works only if dockerswarmsdconfigs is configured in '-promscrape.config' file. 
See https://docs.victoriametrics.com/sdconfigs/#dockerswarmsd_configs for details (default 30s) -promscrape.dropOriginalLabels Whether to drop original labels for scrape targets at /targets and /api/v1/targets pages. This may be needed for reducing memory usage when original labels for big number of scrape targets occupy big amounts of memory. Note that this reduces debuggability for improper per-target relabeling configs -promscrape.ec2SDCheckInterval duration Interval for checking for changes in ec2. This works only if ec2sdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#ec2sd_configs for details (default 1m0s) -promscrape.eurekaSDCheckInterval duration Interval for checking for changes in eureka. This works only if eurekasdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#eurekasd_configs for details (default 30s) -promscrape.fileSDCheckInterval duration Interval for checking for changes in 'filesdconfig'. See https://docs.victoriametrics.com/sdconfigs/#filesd_configs for details (default 1m0s) -promscrape.gceSDCheckInterval duration Interval for checking for changes in gce. This works only if gcesdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#gcesd_configs for details (default 1m0s) -promscrape.hetznerSDCheckInterval duration Interval for checking for changes in Hetzner API. This works only if hetznersdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#hetznersd_configs for details (default 1m0s) -promscrape.httpSDCheckInterval duration Interval for checking for changes in http endpoint service discovery. This works only if httpsdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#httpsd_configs for details (default 1m0s) -promscrape.kubernetes.apiServerTimeout duration How frequently to reload the full state from Kubernetes API server (default 30m0s) -promscrape.kubernetes.attachNodeMetadataAll Whether to set attachmetadata.node=true for all the kubernetessdconfigs at -promscrape.config . It is possible to set attachmetadata.node=false individually per each kubernetessdconfigs . See https://docs.victoriametrics.com/sdconfigs/#kubernetessd_configs -promscrape.kubernetesSDCheckInterval duration Interval for checking for changes in Kubernetes API server. This works only if kubernetessdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#kubernetessd_configs for details (default 30s) -promscrape.kumaSDCheckInterval duration Interval for checking for changes in kuma service discovery. This works only if kumasdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#kumasd_configs for details (default 30s) -promscrape.maxDroppedTargets int The maximum number of droppedTargets to show at /api/v1/targets page. Increase this value if your setup drops more scrape targets during relabeling and you need investigating labels for all the dropped targets. 
Note that the increased number of tracked dropped targets may result in increased memory usage (default 1000) -promscrape.maxResponseHeadersSize size The maximum size of http response headers from Prometheus scrape targets Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 4096) -promscrape.maxScrapeSize size The maximum size of scrape response in bytes to process from Prometheus targets. Bigger responses are rejected Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 16777216) -promscrape.minResponseSizeForStreamParse size The minimum target response size for automatic switching to stream parsing mode, which can reduce memory usage. See https://docs.victoriametrics.com/vmagent/#stream-parsing-mode Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 1000000)" }, { "data": "Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. This option also disables populating the scrapeseriesadded metric. See https://prometheus.io/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series -promscrape.nomad.waitTime duration Wait time used by Nomad service discovery. Default value is used if not set -promscrape.nomadSDCheckInterval duration Interval for checking for changes in Nomad. This works only if nomadsdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#nomadsd_configs for details (default 30s) -promscrape.openstackSDCheckInterval duration Interval for checking for changes in openstack API server. This works only if openstacksdconfigs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sdconfigs/#openstacksd_configs for details (default 30s) -promscrape.seriesLimitPerTarget int Optional limit on the number of unique time series a single scrape target can expose. See https://docs.victoriametrics.com/vmagent/#cardinality-limiter for more info -promscrape.streamParse Whether to enable stream parsing for metrics obtained from scrape targets. This may be useful for reducing memory usage when millions of metrics are exposed per each scrape target. It is possible to set 'streamparse: true' individually per each 'scrapeconfig' section in '-promscrape.config' for fine-grained control -promscrape.suppressDuplicateScrapeTargetErrors Whether to suppress 'duplicate scrape target' errors; see https://docs.victoriametrics.com/vmagent/#troubleshooting for details -promscrape.suppressScrapeErrors Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed. See also -promscrape.suppressScrapeErrorsDelay -promscrape.suppressScrapeErrorsDelay duration The delay for suppressing repeated scrape errors logging per each scrape targets. This may be used for reducing the number of log lines related to scrape errors. See also -promscrape.suppressScrapeErrors -promscrape.yandexcloudSDCheckInterval duration Interval for checking for changes in Yandex Cloud API. This works only if yandexcloudsdconfigs is configured in '-promscrape.config' file. 
See https://docs.victoriametrics.com/sdconfigs/#yandexcloudsd_configs for details (default 30s) -pushmetrics.disableCompression Whether to disable request body compression when pushing metrics to every -pushmetrics.url -pushmetrics.extraLabel array Optional labels to add to metrics pushed to every -pushmetrics.url . For example, -pushmetrics.extraLabel='instance=\"foo\"' adds instance=\"foo\" label to all the metrics pushed to every -pushmetrics.url Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -pushmetrics.header array Optional HTTP request header to send to every -pushmetrics.url . For example, -pushmetrics.header='Authorization: Basic foobar' adds 'Authorization: Basic foobar' header to every request to every -pushmetrics.url Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -pushmetrics.interval duration Interval for pushing metrics to every -pushmetrics.url (default 10s) -pushmetrics.url array Optional URL to push metrics exposed at /metrics page. See https://docs.victoriametrics.com/#push-metrics . By default, metrics exposed at /metrics page aren't pushed to any remote storage Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -relabelConfig string Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal -reloadAuthKey value Auth key for /-/reload http endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings. Flag value can be read from the given file when using -reloadAuthKey=file:///abs/path/to/file or -reloadAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -reloadAuthKey=http://host/path or -reloadAuthKey=https://host/path -retentionFilter array Retention filter in the format 'filter:retention'. For example, '{env=\"dev\"}:3d' configures the retention for time series with env=\"dev\" label to 3 days. See https://docs.victoriametrics.com/#retention-filters for details. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise/ Supports an array of values separated by comma or specified via multiple" }, { "data": "Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -retentionPeriod value Data with timestamps outside the retentionPeriod is automatically deleted. The minimum retentionPeriod is 24h or 1d. See also -retentionFilter The following optional suffixes are supported: s (second), m (minute), h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 1) -retentionTimezoneOffset duration The offset for performing indexdb rotation. If set to 0, then the indexdb rotation is performed at 4am UTC time per each -retentionPeriod. If set to 2h, then the indexdb rotation is performed at 4am EET time (the timezone with +2h offset) -search.cacheTimestampOffset duration The maximum duration since the current time for response data, which is always queried from the original raw data, without using the response cache. 
Increase this value if you see gaps in responses due to time synchronization issues between VictoriaMetrics and data sources. See also -search.disableAutoCacheReset (default 5m0s) -search.disableAutoCacheReset Whether to disable automatic response cache reset if a sample with timestamp outside -search.cacheTimestampOffset is inserted into VictoriaMetrics -search.disableCache Whether to disable response caching. This may be useful when ingesting historical data. See https://docs.victoriametrics.com/#backfilling . See also -search.resetRollupResultCacheOnStartup -search.graphiteMaxPointsPerSeries int The maximum number of points per series Graphite render API can return (default 1000000) -search.graphiteStorageStep duration The interval between datapoints stored in the database. It is used at Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API (default 10s) -search.ignoreExtraFiltersAtLabelsAPI Whether to ignore match[], extrafilters[] and extralabel query args at /api/v1/labels and /api/v1/label/.../values . This may be useful for decreasing load on VictoriaMetrics when extra filters match too many time series. The downside is that superfluous labels or series could be returned, which do not match the extra filters. See also -search.maxLabelsAPISeries and -search.maxLabelsAPIDuration -search.latencyOffset duration The time when data points become visible in query results after the collection. It can be overridden on per-query basis via latency_offset arg. Too small value can result in incomplete last points for query results (default 30s) -search.logQueryMemoryUsage size Log query and increment vmmemoryintensivequeriestotal metric each time the query requires more memory than specified by this flag. This may help detecting and optimizing heavy queries. Query logging is disabled by default. See also -search.logSlowQueryDuration and -search.maxMemoryPerQuery Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0) -search.logSlowQueryDuration duration Log queries with execution time exceeding this value. Zero disables slow query logging. See also -search.logQueryMemoryUsage (default 5s) -search.maxConcurrentRequests int The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores, while many concurrently executed requests may require high amounts of memory. See also -search.maxQueueDuration and -search.maxMemoryPerQuery (default 16) -search.maxExportDuration duration The maximum duration for /api/v1/export call (default 720h0m0s) -search.maxExportSeries int The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage (default 10000000) -search.maxFederateSeries int The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 1000000) -search.maxGraphiteSeries int The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000) -search.maxGraphiteTagKeys int The maximum number of tag keys returned from Graphite API, which returns tags. 
See https://docs.victoriametrics.com/#graphite-tags-api-usage (default 100000)" }, { "data": "int The maximum number of tag values returned from Graphite API, which returns tag values. See https://docs.victoriametrics.com/#graphite-tags-api-usage (default 100000) -search.maxLabelsAPIDuration duration The maximum duration for /api/v1/labels, /api/v1/label/.../values and /api/v1/series requests. See also -search.maxLabelsAPISeries and -search.ignoreExtraFiltersAtLabelsAPI (default 5s) -search.maxLabelsAPISeries int The maximum number of time series, which could be scanned when searching for the the matching time series at /api/v1/labels and /api/v1/label/.../values. This option allows limiting memory usage and CPU usage. See also -search.maxLabelsAPIDuration, -search.maxTagKeys, -search.maxTagValues and -search.ignoreExtraFiltersAtLabelsAPI (default 1000000) -search.maxLookback duration Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons -search.maxMemoryPerQuery size The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests . See also -search.logQueryMemoryUsage Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0) -search.maxPointsPerTimeseries int The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph. See also -search.maxResponseSeries (default 30000) -search.maxPointsSubqueryPerTimeseries int The maximum number of points per series, which can be generated by subquery. See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3 (default 100000) -search.maxQueryDuration duration The maximum duration for query execution (default 30s) -search.maxQueryLen size The maximum search query length in bytes Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 16384) -search.maxQueueDuration duration The maximum time the request waits for execution when -search.maxConcurrentRequests limit is reached; see also -search.maxQueryDuration (default 10s) -search.maxResponseSeries int The maximum number of time series which can be returned from /api/v1/query and /api/v1/query_range . The limit is disabled if it equals to 0. See also -search.maxPointsPerTimeseries and -search.maxUniqueTimeseries -search.maxSamplesPerQuery int The maximum number of raw samples a single query can process across all time series. This protects from heavy queries, which select unexpectedly high number of raw samples. See also -search.maxSamplesPerSeries (default 1000000000) -search.maxSamplesPerSeries int The maximum number of raw samples a single query can scan per each time series. This option allows limiting memory usage (default 30000000) -search.maxSeries int The maximum number of time series, which can be returned from /api/v1/series. 
This option allows limiting memory usage (default 30000) -search.maxSeriesPerAggrFunc int The maximum number of time series an aggregate MetricsQL function can generate (default 1000000) -search.maxStalenessInterval duration The maximum interval for staleness calculations. By default, it is automatically calculated from the median interval between samples. This flag could be useful for tuning Prometheus data model closer to Influx-style data model. See https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness for details. See also '-search.setLookbackToStep' flag -search.maxStatusRequestDuration duration The maximum duration for /api/v1/status/* requests (default 5m0s) -search.maxStepForPointsAdjustment duration The maximum step when /api/v1/query_range handler adjusts points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data (default 1m0s) -search.maxTSDBStatusSeries int The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage (default 10000000) -search.maxTagKeys int The maximum number of tag keys returned from /api/v1/labels . See also -search.maxLabelsAPISeries and -search.maxLabelsAPIDuration (default 100000) -search.maxTagValueSuffixesPerSearch int The maximum number of tag value suffixes returned from /metrics/find (default 100000)" }, { "data": "int The maximum number of tag values returned from /api/v1/label/<label_name>/values . See also -search.maxLabelsAPISeries and -search.maxLabelsAPIDuration (default 100000) -search.maxUniqueTimeseries int The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage (default 300000) -search.maxWorkersPerQuery int The maximum number of CPU cores a single query can use. The default value should work good for most cases. The flag can be set to lower values for improving performance of big number of concurrently executed queries. The flag can be set to bigger values for improving performance of heavy queries, which scan big number of time series (>10K) and/or big number of samples (>100M). There is no sense in setting this flag to values bigger than the number of CPU cores available on the system (default 16) -search.minStalenessInterval duration The minimum interval for staleness calculations. This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. See also '-search.maxStalenessInterval' -search.minWindowForInstantRollupOptimization value Enable cache-based optimization for repeated queries to /api/v1/query (aka instant queries), which contain rollup functions with lookbehind window exceeding the given value The following optional suffixes are supported: s (second), m (minute), h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 3h) -search.noStaleMarkers Set this flag to true if the database doesn't contain Prometheus stale markers, so there is no need in spending additional CPU time on its handling. Staleness markers may exist only in data obtained from Prometheus scrape targets -search.queryStats.lastQueriesCount int Query stats for /api/v1/status/top_queries is tracked on this number of last queries. 
Zero value disables query stats tracking (default 20000) -search.queryStats.minQueryDuration duration The minimum duration for queries to track in query stats at /api/v1/status/top_queries. Queries with lower duration are ignored in query stats (default 1ms) -search.resetCacheAuthKey value Optional authKey for resetting rollup cache via /internal/resetRollupResultCache call Flag value can be read from the given file when using -search.resetCacheAuthKey=file:///abs/path/to/file or -search.resetCacheAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -search.resetCacheAuthKey=http://host/path or -search.resetCacheAuthKey=https://host/path -search.resetRollupResultCacheOnStartup Whether to reset rollup result cache on startup. See https://docs.victoriametrics.com/#rollup-result-cache . See also -search.disableCache -search.setLookbackToStep Whether to fix lookback interval to 'step' query arg value. If set to true, the query model becomes closer to InfluxDB data model. If set to true, then -search.maxLookback and -search.maxStalenessInterval are ignored -search.treatDotsAsIsInRegexps Whether to treat dots as is in regexp label filters used in queries. For example, foo{bar=~\"a.b.c\"} will be automatically converted to foo{bar=~\"a\\\\.b\\\\.c\"}, i.e. all the dots in regexp filters will be automatically escaped in order to match only dot char instead of matching any char. Dots in \".+\", \".\" and \".{n}\" regexps aren't escaped. This option is DEPRECATED in favor of {__graphite__=\"a..c\"} syntax for selecting metrics matching the given Graphite metrics filter -selfScrapeInstance string Value for 'instance' label, which is added to self-scraped metrics (default \"self\") -selfScrapeInterval duration Interval for self-scraping own metrics at /metrics page -selfScrapeJob string Value for 'job' label, which is added to self-scraped metrics (default \"victoria-metrics\") -smallMergeConcurrency int Deprecated: this flag does nothing -snapshotAuthKey value authKey, which must be passed in query string to /snapshot* pages Flag value can be read from the given file when using -snapshotAuthKey=file:///abs/path/to/file or -snapshotAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -snapshotAuthKey=http://host/path or -snapshotAuthKey=https://host/path -snapshotCreateTimeout duration Deprecated: this flag does nothing -snapshotsMaxAge value Automatically delete snapshots older than -snapshotsMaxAge if it is set to non-zero" }, { "data": "Make sure that backup process has enough time to finish the backup before the corresponding snapshot is automatically deleted The following optional suffixes are supported: s (second), m (minute), h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 0) -sortLabels Whether to sort labels for incoming samples before writing them to storage. This may be needed for reducing memory usage at storage when the order of labels in incoming samples is random. For example, if m{k1=\"v1\",k2=\"v2\"} may be sent as m{k2=\"v2\",k1=\"v1\"}. Enabled sorting for labels can slow down ingestion performance a bit -statsd.disableAggregationEnforcement Whether to disable streaming aggregation requirement check. It's recommended to run statsdServer with pre-configured streaming aggregation to decrease load at database. -statsdListenAddr string TCP and UDP address to listen for Statsd plaintext data. Usually :8125 must be set. 
Doesn't work if empty. See also -statsdListenAddr.useProxyProtocol -statsdListenAddr.useProxyProtocol Whether to use proxy protocol for connections accepted at -statsdListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt -storage.cacheSizeIndexDBDataBlocks size Overrides max size for indexdb/dataBlocks cache. See https://docs.victoriametrics.com/single-server-victoriametrics/#cache-tuning Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0) -storage.cacheSizeIndexDBIndexBlocks size Overrides max size for indexdb/indexBlocks cache. See https://docs.victoriametrics.com/single-server-victoriametrics/#cache-tuning Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0) -storage.cacheSizeIndexDBTagFilters size Overrides max size for indexdb/tagFiltersToMetricIDs cache. See https://docs.victoriametrics.com/single-server-victoriametrics/#cache-tuning Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0) -storage.cacheSizeStorageTSID size Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/single-server-victoriametrics/#cache-tuning Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0) -storage.maxDailySeries int The maximum number of unique series can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/#cardinality-limiter . See also -storage.maxHourlySeries -storage.maxHourlySeries int The maximum number of unique series can be added to the storage during the last hour. Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/#cardinality-limiter . See also -storage.maxDailySeries -storage.minFreeDiskSpaceBytes size The minimum free disk space at -storageDataPath after which the storage stops accepting new data Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10000000) -storageDataPath string Path to storage data (default \"victoria-metrics-data\") -streamAggr.config string Optional path to file with stream aggregation config. See https://docs.victoriametrics.com/stream-aggregation/ . See also -streamAggr.keepInput, -streamAggr.dropInput and -streamAggr.dedupInterval -streamAggr.dedupInterval duration Input samples are de-duplicated with this interval before optional aggregation with -streamAggr.config . See also -streamAggr.dropInputLabels and -dedup.minScrapeInterval and https://docs.victoriametrics.com/stream-aggregation/#deduplication -streamAggr.dropInput Whether to drop all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also -streamAggr.keepInput and https://docs.victoriametrics.com/stream-aggregation/ -streamAggr.dropInputLabels array An optional list of labels to drop from samples before stream de-duplication and aggregation . See https://docs.victoriametrics.com/stream-aggregation/#dropping-unneeded-labels Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -streamAggr.ignoreFirstIntervals int Number of aggregation intervals to skip after the start. 
Increase this value if you observe incorrect aggregation results after restarts. It could be caused by receiving unordered delayed data from clients pushing data into the database. See https://docs.victoriametrics.com/stream-aggregation/#ignore-aggregation-intervals-on-start -streamAggr.ignoreOldSamples Whether to ignore input samples with old timestamps outside the current aggregation interval. See https://docs.victoriametrics.com/stream-aggregation/#ignoring-old-samples -streamAggr.keepInput Whether to keep all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also" }, { "data": "and https://docs.victoriametrics.com/stream-aggregation/ -tls array Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls Supports array of values separated by comma or specified via multiple flags. Empty values are set to false. -tlsAutocertCacheDir string Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise/ -tlsAutocertEmail string Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise/ -tlsAutocertHosts array Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise/ Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -tlsCertFile array Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. See also -tlsAutocertHosts Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -tlsCipherSuites array Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -tlsKeyFile array Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated. See also -tlsAutocertHosts Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -tlsMinVersion array Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. 
Supported values: TLS10, TLS11, TLS12, TLS13 Supports an array of values separated by comma or specified via multiple flags. Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces. -usePromCompatibleNaming Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels -version Show VictoriaMetrics version -vmalert.proxyURL string Optional URL for proxying requests to vmalert. For example, if -vmalert.proxyURL=http://vmalert:8880 , then alerting API requests such as /api/v1/rules from Grafana will be proxied to http://vmalert:8880/api/v1/rules -vmui.customDashboardsPath string Optional path to vmui dashboards. See https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/app/vmui/packages/vmui/public/dashboards -vmui.defaultTimezone string The default timezone to be used in vmui. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local
```
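To make the flag reference above more concrete, here is a minimal sketch of how a few of the documented flags could be combined when starting single-node VictoriaMetrics. The binary path and all values are illustrative assumptions only and should be tuned to the actual workload:

```
/path/to/victoria-metrics-prod \
  -storageDataPath=/var/lib/victoria-metrics \
  -retentionPeriod=3 \
  -storage.maxHourlySeries=1000000 \
  -storage.maxDailySeries=5000000 \
  -maxLabelsPerTimeseries=40 \
  -search.maxConcurrentRequests=16 \
  -search.maxQueryDuration=30s \
  -selfScrapeInterval=10s
```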
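Several of the `*AuthKey` flags above accept `file://` and `http(s)://` sources for their values. The following hedged sketch shows one way to keep keys out of the command line by reading them from local files; the paths and key values are made up for the example, and the default port 8428 is assumed:

```
echo -n 'metrics-secret' > /etc/victoria-metrics/metrics-auth-key
echo -n 'reload-secret' > /etc/victoria-metrics/reload-auth-key
chmod 600 /etc/victoria-metrics/metrics-auth-key /etc/victoria-metrics/reload-auth-key

/path/to/victoria-metrics-prod \
  -metricsAuthKey=file:///etc/victoria-metrics/metrics-auth-key \
  -reloadAuthKey=file:///etc/victoria-metrics/reload-auth-key

curl 'http://localhost:8428/metrics?authKey=metrics-secret'
```

The key must be passed via the `authKey` query arg, as noted in the flag descriptions above.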
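The -relabelConfig flag points to a file with relabeling rules applied to all ingested metrics. Below is a sketch under the assumption of Prometheus-compatible relabeling syntax; the rule contents, label names, file path and binary name are invented for illustration:

```
cat > /etc/victoria-metrics/relabel.yml <<'EOF'
# Drop every sample coming from the "noisy-test-job" job.
- action: drop
  source_labels: [job]
  regex: noisy-test-job
# Copy the "dc" label value into a "datacenter" label.
- action: replace
  source_labels: [dc]
  target_label: datacenter
EOF

/path/to/victoria-metrics-prod -relabelConfig=/etc/victoria-metrics/relabel.yml

kill -HUP "$(pidof victoria-metrics-prod)"
```

As noted for the flag above, the config is reloaded on SIGHUP, so rules can be updated without a restart.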
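The -pushmetrics.* flags above can push VictoriaMetrics' own /metrics to a remote endpoint. A sketch, where the destination URL, interval, label and header values are assumptions chosen for the example:

```
/path/to/victoria-metrics-prod \
  -pushmetrics.url=https://monitoring.example.com/api/v1/import/prometheus \
  -pushmetrics.interval=30s \
  -pushmetrics.extraLabel='instance="vm-single-1"' \
  -pushmetrics.header='Authorization: Bearer example-token'
```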
{ "category": "Orchestration & Management", "file_name": "join.md", "project_name": "APISIX", "subcategory": "API Gateway" }
[ { "data": "The cors Plugins lets you enable CORS easily. | Name | Type | Required | Default | Description | |:--|:--|:--|:-|:| | alloworigins | string | False | \"*\" | Origins to allow CORS. Use the scheme://host:port format. For example, https://somedomain.com:8081. If you have multiple origins, use a , to list them. If allowcredential is set to false, you can enable CORS for all origins by using . If allow_credential is set to true, you can forcefully allow CORS on all origins by using * but it will pose some security issues. | | allowmethods | string | False | \"*\" | Request methods to enable CORS on. For example GET, POST. Use , to add multiple methods. If allowcredential is set to false, you can enable CORS for all methods by using . If allow_credential is set to true, you can forcefully allow CORS on all methods by using * but it will pose some security issues. | | allowheaders | string | False | \"*\" | Headers in the request allowed when accessing a cross-origin resource. Use , to add multiple headers. If allowcredential is set to false, you can enable CORS for all request headers by using . If allow_credential is set to true, you can forcefully allow CORS on all request headers by using * but it will pose some security issues. | | exposeheaders | string | False | \"*\" | Headers in the response allowed when accessing a cross-origin resource. Use , to add multiple headers. If allowcredential is set to false, you can enable CORS for all response headers by using . If allow_credential is set to true, you can forcefully allow CORS on all response headers by using * but it will pose some security issues. | | max_age | integer | False | 5 | Maximum time in seconds the result is cached. If the time is within this limit, the browser will check the cached result. Set to -1 to disable caching. Note that the maximum value is browser dependent. See Access-Control-Max-Age for more details. | | allow_credential | boolean | False | false | When set to true, allows requests to include credentials like cookies. According to CORS specification, if you set this to true, you cannot use '*' to allow all for the other attributes. | | alloworiginsbyregex | array | False | nil | Regex to match origins that allow CORS. For example, [\".*\\.test.com$\"] can match all subdomains of" }, { "data": "When set to specified range, only domains in this range will be allowed, no matter what alloworigins is. | | alloworiginsbymetadata | array | False | nil | Origins to enable CORS referenced from alloworigins set in the Plugin metadata. For example, if \"allow_origins\": {\"EXAMPLE\": \"https://example.com\"} is set in the Plugin metadata, then [\"EXAMPLE\"] can be used to allow CORS on the origin https://example.com. | | Name | Type | Required | Default | Description | |:|:-|:--|:-|:--| | timingalloworigins | string | False | nil | Origin to allow to access the resource timing information. See Timing-Allow-Origin. Use the scheme://host:port format. For example, https://somedomain.com:8081. If you have multiple origins, use a , to list them. | | timingalloworiginsbyregex | array | False | nil | Regex to match with origin for enabling access to the resource timing information. For example, [\".*\\.test.com\"] can match all subdomain of test.com. When set to specified range, only domains in this range will be allowed, no matter what timingalloworigins is. | The Timing-Allow-Origin header is defined in the Resource Timing API, but it is related to the CORS concept. 
Suppose you have 2 domains, domain-A.com and domain-B.com. You are on a page on domain-A.com, you have an XHR call to a resource on domain-B.com and you need its timing information. You can allow the browser to show this timing information only if you have cross-origin permissions on domain-B.com. So, you have to set the CORS headers first, then access the domain-B.com URL, and if you set Timing-Allow-Origin, the browser will show the requested timing information. | Name | Type | Required | Description | |:--|:-|:--|:-| | alloworigins | object | False | A map with origin reference and allowed origins. The keys in the map are used in the attribute alloworiginsbymetadata and the value are equivalent to the allow_origins attribute of the Plugin. | You can enable the Plugin on a specific Route or Service: ``` curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '{ \"uri\": \"/hello\", \"plugins\": { \"cors\": {} }, \"upstream\": { \"type\": \"roundrobin\", \"nodes\": { \"127.0.0.1:8080\": 1 } }}'``` After enabling the Plugin, you can make a request to the server and see the CORS headers returned: ``` curl http://127.0.0.1:9080/hello -v``` ``` ...< Server: APISIX web server< Access-Control-Allow-Origin: < Access-Control-Allow-Methods: < Access-Control-Allow-Headers: < Access-Control-Expose-Headers: < Access-Control-Max-Age: 5...``` To remove the cors Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect. ``` curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '{ \"uri\": \"/hello\", \"plugins\": {}, \"upstream\": { \"type\": \"roundrobin\", \"nodes\": { \"127.0.0.1:8080\": 1 } }}'```" } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "EnRoute OneStep Ingress", "subcategory": "API Gateway" }
[ { "data": "Getting Started with EnRoute Gateway EnRoute Universal Gateway is a an API gateway built to support traditional and cloud-native use cases. It is designed to run either as a Kubernetes Ingress Gateway, Standalone Gateway, Horizontally scaling L7 API gateway or a Mesh of Gateways. Depending on the need of the user, the environment, the application, either one or many of these solutions can be deployed. EnRoute also supports plugins/filters to extend functionality and enforce policies. The features page lists the available plugins for the Gateway. More details about each of the plugins can also be found on plugin pages. A consistent policy framework across all these network components makes the EnRoute Universal Gateway a versatile and powerful solution. This article covers how to get started with the EnRoute Kubernetes Ingress Gateway. The minimum requirement is a working Kubernetes cluster. We first install example workloads and provide connectivity and security for these workloads using EnRoute. EnRoute configuration includes Global Configuration, per-host config and per-route config. EnRoute provides a helm chart to easily configure each of these aspects EnRoute can be easily configure using helm charts. The following helm charts are available | Chart | Description | |:|:--| | enroute | Use this chart to configure and install EnRoute Ingress API Gateway | | demo-services | Use this chart to install workloads used to demo EnRoute (eg: httpbin, grpcbin) | | service-globalconfig | Use this chart to configure EnRoute global configuration (eg: Global Rate-Limit Engine Config, Configuration for Mesh Integration, Filters for all traffic - eg: Health Checker, Lua, etc.) | | service-host-route | Use this chart to provide L7 connectivity and policy for a service using a host-and-route (GatewayHost) or just a route (ServiceRoute) | The demo-services helm chart installs service httpbin, grpcbin and echo. Traffic going to a service needs a host and a route. A host is the root of a configuration tree and with a route defines a way to reach the service. A resource of type GatewayHost can be use to create a Host and a route. A resource of ServiceRoute creates a route to a service while attaching that route to an existing host. Below we show a list of configuration items we create to make these services externally accessible | Service | Host | Route | Resource | Notes | |:-|:|:--|:-|:| | httpbin | * | / | GatewayHost | Create a Host and a Route | | echo | * | /echo | ServiceRoute | Create a Route for echo that gets associated to Host created in previous step | | grpcbin |" }, { "data": "| / | GatewayHost | Create a Host and Route | We add a GatewayHost that includes a host (fqdn *) and route / to make service httpbin externally accessible. Next, a ServiceRoute for route /echo maps the service echo to the host created in earlier step (fqdn *) Next, another GatewayHost for route / maps the service grpcbin to host (fqdn grpcbin.enroutedemo.com) and route / We will go through the following steps Add the helm chart - ``` helm repo add saaras https://charts.getenroute.io ``` Check repositories - ``` helm search repo ``` ``` NAME CHART VERSION APP VERSION DESCRIPTION saaras/demo-services 0.1.0 0.1.0 Demo Workloads - httpbin, echo, grpcbin saaras/enroute 0.7.0 v0.11.0 EnRoute API Gateway saaras/service-globalconfig 0.2.0 v0.11.0 Global Config and Global Filters for EnRoute saaras/service-host-route 0.2.0 v0.11.0 Host (GatewayHost), Route (ServiceRoute) co... 
saaras/service-policy 0.5.0 v0.11.0 Demo Service L7 Policy using EnRoute API Gateway ``` ``` helm install enroute-demo saaras/enroute \\ --set serviceAccount.create=true \\ --create-namespace \\ --namespace enroutedemo ``` ``` NAME: enroute-demo LAST DEPLOYED: Mon Jul 25 21:53:15 2022 NAMESPACE: enroutedemo STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Ingress API Gateway Community Edition Installed! Request a free evaluation license for enterprise version by sending an email to contact@saaras.io Slack Channel - https://slack.saaras.io Getting Started Guide - https://getenroute.io/docs/getting-started-enroute-ingress-controller/ EnRoute Features - https://getenroute.io/features/ ``` ``` kubectl get all -n enroutedemo ``` ``` NAME READY STATUS RESTARTS AGE pod/enroute-demo-5b4d45ff6c-mzv4b 3/3 Running 0 16m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/enroute-demo LoadBalancer 10.43.91.42 212.2.242.227 80:30808/TCP,443:31920/TCP 16m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/enroute-demo 1/1 1 1 16m NAME DESIRED CURRENT READY AGE replicaset.apps/enroute-demo-5b4d45ff6c 1 1 1 16m ``` ``` kubectl create namespace echo kubectl create namespace httpbin kubectl create namespace grpc kubectl create namespace avote helm install demo-services saaras/demo-services ``` ``` kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2d21h kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d21h enroutedemo enroute-demo LoadBalancer 10.43.91.42 212.2.242.227 80:30808/TCP,443:31920/TCP 16m echo echo ClusterIP 10.43.137.85 <none> 9001/TCP 21s httpbin httpbin ClusterIP 10.43.104.4 <none> 9000/TCP 21s grpc grpcbin ClusterIP 10.43.56.47 <none> 9002/TCP 21s ``` However we still need to program EnRoute to expose a service. ``` helm install httpbin-host saaras/service-host-route \\ --namespace=httpbin \\ --set service.name=httpbin \\ --set service.prefix=/ \\ --set service.port=9000 ``` Note the public-IP of EnRoute LoadBalancer type of service, we can use this IP address to send test traffic ``` curl 212.2.242.227/get ``` ``` { \"args\": {}, \"headers\": { \"Host\": \"212.2.242.227\", \"User-Agent\": \"curl/7.68.0\", \"X-Envoy-Expected-Rq-Timeout-Ms\": \"15000\", \"X-Envoy-Internal\": \"true\" }, \"origin\": \"10.42.1.1\", \"url\": \"http://212.2.242.227/get\" } ``` ``` kubectl describe \\ -n httpbin gatewayhosts.enroute.saaras.io \\ httpbin-9000-gatewayhost-httpbin-host ``` ``` Name: httpbin-9000-gatewayhost-httpbin-host Namespace: httpbin Labels: app=httpbin app.kubernetes.io/managed-by=Helm Annotations:" }, { "data": "httpbin-host meta.helm.sh/release-namespace: httpbin API Version: enroute.saaras.io/v1 Kind: GatewayHost Metadata: ... 
Spec: Routes: Conditions: Prefix: / Services: Name: httpbin Port: 9000 Virtualhost: Fqdn: * Events: <none> ``` ``` helm install echo-route saaras/service-host-route --namespace=echo --set service.name=echo --set service.port=9001 --set routeonly=true --set service.prefix=/echo ``` ``` NAME: echo-route LAST DEPLOYED: Mon Jul 25 22:38:15 2022 NAMESPACE: echo STATUS: deployed REVISION: 1 TEST SUITE: None ``` ``` kubectl describe -n echo serviceroutes.enroute.saaras.io echo-9001-serviceroute-echo-route ``` ``` Name: echo-9001-serviceroute-echo-route Namespace: echo Labels: app=echo app.kubernetes.io/managed-by=Helm Annotations: meta.helm.sh/release-name: echo-route meta.helm.sh/release-namespace: echo API Version: enroute.saaras.io/v1 Kind: ServiceRoute Metadata: ... Spec: Fqdn: * Route: Conditions: Prefix: /echo Services: Name: echo Port: 9001 Events: <none> ``` ``` curl 212.2.242.227/get ``` ``` { \"args\": {}, \"headers\": { \"Host\": \"212.2.242.227\", \"User-Agent\": \"curl/7.68.0\", \"X-Envoy-Expected-Rq-Timeout-Ms\": \"15000\", \"X-Envoy-Internal\": \"true\" }, \"origin\": \"192.168.1.8\", \"url\": \"http://212.2.242.227/get\" } ``` helm install grpcbin-host saaras/service-host-route namespace=grpc set service.name=grpcbin set service.prefix=/ set service.port=9002 set service.fqdn=grpcbin.enroutedemo.com set service.protocol=h2c Setup DNS to point to external IP of EnRoute LoadBalancer service and send test traffic ``` ./go/bin/grpcurl -v -plaintext grpcbin.enroutedemo.com:80 hello.HelloService.SayHello ``` ``` Resolved method descriptor: rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse ); Request metadata to send: (empty) Response headers received: content-type: application/grpc date: Mon, 25 Jul 2022 23:12:24 GMT server: envoy x-envoy-upstream-service-time: 2 Response contents: { \"reply\": \"hello noname\" } Response trailers received: (empty) Sent 0 requests and received 1 response ``` The EnRoute service can be reached using the External-IP and a request on path /get sends it to the httpbin service ``` curl 212.2.246.47/get ``` ``` { \"args\": {}, \"headers\": { \"Host\": \"212.2.246.47\", \"User-Agent\": \"curl/7.68.0\", \"X-Envoy-Expected-Rq-Timeout-Ms\": \"15000\", \"X-Envoy-Internal\": \"true\" }, \"origin\": \"10.42.0.41\", \"url\": \"http://212.2.246.47/get\" } ``` The above steps create the following routing rules - ``` GatewayHost (```*```) +> Route (/) -> Service (httpbin) +> Route (/echo) -> Service (echo) GatewayHost (grpcbin.enroutedemo.com) -> Route (/) -> Service (grpcbin) ``` Note that the route for services httpbin and grpcbin is setup using GatewayHost and the route for service echo is setup using ServiceRoute type of resource EnRoute can be used to protect services outside kubernetes using the standalone gateway or services running inside kubernetes. EnRoute follows a configuration model similar to Envoy and is extensible using Filters. It uses filters to extend functionality at the global Service level and per-route level. The config objects used for this are - GatewayHost, Route, HttpFilter, RouteFilter and Service as shown here for Kubernetes gateway. Regardless of where the workload runs, a consistent service policy can be defined once and applied to secure any service running inside or without Kubernetes using Envoy. EnRoute provides key functionality using modular filters which make it easy to secure any service." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Emissary-Ingress", "subcategory": "API Gateway" }
[ { "data": "Products Built on Envoy Proxy BY USE CASE BY INDUSTRY BY ROLE LEARN LISTEN ACT Company Docs DocsTelepresenceTelepresence Quick Start This quickstart provides the fastest way to get an understanding of how Telepresence can speed up your development in Kubernetes. It should take you about 5-10 minutes. You'll create a local cluster using Kind with a sample app installed, and use Telepresence to Then we'll point you to some next steps you can take, including trying out collaboration features and trying it in your own infrastructure. Youll need kubectl installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster. You will also need Docker installed. The sample application instructions default to Python, which is pre-installed on MacOS and Linux. If you are on Windows and don't already have Python installed, you can install it from the official Python site. There are also instructions for NodeJS, Java and Go if you already have those installed and prefer to work in them. We offer an easy installation path using an MSI Installer. However if you'd like to setup Telepresence using Powershell, you can run these commands: We provide a repo that sets up a local cluster for you with the in-cluster Telepresence components and a sample app already installed. It does not need sudo or Run as Administrator privileges. Telepresence connects your local workstation to a namespace in your remote Kubernetes cluster, allowing you to talk to cluster resources like your laptop is in the selected namespace of the cluster. Intercepts can only be created in the selected namespace. Connect to the cluster: telepresence connect --namespace default Now we'll test that Telepresence is working properly by accessing a service running in the cluster. Telepresence has merged your local IP routing tables and DNS resolution with the clusters, so you can talk to the cluster in its DNS language and to services on their cluster IP address. Open up a browser and go to http://verylargejavaservice.default:8080. As you can see you've loaded up a dashboard showing the architecture of the sample app. You are connected to the VeryLargeJavaService, which talks to the DataProcessingService as an upstream dependency. The DataProcessingService in turn has a dependency on VeryLargeDatastore. You were able to connect to it using the cluster DNS name thanks to Telepresence. We'll take on the role of a DataProcessingService" }, { "data": "We want to be able to connect to that big test database that everyone has that dates back to the founding of the company and has all the critical test scenarios and is too big to run locally. In the other direction, VeryLargeJavaService is developed by another team and we need to make sure with each change that we are being good upstream citizens and maintaining valid contracts with that service. Historically, when developing microservices with Kubernetes, your choices have been to run an entire set of services in a cluster or namespace just for you, and spend 15 minutes on every one-line change, pushing the code, waiting for it to build, waiting for it to deploy, etc. Or, you could run all 50 services in your environment on your laptop, and be deafened by the fans. With Telepresence, you can intercept traffic from a service in the cluster and route it to your laptop, effectively replacing the cluster version with your local development environment. This gives you back the fast feedback loop of local development, and access to your preferred tools like your favorite IDE or debugger. 
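Before trying an intercept, the merged routing and DNS can also be sanity-checked from a terminal. A minimal sketch, assuming the quick-start cluster above and an active telepresence connect session against the default namespace:
```
# Confirm the local daemons are connected and show the active namespace.
telepresence status

# Cluster DNS names resolve from the laptop while connected, so the
# sample dashboard answers on its in-cluster name and port.
curl -s http://verylargejavaservice.default:8080 | head -n 5
```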
And you still have access to all the cluster resources via telepresence connect. Now you'll see this in action. Look back at your browser tab looking at the app dashboard. You see the EdgyCorp WebApp with a green title and green pod in the diagram. The local version of the code has the UI color set to blue instead of green. Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: Start the intercept with the intercept command, setting the service name and port: telepresence intercept dataprocessingservice --port 3000 Go to the frontend service again in your browser and refresh. You will now see the blue elements in the app. We've now set up a local development environment for the DataProcessingService, and we've created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. Use personal intercepts to get specific requests when working with colleagues. Control what your laptop can reach in the cluster while connected. Develop in a hybrid local/cluster environment using Telepresence for Docker Compose. We're here to help if you have questions." } ]
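The full intercept loop, including cleanup, can be driven from the command line. A sketch that assumes the quick-start service name used above and a local copy of the DataProcessingService listening on port 3000:
```
# Route cluster traffic for the service to the local process on port 3000.
telepresence intercept dataprocessingservice --port 3000

# Confirm the intercept is active.
telepresence list

# When finished, remove the intercept and disconnect from the cluster.
telepresence leave dataprocessingservice
telepresence quit
```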
{ "category": "Orchestration & Management", "file_name": "docs.getenroute.io.md", "project_name": "EnRoute OneStep Ingress", "subcategory": "API Gateway" }
[ { "data": "EnRoute Documentation, Getting Started Guides, Plugins, Blogs and Features Open-source Apache Licensed. GitHub v1.0.0 EnRoute provides detailed helm templates for quick configuration EnRoute is a light-weight shim that follows Envoy config model. It can be easily configured and extended. EnRoute packages a lot of premium features like JWT, WebAssembly Support, L7 Rate-Limits and more for free in the EnRoute WebAssembly support makes it easily extensible in a language of choice EnRoute works with minimal config and minimal code. It's fast! Quick support on Slack channel A consistent policy framework across all these network components makes the EnRoute Universal Gateway a versatile and powerful solution. This article covers how to get started with the EnRoute Kubernetes Ingress Gateway. The minimum requirement is a working Kubernetes cluster. We first install example workloads and provide connectivity and security for these workloads using EnRoute. EnRoute configuration includes Global Configuration, per-host config and per-route config. EnRoute provides a helm chart to easily configure each of these aspects. EnRoute can be easily configured using helm charts. The following helm charts are available | Chart | Description | |:|:--| | enroute | Use this chart to configure and install EnRoute Ingress API Gateway | | demo-services | Use this chart to install workloads used to demo EnRoute (eg: httpbin, grpcbin) | | service-globalconfig | Use this chart to configure EnRoute global configuration (eg: Global Rate-Limit Engine Config, Configuration for Mesh Integration, Filters for all traffic - eg: Health Checker, Lua, etc.) | | service-host-route | Use this chart to provide L7 connectivity and policy for a service using a host-and-route (GatewayHost) or just a route (ServiceRoute) | The demo-services helm chart installs service httpbin, grpcbin and echo. Traffic going to a service needs a host and a route. A host is the root of a configuration tree and with a route defines a way to reach the service. A resource of type GatewayHost can be used to create a Host and a route. A resource of type ServiceRoute creates a route to a service while attaching that route to an existing host. Below we show a list of configuration items we create to make these services externally accessible | Service | Host | Route | Resource | Notes | |:-|:|:--|:-|:| | httpbin | * | / | GatewayHost | Create a Host and a Route | | echo | * | /echo | ServiceRoute | Create a Route for echo that gets associated to Host created in previous step | | grpcbin |" }, { "data": "| / | GatewayHost | Create a Host and Route | We add a GatewayHost that includes a host (fqdn *) and route / to make service httpbin externally accessible. Next, a ServiceRoute for route /echo maps the service echo to the host created in earlier step (fqdn *) Next, another GatewayHost for route / maps the service grpcbin to host (fqdn grpcbin.enroutedemo.com) and route / We will go through the following steps Add the helm chart - ``` helm repo add saaras https://charts.getenroute.io ``` Check repositories - ``` helm search repo ``` ``` NAME CHART VERSION APP VERSION DESCRIPTION saaras/demo-services 0.1.0 0.1.0 Demo Workloads - httpbin, echo, grpcbin saaras/enroute 0.7.0 v0.11.0 EnRoute API Gateway saaras/service-globalconfig 0.2.0 v0.11.0 Global Config and Global Filters for EnRoute saaras/service-host-route 0.2.0 v0.11.0 Host (GatewayHost), Route (ServiceRoute) co... 
saaras/service-policy 0.5.0 v0.11.0 Demo Service L7 Policy using EnRoute API Gateway ``` ``` helm install enroute-demo saaras/enroute \\ --set serviceAccount.create=true \\ --create-namespace \\ --namespace enroutedemo ``` ``` NAME: enroute-demo LAST DEPLOYED: Mon Jul 25 21:53:15 2022 NAMESPACE: enroutedemo STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Ingress API Gateway Community Edition Installed! Request a free evaluation license for enterprise version by sending an email to contact@saaras.io Slack Channel - https://slack.saaras.io Getting Started Guide - https://getenroute.io/docs/getting-started-enroute-ingress-controller/ EnRoute Features - https://getenroute.io/features/ ``` ``` kubectl get all -n enroutedemo ``` ``` NAME READY STATUS RESTARTS AGE pod/enroute-demo-5b4d45ff6c-mzv4b 3/3 Running 0 16m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/enroute-demo LoadBalancer 10.43.91.42 212.2.242.227 80:30808/TCP,443:31920/TCP 16m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/enroute-demo 1/1 1 1 16m NAME DESIRED CURRENT READY AGE replicaset.apps/enroute-demo-5b4d45ff6c 1 1 1 16m ``` ``` kubectl create namespace echo kubectl create namespace httpbin kubectl create namespace grpc kubectl create namespace avote helm install demo-services saaras/demo-services ``` ``` kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2d21h kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d21h enroutedemo enroute-demo LoadBalancer 10.43.91.42 212.2.242.227 80:30808/TCP,443:31920/TCP 16m echo echo ClusterIP 10.43.137.85 <none> 9001/TCP 21s httpbin httpbin ClusterIP 10.43.104.4 <none> 9000/TCP 21s grpc grpcbin ClusterIP 10.43.56.47 <none> 9002/TCP 21s ``` However we still need to program EnRoute to expose a service. ``` helm install httpbin-host saaras/service-host-route \\ --namespace=httpbin \\ --set service.name=httpbin \\ --set service.prefix=/ \\ --set service.port=9000 ``` Note the public-IP of EnRoute LoadBalancer type of service, we can use this IP address to send test traffic ``` curl 212.2.242.227/get ``` ``` { \"args\": {}, \"headers\": { \"Host\": \"212.2.242.227\", \"User-Agent\": \"curl/7.68.0\", \"X-Envoy-Expected-Rq-Timeout-Ms\": \"15000\", \"X-Envoy-Internal\": \"true\" }, \"origin\": \"10.42.1.1\", \"url\": \"http://212.2.242.227/get\" } ``` ``` kubectl describe \\ -n httpbin gatewayhosts.enroute.saaras.io \\ httpbin-9000-gatewayhost-httpbin-host ``` ``` Name: httpbin-9000-gatewayhost-httpbin-host Namespace: httpbin Labels: app=httpbin app.kubernetes.io/managed-by=Helm Annotations: meta.helm.sh/release-name: httpbin-host" }, { "data": "httpbin API Version: enroute.saaras.io/v1 Kind: GatewayHost Metadata: ... 
Spec: Routes: Conditions: Prefix: / Services: Name: httpbin Port: 9000 Virtualhost: Fqdn: * Events: <none> ``` ``` helm install echo-route saaras/service-host-route --namespace=echo --set service.name=echo --set service.port=9001 --set routeonly=true --set service.prefix=/echo ``` ``` NAME: echo-route LAST DEPLOYED: Mon Jul 25 22:38:15 2022 NAMESPACE: echo STATUS: deployed REVISION: 1 TEST SUITE: None ``` ``` kubectl describe -n echo serviceroutes.enroute.saaras.io echo-9001-serviceroute-echo-route ``` ``` Name: echo-9001-serviceroute-echo-route Namespace: echo Labels: app=echo app.kubernetes.io/managed-by=Helm Annotations: meta.helm.sh/release-name: echo-route meta.helm.sh/release-namespace: echo API Version: enroute.saaras.io/v1 Kind: ServiceRoute Metadata: ... Spec: Fqdn: * Route: Conditions: Prefix: /echo Services: Name: echo Port: 9001 Events: <none> ``` ``` curl 212.2.242.227/get ``` ``` { \"args\": {}, \"headers\": { \"Host\": \"212.2.242.227\", \"User-Agent\": \"curl/7.68.0\", \"X-Envoy-Expected-Rq-Timeout-Ms\": \"15000\", \"X-Envoy-Internal\": \"true\" }, \"origin\": \"192.168.1.8\", \"url\": \"http://212.2.242.227/get\" } ``` ``` helm install grpcbin-host saaras/service-host-route --namespace=grpc --set service.name=grpcbin --set service.prefix=/ --set service.port=9002 --set service.fqdn=grpcbin.enroutedemo.com --set service.protocol=h2c ``` Set up DNS to point to the external IP of the EnRoute LoadBalancer service and send test traffic ``` ./go/bin/grpcurl -v -plaintext grpcbin.enroutedemo.com:80 hello.HelloService.SayHello ``` ``` Resolved method descriptor: rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse ); Request metadata to send: (empty) Response headers received: content-type: application/grpc date: Mon, 25 Jul 2022 23:12:24 GMT server: envoy x-envoy-upstream-service-time: 2 Response contents: { \"reply\": \"hello noname\" } Response trailers received: (empty) Sent 0 requests and received 1 response ``` The EnRoute service can be reached using the External-IP, and a request on path /get is routed to the httpbin service ``` curl 212.2.246.47/get ``` ``` { \"args\": {}, \"headers\": { \"Host\": \"212.2.246.47\", \"User-Agent\": \"curl/7.68.0\", \"X-Envoy-Expected-Rq-Timeout-Ms\": \"15000\", \"X-Envoy-Internal\": \"true\" }, \"origin\": \"10.42.0.41\", \"url\": \"http://212.2.246.47/get\" } ``` The above steps create the following routing rules - ``` GatewayHost (```*```) +> Route (/) -> Service (httpbin) +> Route (/echo) -> Service (echo) GatewayHost (grpcbin.enroutedemo.com) -> Route (/) -> Service (grpcbin) ``` Note that the routes for services httpbin and grpcbin are set up using GatewayHost and the route for service echo is set up using a ServiceRoute type of resource. EnRoute can be used to protect services outside kubernetes using the standalone gateway or services running inside kubernetes. EnRoute follows a configuration model similar to Envoy and is extensible using Filters. It uses filters to extend functionality at the global Service level and per-route level. The config objects used for this are - GatewayHost, Route, HttpFilter, RouteFilter and Service as shown here for Kubernetes gateway. Regardless of where the workload runs, a consistent service policy can be defined once and applied to secure any service running inside or outside Kubernetes using Envoy. EnRoute provides key functionality using modular filters which make it easy to secure any service." } ]
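If DNS for grpcbin.enroutedemo.com is not in place yet, the gRPC route can still be exercised by dialing the LoadBalancer IP directly and pinning the :authority header to the configured FQDN. A sketch, assuming grpcurl is installed and the external IP from this walkthrough:
```
# Present the configured FQDN while dialing the gateway IP directly,
# so EnRoute matches the grpcbin GatewayHost (h2c upstream).
grpcurl -v -plaintext -authority grpcbin.enroutedemo.com \
  212.2.242.227:80 hello.HelloService.SayHello
```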
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Gloo", "subcategory": "API Gateway" }
[ { "data": "Version: latest Get started Spin up an API Gateway for the apps in your cluster, and set up intelligent network traffic routing. About Learn about the benefits, architecture, and deployment patterns of Gloo Mesh Gateway. Setup Prepare, customize, upgrade, and uninstall your Gloo Mesh Gateway setup. Gateway listeners Set up listeners on your ingress gateway to set up ingress to workloads in your cluster. Traffic management Set up intelligent intra-mesh and multicluster routing for your Kubernetes and non-Kubernetes workloads. Security Use Gloo policies to secure the traffic within your service mesh environment. Resiliency Use Gloo policies to control the traffic within your service mesh environment. Observability Monitor your Istio workload health, and access metrics, logs, and traces to troubleshoot issues. Portal Create a developer portal for your users to discover and access your APIs securely. GraphQL Create GraphQL APIs directly in Envoy using declarative configuration with Gloo Mesh Gateway. AWS Lambda Invoke AWS Lambda functions from Gloo Mesh Gateway. Reference Review reference documentation for Gloo Mesh Gateway, such as API, CLI, Helm, and version reference. Troubleshooting Troubleshoot and debug your Gloo Mesh Gateway setup. Get help and support Get help, training, and other forms of support. Solo.io copyright 2024" } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "Hango", "subcategory": "API Gateway" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\/)README\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? glob character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
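The qualifiers above can be combined freely with boolean operators; the queries below are illustrative combinations of the syntax described so far (the repository, organization, and path names are only examples):
```
repo:github-linguist/linguist language:ruby sparse index NOT path:tests
org:github (language:go OR language:markdown) path:docs/*.md
user:octocat path:/(^|\/)Dockerfile$/ apt-get
```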
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
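As a recap of the syntax covered above, a few illustrative queries that pull together the definition, content, and repository-property filters (the repository and organization names are only examples):
```
language:go symbol:WithContext NOT is:fork
language:rust symbol:/^String::to_.*/ NOT is:archived
org:github content:TODO path:*.py
```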
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "Hango", "subcategory": "API Gateway" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Kong", "subcategory": "API Gateway" }
[ { "data": "Set up your Gateway in under 5 minutes with Kong Konnect: Kong Konnect is an API lifecycle management platform that lets you build modern applications better, faster, and more securely. Start for Free To learn more about what you can do with Kong Gateway, see Features. Kong Gateway is a lightweight, fast, and flexible cloud-native API gateway. An API gateway is a reverse proxy that lets you manage, configure, and route requests to your APIs. Kong Gateway runs in front of any RESTful API and can be extended through modules and plugins. Its designed to run on decentralized architectures, including hybrid-cloud and multi-cloud deployments. With Kong Gateway, users can: Looking for additional help? Free training and curated content, just for you: Kong Gateway is a Lua application running in Nginx. Kong Gateway is distributed along with OpenResty, which is a bundle of modules that extend the lua-nginx-module. This sets the foundations for a modular architecture, where plugins can be enabled and executed at runtime. At its core, Kong Gateway implements database abstraction, routing, and plugin management. Plugins can live in separate code bases and be injected anywhere into the request lifecycle, all with a few lines of code. Kong provides many plugins for you to use in your Gateway deployments. You can also create your own custom plugins. For more information, see the plugin development guide, the PDK reference, and the guide on creating plugins with other languages (JavaScript, Go, and Python). There are two ways to deploy Kong Gateway: Managed with Kong Konnect, and self-managed. If youre trying out Kong Gateway for the first time, we recommend starting with Kong Konnect. Konnect provides the easiest way to get started with Kong Gateway. The global control plane is hosted in the cloud by Kong, and you manage the individual data plane nodes within your preferred network environment. Konnect offers two pricing packages: Plus: Our self-serve pay-as-you-go pricing model, giving you access to the Konnect platform in its entirety while offering the flexibility to only pay for the services your organization uses. Enterprise: With an Enterprise subscription, you have access to the entire Kong Konnect suite and: For more information, visit the pricing page. Figure 1: Diagram of Kong Gateway data planes connected to a Konnect control plane. Requests flow from an API client into the Gateway data planes, are modified and managed by the proxy based on your control plane configuration, and are forwarded to upstream services. Kong Gateway is available in two different packages: Open Source (OSS) and Enterprise. Kong Gateway (OSS): An open-source package containing the basic API gateway functionality and open-source plugins. You can manage the open-source Gateway with Kongs Admin API, Kong Manager Open Source, or with declarative configuration. Kong Gateway Enterprise (available in Free or Enterprise mode): Kongs API gateway with added functionality. You can manage Kong Gateway Enterprise in Free or Enterprise mode with Kongs Admin API, declarative configuration, or Kong Manager. Figure 2: Diagram of Kong Gateway key features. Kong Gateway (OSS) provides basic functionality, while Kong Gateway Enterprise builds on top of the open-source foundation with advanced proxy features. 
Requests flow from an API client into the Gateway, are modified and managed by the proxy based on your Gateway configuration, and forwarded to upstream" }, { "data": "| Unnamed: 0 | Open Source Open Source Get Started | Kong Gateway Enterprise Kong Gateway Enterprise Contact Sales | |:|--:|-:| | API Infrastructure Modernization | nan | nan | | Fast, Lightweight, Cloud-Native API Gateway | nan | nan | | End-to-End Automation Drive a GitOps flow of API design and execution | nan | nan | | Kong Ingress Controller Deploy APIs to Kubernetes in a native fashion | nan | nan | | Gateway Mocking Mock API responses directly on the API gateway | nan | nan | | Kong Manager: Admin GUI Visually manage Kong cluster, plugins, APIs, and consumers | nan | nan | | Traffic Management and Transformations | nan | nan | | Basic Traffic Control Plugins Manage ACME certificates, basic rate limiting, and lightweight caching | nan | nan | | Simple Data Transformations Add or remove headers, JSON data, or query strings | nan | nan | | gRPC Transformations Translate requests from gRPC-Web and REST to backend gRPC services | nan | nan | | GraphQL Convert GraphQL queries to REST requests. Rate limit and cache GraphQL queries. | nan | nan | | Request Validation Validate requests using either Kongs own schema validator or a JSON Schema Draft 4-compliant validator | nan | nan | | jq Transformations Advanced JSON transformations of requests or responses with the ability to chain transformations | nan | nan | | Advanced Caching Cache responses and optimize for high scale by integrating distributed backends | nan | nan | | Advanced Rate Limiting Enterprise-grade rate limiting with sliding window controls | nan | nan | | Security and Governance | nan | nan | | Authentication Common methods of API authentication - Basic Auth, HMAC, JWT Key Auth, limited OAuth 2.0, limited LDAP | nan | nan | | Advanced Authentication Enterprise-grade API authentication - Full OAuth 2.0, OpenID Connect, Vault, mutual TLS, JWT signing/resigning, full LDAP | nan | nan | | Role-Based Access Control (RBAC) Control gateway configurations based on a user's role in the organization | nan | nan | | Basic Authorization (Bot Detection, CORS controls, ACLs) Control access to APIs by rules of user behavior and control lists | nan | nan | | Advanced Authorization (OPA) Control access to APIs with complex, programmable, enterprise-wide rules | nan | nan | | Secret Management Encrypt sensitive keys, certificates, and passwords | nan | nan | | FIPS 140-2 Support Kong Gateway now provides a FIPS mode, which at its core uses the FIPS 140-2 compliant BoringCrypto for cryptographic operations. | nan | nan | | Signed Kong Images Kong Gateway container images are signed and verifiable in accordance with SLSA guidelines. 
| nan | nan | | Kong Images Build Provenance Kong Gateway container images generate build level provenance and are verifiable in accordance with SLSA" }, { "data": "| nan | nan | | AI Gateway | nan | nan | | Multi-LLM support Switch between different AI providers and models without having to change your application code | nan | nan | | AI traffic control Proxy AI traffic through the Kong Gateway and manage it with AI plugins | nan | nan | | AI prompt security Enforce secure and compliant AI prompts with the AI Prompt Decorator, AI Prompt Guard, and AI Prompt Template plugins | nan | nan | | AI observability Collect metrics from AI traffic, and use any Kong Gateway logging plugin to send it to your logging provider of choice | nan | nan | | Enterprise Support and Services | nan | nan | | Enterprise support 24/7 x 365 technical support SLAs | nan | nan | | Security CVE and Bug Fix Backports | nan | nan | | Performance Tuning Guidance | nan | nan | | Customer Success Packages - Add-on Accelerate time to value with dedicated Technical Account Managers and Field Engineers | nan | nan | Kong Admin API provides a RESTful interface for administration and configuration of Gateway entities such as services, routes, plugins, consumers, and more. All of the tasks you can perform against the Gateway can be automated using the Kong Admin API. Note: If you are running Kong in traditional mode, increased traffic could lead to potential performance with Kong Proxy. Server-side sorting and filtering large quantities of entities will also cause increased CPU usage in both Kong CP and database. Kong Manager is the graphical user interface (GUI) for Kong Gateway. It uses the Kong Admin API under the hood to administer and control Kong Gateway. Here are some of the things you can do with Kong Manager: Kong Gateway can run natively on Kubernetes with its custom ingress controller, Helm chart, and Operator. A Kubernetes ingress controller is a proxy that exposes Kubernetes services from applications (for example, Deployments, ReplicaSets) running on a Kubernetes cluster to client applications running outside of the cluster. The intent of an ingress controller is to provide a single point of control for all incoming traffic into the Kubernetes cluster. Kong Gateway plugins provide advanced functionality to better manage your API and microservices. With turnkey capabilities to meet the most challenging use cases, Kong Gateway plugins ensure maximum control and minimizes unnecessary overhead. Enable features like authentication, rate-limiting, and transformations by enabling Kong Gateway plugins through Kong Manager or the Admin API. Kong also provides API lifecycle management tools that you can use with Kong Gateway. Insomnia enables spec-first development for all REST and GraphQL services. With Insomnia, organizations can accelerate design and test workflows using automated testing, direct Git sync, and inspection of all response types. Teams of all sizes can use Insomnia to increase development velocity, reduce deployment risk, and increase collaboration. decK helps manage Kong Gateways configuration in a declarative fashion. This means that a developer can define the desired state of Kong Gateway or Konnect services, routes, plugins, and more and let decK handle implementation without needing to execute each step manually, as you would with the Kong Admin API. Download and install Kong Gateway. 
To test it out, you can choose either the open-source package, or run Kong Gateway Enterprise in free mode and also try out Kong Manager. After installation, get started with the introductory quickstart guide. Kong Konnect can manage Kong Gateway instances. With this setup, Kong hosts the control plane and you host your own data planes. There are a few ways to test out" } ]
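The Kong entry above describes configuring services, routes, and plugins through the Admin API. The sketch below is illustrative rather than taken from the entry itself: it assumes a default local install with the Admin API on port 8001 and the proxy on port 8000, and the service name, route path, and httpbin.org upstream are placeholder values.

```
# Register an upstream service (assumes the Admin API on its default port 8001).
curl -i -X POST http://localhost:8001/services \
  --data name=example-service \
  --data url='http://httpbin.org'

# Attach a route so the proxy can match incoming requests on /example.
curl -i -X POST http://localhost:8001/services/example-service/routes \
  --data name=example-route \
  --data 'paths[]=/example'

# Enable a plugin (here: rate limiting) on the service.
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data name=rate-limiting \
  --data config.minute=5 \
  --data config.policy=local

# Send a request through the proxy (default port 8000) to verify the setup.
curl -i http://localhost:8000/example/get
```

The same objects can also be described declaratively and applied with decK instead of issuing individual Admin API calls.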
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "KubeGateway", "subcategory": "API Gateway" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub." } ]
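The page above mentions connecting to GitHub over SSH. A minimal sketch of that workflow follows; the email address is a placeholder and the key type and path are the ssh-keygen defaults, so adjust them to your setup.

```
# Generate an SSH key pair (Ed25519 is the key type GitHub's docs recommend).
ssh-keygen -t ed25519 -C "you@example.com"

# Start the agent and load the private key.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Add the contents of ~/.ssh/id_ed25519.pub to your GitHub account settings,
# then verify the connection; GitHub replies with a short greeting on success.
ssh -T git@github.com
```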
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "KrakenD", "subcategory": "API Gateway" }
[ { "data": "Document updated on Oct 28, 2016 An API Gateway is a component that needs to deliver really fast, as it is an added layer in the infrastructure. KrakenD was built with performance in mind. In this page and inner pages, you'll find several tests we did to measure the performance. We also invite you to do them for yourself! ~18,000 requests/second on an ordinary laptop. The following table summarizes different performance tests using Amazon EC2 virtual instances and an example with a laptop. | # | Hardware specs | Requests/second | Average response | |-:|:|:|:-| | 1 | Amazon EC2 (c4.2xlarge) | 10126.1613 reqs/s | 9.8ms | | 2 | Amazon EC2 (c4.xlarge) | 8465.4012 reqs/s | 11.7ms | | 3 | Amazon EC2 (m4.large) | 3634.1247 reqs/s | 27.3ms | | 4 | Amazon EC2 (t2.medium) | 2781.8611 reqs/s | 351.3ms | | 5 | Amazon EC2 (t2.micro) | 2757.6407 reqs/s | 35.8ms | | 6 | MacBook Pro (Aug 2015) 2.2 GHz Intel Core i7 | 18157.4274 reqs/s | 5.5ms | Some local benchmarks used the hey tool, which is an Apache Benchmark (ab) replacement tool. The varnish/api-gateway-benchmarks project aims to provide a complete set of tools needed to do simple performance comparisons in the API manager/gateway space. It is inspired by the great Framework Benchmarks project by TechEmpower. Check the varnish/api-gateway-benchmarks project for more info. LWAN is a high-performance web server used to build the backends' REST APIs for KrakenD to load the data during the benchmarks." } ]
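The benchmark notes above mention the hey load generator as an Apache Benchmark replacement. The invocation below is a generic sketch, not one of the original test runs: the gateway port (8080) and the /test endpoint are assumptions about your local KrakenD configuration.

```
# Fire 100,000 requests with 100 concurrent workers and print the latency summary.
hey -n 100000 -c 100 http://localhost:8080/test

# Alternatively, run for a fixed duration instead of a fixed request count.
hey -z 30s -c 50 http://localhost:8080/test
```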
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "KubeGateway", "subcategory": "API Gateway" }
[ { "data": "Repository documentation directory listing: .. (parent directory), en, image, zh." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Lunar.dev", "subcategory": "API Gateway" }
[ { "data": "Hi there! Welcome to Lunar.dev, your go-to solution for API Consumption Management. Unleash the power of third-party APIs while keeping costs, performance, and scalability in check. Unmanaged 3rd party APIs can lead to unforeseen costs and performance issues. Lunar addresses these challenges as an egress proxy: a real-time, unified API consumption management tool that efficiently manages outgoing API traffic without requiring any code changes. Lunar's installation process takes just 5 minutes. With Lunar's Proxy and Interceptor, you get seamless detection and discovery through diagnose plugins and quick fixes via remedy plugins. Explore the key features of Lunar's Proxy and Interceptor below. Developers and organizations often face issues due to unmanaged API consumption, leading to poor performance and high costs. Lunar provides a solution: a real-time, unified API management tool with no code changes required. Lunar enables teams to: Unify the management of 3rd party API consumption in production. Discover and fix performance issues in real time. Build resilience, enforce new policies, and create custom plugins. With Lunar, developers can fully control and configure policies according to their needs. Lunar comes with a set of plugins and integrates into existing environments through its API-proxy based architecture. It optimizes outgoing traffic and provides out-of-the-box support for all 3rd party APIs without compromising overall latency and performance. Lunar is designed for a variety of users: Lunar is agnostic to your consumed APIs; no limitations on specific plugins for specific API providers. Its modular approach ensures an optimal user experience. Create a simple YAML file through your terminal and activate diagnose and remedy plugins. Integrated seamlessly into your architecture, Lunar utilizes two main components: SDK Installation: For developers, the SDK Installation is akin to a versatile language bridge. It seamlessly integrates Lunar Core into your existing codebase, supporting multiple programming languages. The SDK offers language-specific libraries and tools, facilitating smooth implementation. eBPF Interceptor: In scenarios demanding efficient traffic control within a cluster, the eBPF Interceptor comes into play. Harnessing the power of the extended Berkeley Packet Filter (eBPF), it efficiently intercepts and manipulates network traffic, guaranteeing seamless integration and operation. By enabling policy enforcement at the application level, Lunar empowers developers to create highly efficient, secure, and optimized environments for consuming external APIs. The Lunar Control Plane is a fully self-serve graphical user interface for integrating with Lunar. Gain valuable insights into your API consumption landscape, monitor performance, and optimize your API usage. Learn more here." } ]
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "KubeGateway", "subcategory": "API Gateway" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Orchestration & Management", "file_name": "github-privacy-statement.md", "project_name": "KubeGateway", "subcategory": "API Gateway" }
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "MuleSoft", "subcategory": "API Gateway" }
[ { "data": "| Date | Release | |:-|:-| | Jun 10 | APIkit for OData v4 for Mule 4 1.4.1 | | Jun 10 | Anypoint MQ Connector 4.0.8 | | Jun 4 | API Manager 2.x | | Jun 4 | APIkit for AsyncAPI 1.0.0 | | Jun 4 | Anypoint Code Builder | View all release notes Build the digital transformation your business needs. Tutorial: Build an API from start to finish. Browse Exchange to find existing API specifications, templates, examples, and other assets that you can reuse for your projects. Discover reusable assets. Create and publish new API specifications from scratch. Create a reusable API specification. Manage access to your Anypoint Platform account. Build security into your application network. Build and test APIs and integration apps. Develop your API. Connect your data, systems, and apps. Develop the Mule app that integrates systems, services, APIs, and devices. Access and transform data within the Mule app. Build automated tests for your APIs and integrations. Choose a deployment option, deploy APIs and apps, and secure them with policies. Choose a deployment option. Configure a proxy for your API. Create policies to secure your API. Monitor your APIs and integrations using dashboards, metrics, and visualization. View metrics for integration apps and APIs. Test the functional behavior and performance of your API. Monitor your deployed apps." } ]
{ "category": "Orchestration & Management", "file_name": "lunar-sandbox.md", "project_name": "Lunar.dev", "subcategory": "API Gateway" }
[ { "data": "This sandbox simulates how lunar.dev simplifies the consumption of third-party APIs shared by different environments through effective load-balancing policies. Lunar.dev helps allocate quotas to different environments or services sharing the same API key, ensuring optimal orchestration and preventing unnecessary 429 errors. If you prefer a guided introduction to the Sandbox, click the button below to access a quick interactive tutorial. It offers comprehensive instructions for navigating the Sandbox effectively. If you choose not to follow the tutorial, proceed to the steps below. Our Sandbox is set up in Gitpod. Click the button below to set up the Gitpod environment. Connect with your GitHub account. If you don't have one, sign up for free here. Create your new workspace by pressing the Continue button, without modifying any predefined specifications. To hide the file explorer sidebar on the left, press Ctrl+B (Windows/Linux) or Cmd+B (macOS). View the large error rates being presented, observe any triggered Alerts, and note that the initial allocation between environments is set at 50/50. ``` docker exec sandbox-lunar-proxy-1 discover ``` The output will display the involved endpoints and assigned interceptors. Gain insights into traffic patterns, highlighting successful and error responses. ``` docker exec sandbox-lunar-proxy-1 apply_policies ``` ``` docker exec sandbox-lunar-proxy-1 remedy_stats ``` Review the rate of successful requests per minute. Review the improvements in the presented Status Codes distribution graph. Now most of the returned API responses no longer hit the provider's rate limits, resulting in mostly 200 responses. Note the improvements in alignment with the configured policies for each service/client. Observe the allocation between the staging and production environments, which now corresponds to the assigned percentage quotas of 20% for staging and 80% for production. Observe the reduction in overall error rates, indicating that the applied policies prevent unnecessary 429 errors and rate limit violations." } ]
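The three docker exec commands quoted in the walkthrough above can be chained into one helper script. This is only a convenience sketch: the container name sandbox-lunar-proxy-1 is taken from the sandbox text, while the 60-second pause is an assumption to give the demo traffic generators time to reflect the newly applied policies.

```bash
#!/usr/bin/env bash
# Convenience wrapper around the sandbox commands shown above.
set -euo pipefail

docker exec sandbox-lunar-proxy-1 discover        # list discovered endpoints and attached interceptors
docker exec sandbox-lunar-proxy-1 apply_policies  # load the quota/load-balancing policies

sleep 60  # assumption: give the demo traffic time to pick up the new policies

docker exec sandbox-lunar-proxy-1 remedy_stats    # inspect successful-requests-per-minute statistics
```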
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "ngrok", "subcategory": "API Gateway" }
[ { "data": "ngrok is your app's front door. ngrok is a globally distributed reverse proxy that secures, protects and accelerates your applications and network services, no matter where you run them. ngrok supports delivering HTTP, TLS or TCP-based applications. More about how ngrok works You can use ngrok in development for webhook testing or in production as an API Gateway, Kubernetes Ingress, or Identity-Aware Proxy. You can also run ngrok to easily create secure connectivity to APIs in your customers' networks or on your devices in the field. More about what you can do with ngrok Put your app on the internet with the ngrok agent in less than a minute. Or instead of the ngrok agent, get started another way: Once ngrok is in front of your app, you can add authentication, acceleration, transformation, and other behaviors. Require a username and password with HTTP Basic Auth. Allow or deny traffic based on the source IP of connections. Enforce an OAuth flow to well-known IdPs like Google. Verify HTTP requests are signed by a webhook provider like Slack or GitHub. Add or remove headers from HTTP requests before they are sent to your upstream service Enforce an OpenID Connect flow to a federated IdP. Enforce mutual TLS auth with a configured set of CAs. Enforce a SAML flow to a federated IdP, optionally authorizing users by group. Block bots and browsers with rules that allow or deny HTTP requests based on User-Agent header. Protect upstream services by rejecting traffic when they become overwhelmed. All Modules Agent CLI commands and options Options for ngrok.yml: the agent config file HTTP API resources and methods for api.ngrok.com All observable events and event payload shapes Variables you can template into request or response headers. Codes for every unique error condition returned by ngrok ngrok-go package docs on pkg.go.dev ngrok-rust crate docs on docs.rs ngrok-javascript module docs ngrok-python package docs All Agent SDKs Follow tutorials for common tasks when working with ngrok. View your app's traffic in real time with the ngrok agent's inspection interface Use ngrok with your own domain by setting up a DNS CNAME record All Guides" } ]
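As a concrete illustration of the quick-start flow described above, the agent is typically pointed at a local port and can optionally enable one of the modules listed (Basic Auth, OAuth, IP restrictions, and so on). The port 8080 and the example credentials below are assumptions, and module flag names vary between agent versions, so treat the second command as a sketch and check ngrok --help on your installed version.

```bash
# Put a local app (assumed to listen on port 8080) online through the ngrok agent.
ngrok http 8080

# Hedged example of enabling a module from the list above; flag names differ by agent version.
ngrok http 8080 --basic-auth "demo-user:demo-password"
```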
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "ngrok", "subcategory": "API Gateway" }
[ { "data": "Protect your ngrok environment with the 1Password ngrok shell plugin. Manage your AfterShip webhooks securely with ngrok. Receive real-time inbox data with Airship and ngrok tunnels. Securely listen to Alchemy security events with ngrok. Secure ngrok endpoints with Amazon.com OAuth Sending ngrok events into Amazon CloudWatch. ngrok Kubernetes Ingress Controller on Amazon EKS Sending ngrok events into AWS Firehose. Sending ngrok events into AWS Kinesis. Publish and deliver event notifications from Amazon SNS with ngrok. Secure access to your ngrok tunnels with Auth0 SSO. Securely listen to events and get notifications from Autodesk using ngrok. Secure access to ngrok tunnels with Azure Active Directory. Secure access to ngrok tunnels with Azure Active Directory B2C. Quickly add ingress traffic to apps running on Kubernetes clusters managed by Azure Kubernetes Service (AKS). Securely integrate your CI/CD with Bitbucket using ngrok. Securely monitor Box events and notifications using ngrok. Securely monitor Brex transfers and sign-ups using ngrok. Securely monitor Buildkite agent, build and job events using ngrok. Securely receive real-time Calendly meeting data with ngrok. Securely listen to Castle security events with ngrok. Securely connect Chargify webhooks to your apps with ngrok. Securely connect your CircleCI CI/CD pipelines with ngrok. Securely validate Clearbit webhook calls with ngrok. Securely sync your store, send emails, or get Clerk notifications with ngrok. Securely monitor your Coinbase crypto and notifications with ngrok. Easily allow Ingress traffic to services running in your Consul Service Mesh Securely trigger Contentful website rebuilds or send notifications with ngrok. Secure access to your ngrok tunnels using Curity SSO. Sending ngrok events into Datadog. Secure access to your ngrok endpoints with Descope. Quickly add ingress traffic to apps running on clusters managed by DigitalOcean. Securely get notifications from DocuSign with ngrok. Securely get file notifications for Dropbox with ngrok. Securely develop with Facebook webhooks using ngrok. Securely develop and receive Messenger webhooks using ngrok. Securely test and share frame.io projects with ngrok. Secure access to your ngrok endpoints with Frontegg SSO. Easily secure access to ngrok tunnels with FusionAuth SSO. Integrate CI/CD tools with your GitHub webhooks using ngrok Securely develop and test GitLab webhooks using ngrok Securely authenticate users with Google OAuth Run the ngrok Kubernetes Ingress Controller on Google Kubernetes Engine (GKE). Securely drive custom operations and real-time integrations with Heroku using ngrok. Easily build a production-ready webhook platform using HostedHooks and ngrok. Easily connect HubSpot webhooks to your local apps using ngrok. Securely receive Hygraph event notifications using" }, { "data": "Securely connect Instagram webhooks to your apps using ngrok. Securely connect Intercom webhooks to your apps using ngrok. Secure access to your ngrok tunnels with JumpCloud SSO. Securely get notifications from LaunchDarkly using ngrok. Quickly add ingress traffic to any Kubernetes app running on top of a Linkerd service mesh. Quickly add ingress traffic to apps running on Linode VMs. Securely receive notifications from Mailchimp using ngrok. Securely monitor your Mailgun email campaigns using ngrok. Quickly add ingress traffic to apps running on Kubernetes clusters managed by Canonical MicroK8s. Secure access to your ngrok tunnels with minoOrange SSO. 
Securely monitor notifications from Modern Treasury using ngrok. Securely create or modify MongoDB resources using ngrok. Securely listen to your Mux events and transitions using ngrok. Secure ngrok tunnels with Okta SSO or monitor Okta webhooks. Securely integrate your apps with Orbit using ngrok. Securely get incident notifications from PagerDuty using ngrok. Securely integrate your apps with Pinwheel using ngrok. Securely integrate your apps with Plivo using ngrok. Securely get your notifications from Pusher using ngrok. Quickly add ingress traffic to applications running on Rafay's Kubernetes management platform. Quickly add ingress traffic to applications operating in a Rancher-based management platform. Securely connect your SendGrid webhooks using ngrok. Securely listen to your events from Sentry using ngrok. Securely receive Shopify calls and notifications using ngrok. Securely integrate your apps with Signal Sciences using ngrok. Securely get your event notifications from Slack using ngrok. Securely integrate your apps with Sonatype Nexus using ngrok. Quickly add ingress traffic to applications operating in a cluster managed by the Palette platform. Securely integrate your applications with Square using ngrok. Securely integrate your applications with Stripe using ngrok. Securely integrate your applications with Svix using ngrok. Securely integrate your applications with Microsoft Teams using ngrok. Securely integrate your apps with Terraform using ngrok. Securely integrate your applications with TikTok using ngrok. Securely integrate your applications with Trend Micro using ngrok. Secure access to ngrok tunnels with Wallix Trustelem SSO. Securely integrate your apps with Twilio using ngrok. Securely integrate your apps with Twitter using ngrok. Securely integrate your apps with Typeform using ngrok. Quickly add ingress traffic to applications running in virtual clusters. Securely integrate your apps with VMware using ngrok. Securely integrate your apps with Webex using ngrok. Securely integrate your apps with WhatsApp using ngrok. Securely integrate your apps with Worldline using ngrok. Securely integrate your apps with Xero using ngrok. Securely integrate your apps with Zendesk using ngrok. Securely integrate your apps with Zoom using ngrok." } ]
{ "category": "Orchestration & Management", "file_name": "performance.md", "project_name": "Lunar.dev", "subcategory": "API Gateway" }
[ { "data": "In the ever-evolving landscape of the API economy, time emerges as one of our most precious resources. Swift response times have become essential for both API providers and consumers. As providers strive to meet response time percentiles, consumers are becoming more aware of these metrics, often establishing service level agreements (SLAs) based on them. At Lunar, we recognize the significance of latency. Our solution addresses the complex challenges of API consumption by by acting as a bridge between API providers and consumers. Naturally, we aim to minimize any impact on our users' existing latency. As developers, we knew instinctively that this is precisely what we would desire as end users. To achieve nearly invisible latency footprint, extensive research went into selecting the ideal stack and architecture. However, any assumption must undergo rigorous testing to validate its accuracy. This is where latency benchmarking becomes invaluable. As declared in our Architecture page, Lunar operates alongside our users' applications, handling all outgoing traffic directed towards third-party providers. While its default behavior involves seamless forwarding of requests and responses, akin to forward proxies, its true brilliance materializes when augmented with remedy and diagnosis plugins. While remedy plugins can modify requests and responsese.g., retrieving responses from cache without issuing actual provider requestsdiagnosis plugins, by their nature, lack such transformative capabilities. Through our benchmarking sessions, we wanted to unveil Lunar's latency footprint on response time percentiles in comparison to direct API calls made without Lunar's intervention. In our experiments, we have selected a provider with a constant response time of 150ms, which is remarkably fast for a web-based API. In addition, we wanted to examine whether a correlation exists between the provider's response time and Lunar's latency footprint, or if, ideally, it remains constant regardless of the API provider's response time. To accomplish this, we explore the following scenarios: We allocated two different AWS EC2 instances of type c5a.large for this purpose - one dedicated to the provider only, and another one dedicated to the client application and Lunar. This is key: there will always be real network time when calling API providers; hence, the separate EC2 instances are crucial here. On the contrary, Lunars product is designed to be located as close as possible to the client application which integrates with it, so it makes sense to place these two on the same EC2 instance. We used Apache AB, to simulate client-side behavior and gather necessary metrics. To replicate how client applications interact with Lunar, we directed Apache AB to call Lunar, which, in turn, forwarded the requests to the" }, { "data": "On the provider side, we leveraged go-httpbin, a Docker image that serves as a Go version of httpbin.org. In our performance analysis, in addition the set a baseline of \"Direct calls to the provider\" (referred to as \"direct\"), we conducted three experiments to compare the following scenarios: The visualization below presents percentile values on the X-axis and runtime in milliseconds on the Y-axis, highlighting the differences between each experiment and the baseline direct experiment, which are relatively small across percentiles. 
To provide a clearer understanding, let's examine the same graph with the Y-axis representing the delta from the baseline for each experiment and percentile. In the direct experiment, the delta remains 0 for all percentiles, as it is compared against itself. Through our benchmarking sessions, we aimed to assess the capacity of Lunar in handling requests per second under various load conditions. In this performance evaluation, we established an EKS cluster and node group with the provided hardware configuration, ensuring a reliable and scalable infrastructure. Employing our benchmarking tool, we generated a load on the Lunar to measure its performance within the EKS cluster. By systematically varying load parameters, including the number of concurrent connections and request rate, we sought to understand how Lunar performed under different scenarios. Our objective was to uncover the efficiency and capability of Lunar, identifying potential bottlenecks or areas for improvement in its capacity to handle substantial workloads. The results demonstrate the requests per second achieved by Lunar in each scenario, providing insights into its performance characteristics. With multiple requests ranging from 32 to 256, and capacity limits from 1 to 8 cores, we observed a clear correlation between these factors and the resulting requests per second. These findings contribute to a deeper understanding of Lunar's performance and can assist in optimizing its configuration for enhanced capacity and throughput. | Concurrency | Capacity Limit (Cores) | Number of Requests | Requests per Second | |--:|-:|:|-:| | 32 | 1 | 2.5e+06 | 9050.42 | | 64 | 2 | 5e+06 | 16550.5 | | 128 | 4 | 1e+07 | 35896.2 | | 256 | 8 | 2e+07 | 84867.8 | In this table, the columns represents the following: Through extensive benchmarking, Lunar demonstrates impressive results. It adds only a small latency footprint to response times, ranging from 4ms to 37ms at the 95th and 99th percentiles, respectively. The capacity benchmark shows that Lunar efficiently handles substantial workloads, achieving up to 84,867 requests per second. These outstanding results validate Lunar's effectiveness in minimizing latency and ensuring optimal performance in API interactions." } ]
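For readers who want to reproduce a measurement of this kind, the text above names Apache AB as the load generator and go-httpbin as the provider. A minimal invocation could look like the sketch below; the proxy address (localhost:8000), the go-httpbin address (localhost:8080), and the /status/200 path are assumptions for illustration and are not taken from the benchmark itself.

```bash
# Load-generation sketch with Apache AB, mirroring the setup described above.
# -n: total requests, -c: concurrent connections, -k: reuse connections (keep-alive).
ab -n 100000 -c 64 -k http://localhost:8000/status/200

# Direct-to-provider baseline for comparison (assumed local go-httpbin instance).
ab -n 100000 -c 64 -k http://localhost:8080/status/200
```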
{ "category": "Orchestration & Management", "file_name": "intro.html.md", "project_name": "Reactive Interaction Gateway", "subcategory": "API Gateway" }
[ { "data": "TL;DR: In order to answer the what, how, why of the project, this document proposes a use case for real time updates, shows iterations of how to solve this problem architecturally, explains Reactive architecture, and presents where the Reactive Interaction Gateway fits into this architecture. RIG is part of Accenture's contribution to the Reactive architecture community. Typically, an API gateway acts as a reverse proxy, forwarding requests from frontend to backend services. The backend services typically send back a reply, which is then forwarded back to the client. Quite often, you'd like your UI to display events as they occur (think \"two customer's are looking at this\" on your favorite hotel-booking site). The simplest way to implement this is by having the frontend poll a backend service for updates, but this doesn't scale well - a lot of extra traffic and a single service that is coupled to all services that emit interesting events. The first problem is easy: to reduce traffic and get rid of potentially large notification delays, you could also have your reverse proxy forward a websocket connection, or something similar, to that backend service. The approach so far works okay as long as you have a monolithic application, but fails in a microservice environment: it's a single component coupled to most services in your system as it asks them for updates - any change in any other service will affect it. We can solve this problem by decoupling the services using some kind of messaging service, like Kafka; now the backend-for-frontends service simply listens to the Kafka stream, where all other services publish their events to. This is exactly what RIG does: it subscribes to Kafka topics, while holding connections to all active frontends, forwarding events to the users they're addressed to, all in a scalable way. And on top of that, it also handles authorization, so your services don't have to care about that either. For integrating the event stream into frontends, RIG supports several options (and additional transports are easy to implement): The obvious thing to do is polling, which basically means hitting the server for updates, again and again. This is easy to implement and has the advantage of working everywhere, regardless of how old the browser or strange the firewall setup may be. However, there are also some downsides to it. Imaging yourself in a project where you're building a website that's composed of many smaller (React) components, and most of them show a different part of the state tree. Some of them might even present the same data in different" }, { "data": "How would you make sure the UI always shows the most recent data, just like on the ticket vendor website outlined above? There is a trade-off here: either you have the top-level component update all the data every few seconds (which might still be too slow), or each component fetches the data it needs individually. Both approaches are less than ideal: fetching everything all the time causes loads of traffic. Most likely, you'll be overfetching, because some of the data you request will probably never change at all. Having each component fetch their own data sounds good at first, but it will cost you a lot in terms of performance and complexity (think loading indicators, handling debouncing, connection timeouts, synchronizing view state among components, ...). 
From an architectural standpoint, your app generates a lot of unnecessary load that must be handled by the server, which means that your app cannot scale well with the number of users. Finally, each connection attempt may affect battery life when running on a mobile device. Still, we offer HTTP Long-polling in case SSE and WS are not supported. You see a Twitter notification that The Glitch Mob is doing a pop-up concert two nights from now at a venue just a few blocks from your apartment. No way! Your long-distance partner loves The Glitch Mob and theyre in town for a visit this weekend. Working late at the client site, you quickly log on to the ticket vendor on your laptop and look to book tickets before your car to the airport arrives. You see the live map of the venue, seats disappearing before your eyes at an alarming rate. You grab two, get to payment processing, and the car pulls up. Switching to your phone, you pull up the app, continuing the checkout process, and book the tickets. The event sells out in the next few minutes. Success! Youre tasked to architect an application setup that handles this functional requirement; a user can move seamlessly between mobile and browser experiences. It requires real time updates to the user interface. Well start with the simplest form and then layer complexity into the model. We might see an architecture like this. A main backend application handles the business logic and maintains web socket connections to the web and mobile user interfaces. User Story Clearly, were missing the financial transaction. Lets add that in as a microservice and use HTTP endpoints to handle the connection between the application. We want to send an email to the user once the process is complete. It is common to stand up a microservice to handle sending emails. Well use HTTP connections from the main application to the email service. Following this progression over time, the architectural complexity increases as functional requirements grow and we build out a collection of interacting" }, { "data": "Enterprise scale traffic and data flow demands place great stress on an architecture like this. Thousands of messages are passed through each microservice at once. In an environment that depends on real time updates, like buying tickets, load can increase dramatically at peak times. A crash or stalled process at those times leads to a bad user experience and threatens the business. A microservice architecture has a lot of benefits but depending on the way that the messages are passed throughout the system and stateful dependencies are managed, it can be difficult to debug what goes wrong. Building endpoints between the applications means that as complexity grows, it becomes more difficult to change the overall system. Other application features are built around the data from an endpoint and this locks together application interfaces. Each endpoint represents a long-term investment in a static architecture. In this example, App1 exposes an endpoint /foo. App2 makes an API call for that data and builds a method bar on the received data. App2 then exposes an endpoint /foobar4everthat uses the bar method. It creates a method called foobar. While that's a simple and humorous example, this is a common situation in microservice architectures. As functionality is built out, each application is dependent on endpoints existing in another application. To change and adapt the system requires changing the whole chain of methods and endpoints. 
As systems grow in complexity they become ossified. There is a community who has committed to a set of principles, the Reactive Manifesto, that guide architectures to be able to handle the many challenges that arise from a microservice architecture at scale. This is known as Reactive Architecture. Only a few years ago a large application had tens of servers, seconds of response time, hours of offline maintenance and gigabytes of data. Today applications are deployed on everything from mobile devices to cloud-based clusters running thousands of multi-core processors. Users expect millisecond response times and 100% uptime. Data is measured in Petabytes. Today's demands are simply not met by yesterdays software architectures. Reactive Systems are: Here's a potential way of designing the previous architecture in line with Reactive principles by adding an event hub. Instead of using HTTP endpoints between individual applications, each application publishes the events they generate to a topic and subscribe to the topic with the event data they need. The data flow is organized with topics. Designing the architecture with message passing flows means that the structure of the architecture can be changed more freely. Imagining the flow of the data as representing the state of the system, applications can be inserted, added, or changed, rather than relying on HTTP endpoints between multiple applications that need updating at each new" }, { "data": "The architectural design is centered on streams of data through the system rather than the relationships and connections between applications. Data is sent asynchronously and applications react to events as they occur. This ability to think abstractly about streams of data as the system evolves regardless of the technical implementation is very valuable. Say the tickets endpoint was initially written in a language and framework that cant handle the increased volume as it scales. It can be easily replaced, with the new application simply taking over the event subscription and publication to the topic. Topics can be easily updated. A reactive architecture using an event hub like Kafka enables an increased flexibility and the ability to debug, monitor, and maintain the backend in a microservices architecture. Even so, something about this architecture feels disjointed and unaligned to Reactive principles because of the real time updates. The application maintaining the websocket connections in our diagram will have trouble handling thousands of concurrent users with real time updates, whether they be mobile or browser based. At scale, this poses problems in the infrastructure and architecture. Events will need to be sent via the event hub and an application has to function as an interface with the client side. Which application should do it? Should the application that handles the main business logic also handle connections? The Reactive Interaction Gateway (RIG) was designed to solve this problem elegantly and in line with Reactive principles. Using the Erlang VM, BEAM, we can model web socket connections using actors, which are much lighter weight than OS threads. RIG functions as an application interface layer and works as an event hub for the front end. It powers real time updates and decouples the backend interface from the frontend while enabling many concurrent users. It handles asynchronous events streaming from the event hub or from the UI. 
This architecture can evolve in complexity as features are built, adding or subtracting services in order to reflect the problem domain. It enables the continuous deployment of backend services without effecting users' connections. RIG is designed to manage all connections to the front end and to be language agnostic. It does not matter in which framework or language a connecting application is written and developers do not need to know Elixir / Phoenix to use RIG. Routes are defined using a configuration file or by POSTing directly to the application. This gives an architect a great deal of flexibility to choose the tools they use to meet functional requirements. In the /examples folder, there is an example architecture made with a React frontend, RIG, a Node backend, and a Kafka instance. Here's a chart demonstrating that reference architecture in abstract. Go to /examples for more depth:" } ]
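To make the topic-based decoupling described above a bit more tangible, here is a small sketch using the standard Kafka console tools: one terminal consumes from a topic while another publishes a ticket event that downstream services (and RIG) could react to. The broker address and the ticket-events topic name are assumptions for illustration and are not part of the reference architecture.

```bash
# Terminal 1: consume events from an assumed "ticket-events" topic.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic ticket-events --from-beginning

# Terminal 2: publish a ticket-reserved event for subscribers to pick up.
echo '{"type":"ticket.reserved","seat":"A12"}' | \
  kafka-console-producer.sh --bootstrap-server localhost:9092 --topic ticket-events
```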
{ "category": "Orchestration & Management", "file_name": "tutorial.html.md", "project_name": "Reactive Interaction Gateway", "subcategory": "API Gateway" }
[ { "data": "This tutorial shows a basic use case for RIG. A frontend (e.g. the mobile app for a chatroom service) connects to RIG and subscribes to a certain event type (e.g. messages from a chatroom). The backend (e.g. chatroom server) publishes the message to RIG, and RIG forwards it to the frontend. We simulate frontend and backend HTTP requests using HTTPie for HTTP requests, but of course you can also use curl or any other HTTP client. Please note that HTTPie sets the content type to application/json automatically, whereas for curl you need to use -H \"Content-Type: application/json\" for all but GET requests. To get started, run our Docker image using this command: ``` $ docker run -p 4000:4000 -p 4010:4010 accenture/reactive-interaction-gateway ... Reactive Interaction Gateway 2.1.0 [rig@127.0.0.1, ERTS 10.2.2, OTP 21] ``` Note that HTTPS is not enabled by default. Please read the RIG operator guide before running a production setup. Let's connect to RIG using Server-Sent Events, which is our recommended approach (open standard, firewall friendly, plays nicely with HTTP/2): ``` $ http --stream :4000/_rig/v1/connection/sse HTTP/1.1 200 OK connection: keep-alive content-type: text/event-stream transfer-encoding: chunked ... event: rig.connection.create data: {\"specversion\":\"0.2\",\"source\":\"rig\",\"type\":\"rig.connection.create\",\"time\":\"2018-08-22T10:06:04.730484+00:00\",\"id\":\"2b0a4f05-9032-4617-8d1e-92d97fb870dd\",\"data\":{\"connection_token\":\"g2dkAA1ub25vZGVAbm9ob3N0AAACrAAAAAAA\",\"errors\":[]}} id: 2b0a4f05-9032-4617-8d1e-92d97fb870dd ``` After the connection has been established, RIG sends out a CloudEvent of type rig.connection.create. You can see that ID and event type of the outer event (= SSE event) match ID and event type of the inner event (= CloudEvent). The cloud event is serialized to the data field. Please take note of the connection_token in the CloudEvent's data field - you need it in the next step. With the connection established, you can create subscriptions - that is, you can tell RIG which events your app is interested in. RIG needs to know which connection you are referring to, so you need to use the connection token you have noted down in the last step: ``` $ CONN_TOKEN=\"g2dkAA1ub25vZGVAbm9ob3N0AAACrAAAAAAA\" $ SUBSCRIPTIONS='{\"subscriptions\":[{\"eventType\":\"chatroom_message\"}]}' $ http put \":4000/rig/v1/connection/sse/${CONNTOKEN}/subscriptions\" <<<\"$SUBSCRIPTIONS\" HTTP/1.1 204 No Content content-type: application/json; charset=utf-8 ... ``` With that you're ready to receive all \"chatroom_message\" events. RIG expects to receive CloudEvents, so the following fields are required: Let's send a simple chatroom_message event: ``` $ http post :4000/_rig/v1/events \\ specversion=0.2 \\ type=chatroom_message \\ id=first-event \\ source=tutorial HTTP/1.1 202 Accepted content-type: application/json; charset=utf-8 ... { \"specversion\": \"0.2\", \"id\": \"first-event\", \"time\": \"2018-08-21T09:11:27.614970+00:00\", \"type\": \"chatroom_message\", \"source\": \"tutorial\" } ``` RIG responds with 202 Accepted, followed by the CloudEvent as sent to subscribers. If there are no subscribers for a received event, the response will still be 202 Accepted and the event will be silently dropped. Going back to the first terminal window you should now see your greeting event In a real-world frontend app the above example to connect your app to RIG would look something like this below. See examples/sse-demo.html for a full example. 
``` <!DOCTYPE html> <html> <head> ... <script src=\"https://unpkg.com/event-source-polyfill/src/eventsource.min.js\"></script> </head> <body> ... <script> ... const source = new EventSource(`http://localhost:4000/_rig/v1/connection/sse`) source.onopen = (e) => console.log(\"open\", e) source.onmessage = (e) => console.log(\"message\", e) source.onerror = (e) => console.log(\"error\", e) source.addEventListener(\"rig.connection.create\", function (e) { cloudEvent = JSON.parse(e.data) payload = cloudEvent.data connectionToken = payload[\"connection_token\"] createSubscription(connectionToken) }, false); source.addEventListener(\"greeting\", function (e) { cloudEvent = JSON.parse(e.data) ... }) source.addEventListener(\"error\", function (e) { if (e.readyState == EventSource.CLOSED) { console.log(\"Connection was closed.\") } else { console.log(\"Connection error:\", e) } }, false); function createSubscription(connectionToken) { const eventType = \"greeting\" return fetch(`http://localhost:4000/_rig/v1/connection/sse/${connectionToken}/subscriptions`, { method: \"PUT\", headers: { \"Content-Type\": \"application/json\" }, body: JSON.stringify({ \"subscriptions\": [{ \"eventType\": eventType }] }) }) ... } </script> </body> </html> ```" } ]
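The tutorial above uses HTTPie; for completeness, here are curl equivalents of the same three steps, built from the endpoints and payloads shown in the tutorial. Note that the subscription URL follows the same /_rig/v1 prefix as the other calls, and that curl needs the Content-Type header set explicitly, as the tutorial points out. The connection token below is the placeholder value used in the tutorial; substitute the one from your own rig.connection.create event.

```bash
# 1. Open the SSE connection and note the connection_token from the rig.connection.create event.
curl -N http://localhost:4000/_rig/v1/connection/sse

# 2. Subscribe the connection to "chatroom_message" events.
CONN_TOKEN="g2dkAA1ub25vZGVAbm9ob3N0AAACrAAAAAAA"
curl -X PUT "http://localhost:4000/_rig/v1/connection/sse/${CONN_TOKEN}/subscriptions" \
  -H "Content-Type: application/json" \
  -d '{"subscriptions":[{"eventType":"chatroom_message"}]}'

# 3. Publish a CloudEvent that RIG forwards to all subscribed connections.
curl -X POST http://localhost:4000/_rig/v1/events \
  -H "Content-Type: application/json" \
  -d '{"specversion":"0.2","type":"chatroom_message","id":"first-event","source":"tutorial"}'
```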
{ "category": "Orchestration & Management", "file_name": "index.html.md", "project_name": "Reactive Interaction Gateway", "subcategory": "API Gateway" }
[ { "data": "This tutorial shows a basic use case for RIG. A frontend (e.g. the mobile app for a chatroom service) connects to RIG and subscribes to a certain event type (e.g. messages from a chatroom). The backend (e.g. chatroom server) publishes the message to RIG, and RIG forwards it to the frontend. We simulate frontend and backend HTTP requests using HTTPie for HTTP requests, but of course you can also use curl or any other HTTP client. Please note that HTTPie sets the content type to application/json automatically, whereas for curl you need to use -H \"Content-Type: application/json\" for all but GET requests. To get started, run our Docker image using this command: ``` $ docker run -p 4000:4000 -p 4010:4010 accenture/reactive-interaction-gateway ... Reactive Interaction Gateway 2.1.0 [rig@127.0.0.1, ERTS 10.2.2, OTP 21] ``` Note that HTTPS is not enabled by default. Please read the RIG operator guide before running a production setup. Let's connect to RIG using Server-Sent Events, which is our recommended approach (open standard, firewall friendly, plays nicely with HTTP/2): ``` $ http --stream :4000/_rig/v1/connection/sse HTTP/1.1 200 OK connection: keep-alive content-type: text/event-stream transfer-encoding: chunked ... event: rig.connection.create data: {\"specversion\":\"0.2\",\"source\":\"rig\",\"type\":\"rig.connection.create\",\"time\":\"2018-08-22T10:06:04.730484+00:00\",\"id\":\"2b0a4f05-9032-4617-8d1e-92d97fb870dd\",\"data\":{\"connection_token\":\"g2dkAA1ub25vZGVAbm9ob3N0AAACrAAAAAAA\",\"errors\":[]}} id: 2b0a4f05-9032-4617-8d1e-92d97fb870dd ``` After the connection has been established, RIG sends out a CloudEvent of type rig.connection.create. You can see that ID and event type of the outer event (= SSE event) match ID and event type of the inner event (= CloudEvent). The cloud event is serialized to the data field. Please take note of the connection_token in the CloudEvent's data field - you need it in the next step. With the connection established, you can create subscriptions - that is, you can tell RIG which events your app is interested in. RIG needs to know which connection you are referring to, so you need to use the connection token you have noted down in the last step: ``` $ CONN_TOKEN=\"g2dkAA1ub25vZGVAbm9ob3N0AAACrAAAAAAA\" $ SUBSCRIPTIONS='{\"subscriptions\":[{\"eventType\":\"chatroom_message\"}]}' $ http put \":4000/rig/v1/connection/sse/${CONNTOKEN}/subscriptions\" <<<\"$SUBSCRIPTIONS\" HTTP/1.1 204 No Content content-type: application/json; charset=utf-8 ... ``` With that you're ready to receive all \"chatroom_message\" events. RIG expects to receive CloudEvents, so the following fields are required: Let's send a simple chatroom_message event: ``` $ http post :4000/_rig/v1/events \\ specversion=0.2 \\ type=chatroom_message \\ id=first-event \\ source=tutorial HTTP/1.1 202 Accepted content-type: application/json; charset=utf-8 ... { \"specversion\": \"0.2\", \"id\": \"first-event\", \"time\": \"2018-08-21T09:11:27.614970+00:00\", \"type\": \"chatroom_message\", \"source\": \"tutorial\" } ``` RIG responds with 202 Accepted, followed by the CloudEvent as sent to subscribers. If there are no subscribers for a received event, the response will still be 202 Accepted and the event will be silently dropped. Going back to the first terminal window you should now see your greeting event In a real-world frontend app the above example to connect your app to RIG would look something like this below. See examples/sse-demo.html for a full example. 
``` <!DOCTYPE html> <html> <head> ... <script src=\"https://unpkg.com/event-source-polyfill/src/eventsource.min.js\"></script> </head> <body> ... <script> ... const source = new EventSource(`http://localhost:4000/_rig/v1/connection/sse`) source.onopen = (e) => console.log(\"open\", e) source.onmessage = (e) => console.log(\"message\", e) source.onerror = (e) => console.log(\"error\", e) source.addEventListener(\"rig.connection.create\", function (e) { cloudEvent = JSON.parse(e.data) payload = cloudEvent.data connectionToken = payload[\"connection_token\"] createSubscription(connectionToken) }, false); source.addEventListener(\"greeting\", function (e) { cloudEvent = JSON.parse(e.data) ... }) source.addEventListener(\"error\", function (e) { if (e.readyState == EventSource.CLOSED) { console.log(\"Connection was closed.\") } else { console.log(\"Connection error:\", e) } }, false); function createSubscription(connectionToken) { const eventType = \"greeting\" return fetch(`http://localhost:4000/_rig/v1/connection/sse/${connectionToken}/subscriptions`, { method: \"PUT\", headers: { \"Content-Type\": \"application/json\" }, body: JSON.stringify({ \"subscriptions\": [{ \"eventType\": eventType }] }) }) ... } </script> </body> </html> ```" } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Tyk", "subcategory": "API Gateway" }
[ { "data": "Need help from one of our engineers? The hub for Tyk API management. Whether you're new or experienced, get started with Tyk, explore our product stack and core concepts, access in-depth guides, and actively contribute to our ever-evolving products. Easily install our Full Lifecycle API Management solution in your own infrastructure. There is no calling home and there are no usage limits. You have full control. Includes: Tyk API Gateway, Tyk Dashboard, Tyk Portal, Tyk UDG A fully managed service that makes it easy for API teams to create, secure, publish and maintain APIs at any scale, anywhere in the world. Includes: Tyk API Gateway, Tyk Dashboard, Tyk Portal, Tyk UDG The heart of what we do. Anything that is API Gateway-related, lives in the Gateway, or is critical for the Gateway to work is open and freely available. Includes: Tyk OSS Gateway" } ]
{ "category": "Orchestration & Management", "file_name": "quickstart.md", "project_name": "etcd", "subcategory": "Coordination & Service Discovery" }
[ { "data": "Follow these instructions to locally install, run, and test a single-member cluster of etcd: Install etcd from pre-built binaries or from source. For details, see Install. Launch etcd: ``` $ etcd {\"level\":\"info\",\"ts\":\"2021-09-17T09:19:32.783-0400\",\"caller\":\"etcdmain/etcd.go:72\",\"msg\":... } ``` From another terminal, use etcdctl to set a key: ``` $ etcdctl put greeting \"Hello, etcd\" OK ``` From the same terminal, retrieve the key: ``` $ etcdctl get greeting greeting Hello, etcd ``` Learn about more ways to configure and use etcd from the following pages: Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve. etcd Authors" } ]
{ "category": "Orchestration & Management", "file_name": "admiralty.html.md", "project_name": "k8gb", "subcategory": "Coordination & Service Discovery" }
[ { "data": "For simplicity let's assume that you operate two geographically distributed clusters you want to enable global load-balancing for. In this example, two local clusters will represent those two distributed clusters. ``` export KUBECONFIG=eu-cluster ``` ``` cp chart/k8gb/values.yaml ~/k8gb/eu-cluster.yaml ``` dnsZone - this zone will be delegated to the edgeDNS in your environment. E.g. yourzone.edgedns.com edgeDNSZone - this zone will be automatically configured by k8gb to delegate to dnsZone and will make k8gb controlled nodes act as authoritative server for this zone. E.g. edgedns.com edgeDNSServers stable DNS servers in your environment that is controlled by edgeDNS provider e.g. Infoblox so k8gb instances will be able to talk to each other through automatically created DNS names clusterGeoTag to geographically tag your cluster. We are operating eu cluster in this example extGslbClustersGeoTags contains Geo tag of the cluster(s) to talk with when k8gb is deployed to multiple clusters. Imagine your second cluster is us so we tag it accordingly infoblox.enabled: true to enable automated zone delegation configuration at edgeDNS provider. You don't need it for local testing and can optionally be skipped. Meanwhile, in this section we will cover a fully operational end-to-end scenario. The other parameters do not need to be modified unless you want to do something special. E.g. to use images from private registry Export Infoblox related information in the shell. ``` export WAPIUSERNAME=<WAPIUSERNAME> export WAPIPASSWORD=<WAPIPASSWORD> ``` ``` kubectl create ns k8gb make infoblox-secret ``` Expose associated k8gb CoreDNS service for DNS traffic on worker nodes. Check this document for detailed information. Let's deploy k8gb to the first cluster. Most of the helper commands are abstracted by GNU make. If you want to look under the hood please check the Makefile. In general, standard Kubernetes/Helm commands are used. Point deployment mechanism to your custom values.yaml ``` make deploy-gslb-operator VALUES_YAML=~/k8gb/eu-cluster.yaml ``` ``` kubectl -n k8gb get pod NAME READY STATUS RESTARTS AGE k8gb-76cc56b55-t779s 1/1 Running 0 39s k8gb-coredns-799984c646-qz88m 1/1 Running 0 41s ``` Deploy k8gb to the second cluster by repeating the same steps with the exception of: When your 2nd cluster is ready by checking with kubectl -n k8gb get pod, we can proceed with the sample application installation We will use well known testing community app of podinfo ``` helm repo add podinfo https://stefanprodan.github.io/podinfo kubectl create ns test-gslb helm upgrade --install podinfo --namespace test-gslb --set ui.message=\"us\" podinfo/podinfo ``` As you can see above we did set special geo tag message in podinfo configuration matching cluster geo tag. It is just for demonstration purposes. 
``` kubectl -n test-gslb get pod NAME READY STATUS RESTARTS AGE podinfo-5cfcdc9c45-jbg96 1/1 Running 0 2m18s ``` ``` kubectl -n test-gslb get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE podinfo ClusterIP 10.96.250.84 <none> 9898/TCP,9999/TCP 9m39s ``` ``` apiVersion: k8gb.absa.oss/v1beta1 kind: Gslb metadata: name: podinfo namespace: test-gslb spec: ingress: ingressClassName: nginx rules: host: podinfo.cloud.example.com http: paths: path: / backend: service: name:" }, { "data": "# This should point to Service name of testing application port: name: http strategy: type: roundRobin # Use a round robin load balancing strategy, when deciding which downstream clusters to route clients too ``` ``` kubectl -n test-gslb apply -f podinfogslb.yaml gslb.k8gb.absa.oss/podinfo created ``` ``` kubectl -n test-gslb get gslb NAME AGE podinfo 39s ``` ``` kubectl -n test-gslb describe gslb Name: podinfo Namespace: test-gslb Labels: <none> Annotations: API Version: k8gb.absa.oss/v1beta1 Kind: Gslb Metadata: Creation Timestamp: 2020-06-24T22:51:09Z Finalizers: k8gb.absa.oss/finalizer Generation: 1 Resource Version: 14197 Self Link: /apis/k8gb.absa.oss/v1beta1/namespaces/test-gslb/gslbs/podinfo UID: 86d4121b-b870-434e-bd4d-fece681116f0 Spec: Ingress: Rules: Host: podinfo.cloud.example.com Http: Paths: Backend: Service Name: podinfo Service Port: http Path: / Strategy: Type: roundRobin Status: Geo Tag: us Healthy Records: podinfo.cloud.example.com: 172.17.0.10 172.17.0.7 172.17.0.8 Service Health: podinfo.cloud.example.com: Healthy Events: <none> ``` In the output above you should see that Gslb detected the Healthy status of underlying podinfo standard Kubernetes Service Check that internal k8gb DNS servers are responding accordingly on this cluster ``` k get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME test-gslb2-control-plane Ready master 53m v1.17.0 172.17.0.9 <none> Ubuntu 19.10 4.19.76-linuxkit containerd://1.3.2 test-gslb2-worker Ready <none> 52m v1.17.0 172.17.0.8 <none> Ubuntu 19.10 4.19.76-linuxkit containerd://1.3.2 test-gslb2-worker2 Ready <none> 52m v1.17.0 172.17.0.7 <none> Ubuntu 19.10 4.19.76-linuxkit containerd://1.3.2 test-gslb2-worker3 Ready <none> 52m v1.17.0 172.17.0.10 <none> Ubuntu 19.10 4.19.76-linuxkit containerd://1.3.2 ``` ``` dig +short @172.17.0.10 podinfo.cloud.example.com 172.17.0.8 172.17.0.10 172.17.0.7 ``` ``` dig +short podinfo.cloud.example.com 172.17.0.8 172.17.0.10 172.17.0.7 ``` Now it's time to deploy this application to the first eu cluster. The steps and configuration are exactly the same. 
The only difference is setting ui.message to eu: ``` kubectl create ns test-gslb helm upgrade --install podinfo --namespace test-gslb --set ui.message=\"eu\" podinfo/podinfo ``` ``` kubectl -n test-gslb apply -f podinfogslb.yaml ``` ``` k -n test-gslb describe gslb podinfo Name: podinfo Namespace: test-gslb Labels: <none> Annotations: API Version: k8gb.absa.oss/v1beta1 Kind: Gslb Metadata: Creation Timestamp: 2020-06-24T23:25:08Z Finalizers: k8gb.absa.oss/finalizer Generation: 1 Resource Version: 23881 Self Link: /apis/k8gb.absa.oss/v1beta1/namespaces/test-gslb/gslbs/podinfo UID: a5ab509b-5ea2-49d6-982e-4129a8410c3e Spec: Ingress: Rules: Host: podinfo.cloud.example.com Http: Paths: Backend: Service Name: podinfo Service Port: http Path: / Strategy: Type: roundRobin Status: Geo Tag: eu Healthy Records: podinfo.cloud.example.com: 172.17.0.3 172.17.0.5 172.17.0.6 172.17.0.8 172.17.0.10 172.17.0.7 Service Health: podinfo.cloud.example.com: Healthy Events: <none> ``` Ideally you should already see that the Healthy Records of podinfo.cloud.example.com contain the records from both clusters. Otherwise, give it a couple of minutes to sync up. Now you can check the DNS responses the same way as before. ``` dig +short podinfo.cloud.example.com 172.17.0.8 172.17.0.5 172.17.0.10 172.17.0.7 172.17.0.6 172.17.0.3 ``` ``` curl -s podinfo.cloud.example.com|grep message \"message\": \"eu\", curl -s podinfo.cloud.example.com|grep message \"message\": \"us\", curl -s podinfo.cloud.example.com|grep message \"message\": \"us\", curl -s podinfo.cloud.example.com|grep message \"message\": \"eu\", ``` Hope you enjoyed the ride! If anything is unclear or not working, feel free to contact us at https://github.com/k8gb-io/k8gb/issues. We appreciate any feedback or bug reports, and Pull Requests are welcome. For more advanced technical documentation and fully automated local installation steps, see below." } ]
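For reference, a minimal podinfogslb.yaml matching the resource used in this tutorial might look like the sketch below, applied via a heredoc. It keeps the same host, backend service, and round-robin strategy as above; the nginx ingress class is assumed to be present in the cluster, and pathType: Prefix is added here as an assumption for the networking/v1-style backend.

```bash
cat <<'EOF' | kubectl -n test-gslb apply -f -
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: podinfo
  namespace: test-gslb
spec:
  ingress:
    ingressClassName: nginx
    rules:
      - host: podinfo.cloud.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: podinfo   # Service name of the testing application
                  port:
                    name: http
  strategy:
    type: roundRobin   # round-robin load balancing across the downstream clusters
EOF
```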
{ "category": "Orchestration & Management", "file_name": "deploy_route53.html.md", "project_name": "k8gb", "subcategory": "Coordination & Service Discovery" }
[ { "data": "K8GB generates Prometheus-compatible metrics. Metrics endpoints are exposed via -metrics service in operator namespace and can be scraped by 3rd party tools: ``` spec: ... ports: name: http-metrics port: 8383 protocol: TCP targetPort: 8383 name: cr-metrics port: 8686 protocol: TCP targetPort: 8686 ``` Metrics can be also automatically discovered and monitored by Prometheus Operator via automatically generated ServiceMonitor CRDs , in case if Prometheus Operator is deployed into the cluster. controller-runtime standard metrics, extended with K8GB operator-specific metrics listed below: Number of healthy records observed by K8GB. Example: ``` k8gbgslbhealthy_records{name=\"test-gslb\",namespace=\"test-gslb\"} 6 ``` Number of ingress hosts per status (NotFound, Healthy, Unhealthy), observed by K8GB. Example: ``` k8gbgslbingresshostsper_status{name=\"test-gslb\",namespace=\"test-gslb\",status=\"Healthy\"} 1 k8gbgslbingresshostsper_status{name=\"test-gslb\",namespace=\"test-gslb\",status=\"NotFound\"} 1 k8gbgslbingresshostsper_status{name=\"test-gslb\",namespace=\"test-gslb\",status=\"Unhealthy\"} 2 ``` Served on 0.0.0.0:8383/metrics endpoint Info metrics, automatically exposed by operator based on the number of the current instances of an operator's custom resources in the cluster. Example: ``` gslb_info{namespace=\"test-gslb\",gslb=\"test-gslb\"} 1 ``` Served on 0.0.0.0:8686/metrics endpoint The k8gb exposes several metrics to help you monitor the health and behavior. | Metric | Type | Description | Labels | |:|:-|:|:-| | k8gbgslberrors_total | Counter | Number of errors | namespace, name | | k8gbgslbhealthy_records | Gauge | Number of healthy records observed by k8gb. | namespace, name | | k8gbgslbreconciliationloopstotal | Counter | Number of successful reconciliation loops. | namespace, name | | k8gbgslbservicestatusnum | Gauge | Number of managed hosts observed by k8gb. | namespace, name, status | | k8gbgslbstatuscountfor_failover | Gauge | Gslb status count for Failover strategy. | namespace, name, status | | k8gbgslbstatuscountfor_geoip | Gauge | Gslb status count for GeoIP strategy. | namespace, name, status | | k8gbgslbstatuscountfor_roundrobin | Gauge | Gslb status count for RoundRobin strategy. | namespace, name, status | | k8gbinfobloxheartbeaterrorstotal | Counter | Number of k8gb Infoblox TXT record errors. | namespace, name | | k8gbinfobloxheartbeats_total | Counter | Number of k8gb Infoblox heartbeat TXT record updates. | namespace, name | | k8gbinfobloxrequest_duration | Histogram | Duration of the HTTP request to Infoblox API in seconds. | request, success | | k8gbinfobloxzoneupdateerrors_total | Counter | Number of k8gb Infoblox zone update errors. | namespace, name | | k8gbinfobloxzoneupdatestotal | Counter | Number of k8gb Infoblox zone updates. | namespace, name | | k8gbendpointstatusnum | Gauge | Number of targets in DNS endpoint. | namespace, name, dnsname | | k8gbruntimeinfo | Gauge | K8gb runtime info. | namespace, k8gbversion, goversion, arch, os, git_sha | Optionally k8gb operator can expose traces in OpenTelemetry format to any available OTEL compliant tracing solution. Consult the following page for more details." } ]
{ "category": "Orchestration & Management", "file_name": "rancher.html.md", "project_name": "k8gb", "subcategory": "Coordination & Service Discovery" }
[ { "data": "The K8gb has been modified to be easily deployed using Rancher Fleet. All you need to supply is a fleet.yaml file and possibly expose the labels on your cluster. The following shows the rancher application that will be installed on the target cluster. The values k8gb-dnsZone, k8gb-clusterGeoTag, k8gb-extGslbClustersGeoTags will be taken from the labels that are set on the cluster. ``` defaultNamespace: k8gb kustomize: dir: overlays/kustomization labels: bundle: k8gb helm: repo: https://www.k8gb.io chart: k8gb version: v0.11.4 releaseName: k8gb values: k8gb: dnsZone: global.fleet.clusterLabels.k8gb-dnsZone edgeDNSZone: \"cloud.example.com\" edgeDNSServers: \"1.2.3.4\" \"5.6.7.8\" clusterGeoTag: global.fleet.clusterLabels.k8gb-clusterGeoTag extGslbClustersGeoTags: global.fleet.clusterLabels.k8gb-extGslbClustersGeoTags log: format: simple ```" } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "KubeBrain", "subcategory": "Coordination & Service Discovery" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "github-privacy-statement.md", "project_name": "Netflix Eureka", "subcategory": "Coordination & Service Discovery" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "KubeBrain", "subcategory": "Coordination & Service Discovery" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "understanding-github-code-search-syntax.md", "project_name": "KubeBrain", "subcategory": "Coordination & Service Discovery" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
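The qualifiers above can be combined in a single query. The example below is only an illustrative sketch: the repository name comes from the examples earlier in this section, while the symbol name detect is invented for the illustration. It combines the repo:, language:, path:, symbol: and is: qualifiers, the last one negated with NOT, all of which are described above.

```
repo:github-linguist/linguist language:ruby path:lib symbol:detect NOT is:fork
```

Because terms and qualifiers separated by whitespace are implicitly joined with AND, every condition in the query must hold for a file to match.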
{ "category": "Orchestration & Management", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Netflix Eureka", "subcategory": "Coordination & Service Discovery" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Apache Thrift", "subcategory": "Remote Procedure Call" }
[ { "data": "Each supported language needs the Apache Thrift Libraries and the generated code made by the Apache Thrift Compiler. Some language specific documentation is for the Apache Thrift Libraries are generated from lib/${language}/README.md files: For a quick introduction that covers a lot of Thrift knowledge on just one page, we recommended Diwaker Guptas Thrift: The Missing Guide. If you want to do a real deep dive into the various language bindings, consider Randy Abernethys The Programmers Guide to Apache Thrift. The book comes with a lot of inside knowlegde and is packed with practical examples." } ]
{ "category": "Orchestration & Management", "file_name": "BuildingFromSource.md", "project_name": "Apache Thrift", "subcategory": "Remote Procedure Call" }
[ { "data": "First make sure your system meets all necessary Apache Thrift Requirements If you are building from the first time out of the source repository, you will need to generate the configure scripts. (This is not necessary if you downloaded a released tarball.) From the top directory, do: ``` ./bootstrap.sh ``` Once the configure scripts are generated, thrift can be configured. From the top directory, do: ``` ./configure ``` Disable a language: ``` ./configure --without-java ``` You may need to specify the location of the boost files explicitly. If you installed boost in /usr/local, you would run configure as follows: ``` ./configure --with-boost=/usr/local ``` If you want to override the logic of the detection of the Java SDK, use the JAVAC environment variable: ``` ./configure JAVAC=/usb/bin/javac ``` Note that by default the thrift C++ library is typically built with debugging symbols included. If you want to customize these options you should use the CXXFLAGS option in configure, as such: ``` ./configure CXXFLAGS='-g -O2' ./configure CFLAGS='-g -O2' ./configure CPPFLAGS='-DDEBUGMYFEATURE' ``` To see other configuration options run ``` ./configure --help ``` Once you have run configure you can build Thrift via make: ``` make ``` and run the test suite: ``` make check ``` and the cross language test suite: ``` python3 test/test.py ``` you need to install the Flex library (See also Apache Thrift Requirements ) and re-run the configuration script. Re-reun configure with ``` --enable-libtool-lock ``` or by turning off parallel make by placing .NOTPARALLEL: in lib/cpp/Makefile or ``` make -j 1 ``` Although the thrift compiler build appears to be compatible with parallel make without libtool lock, the thrift runtime build is not. From the top directory, become superuser and do: ``` make install ``` Note that some language packages must be installed manually using build tools better suited to those languages (this applies to Java, Ruby, PHP). Look for the README file in the lib/<language>/ folder for more details on the installation of each language library package." } ]
{ "category": "Orchestration & Management", "file_name": "install.md", "project_name": "Apache Thrift", "subcategory": "Remote Procedure Call" }
[ { "data": "Apache Thrifts compiler is written in C++ and designed to be portable, but there are some system requirements which must be installed prior to use. Select your os below for a guide on setting up your system to get started These are only required if you choose to build the libraries for the given language" } ]
{ "category": "Orchestration & Management", "file_name": "HowToContribute.md", "project_name": "Apache Thrift", "subcategory": "Remote Procedure Call" }
[ { "data": "Thank you for your interest in contributing to the Apache Thrift project! Information on why and how to contribute is available on the Apache Software Foundation (ASF) web site. In particular, we recommend the following to become acquainted with Apache Contributions: This is the preferred method of submitting changes. When you submit a pull request through github, it activates the continuous integration (CI) build systems at Appveyor and Travis to build your changesxi on a variety of Linux and Windows configurations and run all the test suites. Follow these requirements for a successful pull request: All significant changes require an Apache Jira THRIFT Issue ticket. Trivial changes such as fixing a typo or a compiler warning do not. The pull request title must begin with the Jira THRIFT ticket identifier if it has an associated ticket, for example: ``` THRIFT-9999: an example pull request title ``` Commit messages must follow this pattern for code changes (deviations will not be merged): ``` THRIFT-9999: [summary of fix, one line if possible] Client: [language(s) affected, comma separated, for example: \"cpp,erl,perl\"] ``` Instructions: Modify the source to include the improvement/bugfix, and: For Windows systems, see our detailed instructions on the CMake README. For Windows Native C++ builds, see our detailed instructions on the WinCPP README. For unix systems, see our detailed instructions on the Docker README. To create a patch from changes in your local directory: ``` git diff > ../THRIFT-NNNN.patch ``` then wait for contributors or committers to review your changes, and then for a committer to apply your patch. This is not the preferred way to submit changes and incurs additional overhead for committers who must then create a pull request for you. Sometimes commmitters may ask you to take actions in your pull requests. Here are some recipes that will help you accomplish those requests. These examples assume you are working on Jira issue THRIFT-9999. You should also be familiar with the upstream repository concept. If you have not submitted a pull request yet, or if you have not yet rebased your existing pull request, you can squash all your commits down to a single commit. This makes life easier for the committers. If your pull request on GitHub has more than one commit, you should do this. If you already have a pull request outstanding, you will need to do a force push to overwrite it since you changed your commit history: ``` git push -u origin THRIFT-9999 --force ``` A more detailed walkthrough of a squash can be found at Git Ready. If your pull request has a conflict with master, it needs to be rebased: ``` git checkout THRIFT-9999 git rebase upstream master (resolve any conflicts, make sure it builds) git push -u origin THRIFT-9999 --force ``` If your pull request contains commits that are not yours, then you should use the following technique to fix the bad merge in your branch: ``` git checkout master git pull upstream master git checkout -b THRIFT-9999-take-2 git cherry-pick ... 
(pick only your commits from your original pull request in ascending chronological order) squash your changes to a single commit if there is more than one (see above) git push -u origin THRIFT-9999-take-2:THRIFT-9999 ``` This procedure will apply only your commits in order to the current master, then you will squash them to a single commit, and then you force push your local THRIFT-9999-take-2 into remote THRIFT-9999 which represents your pull request, replacing all the commits with the new one. This page was generated by Apache Thrift's source tree docs: CONTRIBUTING.md" } ]
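To make the workflow above concrete, here is a hedged sketch of preparing a topic branch and a commit that follows the required message pattern. THRIFT-9999, the edited file name, and the affected-language list are placeholders, and the origin/upstream remotes follow the usual fork setup the guide assumes; adapt them to your own ticket and changes.

```
# Create a topic branch named after the Jira ticket (placeholder number).
git checkout -b THRIFT-9999 upstream/master

# ... edit sources and tests ...
git add lib/cpp/src/example.cpp        # hypothetical file

# Commit using the required message pattern: summary line, then the Client: line.
git commit -m "THRIFT-9999: fix example crash on empty input" \
           -m "Client: cpp"

# Push the branch to your fork and open the pull request on GitHub.
git push -u origin THRIFT-9999
```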
{ "category": "Orchestration & Management", "file_name": "types.md", "project_name": "Apache Thrift", "subcategory": "Remote Procedure Call" }
[ { "data": "The Thrift type system is intended to allow programmers to use native types as much as possible, no matter what programming language they are working in. This information is based on, and supersedes, the information in the Thrift Whitepaper. The Thrift IDL provides descriptions of the types which are used to generate code for each target language. The base types were selected with the goal of simplicity and clarity rather than abundance, focusing on the key types available in all programming languages. Note the absence of unsigned integer types. This is due to the fact that there are no native unsigned integer types in many programming languages. binary: a sequence of unencoded bytes N.B.: This is currently a specialized form of the string type above, added to provide better interoperability with Java. The current plan-of-record is to elevate this to a base type at some point. Thrift structs define a common object they are essentially equivalent to classes in OOP languages, but without inheritance. A struct has a set of strongly typed fields, each with a unique name identifier. Fields may have various annotations (numeric field IDs, optional default values, etc.) that are described in the Thrift IDL. Thrift containers are strongly typed containers that map to commonly used and commonly available container types in most programming languages. There are three container types: Container elements may be of any valid Thrift Type. N.B.: For maximal compatibility, the key type for map should be a basic type rather than a struct or container type. There are some languages which do not support more complex key types in their native map types. In addition the JSON protocol only supports key types that are base types. Exceptions are functionally equivalent to structs, except that they inherit from the native exception base class as appropriate in each target programming language, in order to seamlessly integrate with the native exception handling in any given language. Services are defined using Thrift types. Definition of a service is semantically equivalent to defining an interface (or a pure virtual abstract class) in object oriented programming. The Thrift compiler generates fully functional client and server stubs that implement the interface. A service consists of a set of named functions, each with a list of parameters and a return type. Note that void is a valid type for a function return, in addition to all other defined Thrift types. Additionally, an oneway modifier keyword may be added to a void function, which will generate code that does not wait for a response. Note that a pure void function will return a response to the client which guarantees that the operation has completed on the server side. With oneway method calls the client will only be guaranteed that the request succeeded at the transport layer. Oneway method calls of the same client may be executed in parallel/out of order by the server." } ]
{ "category": "Orchestration & Management", "file_name": "idl.md", "project_name": "Apache Thrift", "subcategory": "Remote Procedure Call" }
[ { "data": "Thank you for your interest in contributing to the Apache Thrift project! Information on why and how to contribute is available on the Apache Software Foundation (ASF) web site. In particular, we recommend the following to become acquainted with Apache Contributions: This is the preferred method of submitting changes. When you submit a pull request through github, it activates the continuous integration (CI) build systems at Appveyor and Travis to build your changesxi on a variety of Linux and Windows configurations and run all the test suites. Follow these requirements for a successful pull request: All significant changes require an Apache Jira THRIFT Issue ticket. Trivial changes such as fixing a typo or a compiler warning do not. The pull request title must begin with the Jira THRIFT ticket identifier if it has an associated ticket, for example: ``` THRIFT-9999: an example pull request title ``` Commit messages must follow this pattern for code changes (deviations will not be merged): ``` THRIFT-9999: [summary of fix, one line if possible] Client: [language(s) affected, comma separated, for example: \"cpp,erl,perl\"] ``` Instructions: Modify the source to include the improvement/bugfix, and: For Windows systems, see our detailed instructions on the CMake README. For Windows Native C++ builds, see our detailed instructions on the WinCPP README. For unix systems, see our detailed instructions on the Docker README. To create a patch from changes in your local directory: ``` git diff > ../THRIFT-NNNN.patch ``` then wait for contributors or committers to review your changes, and then for a committer to apply your patch. This is not the preferred way to submit changes and incurs additional overhead for committers who must then create a pull request for you. Sometimes commmitters may ask you to take actions in your pull requests. Here are some recipes that will help you accomplish those requests. These examples assume you are working on Jira issue THRIFT-9999. You should also be familiar with the upstream repository concept. If you have not submitted a pull request yet, or if you have not yet rebased your existing pull request, you can squash all your commits down to a single commit. This makes life easier for the committers. If your pull request on GitHub has more than one commit, you should do this. If you already have a pull request outstanding, you will need to do a force push to overwrite it since you changed your commit history: ``` git push -u origin THRIFT-9999 --force ``` A more detailed walkthrough of a squash can be found at Git Ready. If your pull request has a conflict with master, it needs to be rebased: ``` git checkout THRIFT-9999 git rebase upstream master (resolve any conflicts, make sure it builds) git push -u origin THRIFT-9999 --force ``` If your pull request contains commits that are not yours, then you should use the following technique to fix the bad merge in your branch: ``` git checkout master git pull upstream master git checkout -b THRIFT-9999-take-2 git cherry-pick ... 
(pick only your commits from your original pull request in ascending chronological order) squash your changes to a single commit if there is more than one (see above) git push -u origin THRIFT-9999-take-2:THRIFT-9999 ``` This procedure will apply only your commits in order to the current master, then you will squash them to a single commit, and then you force push your local THRIFT-9999-take-2 into remote THRIFT-9999 which represents your pull request, replacing all the commits with the new one. This page was generated by Apache Thrift's source tree docs: CONTRIBUTING.md" } ]
{ "category": "Orchestration & Management", "file_name": "getting-started-java.md", "project_name": "Avro", "subcategory": "Remote Procedure Call" }
[ { "data": "11 minute read This is a short guide for getting started with Apache Avro using Java. This guide only covers using Avro for data serialization; see Patrick Hunts Avro RPC Quick Start for a good introduction to using Avro for RPC. Avro implementations for C, C++, C#, Java, PHP, Python, and Ruby can be downloaded from the Apache Avro Download page. This guide uses Avro 1.11.1, the latest version at the time of writing. For the examples in this guide, download avro-1.11.1.jar and avro-tools-1.11.1.jar. Alternatively, if you are using Maven, add the following dependency to your POM: ``` <dependency> <groupId>org.apache.avro</groupId> <artifactId>avro</artifactId> <version>1.11.1</version> </dependency> ``` As well as the Avro Maven plugin (for performing code generation): ``` <plugin> <groupId>org.apache.avro</groupId> <artifactId>avro-maven-plugin</artifactId> <version>1.11.1</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>schema</goal> </goals> <configuration> <sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory> <outputDirectory>${project.basedir}/src/main/java/</outputDirectory> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.8</source> <target>1.8</target> </configuration> </plugin> ``` You may also build the required Avro jars from source. Building Avro is beyond the scope of this guide; see the Build Documentation page in the wiki for more information. Avro schemas are defined using JSON. Schemas are composed of primitive types (null, boolean, int, long, float, double, bytes, and string) and complex types (record, enum, array, map, union, and fixed). You can learn more about Avro schemas and types from the specification, but for now lets start with a simple schema example, user.avsc: ``` {\"namespace\": \"example.avro\", \"type\": \"record\", \"name\": \"User\", \"fields\": [ {\"name\": \"name\", \"type\": \"string\"}, {\"name\": \"favorite_number\", \"type\": [\"int\", \"null\"]}, {\"name\": \"favorite_color\", \"type\": [\"string\", \"null\"]} ] } ``` This schema defines a record representing a hypothetical user. (Note that a schema file can only contain a single schema definition.) At minimum, a record definition must include its type (type: record), a name (name: User), and fields, in this case name, favoritenumber, and favoritecolor. We also define a namespace (namespace: example.avro), which together with the name attribute defines the full name of the schema (example.avro.User in this case). Fields are defined via an array of objects, each of which defines a name and type (other attributes are optional, see the record specification for more details). The type attribute of a field is another schema object, which can be either a primitive or complex type. For example, the name field of our User schema is the primitive type string, whereas the favoritenumber and favoritecolor fields are both unions, represented by JSON arrays. unions are a complex type that can be any of the types listed in the array; e.g., favorite_number can either be an int or null, essentially making it an optional field. Code generation allows us to automatically create classes based on our previously-defined schema. Once we have defined the relevant classes, there is no need to use the schema directly in our programs. 
We use the avro-tools jar to generate code as follows: ``` java -jar /path/to/avro-tools-1.11.1.jar compile schema <schema file> <destination> ``` This will generate the appropriate source files in a package based on the schemas namespace in the provided destination folder. For instance, to generate a User class in package example.avro from the schema defined above, run ``` java -jar /path/to/avro-tools-1.11.1.jar compile schema user.avsc . ``` Note that if you using the Avro Maven plugin, there is no need to manually invoke the schema compiler; the plugin automatically performs code generation on any .avsc files present in the configured source" }, { "data": "Now that weve completed the code generation, lets create some Users, serialize them to a data file on disk, and then read back the file and deserialize the User objects. First lets create some Users and set their fields. ``` User user1 = new User(); user1.setName(\"Alyssa\"); user1.setFavoriteNumber(256); // Leave favorite color null // Alternate constructor User user2 = new User(\"Ben\", 7, \"red\"); // Construct via builder User user3 = User.newBuilder() .setName(\"Charlie\") .setFavoriteColor(\"blue\") .setFavoriteNumber(null) .build(); ``` As shown in this example, Avro objects can be created either by invoking a constructor directly or by using a builder. Unlike constructors, builders will automatically set any default values specified in the schema. Additionally, builders validate the data as it set, whereas objects constructed directly will not cause an error until the object is serialized. However, using constructors directly generally offers better performance, as builders create a copy of the datastructure before it is written. Note that we do not set user1s favorite color. Since that record is of type [string, null], we can either set it to a string or leave it null; it is essentially optional. Similarly, we set user3s favorite number to null (using a builder requires setting all fields, even if they are null). Now lets serialize our Users to disk. ``` // Serialize user1, user2 and user3 to disk DatumWriter<User> userDatumWriter = new SpecificDatumWriter<User>(User.class); DataFileWriter<User> dataFileWriter = new DataFileWriter<User>(userDatumWriter); dataFileWriter.create(user1.getSchema(), new File(\"users.avro\")); dataFileWriter.append(user1); dataFileWriter.append(user2); dataFileWriter.append(user3); dataFileWriter.close(); ``` We create a DatumWriter, which converts Java objects into an in-memory serialized format. The SpecificDatumWriter class is used with generated classes and extracts the schema from the specified generated type. Next we create a DataFileWriter, which writes the serialized records, as well as the schema, to the file specified in the dataFileWriter.create call. We write our users to the file via calls to the dataFileWriter.append method. When we are done writing, we close the data file. Finally, lets deserialize the data file we just created. ``` // Deserialize Users from disk DatumReader<User> userDatumReader = new SpecificDatumReader<User>(User.class); DataFileReader<User> dataFileReader = new DataFileReader<User>(file, userDatumReader); User user = null; while (dataFileReader.hasNext()) { // Reuse user object by passing it to next(). This saves us from // allocating and garbage collecting many objects for files with // many items. 
user = dataFileReader.next(user); System.out.println(user); } ``` This snippet will output: ``` {\"name\": \"Alyssa\", \"favoritenumber\": 256, \"favoritecolor\": null} {\"name\": \"Ben\", \"favoritenumber\": 7, \"favoritecolor\": \"red\"} {\"name\": \"Charlie\", \"favoritenumber\": null, \"favoritecolor\": \"blue\"} ``` Deserializing is very similar to serializing. We create a SpecificDatumReader, analogous to the SpecificDatumWriter we used in serialization, which converts in-memory serialized items into instances of our generated class, in this case User. We pass the DatumReader and the previously created File to a DataFileReader, analogous to the DataFileWriter, which reads both the schema used by the writer as well as the data from the file on disk. The data will be read using the writers schema included in the file and the schema provided by the reader, in this case the User class. The writers schema is needed to know the order in which fields were written, while the readers schema is needed to know what fields are expected and how to fill in default values for fields added since the file was written. If there are differences between the two schemas, they are resolved according to the Schema Resolution specification. Next we use the DataFileReader to iterate through the serialized Users and print the deserialized object to" }, { "data": "Note how we perform the iteration: we create a single User object which we store the current deserialized user in, and pass this record object to every call of dataFileReader.next. This is a performance optimization that allows the DataFileReader to reuse the same User object rather than allocating a new User for every iteration, which can be very expensive in terms of object allocation and garbage collection if we deserialize a large data file. While this technique is the standard way to iterate through a data file, its also possible to use for (User user : dataFileReader) if performance is not a concern. This example code is included as a Maven project in the examples/java-example directory in the Avro docs. From this directory, execute the following commands to build and run the example: ``` $ mvn compile # includes code generation via Avro Maven plugin $ mvn -q exec:java -Dexec.mainClass=example.SpecificMain ``` In release 1.9.0, we introduced a new approach to generating code that speeds up decoding of objects by more than 10% and encoding by more than 30% (future performance enhancements are underway). To ensure a smooth introduction of this change into production systems, this feature is controlled by a feature flag, the system property org.apache.avro.specific.usecustomcoders. In this first release, this feature is off by default. To turn it on, set the system flag to true at runtime. In the sample above, for example, you could enable the fater coders as follows: $ mvn -q exec:java -Dexec.mainClass=example.SpecificMain-Dorg.apache.avro.specific.usecustomcoders=true Note that you do not have to recompile your Avro schema to have access to this feature. The feature is compiled and built into your code, and you turn it on and off at runtime using the feature flag. As a result, you can turn it on during testing, for example, and then off in production. Or you can turn it on in production, and quickly turn it off if something breaks. We encourage the Avro community to exercise this new feature early to help build confidence. (For those paying one-demand for compute resources in the cloud, it can lead to meaningful cost savings.) 
As confidence builds, we will turn this feature on by default, and eventually eliminate the feature flag (and the old code). Data in Avro is always stored with its corresponding schema, meaning we can always read a serialized item regardless of whether we know the schema ahead of time. This allows us to perform serialization and deserialization without code generation. Lets go over the same example as in the previous section, but without using code generation: well create some users, serialize them to a data file on disk, and then read back the file and deserialize the users objects. First, we use a Parser to read our schema definition and create a Schema object. ``` Schema schema = new Schema.Parser().parse(new File(\"user.avsc\")); ``` Using this schema, lets create some users. ``` GenericRecord user1 = new GenericData.Record(schema); user1.put(\"name\", \"Alyssa\"); user1.put(\"favorite_number\", 256); // Leave favorite color null GenericRecord user2 = new GenericData.Record(schema); user2.put(\"name\", \"Ben\"); user2.put(\"favorite_number\", 7); user2.put(\"favorite_color\", \"red\"); ``` Since were not using code generation, we use GenericRecords to represent users. GenericRecord uses the schema to verify that we only specify valid fields. If we try to set a non-existent field (e.g., user1.put(favorite_animal, cat)), well get an AvroRuntimeException when we run the program. Note that we do not set user1s favorite color. Since that record is of type [string, null], we can either set it to a string or leave it null; it is essentially" }, { "data": "Now that weve created our user objects, serializing and deserializing them is almost identical to the example above which uses code generation. The main difference is that we use generic instead of specific readers and writers. First well serialize our users to a data file on disk. ``` // Serialize user1 and user2 to disk File file = new File(\"users.avro\"); DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<GenericRecord>(schema); DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<GenericRecord>(datumWriter); dataFileWriter.create(schema, file); dataFileWriter.append(user1); dataFileWriter.append(user2); dataFileWriter.close(); ``` We create a DatumWriter, which converts Java objects into an in-memory serialized format. Since we are not using code generation, we create a GenericDatumWriter. It requires the schema both to determine how to write the GenericRecords and to verify that all non-nullable fields are present. As in the code generation example, we also create a DataFileWriter, which writes the serialized records, as well as the schema, to the file specified in the dataFileWriter.create call. We write our users to the file via calls to the dataFileWriter.append method. When we are done writing, we close the data file. Finally, well deserialize the data file we just created. ``` // Deserialize users from disk DatumReader<GenericRecord> datumReader = new GenericDatumReader<GenericRecord>(schema); DataFileReader<GenericRecord> dataFileReader = new DataFileReader<GenericRecord>(file, datumReader); GenericRecord user = null; while (dataFileReader.hasNext()) { // Reuse user object by passing it to next(). This saves us from // allocating and garbage collecting many objects for files with // many items. 
user = dataFileReader.next(user); System.out.println(user); ``` This outputs: ``` {\"name\": \"Alyssa\", \"favoritenumber\": 256, \"favoritecolor\": null} {\"name\": \"Ben\", \"favoritenumber\": 7, \"favoritecolor\": \"red\"} ``` Deserializing is very similar to serializing. We create a GenericDatumReader, analogous to the GenericDatumWriter we used in serialization, which converts in-memory serialized items into GenericRecords. We pass the DatumReader and the previously created File to a DataFileReader, analogous to the DataFileWriter, which reads both the schema used by the writer as well as the data from the file on disk. The data will be read using the writers schema included in the file, and the readers schema provided to the GenericDatumReader. The writers schema is needed to know the order in which fields were written, while the readers schema is needed to know what fields are expected and how to fill in default values for fields added since the file was written. If there are differences between the two schemas, they are resolved according to the Schema Resolution specification. Next, we use the DataFileReader to iterate through the serialized users and print the deserialized object to stdout. Note how we perform the iteration: we create a single GenericRecord object which we store the current deserialized user in, and pass this record object to every call of dataFileReader.next. This is a performance optimization that allows the DataFileReader to reuse the same record object rather than allocating a new GenericRecord for every iteration, which can be very expensive in terms of object allocation and garbage collection if we deserialize a large data file. While this technique is the standard way to iterate through a data file, its also possible to use for (GenericRecord user : dataFileReader) if performance is not a concern. This example code is included as a Maven project in the examples/java-example directory in the Avro docs. From this directory, execute the following commands to build and run the example: ``` $ mvn compile $ mvn -q exec:java -Dexec.mainClass=example.GenericMain ``` Apache Avro, Avro, Apache, and the Apache feather logo are either registered trademarks or trademarks of The Apache Software Foundation." } ]
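The schema-resolution paragraphs above describe how the writer's and reader's schemas are reconciled but show no code for supplying a reader schema explicitly. The following hedged Java sketch does so with a GenericDatumReader; the file name user_v2.avsc and the idea that it is an evolved copy of user.avsc with a defaulted extra field are assumptions for illustration only.

```java
import java.io.File;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class ReadWithReaderSchema {
    public static void main(String[] args) throws Exception {
        // Hypothetical evolved schema, e.g. user.avsc plus a new field that declares a default value.
        Schema readerSchema = new Schema.Parser().parse(new File("user_v2.avsc"));

        // The writer schema is left null here; DataFileReader fills it in from the file header,
        // and the two schemas are then resolved per the Schema Resolution rules.
        GenericDatumReader<GenericRecord> datumReader =
            new GenericDatumReader<>(null, readerSchema);

        try (DataFileReader<GenericRecord> fileReader =
                 new DataFileReader<>(new File("users.avro"), datumReader)) {
            for (GenericRecord user : fileReader) {
                System.out.println(user);   // records are shaped by the reader schema
            }
        }
    }
}
```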
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Avro", "subcategory": "Remote Procedure Call" }
[ { "data": "Avro is a data serialization system. Avro provides: Avro relies on schemas. When Avro data is read, the schema used when writing it is always present. This permits each datum to be written with no per-value overheads, making serialization both fast and small. This also facilitates use with dynamic, scripting languages, since data, together with its schema, is fully self-describing. When Avro data is stored in a file, its schema is stored with it, so that files may be processed later by any program. If the program reading the data expects a different schema this can be easily resolved, since both schemas are present. When Avro is used in RPC, the client and server exchange schemas in the connection handshake. (This can be optimized so that, for most calls, no schemas are actually transmitted.) Since both client and server both have the other's full schema, correspondence between same named fields, missing fields, extra fields, etc. can all be easily resolved. Avro schemas are defined with with JSON . This facilitates implementation in languages that already have JSON libraries. Avro provides functionality similar to systems such as Thrift, Protocol Buffers, etc. Avro differs from these systems in the following fundamental aspects." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "CloudWeGo", "subcategory": "Remote Procedure Call" }
[ { "data": "This doc covers architecture design, features and performance of Kitex. This document covers the preparation of the development environment, quick start and basic tutorials of Kitex. Kitex Features Guide, including basic features, governance features, advanced features, code generation, framework extensions and options. Kitex best practices in production environments. Kitex Frequently Asked Questions and corresponding Answers. Was this page helpful? Please tell us how we can improve. Please tell us how we can improve. About | License" } ]
{ "category": "Orchestration & Management", "file_name": "getting-started-python.md", "project_name": "Avro", "subcategory": "Remote Procedure Call" }
[ { "data": "5 minute read This is a short guide for getting started with Apache Avro using Python. This guide only covers using Avro for data serialization; see Patrick Hunts Avro RPC Quick Start for a good introduction to using Avro for RPC. A package called avro-python3 had been provided to support Python 3 previously, but the codebase was consolidated into the avro package and that supports both Python 2 and 3 now. The avro-python3 package will be removed in the near future, so users should use the avro package instead. They are mostly API compatible, but theres a few minor difference (e.g., function name capitalization, such as avro.schema.Parse vs avro.schema.parse). For Python, the easiest way to get started is to install it from PyPI. Pythons Avro API is available over PyPi. ``` $ python3 -m pip install avro ``` The official releases of the Avro implementations for C, C++, C#, Java, PHP, Python, and Ruby can be downloaded from the Apache Avro Releases page. This guide uses Avro 1.11.1, the latest version at the time of writing. Download and unzip avro-1.11.1.tar.gz, and install via python setup.py (this will probably require root privileges). Ensure that you can import avro from a Python prompt. ``` $ tar xvf avro-1.11.1.tar.gz $ cd avro-1.11.1 $ python setup.py install $ python >> import avro # should not raise ImportError ``` Alternatively, you may build the Avro Python library from source. From your the root Avro directory, run the commands ``` $ cd lang/py/ $ python3 -m pip install -e . $ python ``` Avro schemas are defined using JSON. Schemas are composed of primitive types (null, boolean, int, long, float, double, bytes, and string) and complex types (record, enum, array, map, union, and fixed). You can learn more about Avro schemas and types from the specification, but for now lets start with a simple schema example, user.avsc: ``` {\"namespace\": \"example.avro\", \"type\": \"record\", \"name\": \"User\", \"fields\": [ {\"name\": \"name\", \"type\": \"string\"}, {\"name\": \"favorite_number\", \"type\": [\"int\", \"null\"]}, {\"name\": \"favorite_color\", \"type\": [\"string\", \"null\"]} ] } ``` This schema defines a record representing a hypothetical user. (Note that a schema file can only contain a single schema definition.) At minimum, a record definition must include its type (type: record), a name (name: User), and fields, in this case name, favoritenumber, and favoritecolor. We also define a namespace (namespace: example.avro), which together with the name attribute defines the full name of the schema (example.avro.User in this case). Fields are defined via an array of objects, each of which defines a name and type (other attributes are optional, see the record specification for more details). The type attribute of a field is another schema object, which can be either a primitive or complex" }, { "data": "For example, the name field of our User schema is the primitive type string, whereas the favoritenumber and favoritecolor fields are both unions, represented by JSON arrays. unions are a complex type that can be any of the types listed in the array; e.g., favorite_number can either be an int or null, essentially making it an optional field. Data in Avro is always stored with its corresponding schema, meaning we can always read a serialized item, regardless of whether we know the schema ahead of time. This allows us to perform serialization and deserialization without code generation. Note that the Avro Python library does not support code generation. 
Try running the following code snippet, which serializes two users to a data file on disk, and then reads back and deserializes the data file: ``` import avro.schema from avro.datafile import DataFileReader, DataFileWriter from avro.io import DatumReader, DatumWriter schema = avro.schema.parse(open(\"user.avsc\", \"rb\").read()) writer = DataFileWriter(open(\"users.avro\", \"wb\"), DatumWriter(), schema) writer.append({\"name\": \"Alyssa\", \"favorite_number\": 256}) writer.append({\"name\": \"Ben\", \"favorite_number\": 7, \"favorite_color\": \"red\"}) writer.close() reader = DataFileReader(open(\"users.avro\", \"rb\"), DatumReader()) for user in reader: print(user) reader.close() ``` This outputs: ``` {u'favorite_color': None, u'favorite_number': 256, u'name': u'Alyssa'} {u'favorite_color': u'red', u'favorite_number': 7, u'name': u'Ben'} ``` Do make sure that you open your files in binary mode (i.e. using the modes wb or rb respectively). Otherwise you might generate corrupt files due to automatic replacement of newline characters with the platform-specific representations. Let's take a closer look at what's going on here. ``` schema = avro.schema.parse(open(\"user.avsc\", \"rb\").read()) ``` avro.schema.parse takes a string containing a JSON schema definition as input and outputs an avro.schema.Schema object (specifically a subclass of Schema, in this case RecordSchema). We're passing in the contents of our user.avsc schema file here. ``` writer = DataFileWriter(open(\"users.avro\", \"wb\"), DatumWriter(), schema) ``` We create a DataFileWriter, which we'll use to write serialized items to a data file on disk. The DataFileWriter constructor takes three arguments: We use DataFileWriter.append to add items to our data file. Avro records are represented as Python dicts. Since the field favorite_color has type [string, null], we are not required to specify this field, as shown in the first append. Were we to omit the required name field, an exception would be raised. Any extra entries in the dict that do not correspond to a field are ignored. ``` reader = DataFileReader(open(\"users.avro\", \"rb\"), DatumReader()) ``` We open the file again, this time for reading back from disk. We use a DataFileReader and DatumReader analogous to the DataFileWriter and DatumWriter above. ``` for user in reader: print(user) ``` The DataFileReader is an iterator that returns dicts corresponding to the serialized items. Apache Avro, Avro, Apache, and the Apache feather logo are either registered trademarks or trademarks of The Apache Software Foundation." } ]
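The examples above always go through the Avro container-file layer. As a hedged complement, the sketch below serializes a single record to raw Avro binary in memory with DatumWriter/BinaryEncoder and reads it back; unlike a data file, no schema is embedded, so both sides must agree on the schema out of band. The module paths follow the avro package described above but may differ slightly between releases.

```python
import io

import avro.schema
from avro.io import BinaryDecoder, BinaryEncoder, DatumReader, DatumWriter

schema = avro.schema.parse(open("user.avsc", "rb").read())

# Encode one record to raw bytes; no file container and no embedded schema.
buffer = io.BytesIO()
DatumWriter(schema).write({"name": "Alyssa", "favorite_number": 256}, BinaryEncoder(buffer))
raw_bytes = buffer.getvalue()

# Decode it again using the same schema.
decoded = DatumReader(schema).read(BinaryDecoder(io.BytesIO(raw_bytes)))
print(decoded)
```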
{ "category": "Orchestration & Management", "file_name": "intro.md", "project_name": "Easy-Ngo", "subcategory": "Remote Procedure Call" }
[ { "data": "easy-ngoGoeasy-ngoeasy-ngo easy-ngo 2020GoGo Java Go200911Google GoWebeasy-ngo easy-ngo easy-ngoeasy-ngo easy-ngo HelloWorldeasy-ngo githubclone ``` git clone https://github.com/NetEase-Media/easy-ngo.git``` sample ``` cd examples/application``` ``` package mainimport ( \"net/http\" \"github.com/NetEase-Media/easy-ngo/application\" \"github.com/NetEase-Media/easy-ngo/application/r/rconfig\" \"github.com/NetEase-Media/easy-ngo/application/r/rgin\" \"github.com/NetEase-Media/easy-ngo/examples/application/include\" \"github.com/gin-gonic/gin\")func main() { app := application.Default() app.Initialize(xgin) app.Startup()}func xgin() error { g := rgin.Gin() g.GET(\"/hello\", func(ctx *gin.Context) { ctx.String(http.StatusOK, \"hello world!\") }) return nil}``` ``` [ngo.app]name = \"quickstart-demo\"[ngo.server.gin]port = 8888enabledMetric = false[ngo.app.healthz]port = 10000``` ``` go run . -c ./app.toml``` So Coolexamples easy-ngo ``` https://github.com/NetEase-Media/easy-ngo-examples```" } ]
{ "category": "Orchestration & Management", "file_name": "reference.md", "project_name": "go-zero", "subcategory": "Remote Procedure Call" }
[ { "data": "api api API ``` syntax = \"v1\"info ( title: \"api \" desc: \" api \" author: \"keson.an\" date: \"2022 12 26 \" version: \"v1\")type UpdateReq { Arg1 string `json:\"arg1\"`}type ListItem { Value1 string `json:\"value1\"`}type LoginReq { Username string `json:\"username\"` Password string `json:\"password\"`}type LoginResp { Name string `json:\"name\"`}type FormExampleReq { Name string `form:\"name\"`}type PathExampleReq { // path id // id service :id ID string `path:\"id\"`}type PathExampleResp { Name string `json:\"name\"`}@server ( jwt: Auth // Foo jwt prefix: /v1 // Foo /v1 group: g1 // Foo g1 timeout: 3s // Foo middleware: AuthInterceptor // Foo maxBytes: 1048576 // Foo byte,goctl >= 1.5.0 )service Foo { // ping @handler ping get /ping // @handler update post /update (UpdateReq) // @handler list get /list returns ([]ListItem) // @handler login post /login (LoginReq) returns (LoginResp) // @handler formExample post /form/example (FormExampleReq) // path @handler pathExample get /path/example/:id (PathExampleReq) returns (PathExampleResp)}```" } ]
{ "category": "Orchestration & Management", "file_name": "tasks.md", "project_name": "go-zero", "subcategory": "Remote Procedure Call" }
[ { "data": "api go-zero api api HTTP api info Golang struct api EBNF ``` Syntax = { Production } .Production = productionname \"=\" [ Expression ] \".\" .Expression = Term { \"|\" Term } .Term = Factor { Factor } .Factor = productionname | token [ \"\" token ] | Group | Option | Repetition .Group = \"(\" Expression \")\" .Option = \"[\" Expression \"]\" .Repetition = \"{\" Expression \"}\" .``` Production Term ``` | alternation() grouping[] option (0 or 1 times){} repetition (0 to n times)``` a...b a b 0...9 0 9 . ENBF token token ``` // tokennumber = \"0\"...\"9\" .lower_letter = \"a\"...\"z\" .// tokenDataType = TypeLit | TypeGroup .TypeLit = TypeAlias | TypeStruct .``` api ``` newline = / Unicode U+000A / .unicodechar = /* newline Unicode */ .unicodeletter = / a...z|A...Z Unicode / .unicode_digit = / 0...9 Unicode / .``` _ (U+005F) ``` letter = \"A\"...\"Z\" | \"a\"...\"z\" | \"\" .decimaldigit = \"0\" \"9\" .``` Abstract Syntax TreeASTSyntax tree if-condition-then Lexical Analysistokenlexical analyzer lexerscanner api Token api 2 // ``` // ``` / / ``` ////``` Token identifierkeywordoperatorpunctuationliteralWhite spaceU+0020U+0009U+000D U+000A api Token operator Token Golang ``` type Token struct { Type Type Text string Position Position}type Position struct { Filename string Line int Column int}``` api syntax=\"v1\" | | | |:-|:-| | syntax | | | = | | | \"v1\" | | ID ID 1 n _ EBNF ``` identifier = letter { letter | unicode_digit } .``` ID ``` a_a1GoZero``` ID api Golang ID ``` : any bool byte comparable complex64 complex128 error float32 float64 int int8 int16 int32 int64 rune string uint uint8 uint16 uint32 uint64 uintptr: true false iota: nil: append cap close complex copy delete imag len make new panic print println real recover``` ID api Golang Golang Golang ``` break default func interface selectcase defer go map structchan else goto package switchconst fallthrough if range typecontinue for import return var``` Token api ``` , ( )* . [ ]/ ; { }= : , ;...``` api Golang 2 raw string `foo` \"foo\" api \\\" ``` stringlit = rawstringlit | interpretedstringlit .rawstringlit = \"`\" { unicodechar | newline } \"`\" .interpretedstringlit = `\"` { unicodevalue | bytevalue } `\"` .``` ``` // ```foo``bar``json:\"baz\"`// \"\"\"foo\"\"bar\"``` Syntax AnalysisNodeExpressionStatement Node Token Golang ``` // Node represents a node in the AST.type Node interface { // Pos returns the position of the first character belonging to the node. Pos() token.Position // End returns the position of the first character immediately after the node. End() token.Position // Format returns the node's text after format. Format(...string) string // HasHeadCommentGroup returns true if the node has head comment group. HasHeadCommentGroup() bool // HasLeadingCommentGroup returns true if the node has leading comment group. HasLeadingCommentGroup() bool // CommentGroup returns the node's head comment group and leading comment group. 
CommentGroup() (head, leading CommentGroup)}``` Expression api api Golang ``` // Expr represents an expression in the AST.type Expr interface { Node exprNode()}``` Statement api api Golang ``` // Stmt represents a statement in the AST.type Stmt interface { Node stmtNode()}``` api AST ``` api = SyntaxStmt | InfoStmt | { ImportStmt } | { TypeStmt } | { ServiceStmt } .``` syntax api v1 syntax EBNF ``` SyntaxStmt = \"syntax\" \"=\" \"v1\" .``` syntax ``` syntax = \"v1\"``` info api meta api syntax info api info EBNF ``` InfoStmt = \"info\" \"(\" { InfoKeyValueExpr } \")\" .InfoKeyValueExpr = InfoKeyLit [ interpretedstringlit ] .InfoKeyLit = identifier \":\"" }, { "data": "info ``` // key-value info info ()// key-value info info ( foo: \"bar\" bar:)``` import api api / package EBNF ``` ImportStmt = ImportLiteralStmt | ImportGroupStmt .ImportLiteralStmt = \"import\" interpretedstringlit .ImportGroupStmt = \"import\" \"(\" { interpretedstringlit } \")\" .``` import ``` // importimport \"foo\"import \"/path/to/file\"// import import ()import ( \"bar\" \"relative/to/file\")``` api Golang rest / EBNF ``` TypeStmt = TypeLiteralStmt | TypeGroupStmt .TypeLiteralStmt = \"type\" TypeExpr .TypeGroupStmt = \"type\" \"(\" { TypeExpr } \")\" .TypeExpr = identifier [ \"=\" ] DataType .DataType = AnyDataType | ArrayDataType | BaseDataType | InterfaceDataType | MapDataType | PointerDataType | SliceDataType | StructDataType .AnyDataType = \"any\" .ArrayDataType = \"[\" { decimaldigit } \"]\" DataType .BaseDataType = \"bool\" | \"uint8\" | \"uint16\" | \"uint32\" | \"uint64\" | \"int8\" | \"int16\" | \"int32\" | \"int64\" | \"float32\" | \"float64\" | \"complex64\" | \"complex128\" | \"string\" | \"int\" | \"uint\" | \"uintptr\" | \"byte\" | \"rune\" | \"any\" | .InterfaceDataType = \"interface{}\" .MapDataType = \"map\" \"[\" DataType \"]\" DataType .PointerDataType = \"*\" DataType .SliceDataType = \"[\" \"]\" DataType .StructDataType = \"{\" { ElemExpr } \"}\" .ElemExpr = [ ElemNameExpr ] DataType [ Tag ].ElemNameExpr = identifier { \",\" identifier } .Tag = rawstring_lit .``` ``` // [1]type Int inttype Integer = int// type Foo {}// type Bar { Foo int `json:\"foo\"` Bar bool `json:\"bar\"` Baz []string `json:\"baz\"` Qux map[string]string `json:\"qux\"`}type Baz { Bar `json:\"baz\"` // [2] Qux { Foo string `json:\"foo\"` Bar bool `json:\"bar\"` } `json:\"baz\"`}// type ()// type ( Int int Integer = int Bar { Foo int `json:\"foo\"` Bar bool `json:\"bar\"` Baz []string `json:\"baz\"` Qux map[string]string `json:\"qux\"` })``` [1] [2] service HTTP handlerjwt EBNF ``` ServiceStmt = [ AtServerStmt ] \"service\" ServiceNameExpr \"(\" { ServiceItemStmt } \")\" .ServiceNameExpr = identifier [ \"-api\" ] .``` @server meta @server EBNF ``` AtServerStmt = \"@server\" \"(\" { AtServerKVExpr } \")\" .AtServerKVExpr = AtServerKeyLit [ AtServerValueLit ] .AtServerKeyLit = identifier \":\" .AtServerValueLit = PathLit | identifier { \",\" identifier } .PathLit = `\"` { \"/\" { identifier | \"-\" identifier} } `\"` .``` @server ``` // @server()// @server ( // jwt // key jwt: jwt // value jwt: Auth // // key prefix: // value / prefix: /v1 // // key group: // value goctl group: Foo // // key middleware: // value goctl middleware: AuthInterceptor // // key timeout: // value duration goctl timeout: 3s // key-value key key-value // annotation goctl // goctl foo: bar)``` ServiceItemStmt HTTP @doc handler EBNF ``` ServiceItemStmt = [ AtDocStmt ] AtHandlerStmt RouteStmt .``` @doc meta key-value goctl EBNF ``` AtDocStmt = 
AtDocLiteralStmt | AtDocGroupStmt .AtDocLiteralStmt = \"@doc\" interpretedstringlit .AtDocGroupStmt = \"@doc\" \"(\" { AtDocKVExpr } \")\" .AtDocKVExpr = AtServerKeyLit interpretedstringlit .AtServerKeyLit = identifier \":\" .``` @doc ``` // @doc@doc \"foo\"// @doc @doc ()// @doc @doc ( foo: \"bar\" bar: \"baz\")``` @handler handler golang http.HandleFunc EBNF ``` AtHandlerStmt = \"@handler\" identifier .``` @handler ``` @handler foo``` HTTP EBNF ``` RouteStmt = Method PathExpr [ BodyStmt ] [ \"returns\" ] [ BodyStmt ].Method = \"get\" | \"head\" | \"post\" | \"put\" | \"patch\" | \"delete\" | \"connect\" | \"options\" | \"trace\" .PathExpr = \"/\" identifier { ( \"-\" identifier ) | ( \":\" identifier) } .BodyStmt = \"(\" identifier \")\" .``` ``` // get /ping// get /foo (foo)// post /foo returns (foo)// post /foo (foo) returns (bar)``` service ``` // @server @server ( prefix: /v1 group: Login)service user { @doc \"\" @handler login post /user/login (LoginReq) returns (LoginResp) @handler getUserInfo get /user/info/:id (GetUserInfoReq) returns (GetUserInfoResp)}@server ( prefix: /v1 middleware: AuthInterceptor)service user { @doc \"\" @handler login post /user/login (LoginReq) returns (LoginResp) @handler getUserInfo get /user/info/:id (GetUserInfoReq) returns (GetUserInfoResp)}// @server service user { @doc \"\"" } ]
{ "category": "Orchestration & Management", "file_name": "tutorials.md", "project_name": "go-zero", "subcategory": "Remote Procedure Call" }
[ { "data": "api go-zero api api HTTP api info Golang struct api EBNF ``` Syntax = { Production } .Production = productionname \"=\" [ Expression ] \".\" .Expression = Term { \"|\" Term } .Term = Factor { Factor } .Factor = productionname | token [ \"\" token ] | Group | Option | Repetition .Group = \"(\" Expression \")\" .Option = \"[\" Expression \"]\" .Repetition = \"{\" Expression \"}\" .``` Production Term ``` | alternation() grouping[] option (0 or 1 times){} repetition (0 to n times)``` a...b a b 0...9 0 9 . ENBF token token ``` // tokennumber = \"0\"...\"9\" .lower_letter = \"a\"...\"z\" .// tokenDataType = TypeLit | TypeGroup .TypeLit = TypeAlias | TypeStruct .``` api ``` newline = / Unicode U+000A / .unicodechar = /* newline Unicode */ .unicodeletter = / a...z|A...Z Unicode / .unicode_digit = / 0...9 Unicode / .``` _ (U+005F) ``` letter = \"A\"...\"Z\" | \"a\"...\"z\" | \"\" .decimaldigit = \"0\" \"9\" .``` Abstract Syntax TreeASTSyntax tree if-condition-then Lexical Analysistokenlexical analyzer lexerscanner api Token api 2 // ``` // ``` / / ``` ////``` Token identifierkeywordoperatorpunctuationliteralWhite spaceU+0020U+0009U+000D U+000A api Token operator Token Golang ``` type Token struct { Type Type Text string Position Position}type Position struct { Filename string Line int Column int}``` api syntax=\"v1\" | | | |:-|:-| | syntax | | | = | | | \"v1\" | | ID ID 1 n _ EBNF ``` identifier = letter { letter | unicode_digit } .``` ID ``` a_a1GoZero``` ID api Golang ID ``` : any bool byte comparable complex64 complex128 error float32 float64 int int8 int16 int32 int64 rune string uint uint8 uint16 uint32 uint64 uintptr: true false iota: nil: append cap close complex copy delete imag len make new panic print println real recover``` ID api Golang Golang Golang ``` break default func interface selectcase defer go map structchan else goto package switchconst fallthrough if range typecontinue for import return var``` Token api ``` , ( )* . [ ]/ ; { }= : , ;...``` api Golang 2 raw string `foo` \"foo\" api \\\" ``` stringlit = rawstringlit | interpretedstringlit .rawstringlit = \"`\" { unicodechar | newline } \"`\" .interpretedstringlit = `\"` { unicodevalue | bytevalue } `\"` .``` ``` // ```foo``bar``json:\"baz\"`// \"\"\"foo\"\"bar\"``` Syntax AnalysisNodeExpressionStatement Node Token Golang ``` // Node represents a node in the AST.type Node interface { // Pos returns the position of the first character belonging to the node. Pos() token.Position // End returns the position of the first character immediately after the node. End() token.Position // Format returns the node's text after format. Format(...string) string // HasHeadCommentGroup returns true if the node has head comment group. HasHeadCommentGroup() bool // HasLeadingCommentGroup returns true if the node has leading comment group. HasLeadingCommentGroup() bool // CommentGroup returns the node's head comment group and leading comment group. 
CommentGroup() (head, leading CommentGroup)}``` Expression api api Golang ``` // Expr represents an expression in the AST.type Expr interface { Node exprNode()}``` Statement api api Golang ``` // Stmt represents a statement in the AST.type Stmt interface { Node stmtNode()}``` api AST ``` api = SyntaxStmt | InfoStmt | { ImportStmt } | { TypeStmt } | { ServiceStmt } .``` syntax api v1 syntax EBNF ``` SyntaxStmt = \"syntax\" \"=\" \"v1\" .``` syntax ``` syntax = \"v1\"``` info api meta api syntax info api info EBNF ``` InfoStmt = \"info\" \"(\" { InfoKeyValueExpr } \")\" .InfoKeyValueExpr = InfoKeyLit [ interpretedstringlit ] .InfoKeyLit = identifier \":\"" }, { "data": "info ``` // key-value info info ()// key-value info info ( foo: \"bar\" bar:)``` import api api / package EBNF ``` ImportStmt = ImportLiteralStmt | ImportGroupStmt .ImportLiteralStmt = \"import\" interpretedstringlit .ImportGroupStmt = \"import\" \"(\" { interpretedstringlit } \")\" .``` import ``` // importimport \"foo\"import \"/path/to/file\"// import import ()import ( \"bar\" \"relative/to/file\")``` api Golang rest / EBNF ``` TypeStmt = TypeLiteralStmt | TypeGroupStmt .TypeLiteralStmt = \"type\" TypeExpr .TypeGroupStmt = \"type\" \"(\" { TypeExpr } \")\" .TypeExpr = identifier [ \"=\" ] DataType .DataType = AnyDataType | ArrayDataType | BaseDataType | InterfaceDataType | MapDataType | PointerDataType | SliceDataType | StructDataType .AnyDataType = \"any\" .ArrayDataType = \"[\" { decimaldigit } \"]\" DataType .BaseDataType = \"bool\" | \"uint8\" | \"uint16\" | \"uint32\" | \"uint64\" | \"int8\" | \"int16\" | \"int32\" | \"int64\" | \"float32\" | \"float64\" | \"complex64\" | \"complex128\" | \"string\" | \"int\" | \"uint\" | \"uintptr\" | \"byte\" | \"rune\" | \"any\" | .InterfaceDataType = \"interface{}\" .MapDataType = \"map\" \"[\" DataType \"]\" DataType .PointerDataType = \"*\" DataType .SliceDataType = \"[\" \"]\" DataType .StructDataType = \"{\" { ElemExpr } \"}\" .ElemExpr = [ ElemNameExpr ] DataType [ Tag ].ElemNameExpr = identifier { \",\" identifier } .Tag = rawstring_lit .``` ``` // [1]type Int inttype Integer = int// type Foo {}// type Bar { Foo int `json:\"foo\"` Bar bool `json:\"bar\"` Baz []string `json:\"baz\"` Qux map[string]string `json:\"qux\"`}type Baz { Bar `json:\"baz\"` // [2] Qux { Foo string `json:\"foo\"` Bar bool `json:\"bar\"` } `json:\"baz\"`}// type ()// type ( Int int Integer = int Bar { Foo int `json:\"foo\"` Bar bool `json:\"bar\"` Baz []string `json:\"baz\"` Qux map[string]string `json:\"qux\"` })``` [1] [2] service HTTP handlerjwt EBNF ``` ServiceStmt = [ AtServerStmt ] \"service\" ServiceNameExpr \"(\" { ServiceItemStmt } \")\" .ServiceNameExpr = identifier [ \"-api\" ] .``` @server meta @server EBNF ``` AtServerStmt = \"@server\" \"(\" { AtServerKVExpr } \")\" .AtServerKVExpr = AtServerKeyLit [ AtServerValueLit ] .AtServerKeyLit = identifier \":\" .AtServerValueLit = PathLit | identifier { \",\" identifier } .PathLit = `\"` { \"/\" { identifier | \"-\" identifier} } `\"` .``` @server ``` // @server()// @server ( // jwt // key jwt: jwt // value jwt: Auth // // key prefix: // value / prefix: /v1 // // key group: // value goctl group: Foo // // key middleware: // value goctl middleware: AuthInterceptor // // key timeout: // value duration goctl timeout: 3s // key-value key key-value // annotation goctl // goctl foo: bar)``` ServiceItemStmt HTTP @doc handler EBNF ``` ServiceItemStmt = [ AtDocStmt ] AtHandlerStmt RouteStmt .``` @doc meta key-value goctl EBNF ``` AtDocStmt = 
AtDocLiteralStmt | AtDocGroupStmt .AtDocLiteralStmt = \"@doc\" interpretedstringlit .AtDocGroupStmt = \"@doc\" \"(\" { AtDocKVExpr } \")\" .AtDocKVExpr = AtServerKeyLit interpretedstringlit .AtServerKeyLit = identifier \":\" .``` @doc ``` // @doc@doc \"foo\"// @doc @doc ()// @doc @doc ( foo: \"bar\" bar: \"baz\")``` @handler handler golang http.HandleFunc EBNF ``` AtHandlerStmt = \"@handler\" identifier .``` @handler ``` @handler foo``` HTTP EBNF ``` RouteStmt = Method PathExpr [ BodyStmt ] [ \"returns\" ] [ BodyStmt ].Method = \"get\" | \"head\" | \"post\" | \"put\" | \"patch\" | \"delete\" | \"connect\" | \"options\" | \"trace\" .PathExpr = \"/\" identifier { ( \"-\" identifier ) | ( \":\" identifier) } .BodyStmt = \"(\" identifier \")\" .``` ``` // get /ping// get /foo (foo)// post /foo returns (foo)// post /foo (foo) returns (bar)``` service ``` // @server @server ( prefix: /v1 group: Login)service user { @doc \"\" @handler login post /user/login (LoginReq) returns (LoginResp) @handler getUserInfo get /user/info/:id (GetUserInfoReq) returns (GetUserInfoResp)}@server ( prefix: /v1 middleware: AuthInterceptor)service user { @doc \"\" @handler login post /user/login (LoginReq) returns (LoginResp) @handler getUserInfo get /user/info/:id (GetUserInfoReq) returns (GetUserInfoResp)}// @server service user { @doc \"\"" } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "gRPC", "subcategory": "Remote Procedure Call" }
[ { "data": "This guide gets you started with gRPC in Java with a simple working example. The example code is part of the grpc-java repo. Download the repo as a zip file and unzip it, or clone the repo: ``` $ git clone -b v1.64.0 --depth 1 https://github.com/grpc/grpc-java ``` Change to the examples directory: ``` $ cd grpc-java/examples ``` From the examples directory: Compile the client and server ``` $ ./gradlew installDist ``` Run the server: ``` $ ./build/install/examples/bin/hello-world-server INFO: Server started, listening on 50051 ``` From another terminal, run the client: ``` $ ./build/install/examples/bin/hello-world-client INFO: Will try to greet world ... INFO: Greeting: Hello world ``` Congratulations! Youve just run a client-server application with gRPC. In this section youll update the application by adding an extra server method. The gRPC service is defined using protocol buffers. To learn more about how to define a service in a .proto file see Basics tutorial. For now, all you need to know is that both the server and the client stub have a SayHello() RPC method that takes a HelloRequest parameter from the client and returns a HelloReply from the server, and that the method is defined like this: ``` // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. message HelloRequest { string name = 1; } // The response message containing the greetings message HelloReply { string message = 1; } ``` Open src/main/proto/helloworld.proto and add a new SayHelloAgain() method with the same request and response types as SayHello(): ``` // The greeting service definition. service Greeter { // Sends a greeting. Original method. rpc SayHello (HelloRequest) returns (HelloReply) {} // Sends another greeting. New method. rpc SayHelloAgain (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. message HelloRequest { // The name of the user. string name = 1; } // The response message containing the greetings message HelloReply { // The greeting message. string message = 1; } ``` Remember to save the file! When you build the example, the build process regenerates GreeterGrpc.java, which contains the generated gRPC client and server" }, { "data": "This also regenerates classes for populating, serializing, and retrieving our request and response types. However, you still need to implement and call the new method in the hand-written parts of the example app. In the same directory, open src/main/java/io/grpc/examples/helloworld/HelloWorldServer.java. Implement the new method like this: ``` // Implementation of the gRPC service on the server-side. private class GreeterImpl extends GreeterGrpc.GreeterImplBase { @Override public void sayHello(HelloRequest req, StreamObserver<HelloReply> responseObserver) { // Generate a greeting message for the original method HelloReply reply = HelloReply.newBuilder().setMessage(\"Hello \" + req.getName()).build(); // Send the reply back to the client. responseObserver.onNext(reply); // Indicate that no further messages will be sent to the client. responseObserver.onCompleted(); } @Override public void sayHelloAgain(HelloRequest req, StreamObserver<HelloReply> responseObserver) { // Generate another greeting message for the new method. HelloReply reply = HelloReply.newBuilder().setMessage(\"Hello again \" + req.getName()).build(); // Send the reply back to the client. 
responseObserver.onNext(reply); // Indicate that no further messages will be sent to the client. responseObserver.onCompleted(); } } ``` In the same directory, open src/main/java/io/grpc/examples/helloworld/HelloWorldClient.java. Call the new method like this: ``` // Client-side logic for interacting with the gRPC service. public void greet(String name) { // Log a message indicating the intention to greet a user. logger.info(\"Will try to greet \" + name + \" ...\"); // Creating a request with the user's name. HelloRequest request = HelloRequest.newBuilder().setName(name).build(); HelloReply response; try { // Call the original method on the server. response = blockingStub.sayHello(request); } catch (StatusRuntimeException e) { // Log a warning if the RPC fails. logger.log(Level.WARNING, \"RPC failed: {0}\", e.getStatus()); return; } // Log the response from the original method. logger.info(\"Greeting: \" + response.getMessage()); try { // Call the new method on the server. response = blockingStub.sayHelloAgain(request); } catch (StatusRuntimeException e) { // Log a warning if the RPC fails. logger.log(Level.WARNING, \"RPC failed: {0}\", e.getStatus()); return; } // Log the response from the new method. logger.info(\"Greeting: \" + response.getMessage()); } ``` Run the client and server like you did before. Execute the following commands from the examples directory: Compile the client and server: ``` $ ./gradlew installDist ``` Run the server: ``` $ ./build/install/examples/bin/hello-world-server INFO: Server started, listening on 50051 ``` From another terminal, run the client: ``` $ ./build/install/examples/bin/hello-world-client INFO: Will try to greet world ... INFO: Greeting: Hello world INFO: Greeting: Hello again world ```" } ]
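The client snippet above uses blockingStub without showing how it is constructed. As a hedged sketch of what HelloWorldClient.java in the grpc-java examples typically does (the target address and the shutdown call at the end are assumptions for illustration), the channel and generated blocking stub are created roughly like this:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.examples.helloworld.GreeterGrpc;

public class ClientBootstrap {
    public static void main(String[] args) {
        // Plaintext channel to the local server started on port 50051 above.
        ManagedChannel channel = ManagedChannelBuilder
            .forAddress("localhost", 50051)
            .usePlaintext()
            .build();

        // Blocking stub generated from helloworld.proto; each RPC blocks until a response arrives.
        GreeterGrpc.GreeterBlockingStub blockingStub = GreeterGrpc.newBlockingStub(channel);

        // ... call blockingStub.sayHello(...) and blockingStub.sayHelloAgain(...) as shown above ...

        channel.shutdownNow();   // release channel resources when done
    }
}
```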
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "GoFr", "subcategory": "Remote Procedure Call" }
[ { "data": "GoFr is Opinionated Web Framework written in Go (Golang). It helps in building robust and scalable applications. This framework is designed to offer a user-friendly and familiar abstraction for all the developers . We prioritize simplicity over complexity. In this section we will walk through what GoFr is, what problems it solves, and how it can help in building your project. Step-by-step guides to setting up your system and installing the library. Our guides break down how to perform common tasks in GoFr." } ]
{ "category": "Orchestration & Management", "file_name": "introduction.md", "project_name": "gRPC", "subcategory": "Remote Procedure Call" }
[ { "data": "This guide gets you started with gRPC in Python with a simple working example. If necessary, upgrade your version of pip: ``` $ python -m pip install --upgrade pip ``` If you cannot upgrade pip due to a system-owned installation, you can run the example in a virtualenv: ``` $ python -m pip install virtualenv $ virtualenv venv $ source venv/bin/activate $ python -m pip install --upgrade pip ``` Install gRPC: ``` $ python -m pip install grpcio ``` Or, to install it system wide: ``` $ sudo python -m pip install grpcio ``` Pythons gRPC tools include the protocol buffer compiler protoc and the special plugin for generating server and client code from .proto service definitions. For the first part of our quick-start example, weve already generated the server and client stubs from helloworld.proto, but youll need the tools for the rest of our quick start, as well as later tutorials and your own projects. To install gRPC tools, run: ``` $ python -m pip install grpcio-tools ``` Youll need a local copy of the example code to work through this quick start. Download the example code from our GitHub repository (the following command clones the entire repository, but you just need the examples for this quick start and other tutorials): ``` $ git clone -b v1.64.0 --depth 1 --shallow-submodules https://github.com/grpc/grpc $ cd grpc/examples/python/helloworld ``` From the examples/python/helloworld directory: Run the server: ``` $ python greeter_server.py ``` From another terminal, run the client: ``` $ python greeter_client.py ``` Congratulations! Youve just run a client-server application with gRPC. Now lets look at how to update the application with an extra method on the server for the client to call. Our gRPC service is defined using protocol buffers; you can find out lots more about how to define a service in a .proto file in Introduction to gRPC and Basics" }, { "data": "For now all you need to know is that both the server and the client stub have a SayHello RPC method that takes a HelloRequest parameter from the client and returns a HelloReply from the server, and that this method is defined like this: ``` // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. message HelloRequest { string name = 1; } // The response message containing the greetings message HelloReply { string message = 1; } ``` Lets update this so that the Greeter service has two methods. Edit examples/protos/helloworld.proto and update it with a new SayHelloAgain method, with the same request and response types: ``` // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} // Sends another greeting rpc SayHelloAgain (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. message HelloRequest { string name = 1; } // The response message containing the greetings message HelloReply { string message = 1; } ``` Remember to save the file! Next we need to update the gRPC code used by our application to use the new service definition. From the examples/python/helloworld directory, run: ``` $ python -m grpctools.protoc -I../../protos --pythonout=. --pyiout=. --grpcpython_out=. ../../protos/helloworld.proto ``` This regenerates helloworld_pb2.py which contains our generated request and response classes and helloworldpb2grpc.py which contains our generated client and server classes. 
We now have new generated server and client code, but we still need to implement and call the new method in the human-written parts of our example application. In the same directory, open greeter_server.py. Implement the new method like this: ``` class Greeter(helloworld_pb2_grpc.GreeterServicer): def SayHello(self, request, context): return helloworld_pb2.HelloReply(message=f\"Hello, {request.name}!\") def SayHelloAgain(self, request, context): return helloworld_pb2.HelloReply(message=f\"Hello again, {request.name}!\") ... ``` In the same directory, open greeter_client.py. Call the new method like this: ``` def run(): with grpc.insecure_channel('localhost:50051') as channel: stub = helloworld_pb2_grpc.GreeterStub(channel) response = stub.SayHello(helloworld_pb2.HelloRequest(name='you')) print(\"Greeter client received: \" + response.message) response = stub.SayHelloAgain(helloworld_pb2.HelloRequest(name='you')) print(\"Greeter client received: \" + response.message) ``` Just like we did before, from the examples/python/helloworld directory: Run the server: ``` $ python greeter_server.py ``` From another terminal, run the client: ``` $ python greeter_client.py ```" } ]
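The servicer snippet above ends with an ellipsis where the server bootstrap would go. For orientation, here is a minimal sketch of what a complete greeter_server.py can look like; it assumes the regenerated helloworld_pb2 and helloworld_pb2_grpc modules are importable from the same directory, and the worker-pool size and other details are illustrative rather than a copy of the file shipped in the gRPC repository.

```
# Minimal sketch of a complete greeter_server.py (illustrative; the file in the
# gRPC repository may differ in details such as logging and pool size).
from concurrent import futures

import grpc
import helloworld_pb2
import helloworld_pb2_grpc


class Greeter(helloworld_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message=f"Hello, {request.name}!")

    def SayHelloAgain(self, request, context):
        return helloworld_pb2.HelloReply(message=f"Hello again, {request.name}!")


def serve():
    # A small thread pool is plenty for the quick-start example.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("[::]:50051")  # the port the greeter client dials
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```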
{ "category": "Orchestration & Management", "file_name": "start.md", "project_name": "kratos", "subcategory": "Remote Procedure Call" }
[ { "data": "The Kratos community wants to be helped by a wide range of developers, so you'd like to take a few minutes to read this guide before you mention the problem or pull request. We use Github Issues to manage issues. If you want to submit , first make sure you've searched for existing issues, pull requests and read our FAQ. When submitting a bug report, use the issue template we provide to clearly describe the problems encountered and how to reproduce, and if convenient it is best to provide a minimal reproduce repository. In order to accurately distinguish whether the needs put forward by users are the needs or reasonable needs of most users, solicit opinions from the community through the proposal process, and the proposals adopted by the community will be realized as new feature. In order to make the proposal process as simple as possible, the process includes three stages: Feature, Proposal and PR, in which Feature, Proposal is issue and PR is the specific function implementation. If you've never submitted code on Github, follow these steps: Note That when you submit a PR request, you first ensure that the code uses the correct coding specifications and that there are complete test cases, and that the information in the submission of the PR is best associated with the relevant issue to ease the workload of the auditor. ``` <type>``` More: Conventional Commits There are the following types of commit: The following is the list of supported scopes: The description contains a succinct description of the change The body should include the motivation for the change and contrast this with previous behavior. The footer should contain any information about Breaking Changes and is also the place to reference Github issues that this commit Closes. ``` fix: The log debug level should be -1``` ``` refactor!(transport/http): replacement underlying implementation``` ``` fix(log): [BREAKING-CHANGE] unable to meet the requirement of log LibraryExplain the reason, purpose, realization method, etc.Close #777Doc change on doc/#111BREAKING CHANGE: Breaks log.info api, log.log should be used instead``` You can use kratos changelog dev to generate a change log during. The following is the list of supported types: You can use the kratos changelog dev generated log as the describe to Release,just need a simple modification. ``` Kratos is a web application framework with expressive, elegant syntax. We've already laid the foundation." } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "SRPC", "subcategory": "Remote Procedure Call" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "SRPC", "subcategory": "Remote Procedure Call" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:--|:--|:--|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | en | en | en | nan | nan | | images | images | images | nan | nan | | docs-01-idl.md | docs-01-idl.md | docs-01-idl.md | nan | nan | | docs-02-service.md | docs-02-service.md | docs-02-service.md | nan | nan | | docs-03-server.md | docs-03-server.md | docs-03-server.md | nan | nan | | docs-04-client.md | docs-04-client.md | docs-04-client.md | nan | nan | | docs-05-context.md | docs-05-context.md | docs-05-context.md | nan | nan | | docs-06-workflow.md | docs-06-workflow.md | docs-06-workflow.md | nan | nan | | docs-07-srpc-http.md | docs-07-srpc-http.md | docs-07-srpc-http.md | nan | nan | | docs-08-tracing.md | docs-08-tracing.md | docs-08-tracing.md | nan | nan | | docs-09-metrics.md | docs-09-metrics.md | docs-09-metrics.md | nan | nan | | docs-10-http-with-modules.md | docs-10-http-with-modules.md | docs-10-http-with-modules.md | nan | nan | | installation.md | installation.md | installation.md | nan | nan | | rpc.md | rpc.md | rpc.md | nan | nan | | wiki.md | wiki.md | wiki.md | nan | nan | | View all files | View all files | View all files | nan | nan |" } ]
{ "category": "Orchestration & Management", "file_name": "github-privacy-statement.md", "project_name": "SRPC", "subcategory": "Remote Procedure Call" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?.
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
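As an illustration of how the pieces above compose, the query below combines boolean operators, an exact-match string, and the language:, path: and is: qualifiers; the particular search terms are arbitrary and chosen only to mirror the examples already shown.

```
(language:ruby OR language:python) "sparse index" NOT path:"/tests/" NOT is:archived
```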
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "TARS", "subcategory": "Remote Procedure Call" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "TARS", "subcategory": "Remote Procedure Call" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, take the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive: searching for True will include results for uppercase TRUE and lowercase true, and you cannot do case-sensitive searches. Regular expression searches (for example, /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Orchestration & Management", "file_name": "github-privacy-statement.md", "project_name": "TARS", "subcategory": "Remote Procedure Call" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "SRPC", "subcategory": "Remote Procedure Call" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Capsule", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Capsule implements a multi-tenant and policy-based environment in your Kubernetes cluster. It is designed as a micro-services-based ecosystem with the minimalist approach, leveraging only on upstream Kubernetes. Kubernetes introduces the Namespace object type to create logical partitions of the cluster as isolated slices. However, implementing advanced multi-tenancy scenarios, it soon becomes complicated because of the flat structure of Kubernetes namespaces and the impossibility to share resources among namespaces belonging to the same tenant. To overcome this, cluster admins tend to provision a dedicated cluster for each groups of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, described as the well known phenomena of the clusters sprawl. Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces in a lightweight abstraction called Tenant, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources. On the other side, the Capsule Policy Engine keeps the different tenants isolated from each other. Network and Security Policies, Resource Quota, Limit Ranges, RBAC, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Then users are free to operate their tenants in autonomy, without the intervention of the cluster administrator." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Clusternet", "subcategory": "Scheduling & Orchestration" }
[ { "data": "This page shows how to install a custom resource into the Kubernetes API by creating a CustomResourceDefinition. You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds: When you create a new CustomResourceDefinition (CRD), the Kubernetes API Server creates a new RESTful resource path for each version you specify. The custom resource created from a CRD object can be either namespaced or cluster-scoped, as specified in the CRD's spec.scope field. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces. For example, if you save the following CustomResourceDefinition to resourcedefinition.yaml: ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com spec: group: stable.example.com versions: name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced names: plural: crontabs singular: crontab kind: CronTab shortNames: ct ``` and create it: ``` kubectl apply -f resourcedefinition.yaml ``` Then a new namespaced RESTful API endpoint is created at: ``` /apis/stable.example.com/v1/namespaces/*/crontabs/... ``` This endpoint URL can then be used to create and manage custom objects. The kind of these objects will be CronTab from the spec of the CustomResourceDefinition object you created above. It might take a few seconds for the endpoint to be created. You can watch the Established condition of your CustomResourceDefinition to be true or watch the discovery information of the API server for your resource to show up. After the CustomResourceDefinition object has been created, you can create custom objects. Custom objects can contain custom fields. These fields can contain arbitrary JSON. In the following example, the cronSpec and image custom fields are set in a custom object of kind CronTab. The kind CronTab comes from the spec of the CustomResourceDefinition object you created above. If you save the following YAML to my-crontab.yaml: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: cronSpec: \" */5\" image: my-awesome-cron-image ``` and create it: ``` kubectl apply -f my-crontab.yaml ``` You can then manage your CronTab objects using kubectl. For example: ``` kubectl get crontab ``` Should print a list like this: ``` NAME AGE my-new-cron-object 6s ``` Resource names are not case-sensitive when using kubectl, and you can use either the singular or plural forms defined in the CRD, as well as any short names. 
You can also view the raw YAML data: ``` kubectl get ct -o yaml ``` You should see that it contains the custom cronSpec and image fields from the YAML you used to create it: ``` apiVersion: v1 items: apiVersion: stable.example.com/v1 kind: CronTab metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"stable.example.com/v1\",\"kind\":\"CronTab\",\"metadata\":{\"annotations\":{},\"name\":\"my-new-cron-object\",\"namespace\":\"default\"},\"spec\":{\"cronSpec\":\" */5\",\"image\":\"my-awesome-cron-image\"}} creationTimestamp: \"2021-06-20T07:35:27Z\" generation: 1 name: my-new-cron-object namespace: default resourceVersion: \"1326\" uid: 9aab1d66-628e-41bb-a422-57b8b3b1f5a9 spec: cronSpec: ' */5' image: my-awesome-cron-image kind: List metadata: resourceVersion: \"\" selfLink: \"\" ``` When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it. ``` kubectl delete -f resourcedefinition.yaml kubectl get crontabs ``` ``` Error from server (NotFound): Unable to list {\"stable.example.com\" \"v1\" \"crontabs\"}: the server could not find the requested resource (get" }, { "data": "``` If you later recreate the same CustomResourceDefinition, it will start out empty. CustomResources store structured data in custom fields (alongside the built-in fields apiVersion, kind and metadata, which the API server validates implicitly). With OpenAPI v3.0 validation a schema can be specified, which is validated during creation and updates, compare below for details and limits of such a schema. With apiextensions.k8s.io/v1 the definition of a structural schema is mandatory for CustomResourceDefinitions. In the beta version of CustomResourceDefinition, the structural schema was optional. A structural schema is an OpenAPI v3.0 validation schema which: Non-structural example 1: ``` allOf: properties: foo: ... ``` conflicts with rule 2. The following would be correct: ``` properties: foo: ... allOf: properties: foo: ... ``` Non-structural example 2: ``` allOf: items: properties: foo: ... ``` conflicts with rule 2. The following would be correct: ``` items: properties: foo: ... allOf: items: properties: foo: ... ``` Non-structural example 3: ``` properties: foo: pattern: \"abc\" metadata: type: object properties: name: type: string pattern: \"^a\" finalizers: type: array items: type: string pattern: \"my-finalizer\" anyOf: properties: bar: type: integer minimum: 42 required: [\"bar\"] description: \"foo bar object\" ``` is not a structural schema because of the following violations: In contrast, the following, corresponding schema is structural: ``` type: object description: \"foo bar object\" properties: foo: type: string pattern: \"abc\" bar: type: integer metadata: type: object properties: name: type: string pattern: \"^a\" anyOf: properties: bar: minimum: 42 required: [\"bar\"] ``` Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition. CustomResourceDefinitions store validated resource data in the cluster's persistence store, etcd. As with native Kubernetes resources such as ConfigMap, if you specify a field that the API server does not recognize, the unknown field is pruned (removed) before being persisted. CRDs converted from apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1 might lack structural schemas, and spec.preserveUnknownFields might be true. 
For legacy CustomResourceDefinition objects created as apiextensions.k8s.io/v1beta1 with spec.preserveUnknownFields set to true, the following is also true: For compatibility with apiextensions.k8s.io/v1, update your custom resource definitions to: If you save the following YAML to my-crontab.yaml: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: cronSpec: \" */5\" image: my-awesome-cron-image someRandomField: 42 ``` and create it: ``` kubectl create --validate=false -f my-crontab.yaml -o yaml ``` Your output is similar to: ``` apiVersion: stable.example.com/v1 kind: CronTab metadata: creationTimestamp: 2017-05-31T12:56:35Z generation: 1 name: my-new-cron-object namespace: default resourceVersion: \"285\" uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: ' */5' image: my-awesome-cron-image ``` Notice that the field someRandomField was pruned. This example turned off client-side validation to demonstrate the API server's behavior, by adding the --validate=false command line option. Because the OpenAPI validation schemas are also published to clients, kubectl also checks for unknown fields and rejects those objects well before they would be sent to the API server. By default, all unspecified fields for a custom resource, across all versions, are pruned. It is possible though to opt-out of that for specific sub-trees of fields by adding x-kubernetes-preserve-unknown-fields: true in the structural OpenAPI v3 validation schema. For example: ``` type: object properties: json: x-kubernetes-preserve-unknown-fields: true ``` The field json can store any JSON value, without anything being pruned. You can also partially specify the permitted JSON; for example: ``` type: object properties: json: x-kubernetes-preserve-unknown-fields: true type: object description: this is arbitrary JSON ``` With this, only object type values are" }, { "data": "Pruning is enabled again for each specified property (or additionalProperties): ``` type: object properties: json: x-kubernetes-preserve-unknown-fields: true type: object properties: spec: type: object properties: foo: type: string bar: type: string ``` With this, the value: ``` json: spec: foo: abc bar: def something: x status: something: x ``` is pruned to: ``` json: spec: foo: abc bar: def status: something: x ``` This means that the something field in the specified spec object is pruned, but everything outside is not. Nodes in a schema with x-kubernetes-int-or-string: true are excluded from rule 1, such that the following is structural: ``` type: object properties: foo: x-kubernetes-int-or-string: true ``` Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields): ``` x-kubernetes-int-or-string: true anyOf: type: integer type: string ... ``` and ``` x-kubernetes-int-or-string: true allOf: anyOf: type: integer type: string ... # zero or more ... ``` With one of those specification, both an integer and a string validate. In Validation Schema Publishing, x-kubernetes-int-or-string: true is unfolded to one of the two patterns shown above. RawExtensions (as in runtime.RawExtension) holds complete Kubernetes objects, i.e. with apiVersion and kind fields. It is possible to specify those embedded objects (both completely without constraints or partially specified) by setting x-kubernetes-embedded-resource: true. 
For example: ``` type: object properties: foo: x-kubernetes-embedded-resource: true x-kubernetes-preserve-unknown-fields: true ``` Here, the field foo holds a complete object, e.g.: ``` foo: apiVersion: v1 kind: Pod spec: ... ``` Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though. With x-kubernetes-embedded-resource: true, the apiVersion, kind and metadata are implicitly specified and validated. See Custom resource definition versioning for more information about serving multiple versions of your CustomResourceDefinition and migrating your objects from one version to another. Finalizers allow controllers to implement asynchronous pre-delete hooks. Custom objects support finalizers similar to built-in objects. You can add a finalizer to a custom object like this: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: finalizers: stable.example.com/finalizer ``` Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. Any controller can add a finalizer to any object's list of finalizers. The first delete request on an object with finalizers sets a value for the metadata.deletionTimestamp field but does not delete it. Once this value is set, entries in the finalizers list can only be removed. While any finalizers remain it is also impossible to force the deletion of an object. When the metadata.deletionTimestamp field is set, controllers watching the object execute any finalizers they handle and remove the finalizer from the list after they are done. It is the responsibility of each controller to remove its finalizer from the list. The value of metadata.deletionGracePeriodSeconds controls the interval between polling updates. Once the list of finalizers is empty, meaning all finalizers have been executed, the resource is deleted by Kubernetes. Custom resources are validated via OpenAPI v3 schemas, by x-kubernetes-validations when the Validation Rules feature is enabled, and you can add additional validation using admission webhooks. Additionally, the following restrictions are applied to the schema: These fields cannot be set: The field uniqueItems cannot be set to true. The field additionalProperties cannot be set to false. The field additionalProperties is mutually exclusive with properties. The x-kubernetes-validations extension can be used to validate custom resources using Common Expression Language (CEL) expressions when the Validation rules feature is enabled and the CustomResourceDefinition schema is a structural schema. Refer to the structural schemas section for other restrictions and CustomResourceDefinition" }, { "data": "The schema is defined in the CustomResourceDefinition. 
In the following example, the CustomResourceDefinition applies the following validations on the custom object: Save the CustomResourceDefinition to resourcedefinition.yaml: ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com spec: group: stable.example.com versions: name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string pattern: '^(\\d+|\\)(/\\d+)?(\\s+(\\d+|\\)(/\\d+)?){4}$' image: type: string replicas: type: integer minimum: 1 maximum: 10 scope: Namespaced names: plural: crontabs singular: crontab kind: CronTab shortNames: ct ``` and create it: ``` kubectl apply -f resourcedefinition.yaml ``` A request to create a custom object of kind CronTab is rejected if there are invalid values in its fields. In the following example, the custom object contains fields with invalid values: If you save the following YAML to my-crontab.yaml: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: cronSpec: \" \" image: my-awesome-cron-image replicas: 15 ``` and attempt to create it: ``` kubectl apply -f my-crontab.yaml ``` then you get an error: ``` The CronTab \"my-new-cron-object\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"stable.example.com/v1\", \"kind\":\"CronTab\", \"metadata\":map[string]interface {}{\"name\":\"my-new-cron-object\", \"namespace\":\"default\", \"deletionTimestamp\":interface {}(nil), \"deletionGracePeriodSeconds\":(int64)(nil), \"creationTimestamp\":\"2017-09-05T05:20:07Z\", \"uid\":\"e14d79e7-91f9-11e7-a598-f0761cb232d1\", \"clusterName\":\"\"}, \"spec\":map[string]interface {}{\"cronSpec\":\" *\", \"image\":\"my-awesome-cron-image\", \"replicas\":15}}: validation failure list: spec.cronSpec in body should match '^(\\d+|\\)(/\\d+)?(\\s+(\\d+|\\)(/\\d+)?){4}$' spec.replicas in body should be less than or equal to 10 ``` If the fields contain valid values, the object creation request is accepted. Save the following YAML to my-crontab.yaml: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: cronSpec: \" */5\" image: my-awesome-cron-image replicas: 5 ``` And create it: ``` kubectl apply -f my-crontab.yaml crontab \"my-new-cron-object\" created ``` If you are using a version of Kubernetes older than v1.30, you need to explicitly enable the CRDValidationRatcheting feature gate to use this behavior, which then applies to all CustomResourceDefinitions in your cluster. Provided you enabled the feature gate, Kubernetes implements validation racheting for CustomResourceDefinitions. The API server is willing to accept updates to resources that are not valid after the update, provided that each part of the resource that failed to validate was not changed by the update operation. In other words, any invalid part of the resource that remains invalid must have already been wrong. You cannot use this mechanism to update a valid resource so that it becomes invalid. This feature allows authors of CRDs to confidently add new validations to the OpenAPIV3 schema under certain conditions. Users can update to the new schema safely without bumping the version of the object or breaking workflows. While most validations placed in the OpenAPIV3 schema of a CRD support ratcheting, there are a few exceptions. 
The following OpenAPIV3 schema validations are not supported by ratcheting under the implementation in Kubernetes 1.30 and if violated will continue to throw an error as normally: Quantors x-kubernetes-validations For Kubernetes 1.28, CRD validation rules](#validation-rules) are ignored by ratcheting. Starting with Alpha 2 in Kubernetes 1.29, x-kubernetes-validations are ratcheted only if they do not refer to oldSelf. Transition Rules are never ratcheted: only errors raised by rules that do not use oldSelf will be automatically ratcheted if their values are unchanged. To write custom ratcheting logic for CEL expressions, check out optionalOldSelf. x-kubernetes-list-type Errors arising from changing the list type of a subschema will not be ratcheted. For example adding set onto a list with duplicates will always result in an error. x-kubernetes-map-keys Errors arising from changing the map keys of a list schema will not be ratcheted. required Errors arising from changing the list of required fields will not be" }, { "data": "properties Adding/removing/modifying the names of properties is not ratcheted, but changes to validations in each properties' schemas and subschemas may be ratcheted if the name of the property stays the same. additionalProperties To remove a previously specified additionalProperties validation will not be ratcheted. metadata Errors that come from Kubernetes' built-in validation of an object's metadata are not ratcheted (such as object name, or characters in a label value). If you specify your own additional rules for the metadata of a custom resource, that additional validation will be ratcheted. Validation rules use the Common Expression Language (CEL) to validate custom resource values. Validation rules are included in CustomResourceDefinition schemas using the x-kubernetes-validations extension. The Rule is scoped to the location of the x-kubernetes-validations extension in the schema. And self variable in the CEL expression is bound to the scoped value. All validation rules are scoped to the current object: no cross-object or stateful validation rules are supported. For example: ``` ... openAPIV3Schema: type: object properties: spec: type: object x-kubernetes-validations: rule: \"self.minReplicas <= self.replicas\" message: \"replicas should be greater than or equal to minReplicas.\" rule: \"self.replicas <= self.maxReplicas\" message: \"replicas should be smaller than or equal to maxReplicas.\" properties: ... minReplicas: type: integer replicas: type: integer maxReplicas: type: integer required: minReplicas replicas maxReplicas ``` will reject a request to create this custom resource: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: minReplicas: 0 replicas: 20 maxReplicas: 10 ``` with the response: ``` The CronTab \"my-new-cron-object\" is invalid: spec: Invalid value: map[string]interface {}{\"maxReplicas\":10, \"minReplicas\":0, \"replicas\":20}: replicas should be smaller than or equal to maxReplicas. ``` x-kubernetes-validations could have multiple rules. The rule under x-kubernetes-validations represents the expression which will be evaluated by CEL. The message represents the message displayed when validation fails. 
If message is unset, the above response would be: ``` The CronTab \"my-new-cron-object\" is invalid: spec: Invalid value: map[string]interface {}{\"maxReplicas\":10, \"minReplicas\":0, \"replicas\":20}: failed rule: self.replicas <= self.maxReplicas ``` Validation rules are compiled when CRDs are created/updated. The request of CRDs create/update will fail if compilation of validation rules fail. Compilation process includes type checking as well. The compilation failure: nomatchingoverload: this function has no overload for the types of the arguments. For example, a rule like self == true against a field of integer type will get error: ``` Invalid value: apiextensions.ValidationRule{Rule:\"self == true\", Message:\"\"}: compilation failed: ERROR: \\<input>:1:6: found no matching overload for '==' applied to '(int, bool)' ``` nosuchfield: does not contain the desired field. For example, a rule like self.nonExistingField > 0 against a non-existing field will return the following error: ``` Invalid value: apiextensions.ValidationRule{Rule:\"self.nonExistingField > 0\", Message:\"\"}: compilation failed: ERROR: \\<input>:1:5: undefined field 'nonExistingField' ``` invalid argument: invalid argument to macros. For example, a rule like has(self) will return error: ``` Invalid value: apiextensions.ValidationRule{Rule:\"has(self)\", Message:\"\"}: compilation failed: ERROR: <input>:1:4: invalid argument to has() macro ``` Validation Rules Examples: | Rule | Purpose | |:--|:--| | self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas | Validate that the three fields defining replicas are ordered appropriately | | 'Available' in self.stateCounts | Validate that an entry with the 'Available' key exists in a map | | (size(self.list1) == 0) != (size(self.list2) == 0) | Validate that one of two lists is non-empty, but not both | | !('MYKEY' in self.map1) || self['MYKEY'].matches('^[a-zA-Z]*$') | Validate the value of a map for a specific key, if it is in the map | | self.envars.filter(e, e.name == 'MYENV').all(e," }, { "data": "| Validate the 'value' field of a listMap entry where key field 'name' is 'MYENV' | | has(self.expired) && self.created + self.ttl < self.expired | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration | | self.health.startsWith('ok') | Validate a 'health' string field has the prefix 'ok' | | self.widgets.exists(w, w.key == 'x' && w.foo < 10) | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 | | type(self) == string ? self == '100%' : self == 1000 | Validate an int-or-string field for both the int and string cases | | self.metadata.name.startsWith(self.prefix) | Validate that an object's name has the prefix of another field value | | self.set1.all(e, !(e in self.set2)) | Validate that two listSets are disjoint | | size(self.names) == size(self.details) && self.names.all(n, n in self.details) | Validate the 'details' map is keyed by the items in the 'names' listSet | | size(self.clusters.filter(c, c.name == self.primary)) == 1 | Validate that the 'primary' property has one and only one occurrence in the 'clusters' listMap | Xref: Supported evaluation on CEL If the Rule is scoped to the root of a resource, it may make field selection into any fields declared in the OpenAPIv3 schema of the CRD as well as apiVersion, kind, metadata.name and metadata.generateName. This includes selection of fields in both the spec and status in the same expression: ``` ... 
openAPIV3Schema: type: object x-kubernetes-validations: rule: \"self.status.availableReplicas >= self.spec.minReplicas\" properties: spec: type: object properties: minReplicas: type: integer ... status: type: object properties: availableReplicas: type: integer ``` If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). Null valued fields are treated as absent fields in CEL expressions. ``` ... openAPIV3Schema: type: object properties: spec: type: object x-kubernetes-validations: rule: \"has(self.foo)\" properties: ... foo: type: integer ``` If the Rule is scoped to an object with additionalProperties (i.e. a map) the value of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the map are accessible via CEL macros and functions such as self.all(...). ``` ... openAPIV3Schema: type: object properties: spec: type: object x-kubernetes-validations: rule: \"self['xyz'].foo > 0\" additionalProperties: ... type: object properties: foo: type: integer ``` If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions. ``` ... openAPIV3Schema: type: object properties: ... foo: type: array x-kubernetes-validations: rule: \"size(self) == 1\" items: type: string ``` If the Rule is scoped to a scalar, self is bound to the scalar value. ``` ... openAPIV3Schema: type: object properties: spec: type: object properties: ... foo: type: integer x-kubernetes-validations: rule: \"self > 0\" ``` Examples: | type of the field rule scoped to | Rule example | |:--|:--| | root object | self.status.actual <= self.spec.maxDesired | | map of objects | self.components['Widget'].priority < 10 | | list of integers | self.values.all(value, value >= 0 && value < 100) | | string | self.startsWith('kube') | The apiVersion, kind, metadata.name and metadata.generateName are always accessible from the root of the object and from any x-kubernetes-embedded-resource annotated objects. No other metadata properties are accessible. Unknown data preserved in custom resources via x-kubernetes-preserve-unknown-fields is not accessible in CEL expressions. This includes: Unknown field values that are preserved by object schemas with x-kubernetes-preserve-unknown-fields. Object properties where the property schema is of an \"unknown" }, { "data": "An \"unknown type\" is recursively defined as: Only property names of the form * are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: | escape sequence | property name equivalent | |:|:| | underscores | | | dot | . | | dash | - | | slash | / | | {keyword} | CEL RESERVED keyword | Note: CEL RESERVED keyword needs to match the exact property name to be escaped (e.g. int in the word sprint would not be escaped). Examples on escaping: | property name | rule with escaped property name | |:-|:-| | namespace | self.namespace > 0 | | x-prop | self.xdashprop > 0 | | redactd | self.redactunderscoresd > 0 | | string | self.startsWith('kube') | Equality on arrays with x-kubernetes-list-type of set or map ignores element order, i.e., [1, 2] == [2, 1]. Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type: set: X + Y performs a union where the array positions of all elements in X are preserved and non-intersecting elements in Y are appended, retaining their partial order. 
map: X + Y performs a merge where the array positions of all keys in X are preserved but the values are overwritten by values in Y when the key sets of X and Y intersect. Elements in Y with non-intersecting keys are appended, retaining their partial order. Here is the declarations type mapping between OpenAPIv3 and CEL type: | OpenAPIv3 type | CEL type | |:|:--| | 'object' with Properties | object / \"message type\" | | 'object' with AdditionalProperties | map | | 'object' with x-kubernetes-embedded-type | object / \"message type\", 'apiVersion', 'kind', 'metadata.name' and 'metadata.generateName' are implicitly included in schema | | 'object' with x-kubernetes-preserve-unknown-fields | object / \"message type\", unknown fields are NOT accessible in CEL expression | | x-kubernetes-int-or-string | dynamic object that is either an int or a string, type(value) can be used to check the type | | 'array | list | | 'array' with x-kubernetes-list-type=map | list with map based Equality & unique key guarantees | | 'array' with x-kubernetes-list-type=set | list with set based Equality & unique entry guarantees | | 'boolean' | boolean | | 'number' (all formats) | double | | 'integer' (all formats) | int (64) | | 'null' | null_type | | 'string' | string | | 'string' with format=byte (base64 encoded) | bytes | | 'string' with format=date | timestamp (google.protobuf.Timestamp) | | 'string' with format=datetime | timestamp (google.protobuf.Timestamp) | | 'string' with format=duration | duration (google.protobuf.Duration) | xref: CEL types, OpenAPI types, Kubernetes Structural Schemas. Similar to the message field, which defines the string reported for a validation rule failure, messageExpression allows you to use a CEL expression to construct the message string. This allows you to insert more descriptive information into the validation failure message. messageExpression must evaluate a string and may use the same variables that are available to the rule field. For example: ``` x-kubernetes-validations: rule: \"self.x <= self.maxLimit\" messageExpression: '\"x exceeded max limit of \" + string(self.maxLimit)' ``` Keep in mind that CEL string concatenation (+ operator) does not auto-cast to string. If you have a non-string scalar, use the string(<value>) function to cast the scalar to a string like shown in the above example. messageExpression must evaluate to a string, and this is checked while the CRD is being" }, { "data": "Note that it is possible to set message and messageExpression on the same rule, and if both are present, messageExpression will be used. However, if messageExpression evaluates to an error, the string defined in message will be used instead, and the messageExpression error will be logged. This fallback will also occur if the CEL expression defined in messageExpression generates an empty string, or a string containing line breaks. If one of the above conditions are met and no message has been set, then the default validation failure message will be used instead. messageExpression is a CEL expression, so the restrictions listed in Resource use by validation functions apply. If evaluation halts due to resource constraints during messageExpression execution, then no further validation rules will be executed. Setting messageExpression is optional. If you want to set a static message, you can supply message rather than messageExpression. The value of message is used as an opaque error string if validation fails. Setting message is optional. 
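As a small sketch of how the fallback between the two fields works, a single rule can set both message and messageExpression; the field names below (replicas, maxReplicas) are reused from the earlier examples and are assumptions about the surrounding schema:
```
x-kubernetes-validations:
  - rule: self.replicas <= self.maxReplicas
    # used when it evaluates to a non-empty, single-line string
    messageExpression: string(self.replicas) + ' replicas exceeds the allowed maximum of ' + string(self.maxReplicas)
    # static fallback used if the expression errors or produces an empty string
    message: replicas must be less than or equal to maxReplicas
```
If messageExpression evaluates cleanly, its result is returned to the client; otherwise the static message above is used, as described earlier.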
You can add a machine-readable validation failure reason within a validation, to be returned whenever a request fails this validation rule. For example: ``` x-kubernetes-validations: rule: \"self.x <= self.maxLimit\" reason: \"FieldValueInvalid\" ``` The HTTP status code returned to the caller will match the reason of the first failed validation rule. The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequired\", \"FieldValueDuplicate\". If not set or unknown reasons, default to use \"FieldValueInvalid\". Setting reason is optional. You can specify the field path returned when the validation fails. For example: ``` x-kubernetes-validations: rule: \"self.foo.test.x <= self.maxLimit\" fieldPath: \".foo.test.x\" ``` In the example above, the validation checks the value of field x should be less than the value of maxLimit. If no fieldPath specified, when validation fails, the fieldPath would be default to wherever self scoped. With fieldPath specified, the returned error will have fieldPath properly refer to the location of field x. The fieldPath value must be a relative JSON path that is scoped to the location of this x-kubernetes-validations extension in the schema. Additionally, it should refer to an existing field within the schema. For example when validation checks if a specific attribute foo under a map testMap, you could set fieldPath to \".testMap.foo\" or .testMap['foo']'. If the validation requires checking for unique attributes in two lists, the fieldPath can be set to either of the lists. For example, it can be set to .testList1 or .testList2. It supports child operation to refer to an existing field currently. Refer to JSONPath support in Kubernetes for more info. The fieldPath field does not support indexing arrays numerically. Setting fieldPath is optional. If your cluster does not have CRD validation ratcheting enabled, the CustomResourceDefinition API doesn't include this field, and trying to set it may result in an error. The optionalOldSelf field is a boolean field that alters the behavior of Transition Rules described below. Normally, a transition rule will not evaluate if oldSelf cannot be determined: during object creation or when a new value is introduced in an update. If optionalOldSelf is set to true, then transition rules will always be evaluated and the type of oldSelf be changed to a CEL Optional type. optionalOldSelf is useful in cases where schema authors would like a more control tool than provided by the default equality based behavior of to introduce newer, usually stricter constraints on new values, while still allowing old values to be \"grandfathered\" or ratcheted using the older validation. Example Usage: | CEL | Description | |:-|--:| |" }, { "data": "== \"foo\" | nan | | [oldSelf.orValue(\"\"), self].all(x, [\"OldCase1\", \"OldCase2\"].exists(case, x == case)) | nan | | oldSelf.optMap(o, o.size()).orValue(0) < 4 | nan | Functions available include: A rule that contains an expression referencing the identifier oldSelf is implicitly considered a transition rule. Transition rules allow schema authors to prevent certain transitions between two otherwise valid states. 
For example: ``` type: string enum: [\"low\", \"medium\", \"high\"] x-kubernetes-validations: rule: \"!(self == 'high' && oldSelf == 'low') && !(self == 'low' && oldSelf == 'high')\" message: cannot transition directly between 'low' and 'high' ``` Unlike other rules, transition rules apply only to operations meeting the following criteria: The operation updates an existing object. Transition rules never apply to create operations. Both an old and a new value exist. It remains possible to check if a value has been added or removed by placing a transition rule on the parent node. Transition rules are never applied to custom resource creation. When placed on an optional field, a transition rule will not apply to update operations that set or unset the field. The path to the schema node being validated by a transition rule must resolve to a node that is comparable between the old object and the new object. For example, list items and their descendants (spec.foo[10].bar) can't necessarily be correlated between an existing object and a later update to the same object. Errors will be generated on CRD writes if a schema node contains a transition rule that can never be applied, e.g. \"path: update rule rule cannot be set on schema because the schema or its parent schema is not mergeable\". Transition rules are only allowed on correlatable portions of a schema. A portion of the schema is correlatable if all array parent schemas are of type x-kubernetes-list-type=map; any setor atomicarray parent schemas make it impossible to unambiguously correlate a self with oldSelf. Here are some examples for transition rules: | Use Case | Rule | |:|:-| | Immutability | self.foo == oldSelf.foo | | Prevent modification/removal once assigned | oldSelf != 'bar' || self == 'bar' or !has(oldSelf.field) || has(self.field) | | Append-only set | self.all(element, element in oldSelf) | | If previous value was X, new value can only be A or B, not Y or Z | oldSelf != 'X' || self in ['A', 'B'] | | Monotonic (non-decreasing) counters | self >= oldSelf | When you create or update a CustomResourceDefinition that uses validation rules, the API server checks the likely impact of running those validation rules. If a rule is estimated to be prohibitively expensive to execute, the API server rejects the create or update operation, and returns an error message. A similar system is used at runtime that observes the actions the interpreter takes. If the interpreter executes too many instructions, execution of the rule will be halted, and an error will result. Each CustomResourceDefinition is also allowed a certain amount of resources to finish executing all of its validation rules. If the sum total of its rules are estimated at creation time to go over that limit, then a validation error will also occur. You are unlikely to encounter issues with the resource budget for validation if you only specify rules that always take the same amount of time regardless of how large their input is. For example, a rule that asserts that self.foo == 1 does not by itself have any risk of rejection on validation resource budget" }, { "data": "But if foo is a string and you define a validation rule self.foo.contains(\"someString\"), that rule takes longer to execute depending on how long foo is. Another example would be if foo were an array, and you specified a validation rule self.foo.all(x, x > 5). 
The cost system always assumes the worst-case scenario if a limit on the length of foo is not given, and this will happen for anything that can be iterated over (lists, maps, etc.). Because of this, it is considered best practice to put a limit via maxItems, maxProperties, and maxLength for anything that will be processed in a validation rule in order to prevent validation errors during cost estimation. For example, given this schema with one rule: ``` openAPIV3Schema: type: object properties: foo: type: array items: type: string x-kubernetes-validations: rule: \"self.all(x, x.contains('a string'))\" ``` then the API server rejects this rule on validation budget grounds with error: ``` spec.validation.openAPIV3Schema.properties[spec].properties[foo].x-kubernetes-validations[0].rule: Forbidden: CEL rule exceeded budget by more than 100x (try simplifying the rule, or adding maxItems, maxProperties, and maxLength where arrays, maps, and strings are used) ``` The rejection happens because self.all implies calling contains() on every string in foo, which in turn will check the given string to see if it contains 'a string'. Without limits, this is a very expensive rule. If you do not specify any validation limit, the estimated cost of this rule will exceed the per-rule cost limit. But if you add limits in the appropriate places, the rule will be allowed: ``` openAPIV3Schema: type: object properties: foo: type: array maxItems: 25 items: type: string maxLength: 10 x-kubernetes-validations: rule: \"self.all(x, x.contains('a string'))\" ``` The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. For instance, the following rule will have the same estimated cost as the previous example (despite the rule now being defined on the individual array items): ``` openAPIV3Schema: type: object properties: foo: type: array maxItems: 25 items: type: string x-kubernetes-validations: rule: \"self.contains('a string'))\" maxLength: 10 ``` If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. A rule that would have been allowed on a non-nested list might need lower limits set on both nested lists in order to be allowed. For example, even without having limits set, the following rule is allowed: ``` openAPIV3Schema: type: object properties: foo: type: array items: type: integer x-kubernetes-validations: rule: \"self.all(x, x == 5)\" ``` But the same rule on the following schema (with a nested array added) produces a validation error: ``` openAPIV3Schema: type: object properties: foo: type: array items: type: array items: type: integer x-kubernetes-validations: rule: \"self.all(x, x == 5)\" ``` This is because each item of foo is itself an array, and each subarray in turn calls self.all. Avoid nested lists and maps if possible where validation rules are used. 
Defaulting allows to specify default values in the OpenAPI v3 validation schema: ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com spec: group: stable.example.com versions: name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string pattern: '^(\\d+|\\)(/\\d+)?(\\s+(\\d+|\\)(/\\d+)?){4}$' default: \"5 0 *\" image: type: string replicas: type: integer minimum: 1 maximum: 10 default: 1 scope: Namespaced names: plural: crontabs singular: crontab kind: CronTab shortNames: ct ``` With this both cronSpec and replicas are defaulted: ``` apiVersion:" }, { "data": "kind: CronTab metadata: name: my-new-cron-object spec: image: my-awesome-cron-image ``` leads to ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: cronSpec: \"5 0 *\" image: my-awesome-cron-image replicas: 1 ``` Defaulting happens on the object Defaults applied when reading data from etcd are not automatically written back to etcd. An update request via the API is required to persist those defaults back into etcd. Default values must be pruned (with the exception of defaults for metadata fields) and must validate against a provided schema. Default values for metadata fields of x-kubernetes-embedded-resources: true nodes (or parts of a default value covering metadata) are not pruned during CustomResourceDefinition creation, but through the pruning step during handling of requests. Null values for fields that either don't specify the nullable flag, or give it a false value, will be pruned before defaulting happens. If a default is present, it will be applied. When nullable is true, null values will be conserved and won't be defaulted. For example, given the OpenAPI schema below: ``` type: object properties: spec: type: object properties: foo: type: string nullable: false default: \"default\" bar: type: string nullable: true baz: type: string ``` creating an object with null values for foo and bar and baz ``` spec: foo: null bar: null baz: null ``` leads to ``` spec: foo: \"default\" bar: null ``` with foo pruned and defaulted because the field is non-nullable, bar maintaining the null value due to nullable: true, and baz pruned because the field is non-nullable and has no default. CustomResourceDefinition OpenAPI v3 validation schemas which are structural and enable pruning are published as OpenAPI v3 and OpenAPI v2 from Kubernetes API server. It is recommended to use the OpenAPI v3 document as it is a lossless representation of the CustomResourceDefinition OpenAPI v3 validation schema while OpenAPI v2 represents a lossy conversion. The kubectl command-line tool consumes the published schema to perform client-side validation (kubectl create and kubectl apply), schema explanation (kubectl explain) on custom resources. The published schema can be consumed for other purposes as well, like client generation or documentation. For compatibility with OpenAPI V2, the OpenAPI v3 validation schema performs a lossy conversion to the OpenAPI v2 schema. The schema show up in definitions and paths fields in the OpenAPI v2 spec. The following modifications are applied during the conversion to keep backwards compatibility with kubectl in previous 1.13 version. These modifications prevent kubectl from being over-strict and rejecting valid OpenAPI schemas that it doesn't understand. 
The conversion won't modify the validation schema defined in CRD, and therefore won't affect validation in the API server. The following fields are removed as they aren't supported by OpenAPI v2. If nullable: true is set, we drop type, nullable, items and properties because OpenAPI v2 is not able to express nullable. To avoid kubectl to reject good objects, this is necessary. The kubectl tool relies on server-side output formatting. Your cluster's API server decides which columns are shown by the kubectl get command. You can customize these columns for a CustomResourceDefinition. The following example adds the Spec, Replicas, and Age columns. Save the CustomResourceDefinition to resourcedefinition.yaml: ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com spec: group:" }, { "data": "scope: Namespaced names: plural: crontabs singular: crontab kind: CronTab shortNames: ct versions: name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer additionalPrinterColumns: name: Spec type: string description: The cron spec defining the interval a CronJob is run jsonPath: .spec.cronSpec name: Replicas type: integer description: The number of jobs launched by the CronJob jsonPath: .spec.replicas name: Age type: date jsonPath: .metadata.creationTimestamp ``` Create the CustomResourceDefinition: ``` kubectl apply -f resourcedefinition.yaml ``` Create an instance using the my-crontab.yaml from the previous section. Invoke the server-side printing: ``` kubectl get crontab my-new-cron-object ``` Notice the NAME, SPEC, REPLICAS, and AGE columns in the output: ``` NAME SPEC REPLICAS AGE my-new-cron-object * 1 7s ``` Field Selectors let clients select custom resources based on the value of one or more resource fields. All custom resources support the metadata.name and metadata.namespace field selectors. Fields declared in a CustomResourceDefinition may also be used with field selectors when included in the spec.versions[*].selectableFields field of the CustomResourceDefinition. You need to enable the CustomResourceFieldSelectors feature gate to use this behavior, which then applies to all CustomResourceDefinitions in your cluster. The spec.versions[*].selectableFields field of a CustomResourceDefinition may be used to declare which other fields in a custom resource may be used in field selectors. The following example adds the .spec.color and .spec.size fields as selectable fields. 
Save the CustomResourceDefinition to shirt-resource-definition.yaml: ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: shirts.stable.example.com spec: group: stable.example.com scope: Namespaced names: plural: shirts singular: shirt kind: Shirt versions: name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: color: type: string size: type: string selectableFields: jsonPath: .spec.color jsonPath: .spec.size additionalPrinterColumns: jsonPath: .spec.color name: Color type: string jsonPath: .spec.size name: Size type: string ``` Create the CustomResourceDefinition: ``` kubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resource-definition.yaml ``` Define some Shirts by editing shirt-resources.yaml; for example: ``` apiVersion: stable.example.com/v1 kind: Shirt metadata: name: example1 spec: color: blue size: S apiVersion: stable.example.com/v1 kind: Shirt metadata: name: example2 spec: color: blue size: M apiVersion: stable.example.com/v1 kind: Shirt metadata: name: example3 spec: color: green size: M ``` Create the custom resources: ``` kubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resources.yaml ``` Get all the resources: ``` kubectl get shirts.stable.example.com ``` The output is: ``` NAME COLOR SIZE example1 blue S example2 blue M example3 green M ``` Fetch blue shirts (retrieve Shirts with a color of blue): ``` kubectl get shirts.stable.example.com --field-selector spec.color=blue ``` Should output: ``` NAME COLOR SIZE example1 blue S example2 blue M ``` Get only resources with a color of green and a size of M: ``` kubectl get shirts.stable.example.com --field-selector spec.color=green,spec.size=M ``` Should output: ``` NAME COLOR SIZE example2 blue M ``` Each column includes a priority field. Currently, the priority differentiates between columns shown in standard view or wide view (using the -o wide flag). A column's type field can be any of the following (compare OpenAPI v3 data types): If the value inside a CustomResource does not match the type specified for the column, the value is omitted. Use CustomResource validation to ensure that the value types are correct. A column's format field can be any of the following: The column's format controls the style used when kubectl prints the value. Custom resources support /status and /scale subresources. The status and scale subresources can be optionally enabled by defining them in the CustomResourceDefinition. When the status subresource is enabled, the /status subresource for the custom resource is exposed. The status and the spec stanzas are represented by the .status and .spec JSONPaths respectively inside of a custom resource. PUT requests to the /status subresource take a custom resource object and ignore changes to anything except the status" }, { "data": "PUT requests to the /status subresource only validate the status stanza of the custom resource. PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza. The .metadata.generation value is incremented for all changes, except for changes to .metadata or .status. Only the following constructs are allowed at the root of the CRD OpenAPI validation schema: When the scale subresource is enabled, the /scale subresource for the custom resource is exposed. The autoscaling/v1.Scale object is sent as the payload for /scale. To enable the scale subresource, the following fields are defined in the CustomResourceDefinition. 
specReplicasPath defines the JSONPath inside of a custom resource that corresponds to scale.spec.replicas. statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to scale.status.replicas. labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector. In the following example, both status and scale subresources are enabled. Save the CustomResourceDefinition to resourcedefinition.yaml: ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com spec: group: stable.example.com versions: name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer status: type: object properties: replicas: type: integer labelSelector: type: string subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .status.labelSelector scope: Namespaced names: plural: crontabs singular: crontab kind: CronTab shortNames: ct ``` And create it: ``` kubectl apply -f resourcedefinition.yaml ``` After the CustomResourceDefinition object has been created, you can create custom objects. If you save the following YAML to my-crontab.yaml: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: cronSpec: \" */5\" image: my-awesome-cron-image replicas: 3 ``` and create it: ``` kubectl apply -f my-crontab.yaml ``` Then new namespaced RESTful API endpoints are created at: ``` /apis/stable.example.com/v1/namespaces/*/crontabs/status ``` and ``` /apis/stable.example.com/v1/namespaces/*/crontabs/scale ``` A custom resource can be scaled using the kubectl scale command. For example, the following command sets .spec.replicas of the custom resource created above to 5: ``` kubectl scale --replicas=5 crontabs/my-new-cron-object crontabs \"my-new-cron-object\" scaled kubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}' 5 ``` You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled. Categories is a list of grouped resources the custom resource belongs to (eg. all). You can use kubectl get <category-name> to list the resources belonging to the category. The following example adds all in the list of categories in the CustomResourceDefinition and illustrates how to output the custom resource using kubectl get all. Save the following CustomResourceDefinition to resourcedefinition.yaml: ``` apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com spec: group: stable.example.com versions: name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced names: plural: crontabs singular: crontab kind: CronTab shortNames: ct categories: all ``` and create it: ``` kubectl apply -f resourcedefinition.yaml ``` After the CustomResourceDefinition object has been created, you can create custom objects. 
Save the following YAML to my-crontab.yaml: ``` apiVersion: \"stable.example.com/v1\" kind: CronTab metadata: name: my-new-cron-object spec: cronSpec: \" */5\" image: my-awesome-cron-image ``` and create it: ``` kubectl apply -f my-crontab.yaml ``` You can specify the category when using kubectl get: ``` kubectl get all ``` and it will include the custom resources of kind CronTab: ``` NAME AGE crontabs/my-new-cron-object 3s ``` Read about custom resources. See CustomResourceDefinition. Serve multiple versions of a CustomResourceDefinition." } ]
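As a sketch of how the /scale and /status endpoints listed earlier can be consumed directly (assuming the CronTab object from the scale subresource example exists in the default namespace):
```
# read the autoscaling/v1 Scale object exposed by the scale subresource
kubectl get --raw /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object/scale

# read the full object, including its status stanza, via the status subresource
kubectl get --raw /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object/status
```
Anything that understands the Scale object, such as the Horizontal Pod Autoscaler, can drive the replicas of the custom resource through the same /scale endpoint.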
{ "category": "Orchestration & Management", "file_name": "#load-balancing.md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Docker recommends you use the Docker Official Images in your projects. These images have clear documentation, promote best practices, and are regularly updated. Docker Official Images support most common use cases, making them perfect for new Docker users. Advanced users can benefit from more specialized image variants as well as review Docker Official Images as part of your Dockerfile learning process. The repository description for each Docker Official Image contains a Supported tags and respective Dockerfile links section that lists all the current tags with links to the Dockerfiles that created the image with those tags. The purpose of this section is to show what image variants are available. Tags listed on the same line all refer to the same underlying image. Multiple tags can point to the same image. For example, in the previous screenshot taken from the ubuntu Docker Official Images repository, the tags 24.04, noble-20240225, noble, and devel all refer to the same image. The latest tag for a Docker Official Image is often optimized for ease of use and includes a wide variety of useful software, such as developer and build tools. By tagging an image as latest, the image maintainers are essentially suggesting that image be used as the default. In other words, if you do not know what tag to use or are unfamiliar with the underlying software, you should probably start with the latest image. As your understanding of the software and image variants advances, you may find other image variants better suit your needs. A number of language stacks such as Node.js, Python, and Ruby have slim tag variants designed to provide a lightweight, production-ready base image with fewer packages. A typical consumption pattern for slim images is as the base image for the final stage of a multi-staged build. For example, you build your application in the first stage of the build using the latest variant and then copy your application into the final stage based upon the slim variant. Here is an example Dockerfile. ``` FROM node:latest AS build WORKDIR /app COPY package.json package-lock.json" }, { "data": "RUN npm ci COPY . ./ FROM node:slim WORKDIR /app COPY --from=build /app /app CMD [\"node\", \"app.js\"]``` Many Docker Official Images repositories also offer alpine variants. These images are built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than slim variants. The main caveat to note is that Alpine Linux uses musl libc instead of glibc. Additionally, to minimize image size, it's uncommon for Alpine-based images to include tools such as Git or Bash by default. Depending on the depth of libc requirements or assumptions in your programs, you may find yourself running into issues due to missing libraries or tools. When you use Alpine images as a base, consider the following options in order to make your program compatible with Alpine Linux and musl: Refer to the alpine image description on Docker Hub for examples on how to install packages if you are unfamiliar. Tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as focal, jammy, and noble), indicate the codename of the Linux distribution they use as a base image. 
Debian release codenames are based on Toy Story characters, and Ubuntu's take the form of \"Adjective Animal\". For example, the codename for Ubuntu 24.04 is \"Noble Numbat\". Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye). Docker Official Images tags may contain other hints to the purpose of their image variant in addition to those described here. Often these tag variants are explained in the Docker Official Images repository documentation. Reading through the How to use this image and Image Variants sections will help you to understand how to use these variants." } ]
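As a concrete illustration of the tag naming scheme above, the following commands pull two of the variants mentioned in this article. This is a sketch only - the set of available tags changes over time, so check each repository's Supported tags section on Docker Hub:
```
# an Ubuntu image referenced by its release codename
docker pull ubuntu:noble

# a PostgreSQL variant built on Debian 12 (bookworm)
docker pull postgres:bookworm
```
Pinning a codename-qualified tag like these keeps your base image on a known distribution release even as latest moves on.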
{ "category": "Orchestration & Management", "file_name": "#declarative-service-model.md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "You can add seats and manage invitations to your Docker Build Cloud Team in the Docker Build Cloud dashboard. Note If you have a Docker Build Cloud Business subscription, you can add and remove seats by working with your account executive, then assign your purchased seats in the Docker Build Cloud dashboard. The number of seats will be charged to your payment information on file, and are added immediately. The charge for the reduced seat count will be reflected on the next billing cycle. Optionally, you can cancel the seat downgrade any time before the next billing cycle. As an owner of the Docker Build Cloud team, you can invite members to access cloud builders. To invite team members to your team in Docker Build Cloud: Invitees receive an email with instructions on how they can accept the invite. After they accept, the seat will be marked as Allocated in the User management section in the Docker Build Cloud dashboard. For more information on the permissions granted to members, see Roles and permissions. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Orchestration & Management", "file_name": "#rolling-updates.md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "The Docker Verified Publisher Program provides high-quality images from commercial publishers verified by Docker. These images help development teams build secure software supply chains, minimizing exposure to malicious content early in the process to save time and money later. Images that are part of this program have a special badge on Docker Hub making it easier for users to identify projects that Docker has verified as high-quality commercial publishers. The Docker Verified Publisher Program (DVP) provides several features and benefits to Docker Hub publishers. The program grants the following perks based on participation tier: DVP organizations can upload custom images for individual repositories on Docker Hub. This lets you override the default organization-level logo on a per-repository basis. Only a user with administrative access (owner or team member with administrator permission) over the repository can change the repository logo. Select the Clear button ( ) to remove a logo. Removing the logo makes the repository default to using the organization logo, if set, or the following default logo if not. Images that are part of this program have a badge on Docker Hub making it easier for developers to identify projects that Docker has verified as high quality publishers and with content they can trust. The insights and analytics service provides usage metrics for how the community uses Docker images, granting insight into user behavior. The usage metrics show the number of image pulls by tag or by digest, and breakdowns by geolocation, cloud provider, client, and more. You can select the time span for which you want to view analytics data. You can also export the data in either a summary or raw format. Docker Scout provides automatic vulnerability analysis for DVP images published to Docker Hub. Scanning images ensures that the published content is secure, and proves to developers that they can trust the image. You can enable analysis on a per-repository basis. For more about using this feature, see Basic vulnerability scanning. Any independent software vendor who distributes software on Docker Hub can join the Verified Publisher Program. Find out more by heading to the Docker Verified Publisher Program page. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "| 0 | 1 | |:|:-| | Description | Initialize a swarm | | Usage | docker swarm init [OPTIONS] | Swarm This command works with the Swarm orchestrator. Initialize a swarm. The Docker Engine targeted by this command becomes a manager in the newly created single-node swarm. | Option | Default | Description | |:--|:-|:| | --advertise-addr | nan | Advertised address (format: <ip|interface>[:port]) | | --autolock | nan | Enable manager autolocking (requiring an unlock key to start a stopped manager) | | --availability | active | Availability of the node (active, pause, drain) | | --cert-expiry | 2160h0m0s | Validity period for node certificates (ns|us|ms|s|m|h) | | --data-path-addr | nan | API 1.31+ Address or interface to use for data path traffic (format: <ip|interface>) | | --data-path-port | nan | API 1.40+ Port number to use for data path traffic (1024 - 49151). If no value is set or is set to 0, the default port (4789) is used. | | --default-addr-pool | nan | API 1.39+ default address pool in CIDR format | | --default-addr-pool-mask-length | 24 | API 1.39+ default address pool subnet mask length | | --dispatcher-heartbeat | 5s | Dispatcher heartbeat period (ns|us|ms|s|m|h) | | --external-ca | nan | Specifications of one or more certificate signing endpoints | | --force-new-cluster | nan | Force create a new cluster from current state | | --listen-addr | 0.0.0.0:2377 | Listen address (format: <ip|interface>[:port]) | | --max-snapshots | nan | API 1.25+ Number of additional Raft snapshots to retain | | --snapshot-interval | 10000 | API 1.25+ Number of log entries between Raft snapshots | | --task-history-limit | 5 | Task history retention limit | ``` $ docker swarm init --advertise-addr 192.168.99.121 Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager. To add a worker to this swarm, run the following command: docker swarm join --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx 172.17.0.2:2377 To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. ``` The docker swarm init command generates two random tokens: a worker token and a manager token. When you join a new node to the swarm, the node joins as a worker or manager node based upon the token you pass to swarm join. After you create the swarm, you can display or rotate the token using swarm join-token. The --autolock flag enables automatic locking of managers with an encryption" }, { "data": "The private keys and data stored by all managers are protected by the encryption key printed in the output, and is inaccessible without it. Make sure to store this key securely, in order to reactivate a manager after it restarts. Pass the key to the docker swarm unlock command to reactivate the manager. You can disable autolock by running docker swarm update --autolock=false. After disabling it, the encryption key is no longer required to start the manager, and it will start up on its own without user intervention. The --dispatcher-heartbeat flag sets the frequency at which nodes are told to report their health. This flag sets up the swarm to use an external CA to issue node certificates. The value takes the form protocol=X,url=Y. The value for protocol specifies what protocol should be used to send signing requests to the external CA. Currently, the only supported value is cfssl. The URL specifies the endpoint where signing requests should be submitted. 
This flag forces an existing node that was part of a quorum that was lost to restart as a single-node Manager without losing its data. The node listens for inbound swarm manager traffic on this address. The default is to listen on 0.0.0.0:2377. It is also possible to specify a network interface to listen on that interface's address; for example --listen-addr eth0:2377. Specifying a port is optional. If the value is a bare IP address or interface name, the default port 2377 is used. The --advertise-addr flag specifies the address that will be advertised to other members of the swarm for API access and overlay networking. If unspecified, Docker will check if the system has a single IP address, and use that IP address with the listening port (see --listen-addr). If the system has multiple IP addresses, --advertise-addr must be specified so that the correct address is chosen for inter-manager communication and overlay networking. It is also possible to specify a network interface to advertise that interface's address; for example --advertise-addr eth0:2377. Specifying a port is optional. If the value is a bare IP address or interface name, the default port 2377 is used. The --data-path-addr flag specifies the address that global scope network drivers will publish towards other nodes in order to reach the containers running on this node. Using this parameter you can separate the container's data traffic from the management traffic of the" }, { "data": "If unspecified, the IP address or interface of the advertise address is used. Setting --data-path-addr does not restrict which interfaces or source IP addresses the VXLAN socket is bound to. Similar to --advertise-addr, the purpose of this flag is to inform other members of the swarm about which address to use for control plane traffic. To restrict access to the VXLAN port of the node, use firewall rules. The --data-path-port flag allows you to configure the UDP port number to use for data path traffic. The provided port number must be within the 1024 - 49151 range. If this flag isn't set, or if it's set to 0, the default port number 4789 is used. The data path port can only be configured when initializing the swarm, and applies to all nodes that join the swarm. The following example initializes a new Swarm, and configures the data path port to UDP port 7777; ``` $ docker swarm init --data-path-port=7777 ``` After the swarm is initialized, use the docker info command to verify that the port is configured: ``` $ docker info <...> ClusterID: 9vs5ygs0gguyyec4iqf2314c0 Managers: 1 Nodes: 1 Data Path Port: 7777 <...> ``` The --default-addr-pool flag specifies default subnet pools for global scope networks. For example, to specify two address pools: ``` $ docker swarm init \\ --default-addr-pool 30.30.0.0/16 \\ --default-addr-pool 40.40.0.0/16 ``` Use the --default-addr-pool-mask-length flag to specify the default subnet pools mask length for the subnet pools. This flag sets the number of old Raft snapshots to retain in addition to the current Raft snapshots. By default, no old snapshots are retained. This option may be used for debugging, or to store old snapshots of the swarm state for disaster recovery purposes. The --snapshot-interval flag specifies how many log entries to allow in between Raft snapshots. Setting this to a high number will trigger snapshots less frequently. Snapshots compact the Raft log and allow for more efficient transfer of the state to new managers. However, there is a performance cost to taking snapshots frequently. 
The --availability flag specifies the availability of the node at the time the node joins a master. Possible availability values are active, pause, or drain. This flag is useful in certain situations. For example, a cluster may want to have dedicated manager nodes that don't serve as worker nodes. You can do this by passing --availability=drain to docker swarm init. Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Orchestration & Management", "file_name": "#cluster-management-integrated-with-docker-engine.md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Docker Desktop is licensed under the Docker Subscription Service Agreement. When you download and install Docker Desktop, you will be asked to agree to the updated terms. Our Docker Subscription Service Agreement states: Read the Blog and Docker subscription FAQs to learn how this may affect companies using Docker Desktop. Note The licensing and distribution terms for Docker and Moby open-source projects, such as Docker Engine, aren't changing. Docker Desktop is built using open-source software. For information about the licensing of open-source components in Docker Desktop, select About Docker Desktop > Acknowledgements. Docker Desktop distributes some components that are licensed under the GNU General Public License. Select here to download the source for these components. Tip Explore Docker subscriptions to see what else Docker can offer you. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Orchestration & Management", "file_name": "#scaling.md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Docker, Inc. sponsors a dedicated team that's responsible for reviewing and publishing all content in Docker Official Images. This team works in collaboration with upstream software maintainers, security experts, and the broader Docker community. While it's preferable to have upstream software authors maintaining their Docker Official Images, this isn't a strict requirement. Creating and maintaining images for Docker Official Images is a collaborative process. It takes place openly on GitHub where participation is encouraged. Anyone can provide feedback, contribute code, suggest process changes, or even propose a new Official Image. From a high level, an Official Image starts out as a proposal in the form of a set of GitHub pull requests. The following GitHub repositories detail the proposal requirements: The Docker Official Images team, with help from community contributors, formally review each proposal and provide feedback to the author. This initial review process can be lengthy, often requiring a bit of back-and-forth before the proposal is accepted. There are subjective considerations during the review process. These subjective concerns boil down to the basic question: \"is this image generally useful?\" For example, the Python Docker Official Image is \"generally useful\" to the larger Python developer community, whereas an obscure text adventure game written in Python last week is not. Once a new proposal is accepted, the author is responsible for keeping their images and documentation up-to-date and responding to user feedback. Docker is responsible for building and publishing the images on Docker Hub. Updates to Docker Official Images follow the same pull request process as for new images, although the review process for updates is more streamlined. The Docker Official Images team ultimately acts as a gatekeeper for all changes, which helps ensures consistency, quality, and security. All Docker Official Images contain a User Feedback section in their documentation which covers the details for that specific repository. In most cases, the GitHub repository which contains the Dockerfiles for an Official Image also has an active issue tracker. General feedback and support questions about Docker Official Images should be directed to the #general channel in the Docker Community Slack. If you're a maintainer or contributor to Docker Official Images and you're looking for help or advice, use the #docker-library channel on Libera.Chat IRC. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Orchestration & Management", "file_name": "#decentralized-design.md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "You can enhance your teams' builds with a Build Cloud subscription. This page describes the features available for the different subscription tiers. To compare features available for each tier, see Docker Build Cloud pricing. If you have an existing Docker Core subscription, a base level of Build Cloud minutes and cache are included. The features available vary depending on your Docker Core subscription tier. You can buy Docker Build Cloud Team if you dont have a Docker Core subscription, or upgrade any Docker Core tier to enhance your developers' experience with the following features: The Docker Build Cloud Team subscription is tied to a Docker organization. To use the build minutes or shared cache of a Docker Build Cloud Team subscription, users must be a part of the organization associated with the subscription. See Manage seats and invites. To learn how to buy this subscription for your Docker organization, see Buy your subscription - existing account or organization. If you havent created a Docker organization yet and dont have an existing Docker Core subscription, see Buy your subscription - new organization. For organizations without a Docker Core subscription, this plan also includes 50 shared minutes in addition to the Docker Build Cloud Team minutes. For enterprise features such as paying via invoice and additional build minutes, contact sales. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Orchestration & Management", "file_name": "new_template=doc_issue.yml&location=https%3a%2f%2fdocs.docker.com%2fengine%2fswarm%2f&labels=status%2Ftriage.md", "project_name": "Docker Swarm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "| 0 | 1 | |:|:| | Description | Create a new service | | Usage | docker service create [OPTIONS] IMAGE [COMMAND] [ARG...] | Swarm This command works with the Swarm orchestrator. Creates a service as described by the specified parameters. Note This is a cluster management command, and must be executed on a swarm manager node. To learn about managers and workers, refer to the Swarm mode section in the documentation. | Option | Default | Description | |:--|:--|:--| | --cap-add | nan | API 1.41+ Add Linux capabilities | | --cap-drop | nan | API 1.41+ Drop Linux capabilities | | --config | nan | API 1.30+ Specify configurations to expose to the service | | --constraint | nan | Placement constraints | | --container-label | nan | Container labels | | --credential-spec | nan | API 1.29+ Credential spec for managed service account (Windows only) | | -d, --detach | nan | API 1.29+ Exit immediately instead of waiting for the service to converge | | --dns | nan | API 1.25+ Set custom DNS servers | | --dns-option | nan | API 1.25+ Set DNS options | | --dns-search | nan | API 1.25+ Set custom DNS search domains | | --endpoint-mode | vip | Endpoint mode (vip or dnsrr) | | --entrypoint | nan | Overwrite the default ENTRYPOINT of the image | | -e, --env | nan | Set environment variables | | --env-file | nan | Read in a file of environment variables | | --generic-resource | nan | User defined resources | | --group | nan | API 1.25+ Set one or more supplementary user groups for the container | | --health-cmd | nan | API 1.25+ Command to run to check health | | --health-interval | nan | API 1.25+ Time between running the check (ms|s|m|h) | | --health-retries | nan | API 1.25+ Consecutive failures needed to report unhealthy | | --health-start-interval | nan | API 1.44+ Time between running the check during the start period (ms|s|m|h) | | --health-start-period | nan | API 1.29+ Start period for the container to initialize before counting retries towards unstable (ms|s|m|h) | | --health-timeout | nan | API 1.25+ Maximum time to allow one check to run (ms|s|m|h) | | --host | nan | API 1.25+ Set one or more custom host-to-IP mappings (host:ip) | | --hostname | nan | API 1.25+ Container hostname | | --init | nan | API 1.37+ Use an init inside each service container to forward signals and reap processes | | --isolation | nan | API 1.35+ Service container isolation mode | | -l, --label | nan | Service labels | | --limit-cpu | nan | Limit CPUs | | --limit-memory | nan | Limit Memory | | --limit-pids | nan | API 1.41+ Limit maximum number of processes (default 0 = unlimited) | | --log-driver | nan | Logging driver for service | | --log-opt | nan | Logging driver options | | --max-concurrent | nan | API 1.41+ Number of job tasks to run concurrently (default equal to --replicas) | | --mode | replicated | Service mode (replicated, global, replicated-job, global-job) | | --mount | nan | Attach a filesystem mount to the service | | --name | nan | Service name | | --network | nan | Network attachments | | --no-healthcheck | nan | API 1.25+ Disable any container-specified HEALTHCHECK | | --no-resolve-image | nan | API" }, { "data": "Do not query the registry to resolve image digest and supported platforms | | --placement-pref | nan | API 1.28+ Add a placement preference | | -p, --publish | nan | Publish a port as a node port | | -q, --quiet | nan | Suppress progress output | | --read-only | nan | API 1.28+ Mount the container's root filesystem as read only | | --replicas | nan | Number of tasks | | 
--replicas-max-per-node | nan | API 1.40+ Maximum number of tasks per node (default 0 = unlimited) | | --reserve-cpu | nan | Reserve CPUs | | --reserve-memory | nan | Reserve Memory | | --restart-condition | nan | Restart when condition is met (none, on-failure, any) (default any) | | --restart-delay | nan | Delay between restart attempts (ns|us|ms|s|m|h) (default 5s) | | --restart-max-attempts | nan | Maximum number of restarts before giving up | | --restart-window | nan | Window used to evaluate the restart policy (ns|us|ms|s|m|h) | | --rollback-delay | nan | API 1.28+ Delay between task rollbacks (ns|us|ms|s|m|h) (default 0s) | | --rollback-failure-action | nan | API 1.28+ Action on rollback failure (pause, continue) (default pause) | | --rollback-max-failure-ratio | nan | API 1.28+ Failure rate to tolerate during a rollback (default 0) | | --rollback-monitor | nan | API 1.28+ Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h) (default 5s) | | --rollback-order | nan | API 1.29+ Rollback order (start-first, stop-first) (default stop-first) | | --rollback-parallelism | 1 | API 1.28+ Maximum number of tasks rolled back simultaneously (0 to roll back all at once) | | --secret | nan | API 1.25+ Specify secrets to expose to the service | | --stop-grace-period | nan | Time to wait before force killing a container (ns|us|ms|s|m|h) (default 10s) | | --stop-signal | nan | API 1.28+ Signal to stop the container | | --sysctl | nan | API 1.40+ Sysctl options | | -t, --tty | nan | API 1.25+ Allocate a pseudo-TTY | | --ulimit | nan | API 1.41+ Ulimit options | | --update-delay | nan | Delay between updates (ns|us|ms|s|m|h) (default 0s) | | --update-failure-action | nan | Action on update failure (pause, continue, rollback) (default pause) | | --update-max-failure-ratio | nan | API 1.25+ Failure rate to tolerate during an update (default 0) | | --update-monitor | nan | API 1.25+ Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 5s) | | --update-order | nan | API 1.29+ Update order (start-first, stop-first) (default stop-first) | | --update-parallelism | 1 | Maximum number of tasks updated simultaneously (0 to update all at once) | | -u, --user | nan | Username or UID (format: <name|uid>[:<group|gid>]) | | --with-registry-auth | nan | Send registry authentication details to swarm agents | | -w, --workdir | nan | Working directory inside the container | ``` $ docker service create --name redis redis:3.0.6 dmu1ept4cxcfe8k8lhtux3ro3 $ docker service create --mode global --name redis2 redis:3.0.6 a8q9dasaafudfs8q8w32udass $ docker service ls ID NAME MODE REPLICAS IMAGE dmu1ept4cxcf redis replicated 1/1 redis:3.0.6 a8q9dasaafud redis2 global 1/1 redis:3.0.6 ``` If your image is available on a private registry which requires login, use the --with-registry-auth flag with docker service create, after logging in. If your image is stored on registry.example.com, which is a private registry, use a command like the following: ``` $ docker login registry.example.com $ docker service create \\ --with-registry-auth \\ --name my_service \\" }, { "data": "``` This passes the login token from your local client to the swarm nodes where the service is deployed, using the encrypted WAL logs. With this information, the nodes are able to log in to the registry and pull the image. Use the --replicas flag to set the number of replica tasks for a replicated service. 
The following command creates a redis service with 5 replica tasks: ``` $ docker service create --name redis --replicas=5 redis:3.0.6 4cdgfyky7ozwh3htjfw0d12qv ``` The above command sets the desired number of tasks for the service. Even though the command returns immediately, actual scaling of the service may take some time. The REPLICAS column shows both the actual and desired number of replica tasks for the service. In the following example the desired state is 5 replicas, but the current number of RUNNING tasks is 3: ``` $ docker service ls ID NAME MODE REPLICAS IMAGE 4cdgfyky7ozw redis replicated 3/5 redis:3.0.7 ``` Once all the tasks are created and RUNNING, the actual number of tasks is equal to the desired number: ``` $ docker service ls ID NAME MODE REPLICAS IMAGE 4cdgfyky7ozw redis replicated 5/5 redis:3.0.7 ``` Use the --secret flag to give a container access to a secret. Create a service specifying a secret: ``` $ docker service create --name redis --secret secret.json redis:3.0.6 4cdgfyky7ozwh3htjfw0d12qv ``` Create a service specifying the secret, target, user/group ID, and mode: ``` $ docker service create --name redis \\ --secret source=ssh-key,target=ssh \\ --secret source=app-key,target=app,uid=1000,gid=1001,mode=0400 \\ redis:3.0.6 4cdgfyky7ozwh3htjfw0d12qv ``` To grant a service access to multiple secrets, use multiple --secret flags. Secrets are located in /run/secrets in the container if no target is specified. If no target is specified, the name of the secret is used as the in memory file in the container. If a target is specified, that is used as the filename. In the example above, two files are created: /run/secrets/ssh and /run/secrets/app for each of the secret targets specified. Use the --config flag to give a container access to a config. Create a service with a config. The config will be mounted into redis-config, be owned by the user who runs the command inside the container (often root), and have file mode 0444 or world-readable. You can specify the uid and gid as numerical IDs or names. When using names, the provided group/user names must pre-exist in the container. The mode is specified as a 4-number sequence such as 0755. ``` $ docker service create --name=redis --config redis-conf redis:3.0.6 ``` Create a service with a config and specify the target location and file mode: ``` $ docker service create --name redis \\ --config source=redis-conf,target=/etc/redis/redis.conf,mode=0400 redis:3.0.6 ``` To grant a service access to multiple configs, use multiple --config flags. Configs are located in / in the container if no target is specified. If no target is specified, the name of the config is used as the name of the file in the container. If a target is specified, that is used as the filename. ``` $ docker service create \\ --replicas 10 \\ --name redis \\ --update-delay 10s \\ --update-parallelism 2 \\ redis:3.0.6 ``` When you run a service update, the scheduler updates a maximum of 2 tasks at a time, with 10s between updates. For more information, refer to the rolling updates tutorial. This sets an environment variable for all tasks in a service. 
For example: ``` $ docker service create \\ --name redis_2 \\ --replicas 5 \\ --env MYVAR=foo \\ redis:3.0.6 ``` To specify multiple environment variables, specify multiple --env flags, each with a separate key-value" }, { "data": "``` $ docker service create \\ --name redis_2 \\ --replicas 5 \\ --env MYVAR=foo \\ --env MYVAR2=bar \\ redis:3.0.6 ``` This option sets the docker service containers hostname to a specific string. For example: ``` $ docker service create --name redis --hostname myredis redis:3.0.6 ``` A label is a key=value pair that applies metadata to a service. To label a service with two labels: ``` $ docker service create \\ --name redis_2 \\ --label com.example.foo=\"bar\" \\ --label bar=baz \\ redis:3.0.6 ``` For more information about labels, refer to apply custom metadata. Docker supports three different kinds of mounts, which allow containers to read from or write to files or directories, either on the host operating system, or on memory filesystems. These types are data volumes (often referred to simply as volumes), bind mounts, tmpfs, and named pipes. A bind mount makes a file or directory on the host available to the container it is mounted within. A bind mount may be either read-only or read-write. For example, a container might share its host's DNS information by means of a bind mount of the host's /etc/resolv.conf or a container might write logs to its host's /var/log/myContainerLogs directory. If you use bind mounts and your host and containers have different notions of permissions, access controls, or other such details, you will run into portability issues. A named volume is a mechanism for decoupling persistent data needed by your container from the image used to create the container and from the host machine. Named volumes are created and managed by Docker, and a named volume persists even when no container is currently using it. Data in named volumes can be shared between a container and the host machine, as well as between multiple containers. Docker uses a volume driver to create, manage, and mount volumes. You can back up or restore volumes using Docker commands. A tmpfs mounts a tmpfs inside a container for volatile data. A npipe mounts a named pipe from the host into the container. Consider a situation where your image starts a lightweight web server. You could use that image as a base image, copy in your website's HTML files, and package that into another image. Each time your website changed, you'd need to update the new image and redeploy all of the containers serving your website. A better solution is to store the website in a named volume which is attached to each of your web server containers when they start. To update the website, you just update the named volume. For more information about named volumes, see Data Volumes. The following table describes options which apply to both bind mounts and named volumes in a service: | Option | Required | Description | |:--|:--|:-| | type | nan | The type of mount, can be either volume, bind, tmpfs, or npipe. Defaults to volume if no type is specified.volume: mounts a managed volume into the container.bind: bind-mounts a directory or file from the host into the container.tmpfs: mount a tmpfs in the containernpipe: mounts named pipe from the host into the container (Windows containers only). | | src or source | for type=bind and type=npipe | type=volume: src is an optional way to specify the name of the volume (for example, src=my-volume). If the named volume does not exist, it is automatically created. 
If no src is specified, the volume is assigned a random name which is guaranteed to be unique on the host, but may not be unique" }, { "data": "A randomly-named volume has the same lifecycle as its container and is destroyed when the container is destroyed (which is upon service update, or when scaling or re-balancing the service)type=bind: src is required, and specifies an absolute path to the file or directory to bind-mount (for example, src=/path/on/host/). An error is produced if the file or directory does not exist.type=tmpfs: src is not supported. | | dst or destination or target | yes | Mount path inside the container, for example /some/path/in/container/. If the path does not exist in the container's filesystem, the Engine creates a directory at the specified location before mounting the volume or bind mount. | | readonly or ro | nan | The Engine mounts binds and volumes read-write unless readonly option is given when mounting the bind or volume. Note that setting readonly for a bind-mount may not make its submounts readonly depending on the kernel version. See also bind-recursive.true or 1 or no value: Mounts the bind or volume read-only.false or 0: Mounts the bind or volume read-write. | The type of mount, can be either volume, bind, tmpfs, or npipe. Defaults to volume if no type is specified. dst or destination or target Mount path inside the container, for example /some/path/in/container/. If the path does not exist in the container's filesystem, the Engine creates a directory at the specified location before mounting the volume or bind mount. readonly or ro The Engine mounts binds and volumes read-write unless readonly option is given when mounting the bind or volume. Note that setting readonly for a bind-mount may not make its submounts readonly depending on the kernel version. See also bind-recursive. The following options can only be used for bind mounts (type=bind): | Option | Description | |:|:--| | bind-propagation | See the bind propagation section. | | consistency | The consistency requirements for the mount; one ofdefault: Equivalent to consistent.consistent: Full consistency. The container runtime and the host maintain an identical view of the mount at all times.cached: The host's view of the mount is authoritative. There may be delays before updates made on the host are visible within a container.delegated: The container runtime's view of the mount is authoritative. There may be delays before updates made in a container are visible on the host. | | bind-recursive | By default, submounts are recursively bind-mounted as well. However, this behavior can be confusing when a bind mount is configured with readonly option, because submounts may not be mounted as read-only, depending on the kernel version. Set bind-recursive to control the behavior of the recursive bind-mount.A value is one of:<enabled: Enables recursive bind-mount. Read-only mounts are made recursively read-only if kernel is v5.12 or later. Otherwise they are not made recursively read-only.<disabled: Disables recursive bind-mount.<writable: Enables recursive bind-mount. Read-only mounts are not made recursively read-only.<readonly: Enables recursive bind-mount. Read-only mounts are made recursively read-only if kernel is v5.12 or later. Otherwise the Engine raises an error.When the option is not specified, the default behavior correponds to setting enabled. | | bind-nonrecursive | bind-nonrecursive is deprecated since Docker Engine v25.0. 
Use bind-recursiveinstead.A value is optional:true or 1: Equivalent to bind-recursive=disabled.false or 0: Equivalent to bind-recursive=enabled. | See the bind propagation section. The consistency requirements for the mount; one of Bind propagation refers to whether or not mounts created within a given bind mount or named volume can be propagated to replicas of that mount. Consider a mount point /mnt, which is also mounted on /tmp. The propagation settings control whether a mount on /tmp/a would also be available on" }, { "data": "Each propagation setting has a recursive counterpoint. In the case of recursion, consider that /tmp/a is also mounted as /foo. The propagation settings control whether /mnt/a and/or /tmp/a would exist. The bind-propagation option defaults to rprivate for both bind mounts and volume mounts, and is only configurable for bind mounts. In other words, named volumes do not support bind propagation. For more information about bind propagation, see the Linux kernel documentation for shared subtree. The following options can only be used for named volumes (type=volume): | Option | Description | |:--|:--| | volume-driver | Name of the volume-driver plugin to use for the volume. Defaults to \"local\", to use the local volume driver to create the volume if the volume does not exist. | | volume-label | One or more custom metadata (\"labels\") to apply to the volume upon creation. For example, volume-label=mylabel=hello-world,my-other-label=hello-mars. For more information about labels, refer to apply custom metadata. | | volume-nocopy | By default, if you attach an empty volume to a container, and files or directories already existed at the mount-path in the container (dst), the Engine copies those files and directories into the volume, allowing the host to access them. Set volume-nocopy to disable copying files from the container's filesystem to the volume and mount the empty volume.A value is optional:true or 1: Default if you do not provide a value. Disables copying.false or 0: Enables copying. | | volume-opt | Options specific to a given volume driver, which will be passed to the driver when creating the volume. Options are provided as a comma-separated list of key/value pairs, for example, volume-opt=some-option=some-value,volume-opt=some-other-option=some-other-value. For available options for a given driver, refer to that driver's documentation. | Name of the volume-driver plugin to use for the volume. Defaults to \"local\", to use the local volume driver to create the volume if the volume does not exist. The following options can only be used for tmpfs mounts (type=tmpfs); | Option | Description | |:--|:--| | tmpfs-size | Size of the tmpfs mount in bytes. Unlimited by default in Linux. | | tmpfs-mode | File mode of the tmpfs in octal. (e.g. \"700\" or \"0700\".) Defaults to \"1777\" in Linux. | The --mount flag supports most options that are supported by the -v or --volume flag for docker run, with some important exceptions: The --mount flag allows you to specify a volume driver and volume driver options per volume, without creating the volumes in advance. In contrast, docker run allows you to specify a single volume driver which is shared by all volumes, using the --volume-driver flag. The --mount flag allows you to specify custom metadata (\"labels\") for a volume, before the volume is created. When you use --mount with type=bind, the host-path must refer to an existing path on the host. 
The path will not be created for you and the service will fail with an error if the path does not exist. The --mount flag does not allow you to relabel a volume with Z or z flags, which are used for selinux labeling. The following example creates a service that uses a named volume: ``` $ docker service create \\ --name my-service \\ --replicas 3 \\ --mount type=volume,source=my-volume,destination=/path/in/container,volume-label=\"color=red\",volume-label=\"shape=round\" \\ nginx:alpine ``` For each replica of the service, the engine requests a volume named \"my-volume\" from the default (\"local\") volume driver where the task is deployed. If the volume does not exist, the engine creates a new volume and applies the \"color\" and \"shape\"" }, { "data": "When the task is started, the volume is mounted on /path/in/container/ inside the container. Be aware that the default (\"local\") volume is a locally scoped volume driver. This means that depending on where a task is deployed, either that task gets a new volume named \"my-volume\", or shares the same \"my-volume\" with other tasks of the same service. Multiple containers writing to a single shared volume can cause data corruption if the software running inside the container is not designed to handle concurrent processes writing to the same location. Also take into account that containers can be re-scheduled by the Swarm orchestrator and be deployed on a different node. The following command creates a service with three replicas with an anonymous volume on /path/in/container: ``` $ docker service create \\ --name my-service \\ --replicas 3 \\ --mount type=volume,destination=/path/in/container \\ nginx:alpine ``` In this example, no name (source) is specified for the volume, so a new volume is created for each task. This guarantees that each task gets its own volume, and volumes are not shared between tasks. Anonymous volumes are removed after the task using them is complete. The following example bind-mounts a host directory at /path/in/container in the containers backing the service: ``` $ docker service create \\ --name my-service \\ --mount type=bind,source=/path/on/host,destination=/path/in/container \\ nginx:alpine ``` The service mode determines whether this is a replicated service or a global service. A replicated service runs as many tasks as specified, while a global service runs on each active node in the swarm. The following command creates a global service: ``` $ docker service create \\ --name redis_2 \\ --mode global \\ redis:3.0.6 ``` You can limit the set of nodes where a task can be scheduled by defining constraint expressions. Constraint expressions can either use a match (==) or exclude (!=) rule. Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match node or Docker Engine labels as follows: | node attribute | matches | example | |:-|:|:--| | node.id | Node ID | node.id==2ivku8v2gvtg4 | | node.hostname | Node hostname | node.hostname!=node-2 | | node.role | Node role (manager/worker) | node.role==manager | | node.platform.os | Node operating system | node.platform.os==windows | | node.platform.arch | Node architecture | node.platform.arch==x86_64 | | node.labels | User-defined node labels | node.labels.security==high | | engine.labels | Docker Engine's labels | engine.labels.operatingsystem==ubuntu-22.04 | engine.labels apply to Docker Engine labels like operating system, drivers, etc. 
Swarm administrators add node.labels for operational purposes by using the docker node update command. For example, the following limits tasks for the redis service to nodes where the node type label equals queue: ``` $ docker service create \\ --name redis_2 \\ --constraint node.platform.os==linux \\ --constraint node.labels.type==queue \\ redis:3.0.6 ``` If the service constraints exclude all nodes in the cluster, a message is printed that no suitable node is found, but the scheduler will start a reconciliation loop and deploy the service once a suitable node becomes available. In the example below, no node satisfying the constraint was found, causing the service to not reconcile with the desired state: ``` $ docker service create \\ --name web \\ --constraint" }, { "data": "\\ nginx:alpine lx1wrhhpmbbu0wuk0ybws30bc overall progress: 0 out of 1 tasks 1/1: no suitable node (scheduling constraints not satisfied on 5 nodes) $ docker service ls ID NAME MODE REPLICAS IMAGE PORTS b6lww17hrr4e web replicated 0/1 nginx:alpine ``` After adding the region=east label to a node in the cluster, the service reconciles, and the desired number of replicas are deployed: ``` $ docker node update --label-add region=east yswe2dm4c5fdgtsrli1e8ya5l yswe2dm4c5fdgtsrli1e8ya5l $ docker service ls ID NAME MODE REPLICAS IMAGE PORTS b6lww17hrr4e web replicated 1/1 nginx:alpine ``` You can set up the service to divide tasks evenly over different categories of nodes. One example of where this can be useful is to balance tasks over a set of datacenters or availability zones. The example below illustrates this: ``` $ docker service create \\ --replicas 9 \\ --name redis_2 \\ --placement-pref spread=node.labels.datacenter \\ redis:3.0.6 ``` This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread tasks evenly over the values of the datacenter node label. In this example, we assume that every node has a datacenter node label attached to it. If there are three different values of this label among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each value. This is true even if there are more nodes with one value than another. For example, consider the following set of nodes: Since we are spreading over the values of the datacenter label and the service has 9 replicas, 3 replicas will end up in each datacenter. There are three nodes associated with the value east, so each one will get one of the three replicas reserved for this value. There are two nodes with the value south, and the three replicas for this value will be divided between them, with one receiving two replicas and another receiving just one. Finally, west has a single node that will get all three replicas reserved for west. If the nodes in one category (for example, those with node.labels.datacenter=south) can't handle their fair share of tasks due to constraints or resource limitations, the extra tasks will be assigned to other nodes instead, if possible. Both engine labels and node labels are supported by placement preferences. The example above uses a node label, because the label is referenced with node.labels.datacenter. To spread over the values of an engine label, use --placement-pref spread=engine.labels.<labelname>. It is possible to add multiple placement preferences to a service. This establishes a hierarchy of preferences, so that tasks are first divided over one category, and then further divided over additional categories. 
One example of where this may be useful is dividing tasks fairly between datacenters, and then splitting the tasks within each datacenter over a choice of racks. To add multiple placement preferences, specify the --placement-pref flag multiple times. The order is significant, and the placement preferences will be applied in the order given when making scheduling decisions. The following example sets up a service with multiple placement preferences. Tasks are spread first over the various datacenters, and then over racks (as indicated by the respective labels): ``` $ docker service create \\ --replicas 9 \\ --name redis_2 \\ --placement-pref 'spread=node.labels.datacenter' \\ --placement-pref 'spread=node.labels.rack' \\ redis:3.0.6 ``` When updating a service with docker service update, --placement-pref-add appends a new placement preference after all existing placement preferences. --placement-pref-rm removes an existing placement preference that matches the argument. If your service needs a minimum amount of memory in order to run correctly, you can use --reserve-memory to specify that the service should only be scheduled on a node with this much memory available to reserve. If no node is available that meets the criteria, the task is not scheduled, but remains in a pending state. The following example requires that 4GB of memory be available and reservable on a given node before scheduling the service to run on that" }, { "data": "``` $ docker service create --reserve-memory=4GB --name=too-big nginx:alpine ``` The managers won't schedule a set of containers on a single node whose combined reservations exceed the memory available on that node. After a task is scheduled and running, --reserve-memory does not enforce a memory limit. Use --limit-memory to ensure that a task uses no more than a given amount of memory on a node. This example limits the amount of memory used by the task to 4GB. The task will be scheduled even if each of your nodes has only 2GB of memory, because --limit-memory is an upper limit. ``` $ docker service create --limit-memory=4GB --name=too-big nginx:alpine ``` Using --reserve-memory and --limit-memory does not guarantee that Docker will not use more memory on your host than you want. For instance, you could create many services, the sum of whose memory usage could exhaust the available memory. You can prevent this scenario from exhausting the available memory by taking into account other (non-containerized) software running on the host as well. If --reserve-memory is greater than or equal to --limit-memory, Docker won't schedule a service on a host that doesn't have enough memory. --limit-memory will limit the service's memory to stay within that limit, so if every service has a memory-reservation and limit set, Docker services will be less likely to saturate the host. Other non-service containers or applications running directly on the Docker host could still exhaust memory. There is a downside to this approach. Reserving memory also means that you may not make optimum use of the memory available on the node. Consider a service that under normal circumstances uses 100MB of memory, but depending on load can \"peak\" at 500MB. Reserving 500MB for that service (to guarantee can have 500MB for those \"peaks\") results in 400MB of memory being wasted most of the time. In short, you can take a more conservative or more flexible approach: Conservative: reserve 500MB, and limit to 500MB. 
Basically you're now treating the service containers as VMs, and you may be losing a big advantage containers, which is greater density of services per host. Flexible: limit to 500MB in the assumption that if the service requires more than 500MB, it is malfunctioning. Reserve something between the 100MB \"normal\" requirement and the 500MB \"peak\" requirement\". This assumes that when this service is at \"peak\", other services or non-container workloads probably won't be. The approach you take depends heavily on the memory-usage patterns of your workloads. You should test under normal and peak conditions before settling on an approach. On Linux, you can also limit a service's overall memory footprint on a given host at the level of the host operating system, using cgroups or other relevant operating system tools. Use the --replicas-max-per-node flag to set the maximum number of replica tasks that can run on a node. The following command creates a nginx service with 2 replica tasks but only one replica task per node. One example where this can be useful is to balance tasks over a set of data centers together with --placement-pref and let --replicas-max-per-node setting make sure that replicas are not migrated to another datacenter during maintenance or datacenter failure. The example below illustrates this: ``` $ docker service create \\ --name nginx \\ --replicas 2 \\ --replicas-max-per-node 1 \\ --placement-pref 'spread=node.labels.datacenter' \\ nginx ``` You can use overlay networks to connect one or more services within the" }, { "data": "First, create an overlay network on a manager node the docker network create command: ``` $ docker network create --driver overlay my-network etjpu59cykrptrgw0z0hk5snf ``` After you create an overlay network in swarm mode, all manager nodes have access to the network. When you create a service and pass the --network flag to attach the service to the overlay network: ``` $ docker service create \\ --replicas 3 \\ --network my-network \\ --name my-web \\ nginx 716thylsndqma81j6kkkb5aus ``` The swarm extends my-network to each node running the service. Containers on the same network can access each other using service discovery. Long form syntax of --network allows to specify list of aliases and driver options: --network name=my-network,alias=web1,driver-opt=field1=value1 You can publish service ports to make them available externally to the swarm using the --publish flag. The --publish flag can take two different styles of arguments. The short version is positional, and allows you to specify the published port and target port separated by a colon (:). ``` $ docker service create --name my_web --replicas 3 --publish 8080:80 nginx ``` There is also a long format, which is easier to read and allows you to specify more options. The long format is preferred. You cannot specify the service's mode when using the short format. Here is an example of using the long format for the same service as above: ``` $ docker service create --name my_web --replicas 3 --publish published=8080,target=80 nginx ``` The options you can specify are: | Option | Short syntax | Long syntax | Description | |:--|:-|:|:| | published and target port | --publish 8080:80 | --publish published=8080,target=80 | The target port within the container and the port to map it to on the nodes, using the routing mesh (ingress) or host-level networking. More options are available, later in this table. The key-value syntax is preferred, because it is somewhat self-documenting. 
| | mode | Not possible to set using short syntax. | --publish published=8080,target=80,mode=host | The mode to use for binding the port, either ingress or host. Defaults to ingress to use the routing mesh. | | protocol | --publish 8080:80/tcp | --publish published=8080,target=80,protocol=tcp | The protocol to use, tcp , udp, or sctp. Defaults to tcp. To bind a port for both protocols, specify the -p or --publish flag twice. | The target port within the container and the port to map it to on the nodes, using the routing mesh (ingress) or host-level networking. More options are available, later in this table. The key-value syntax is preferred, because it is somewhat self-documenting. The mode to use for binding the port, either ingress or host. Defaults to ingress to use the routing mesh. The protocol to use, tcp , udp, or sctp. Defaults to tcp. To bind a port for both protocols, specify the -p or --publish flag twice. When you publish a service port using ingress mode, the swarm routing mesh makes the service accessible at the published port on every node regardless if there is a task for the service running on the node. If you use host mode, the port is only bound on nodes where the service is running, and a given port on a node can only be bound once. You can only set the publication mode using the long syntax. For more information refer to Use swarm mode routing mesh. This option is only used for services using Windows containers. The --credential-spec must be in the format file://<filename> or registry://<value-name>. When using the file://<filename> format, the referenced file must be present in the CredentialSpecs subdirectory in the docker data directory, which defaults to C:\\ProgramData\\Docker\\ on" }, { "data": "For example, specifying file://spec.json loads C:\\ProgramData\\Docker\\CredentialSpecs\\spec.json. When using the registry://<value-name> format, the credential spec is read from the Windows registry on the daemon's host. The specified registry value must be located in: ``` HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Virtualization\\Containers\\CredentialSpecs ``` You can use templates for some flags of service create, using the syntax provided by the Go's text/template package. The supported flags are the following : Valid placeholders for the Go template are listed below: | Placeholder | Description | |:-|:| | .Service.ID | Service ID | | .Service.Name | Service name | | .Service.Labels | Service labels | | .Node.ID | Node ID | | .Node.Hostname | Node Hostname | | .Task.ID | Task ID | | .Task.Name | Task name | | .Task.Slot | Task slot | In this example, we are going to set the template of the created containers based on the service's name, the node's ID and hostname where it sits. ``` $ docker service create \\ --name hosttempl \\ --hostname=\"{{.Node.Hostname}}-{{.Node.ID}}-{{.Service.Name}}\"\\ busybox top va8ew30grofhjoychbr6iot8c $ docker service ps va8ew30grofhjoychbr6iot8c ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS wo41w8hg8qan hosttempl.1 busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 2e7a8a9c4da2 Running Running about a minute ago $ docker inspect --format=\"{{.Config.Hostname}}\" 2e7a8a9c4da2-wo41w8hg8qanxwjwsg4kxpprj-hosttempl x3ti0erg11rjpg64m75kej2mz-hosttempl ``` By default, tasks scheduled on Windows nodes are run using the default isolation mode configured for this particular node. 
To force a specific isolation mode, you can use the --isolation flag: ``` $ docker service create --name myservice --isolation=process microsoft/nanoserver ``` Supported isolation modes on Windows are: You can narrow the kind of nodes your task can land on through the using the --generic-resource flag (if the nodes advertise these resources): ``` $ docker service create \\ --name cuda \\ --generic-resource \"NVIDIA-GPU=2\" \\ --generic-resource \"SSD=1\" \\ nvidia/cuda ``` Jobs are a special kind of service designed to run an operation to completion and then stop, as opposed to running long-running daemons. When a Task belonging to a job exits successfully (return value 0), the Task is marked as \"Completed\", and is not run again. Jobs are started by using one of two modes, replicated-job or global-job ``` $ docker service create --name myjob \\ --mode replicated-job \\ bash \"true\" ``` This command will run one Task, which will, using the bash image, execute the command true, which will return 0 and then exit. Though Jobs are ultimately a different kind of service, they a couple of caveats compared to other services: Jobs are available in both replicated and global modes. A replicated job is like a replicated service. Setting the --replicas flag will specify total number of iterations of a job to execute. By default, all replicas of a replicated job will launch at once. To control the total number of replicas that are executing simultaneously at any one time, the --max-concurrent flag can be used: ``` $ docker service create \\ --name mythrottledjob \\ --mode replicated-job \\ --replicas 10 \\ --max-concurrent 2 \\ bash \"true\" ``` The above command will execute 10 Tasks in total, but only 2 of them will be run at any given time. Global jobs are like global services, in that a Task is executed once on each node matching placement constraints. Global jobs are represented by the mode global-job. Note that after a Global job is created, any new Nodes added to the cluster will have a Task from that job started on them. The Global Job does not as a whole have a \"done\" state, except insofar as every Node meeting the job's constraints has a Completed task. Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "Godel-Scheduler", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Godel-Scheduler", "subcategory": "Scheduling & Orchestration" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
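Because the quoting, escaping, and boolean rules above interact, it can help to build the query in a shell variable and URL-encode it before pasting it into the code search UI. This is only a convenience sketch (assuming jq is available); the github.com/search?type=code URL is the standard web search endpoint, and the example query is taken from the parentheses example earlier in this entry.

```
# Build a code search query and URL-encode it for the web UI.
QUERY='(language:ruby OR language:python) AND NOT path:"/tests/"'
ENCODED=$(jq -rn --arg q "$QUERY" '$q|@uri')   # percent-encode the query string
echo "https://github.com/search?type=code&q=${ENCODED}"
```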
{ "category": "Orchestration & Management", "file_name": "benchmark.html.md", "project_name": "hami", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Three instances from ai-benchmark have been used to evaluate vGPU-device-plugin performance as follows: | Test Environment | description | |:-|:--| | Kubernetes version | v1.12.9 | | Docker version | 18.09.1 | | GPU Type | Tesla V100 | | GPU Num | 2 | | Test instance | description | |:|:| | nvidia-device-plugin | k8s + nvidia k8s-device-plugin | | vGPU-device-plugin | k8s + VGPU k8s-device-plugin without virtual device memory | | vGPU-device-plugin(virtual device memory) | k8s + VGPU k8s-device-plugin with virtual device memory | Test Cases: | test id | case | type | params | |-:|:--|:-|:| | 1.1 | Resnet-V2-50 | inference | batch=50,size=346*346 | | 1.2 | Resnet-V2-50 | training | batch=20,size=346*346 | | 2.1 | Resnet-V2-152 | inference | batch=10,size=256*256 | | 2.2 | Resnet-V2-152 | training | batch=10,size=256*256 | | 3.1 | VGG-16 | inference | batch=20,size=224*224 | | 3.2 | VGG-16 | training | batch=2,size=224*224 | | 4.1 | DeepLab | inference | batch=2,size=512*512 | | 4.2 | DeepLab | training | batch=1,size=384*384 | | 5.1 | LSTM | inference | batch=100,size=1024*300 | | 5.2 | LSTM | training | batch=10,size=1024*300 | Test Result: To reproduce: ``` $ kubectl apply -f benchmarks/ai-benchmark/ai-benchmark.yml ``` ``` $ kubectl logs [pod id]" } ]
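The tables above compare runs with and without virtual device memory. As a rough illustration of what a vGPU test pod could request, here is a minimal sketch; the extended resource names nvidia.com/gpu and nvidia.com/gpumem, the memory value, and the image placeholder are all assumptions to check against your device-plugin configuration — the spec actually used in the benchmark is the one in benchmarks/ai-benchmark/ai-benchmark.yml.

```
# Hypothetical pod requesting one vGPU slice with a virtual device memory cap.
# Resource names and values are assumptions; adjust to your plugin's configuration.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ai-benchmark-vgpu
spec:
  restartPolicy: Never
  containers:
  - name: ai-benchmark
    image: <ai-benchmark-image>   # placeholder: take the image from ai-benchmark.yml
    resources:
      limits:
        nvidia.com/gpu: 1         # one vGPU
        nvidia.com/gpumem: 8000   # virtual device memory in MiB (assumption)
EOF
```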
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "Godel-Scheduler", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Godel-Scheduler", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Directory listing for the Godel-Scheduler docs tree: features, functionality/job-level-affinity, images." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Karmada", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes. Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling. Karmada is a sandbox project of the Cloud Native Computing Foundation (CNCF). K8s Native API Compatible Out of the Box Avoid Vendor Lock-in Centralized Management Fruitful Multi-Cluster Scheduling Policies Open and Neutral Notice: this project is developed in continuation of Kubernetes Federation v1 and v2. Some basic concepts are inherited from these two versions. Here are some recommended next steps:" } ]
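To make the Kubernetes-native API and multi-cluster scheduling claims concrete, the sketch below propagates an existing nginx Deployment to two member clusters with a PropagationPolicy. The cluster names member1/member2 and the Deployment name are placeholders, and the exact fields should be checked against the Karmada API reference for your version.

```
# Minimal PropagationPolicy sketch; apply against the Karmada control plane.
cat <<'EOF' | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx              # an existing Deployment on the Karmada control plane
  placement:
    clusterAffinity:
      clusterNames:
      - member1              # placeholder member cluster
      - member2              # placeholder member cluster
EOF
```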
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "KEDA", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Prometheus Latest Scale applications based on Prometheus. This specification describes the prometheus trigger that scales based on a Prometheus. ``` triggers: type: prometheus metadata: serverAddress: http://<prometheus-host>:9090 query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) # Note: query must return a vector/scalar single element response threshold: '100.50' activationThreshold: '5.5' namespace: example-namespace # for namespaced queries, eg. Thanos cortexOrgID: my-org # DEPRECATED: This parameter is deprecated as of KEDA v2.10 in favor of customHeaders and will be removed in version 2.12. Use custom headers instead to set X-Scope-OrgID header for Cortex. (see below) customHeaders: X-Client-Id=cid,X-Tenant-Id=tid,X-Organization-Id=oid # Optional. Custom headers to include in query. In case of auth header, use the custom authentication or relevant authModes. ignoreNullValues: false # Default is `true`, which means ignoring the empty value list from Prometheus. Set to `false` the scaler will return error when Prometheus target is lost queryParameters: key-1=value-1,key-2=value-2 unsafeSsl: \"false\" # Default is `false`, Used for skipping certificate check when having self-signed certs for Prometheus endpoint ``` Parameter list: Prometheus Scaler supports various types of authentication to help you integrate with Prometheus. You can use TriggerAuthentication CRD to configure the authentication. It is possible to specify multiple authentication types i.e. authModes: \"tls,basic\" Specify authModes and other trigger parameters along with secret credentials in TriggerAuthentication as mentioned below: Bearer authentication: Basic authentication: TLS authentication: Custom authentication: NOTE:Its also possible to set the CA certificate regardless of the selected authModes (also without any authentication). This might be useful if you are using an enterprise CA. Amazon Web Services (AWS) offers a managed service for Prometheus that provides a scalable and secure Prometheus deployment. The Prometheus scaler can be used to run Prometheus queries against this managed service. Using the managed service eliminates the operational burden of running your own Prometheus servers. Queries can be executed against a fully managed, auto-scaling Prometheus deployment on AWS. Costs scale linearly with usage. To gain a better understanding of creating a Prometheus trigger for Amazon Managed Service for Prometheus, refer to this example. Azure has a managed service for Prometheus and Prometheus scaler can be used to run prometheus query against that. To gain a better understanding of creating a Prometheus trigger for Azure Monitor Managed Service for Prometheus, refer to this example. Google Cloud Platform provides a comprehensive managed service for Prometheus, enabling you to effortlessly export and query Prometheus metrics. By utilizing Prometheus scaler, you can seamlessly integrate it with the GCP managed service and handle authentication using the GCP workload identity mechanism. See the follwowing steps to configure the scaler" }, { "data": "To gain a better understanding of creating a Prometheus trigger for Google Managed Prometheus, refer to this example. 
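Because the scaler requires the query to return a single vector or scalar element, it is worth sanity-checking the PromQL against the standard Prometheus HTTP API (/api/v1/query) before wiring it into a ScaledObject. A minimal sketch follows, assuming curl and jq are available; the host is a placeholder, and the metric name restores the underscores (http_requests_total) that were lost in the extracted example above.

```
# Verify the query returns exactly one element, as the scaler expects.
QUERY='sum(rate(http_requests_total{deployment="my-deployment"}[2m]))'
curl -sG "http://<prometheus-host>:9090/api/v1/query" \
  --data-urlencode "query=${QUERY}" | jq '.data.result | length'
# Expected output: 1  (the single element's value is what gets compared to `threshold`)
```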
``` apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prometheus-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment triggers: type: prometheus metadata: serverAddress: http://<prometheus-host>:9090 threshold: '100' query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) ``` Here is an example of a prometheus scaler with Bearer Authentication, define the Secret and TriggerAuthentication as follows ``` apiVersion: v1 kind: Secret metadata: name: keda-prom-secret namespace: default data: bearerToken: \"BEARER_TOKEN\" ca: \"CUSTOMCACERT\" apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-prom-creds namespace: default spec: secretTargetRef: parameter: bearerToken name: keda-prom-secret key: bearerToken parameter: ca name: keda-prom-secret key: ca apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prometheus-scaledobject namespace: keda labels: deploymentName: dummy spec: maxReplicaCount: 12 scaleTargetRef: name: dummy triggers: type: prometheus metadata: serverAddress: http://<prometheus-host>:9090 threshold: '100' query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) authModes: \"bearer\" authenticationRef: name: keda-prom-creds ``` Here is an example of a prometheus scaler with Basic Authentication, define the Secret and TriggerAuthentication as follows ``` apiVersion: v1 kind: Secret metadata: name: keda-prom-secret namespace: default data: username: \"dXNlcm5hbWUK\" # Must be base64 password: \"cGFzc3dvcmQK\" apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-prom-creds namespace: default spec: secretTargetRef: parameter: username name: keda-prom-secret key: username parameter: password name: keda-prom-secret key: password apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prometheus-scaledobject namespace: keda labels: deploymentName: dummy spec: maxReplicaCount: 12 scaleTargetRef: name: dummy triggers: type: prometheus metadata: serverAddress: http://<prometheus-host>:9090 threshold: '100' query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) authModes: \"basic\" authenticationRef: name: keda-prom-creds ``` Here is an example of a prometheus scaler with TLS Authentication, define the Secret and TriggerAuthentication as follows ``` apiVersion: v1 kind: Secret metadata: name: keda-prom-secret namespace: default data: cert: \"cert\" key: \"key\" ca: \"ca\" apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-prom-creds namespace: default spec: secretTargetRef: parameter: cert name: keda-prom-secret key: cert parameter: key name: keda-prom-secret key: key parameter: ca name: keda-prom-secret key: ca apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prometheus-scaledobject namespace: keda labels: deploymentName: dummy spec: maxReplicaCount: 12 scaleTargetRef: name: dummy triggers: type: prometheus metadata: serverAddress: http://<prometheus-host>:9090 threshold: '100' query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) authModes: \"tls\" authenticationRef: name: keda-prom-creds ``` Here is an example of a prometheus scaler with TLS and Basic Authentication, define the Secret and TriggerAuthentication as follows ``` apiVersion: v1 kind: Secret metadata: name: keda-prom-secret namespace: default data: cert: \"cert\" key: \"key\" ca: \"ca\" username: \"username\" password: \"password\" apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-prom-creds 
namespace: default spec: secretTargetRef: parameter: cert name: keda-prom-secret key: cert parameter: key name: keda-prom-secret key: key parameter: ca name: keda-prom-secret key: ca parameter: username name: keda-prom-secret key: username parameter: password name: keda-prom-secret key: password apiVersion:" }, { "data": "kind: ScaledObject metadata: name: prometheus-scaledobject namespace: keda labels: deploymentName: dummy spec: maxReplicaCount: 12 scaleTargetRef: name: dummy triggers: type: prometheus metadata: serverAddress: http://<prometheus-host>:9090 threshold: '100' query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) authModes: \"tls,basic\" authenticationRef: name: keda-prom-creds ``` Here is an example of a prometheus scaler with Custom Authentication, define the Secret and TriggerAuthentication as follows ``` apiVersion: v1 kind: Secret metadata: name: keda-prom-secret namespace: default data: customAuthHeader: \"X-AUTH-TOKEN\" customAuthValue: \"auth-token\" apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-prom-creds namespace: default spec: secretTargetRef: parameter: customAuthHeader name: keda-prom-secret key: customAuthHeader parameter: customAuthValue name: keda-prom-secret key: customAuthValue apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prometheus-scaledobject namespace: keda labels: deploymentName: dummy spec: maxReplicaCount: 12 scaleTargetRef: name: dummy triggers: type: prometheus metadata: serverAddress: http://<prometheus-host>:9090 threshold: '100' query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) authModes: \"custom\" authenticationRef: name: keda-prom-creds ``` Here is an example of a prometheus scaler with Azure Pod Identity and Azure Workload Identity, define the TriggerAuthentication and ScaledObject as follows ``` apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: azure-managed-prometheus-trigger-auth spec: podIdentity: provider: azure | azure-workload # use \"azure\" for pod identity and \"azure-workload\" for workload identity identityId: <identity-id> # Optional. Default: Identity linked with the label set when installing KEDA. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: azure-managed-prometheus-scaler spec: scaleTargetRef: name: deployment-name-to-be-scaled minReplicaCount: 1 maxReplicaCount: 20 triggers: type: prometheus metadata: serverAddress: https://test-azure-monitor-workspace-name-9ksc.eastus.prometheus.monitor.azure.com query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) # Note: query must return a vector/scalar single element response threshold: '100.50' activationThreshold: '5.5' authenticationRef: name: azure-managed-prometheus-trigger-auth ``` Below is an example showcasing the use of Prometheus scaler with AWS EKS Pod Identity. Please note that in this particular example, the Deployment is named as keda-deploy. Also replace the AwsRegion and AMP WorkspaceId for your requirements. 
``` apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-aws-credentials spec: podIdentity: provider: aws apiVersion: apps/v1 kind: Deployment metadata: name: keda-deploy labels: app: keda-deploy spec: replicas: 0 selector: matchLabels: app: keda-deploy template: metadata: labels: app: keda-deploy spec: containers: name: nginx image: nginxinc/nginx-unprivileged ports: containerPort: 80 apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: keda-so labels: app: keda-deploy spec: scaleTargetRef: name: keda-deploy maxReplicaCount: 2 minReplicaCount: 0 cooldownPeriod: 1 advanced: horizontalPodAutoscalerConfig: behavior: scaleDown: stabilizationWindowSeconds: 15 triggers: type: prometheus authenticationRef: name: keda-trigger-auth-aws-credentials metadata: awsRegion: {{.AwsRegion}} serverAddress: \"https://aps-workspaces.{{.AwsRegion}}.amazonaws.com/workspaces/{{.WorkspaceID}}\" query: \"vector(100)\" threshold: \"50.0\" identityOwner: operator ``` Below is an example showcasing the use of Prometheus scaler with GCP Workload Identity. Please note that in this particular example, the Google project ID has been set as my-google-project. ``` apiVersion: keda.sh/v1alpha1 kind: ClusterTriggerAuthentication metadata: name: google-workload-identity-auth spec: podIdentity: provider: gcp apiVersion: keda.sh/v1alpha1 metadata: name: google-managed-prometheus-scaler spec: scaleTargetRef: name: deployment-name-to-be-scaled minReplicaCount: 1 maxReplicaCount: 20 triggers: type: prometheus metadata: serverAddress: https://monitoring.googleapis.com/v1/projects/my-google-project/location/global/prometheus query: sum(rate(httprequeststotal{deployment=\"my-deployment\"}[2m])) threshold: '50.0' authenticationRef: kind: ClusterTriggerAuthentication name: google-workload-identity-auth ``` Blog Community Project KEDA Authors 2014-2024 | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Orchestration & Management", "file_name": "authentication-providers.md", "project_name": "KEDA", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Operate Latest Guidance & requirements for operating KEDA We provide guidance & requirements around various areas to operate KEDA:" } ]
{ "category": "Orchestration & Management", "file_name": "2.14.md", "project_name": "KEDA", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Azure Pipelines Latest Scale applications based on agent pool queues for Azure Pipelines. This specification describes the azure-pipelines trigger for Azure Pipelines. It scales based on the amount of pipeline runs pending in a given agent pool. ``` triggers: type: azure-pipelines metadata: poolName: \"{agentPoolName}\" poolID: \"{agentPoolId}\" organizationURLFromEnv: \"AZP_URL\" personalAccessTokenFromEnv: \"AZP_TOKEN\" targetPipelinesQueueLength: \"1\" # Default 1 activationTargetPipelinesQueueLength: \"5\" # Default 0 parent: \"{parent ADO agent name}\" demands: \"{demands}\" requireAllDemands: false jobsToFetch: \"{jobsToFetch}\" authenticationRef: name: pipeline-trigger-auth ``` Parameter list: NOTE: You can either use poolID or poolName. If both are specified, then poolName will be used. As an alternative to using environment variables, you can authenticate with Azure Devops using a Personal Access Token or Managed identity via TriggerAuthentication configuration. If personalAccessTokenFromEnv or personalAccessTokenFrom is empty TriggerAuthentication must be configured using podIdentity. Personal Access Token Authentication: Pod Identity Authentication Azure AD Workload Identity providers can be used. There are several ways to get the poolID. The easiest could be using az cli to get it using the command az pipelines pool list --pool-name {agentPoolName} --organization {organizationURL} --query [0].id. It is also possible to get the pool ID using the UI by browsing to the agent pool from the organization (Organization settings -> Agent pools -> {agentPoolName}) and getting it from the URL. The URL should be similar to https://dev.azure.com/{organization}/_settings/agentpools?poolId={poolID}&view=jobs Careful - You should determine this on an organization-level, not project-level. Otherwise, you might get an incorrect id. Finally, it is also possible get the pool ID from the response of a HTTP request by calling the https://dev.azure.com/{organizationName}/_apis/distributedtask/pools?poolname={agentPoolName} endpoint in the key value[0].id. By default, if you do not wish to use demands in your agent scaler then it will scale based simply on the pools queue length. Demands (Capabilities) are useful when you have multiple agents with different capabilities existing within the same pool, for instance in a kube cluster you may have an agent supporting dotnet5, dotnet6, java or maven; particularly these would be exclusive agents where jobs would fail if run on the wrong agent. This is Microsofts demands feature. Using Parent: Azure DevOps is able to determine which agents can match any job it is waiting for. If you specify a parent template then KEDA will further interrogate the job request to determine if the parent is able to fulfill the job. If the parent is able to complete the job it scales the workload fulfill the request. The parent template that is generally offline must stay in the Pools Agent list. Using demands: KEDA will determine which agents can fulfill the job based on the demands provided. The demands are provided as a comma-separated list and must be a subset of the actual capabilities of the agent. (For example maven,java,make. Note: Agent.Version is ignored). If requireAllDemands is set to true it is checked if a jobs demands are fulfilled exactly by a trigger and only scales if this is true. This means a job with demands maven will not match an agent with capabilities maven,java. 
Microsofts documentation:" }, { "data": "Please note that the parent template feature is exclusive to KEDA and not Microsoft and is another way of supporting demands. If you wish to use demands in your agent scaler then you can do so by adding the following to your pipeline: ``` pool: name: \"{agentPoolName}\" demands: example-demands another-demand -equals /bin/executable ``` Then, you can use the demands parameter to specify the demands that your agent supports or the parent parameter to link a template that matches you scaled object. KEDA will use the following evaluation order: Note: If more than one scaling definition is able to fulfill the demands of the job then they will both spin up an agent. Azure DevOps has a Job Request API with returns a list of all jobs, and the agent that they are assigned to, or could potentially be assigned to. This is an undocumented Microsoft API which is available on https://dev.azure.com/<organisation>/_apis/distributedtask/pools/<poolid>/jobrequests. KEDA will interpret this request to find any matching template from the defined parent in the scaling definition, or any agent that can satisfy the demands specified in the scaling definition. Once it finds it, it will scale the workload that matched the definition and Azure DevOps will assign it to that agent. Microsoft self-hosted docker agent documentation: https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#linux Please use the script in Step 5 as the entrypoint for your agent container. You will need to change this section of the shell script so that the agent will terminate and cleanup itself when the job is complete by using the --once switch. The if statement for cleanup is only required if you are using the auto-deployment parent template method. ``` print_header \"4. Running Azure Pipelines agent...\" trap 'cleanup; exit 0' EXIT trap 'cleanup; exit 130' INT trap 'cleanup; exit 143' TERM chmod +x ./run-docker.sh ./run-docker.sh \"$@\" & wait $! ``` to ``` print_header \"4. Running Azure Pipelines agent...\" if ! grep -q \"template\" <<< \"$AZPAGENTNAME\"; then echo \"Cleanup Traps Enabled\" trap 'cleanup; exit 0' EXIT trap 'cleanup; exit 130' INT trap 'cleanup; exit 143' TERM fi chmod +x ./run-docker.sh ./run-docker.sh \"$@\" --once & wait $! 
``` ``` apiVersion: v1 kind: Secret type: Opaque metadata: name: pipeline-auth data: personalAccessToken: <encoded personalAccessToken> apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: pipeline-trigger-auth namespace: default spec: secretTargetRef: parameter: personalAccessToken name: pipeline-auth key: personalAccessToken apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: azure-pipelines-scaledobject namespace: default spec: scaleTargetRef: name: azdevops-deployment minReplicaCount: 1 maxReplicaCount: 5 triggers: type: azure-pipelines metadata: poolID: \"1\" organizationURLFromEnv: \"AZP_URL\" parent: \"example-keda-template\" demands: \"maven,docker\" authenticationRef: name: pipeline-trigger-auth ``` ``` apiVersion: apps/v1 kind: Deployment metadata: name: agent spec: replicas: 1 selector: matchLabels: app: agent spec: containers: name: agent image: [SAME AS SCALED JOB] envFrom: secretRef: name: ado-pat-tokens env: name: AZPAGENTNAME value: example-keda-template # Matches Scaled Job Parent ``` ``` apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: pipeline-trigger-auth spec: podIdentity: provider: azure-workload apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: azure-pipelines-scaledobject namespace: default spec: scaleTargetRef: name: azdevops-deployment minReplicaCount: 1 maxReplicaCount: 5 triggers: type: azure-pipelines metadata: poolID: \"1\" organizationURLFromEnv: \"AZP_URL\" parent: \"example-keda-template\" demands: \"maven,docker\" authenticationRef: name: pipeline-trigger-auth ``` Blog Community Project KEDA Authors 2014-2024 | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
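To complement the pool ID lookup described earlier in this Azure Pipelines record, a minimal sketch assuming curl and jq are installed and that AZP_URL / AZP_TOKEN hold the organization URL and a Personal Access Token (the variable names simply mirror the trigger metadata above and are illustrative):

```
# Query the pool list endpoint mentioned above and extract value[0].id
# AZP_URL is e.g. https://dev.azure.com/{organization}; AZP_TOKEN is a PAT with agent pool read scope
curl -s -u ":${AZP_TOKEN}" \
  "${AZP_URL}/_apis/distributedtask/pools?poolname={agentPoolName}" \
  | jq '.value[0].id'
```

The resulting number is what goes into the poolID metadata field of the ScaledObject.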
{ "category": "Orchestration & Management", "file_name": "contributing.md", "project_name": "Kestra", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Platform Overview Powerful capabilities from the UI Open Source Explore Kestra's Core Capabilities Enterprise Edition Security and Governance for Enterprise Needs Cloud EditionPrivate Alpha Register to the Cloud Edition Platform overview Features Declarative Orchestration Infrastructure as Code for All Your Workflows Automation Platform Scheduling and Automation Made Easy API-First Learn more about Kestras API features Language Agnostic Separate your Business Logic from Orchestration Logic Kestra's Terraform Provider Deploy and manage all Kestra resources with Terraform Use Cases For Data Engineers Orchestrate your Data Pipelines, Automate Processes, and Harness the Power of Your Data For Software Engineers Boost Productivity, Simplify Processes, and Accelerate Microservice Deployment For Platform Engineers Automate, Scale, Provision and Optimize Your Infrastructure Blog Company news, product updates, and engineering deep dives Video Tutorials Get started with our video tutorials Community Overview Ask any questions and share your feedback Customers Stories Learn how Enterprises orchestrate their business-critical workflows Partners Use our partner ecosystem to accelerate your Kesra adoption FAQ FAQ about the product and the company Explore blueprints About Us Read about our story and meet our team Careers Join a remote-first company Contact us Get in touch with us Search Contribute to our open-source community. You can contribute to Kestra in many ways depending on your skills and interests. We love plugin contributions. Check out our Plugin Developer Guide for instructions on how to build a new plugin. Read the Code Of Conduct to see our guidelines for contributing to Kestra. The following dependencies are required to build Kestra docs locally: To start contributing: ``` $ git clone [emailprotected]:{YOUR_USERNAME}/docs.git $ cd docs ``` Use the following commands to serve the docs locally: ``` $ npm install $ npm run dev $ npm run generate $ npm run build ``` You can contribute an article about how you use Kestra to our blog. Email [emailprotected] to start the collaboration. And if you wrote a post mentioning Kestra on your personal blog, we'd be happy to feature it in our community section. The following dependencies are required to build Kestra locally: To start contributing: ``` $ git clone [emailprotected]:{YOUR_USERNAME}/kestra.git $ cd kestra ``` The backend is made with Micronaut. Open the cloned repository in your favorite" }, { "data": "In many IDEs, Gradle build will be detected and all dependencies will be downloaded. You can also build it from a terminal using ./gradlew build, the Gradle wrapper will download the right Gradle version to use. If you want to launch all tests, you need Python and some packages installed on your machine. On Ubuntu, you can install them with the following command: ``` $ sudo apt install python3 pip python3-venv $ python3 -m pip install virtualenv ``` The frontend is made with Vue.js and located in the /ui folder. ``` $ npm install ``` ``` micronaut: server: cors: enabled: true configurations: all: allowedOrigins: http://localhost:5173 ``` A documentation for developing a plugin can be found in the Plugin Documentation. This project and everyone participating in it is governed by the Kestra Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to [emailprotected]. 
When contributing to this project, you must agree that you have authored 100% of the content, that you have the necessary rights to the content and that the content you contribute may be provided under the project license. To submit features and bugs, please create them at the issues page. Bug reports help us make Kestra better for everyone. We provide a preconfigured template for bugs to make it very clear what information we need. Please search within our already reported bugs before raising a new one to make sure you're not raising a duplicate. Please do not create a public GitHub issue. If you've found a security issue, please email us directly at [emailprotected] instead of raising an issue. To request new features, please create an issue on this project. If you would like to suggest a new feature, we ask that you please use our issue template. It contains a few essential questions that help us understand the problem you are looking to solve and how you think your recommendation will address it. To see what has already been proposed by the community, you can look here. Watch out for duplicates! If you are creating a new issue, please check existing open or recently closed issues. Having a single issue that people can vote on is far easier for us to prioritize." } ]
{ "category": "Orchestration & Management", "file_name": "deploy.md", "project_name": "KEDA", "subcategory": "Scheduling & Orchestration" }
[ { "data": "KEDA Concepts Latest What KEDA is and how it works KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. KEDA is a single-purpose and lightweight component that can be added into any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and can extend functionality without overwriting or duplication. With KEDA you can explicitly map the apps you want to use event-driven scale, with other apps continuing to function. This makes KEDA a flexible and safe option to run alongside any number of any other Kubernetes applications or frameworks. KEDA performs three key roles within Kubernetes: The diagram below shows how KEDA works in conjunction with the Kubernetes Horizontal Pod Autoscaler, external event sources, and Kubernetes etcd data store: KEDA has a wide range of scalers that can both detect if a deployment should be activated or deactivated, and feed custom metrics for a specific event source. The following scalers are available: When you install KEDA, it creates four custom resources: These custom resources enable you to map an event source (and the authentication to that event source) to a Deployment, StatefulSet, Custom Resource or Job for scaling. See the Deployment documentation for instructions on how to deploy KEDA into any cluster using tools like Helm. Blog Community Project KEDA Authors 2014-2024 | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Kestra", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Platform Overview Powerful capabilities from the UI Open Source Explore Kestra's Core Capabilities Enterprise Edition Security and Governance for Enterprise Needs Cloud EditionPrivate Alpha Register to the Cloud Edition Platform overview Features Declarative Orchestration Infrastructure as Code for All Your Workflows Automation Platform Scheduling and Automation Made Easy API-First Learn more about Kestras API features Language Agnostic Separate your Business Logic from Orchestration Logic Kestra's Terraform Provider Deploy and manage all Kestra resources with Terraform Use Cases For Data Engineers Orchestrate your Data Pipelines, Automate Processes, and Harness the Power of Your Data For Software Engineers Boost Productivity, Simplify Processes, and Accelerate Microservice Deployment For Platform Engineers Automate, Scale, Provision and Optimize Your Infrastructure Blog Company news, product updates, and engineering deep dives Video Tutorials Get started with our video tutorials Community Overview Ask any questions and share your feedback Customers Stories Learn how Enterprises orchestrate their business-critical workflows Partners Use our partner ecosystem to accelerate your Kesra adoption FAQ FAQ about the product and the company Explore blueprints About Us Read about our story and meet our team Careers Join a remote-first company Contact us Get in touch with us Search Kestra is an open-source infinitely-scalable orchestration platform enabling all engineers to manage business-critical workflows declaratively in code. Thanks to hundreds of built-in plugins and embedded Code editor with Git and Terraform integrations, Kestra makes scheduled and event-driven data pipelines effortless. Follow the Quickstart Guide to install Kestra and start orchestrating your first workflows. Then, explore the following pages to start building more advanced workflows: Start Kestra in a Docker container and create your first flow. Install Kestra in your preferred environment. Follow the tutorial to schedule and orchestrate your first workflows. Get to know the main orchestration components of a Kestra workflow. Learn the concepts and best practices to get the most out of Kestra. Explore unique features of the Enterprise Edition and Kestra Cloud. Dive into Kestra's architecture and learn how it differs between various editions. Develop Python, R, Shell, PowerShell, Julia, Ruby or Node.js scripts and integrate them with Git and CI/CD. Browse Kestra's integrations and learn how to create your own plugins. Deploy, configure, secure, and manage Kestra in production. Almost everything is configurable in Kestra. Here you'll find the different configuration options available to Administrators. Check the API reference for the Open-Source and Enterprise Edition. Migrate to the latest version of Kestra. Manage resources and their underlying infrastructure with our official Terraform provider. Kestra comes with a rich web user interface located by default on port 8080. If you followed the Quickstart guide, the UI will be available on http://localhost:8080. Contribute to our open-source community. Tutorials covering various use cases step by step. Was this page helpful? Open Source Declarative Data Orchestration 2024 Kestra Technologies. Developed with in the . Privacy Policy / Cookie Policy" } ]
{ "category": "Orchestration & Management", "file_name": "getting-started.md", "project_name": "Kestra", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Platform Overview Powerful capabilities from the UI Open Source Explore Kestra's Core Capabilities Enterprise Edition Security and Governance for Enterprise Needs Cloud EditionPrivate Alpha Register to the Cloud Edition Platform overview Features Declarative Orchestration Infrastructure as Code for All Your Workflows Automation Platform Scheduling and Automation Made Easy API-First Learn more about Kestras API features Language Agnostic Separate your Business Logic from Orchestration Logic Kestra's Terraform Provider Deploy and manage all Kestra resources with Terraform Use Cases For Data Engineers Orchestrate your Data Pipelines, Automate Processes, and Harness the Power of Your Data For Software Engineers Boost Productivity, Simplify Processes, and Accelerate Microservice Deployment For Platform Engineers Automate, Scale, Provision and Optimize Your Infrastructure Blog Company news, product updates, and engineering deep dives Video Tutorials Get started with our video tutorials Community Overview Ask any questions and share your feedback Customers Stories Learn how Enterprises orchestrate their business-critical workflows Partners Use our partner ecosystem to accelerate your Kesra adoption FAQ FAQ about the product and the company Explore blueprints About Us Read about our story and meet our team Careers Join a remote-first company Contact us Get in touch with us Search Follow the Quickstart Guide to install Kestra and start building your first workflows. Start Kestra in a Docker container and create your first flow. Install Kestra in your preferred environment. Follow the tutorial to schedule and orchestrate your first workflows. Get to know the main orchestration components of a Kestra workflow. Learn the concepts and best practices to get the most out of Kestra. Explore unique features of the Enterprise Edition and Kestra Cloud. Dive into Kestra's architecture and learn how it differs between various editions. Develop Python, R, Shell, PowerShell, Julia, Ruby or Node.js scripts and integrate them with Git and CI/CD. Browse Kestra's integrations and learn how to create your own plugins. Deploy, configure, secure, and manage Kestra in production. Almost everything is configurable in Kestra. Here you'll find the different configuration options available to Administrators. Check the API reference for the Open-Source and Enterprise Edition. Migrate to the latest version of Kestra. Manage resources and their underlying infrastructure with our official Terraform provider. Kestra comes with a rich web user interface located by default on port 8080. If you followed the Quickstart guide, the UI will be available on http://localhost:8080. Contribute to our open-source community. Tutorials covering various use cases step by step. Was this page helpful? Open Source Declarative Data Orchestration 2024 Kestra Technologies. Developed with in the . Privacy Policy / Cookie Policy" } ]
{ "category": "Orchestration & Management", "file_name": "administrator-guide.md", "project_name": "Kestra", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Platform Overview Powerful capabilities from the UI Open Source Explore Kestra's Core Capabilities Enterprise Edition Security and Governance for Enterprise Needs Cloud EditionPrivate Alpha Register to the Cloud Edition Platform overview Features Declarative Orchestration Infrastructure as Code for All Your Workflows Automation Platform Scheduling and Automation Made Easy API-First Learn more about Kestras API features Language Agnostic Separate your Business Logic from Orchestration Logic Kestra's Terraform Provider Deploy and manage all Kestra resources with Terraform Use Cases For Data Engineers Orchestrate your Data Pipelines, Automate Processes, and Harness the Power of Your Data For Software Engineers Boost Productivity, Simplify Processes, and Accelerate Microservice Deployment For Platform Engineers Automate, Scale, Provision and Optimize Your Infrastructure Blog Company news, product updates, and engineering deep dives Video Tutorials Get started with our video tutorials Community Overview Ask any questions and share your feedback Customers Stories Learn how Enterprises orchestrate their business-critical workflows Partners Use our partner ecosystem to accelerate your Kesra adoption FAQ FAQ about the product and the company Explore blueprints About Us Read about our story and meet our team Careers Join a remote-first company Contact us Get in touch with us Search The Administrator Guide covers everything you need to know about managing your Kestra cluster. This page describes the software and hardware requirements for Kestra. Here are some best practices for alerting and monitoring your Kestra instance. This page describes the different server commands available in Kestra. Kestra is designed to be highly available and fault-tolerant. This section describes how to configure Kestra for high availability. Kestra is a fast-evolving project. This section will guide you through the process of upgrading your Kestra installation. Was this page helpful? Open Source Declarative Data Orchestration 2024 Kestra Technologies. Developed with in the . Privacy Policy / Cookie Policy" } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Koordinator", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Welcome to Koordinator! Koordinator is a QoS-based scheduling for efficient orchestration of microservices, AI, and big data workloads on Kubernetes. It aims to improve the runtime efficiency and reliability of both latency sensitive workloads and batch jobs, simplify the complexity of resource-related configuration tuning, and increase pod deployment density to improve resource utilizations. Koordinator enhances the kubernetes user experiences in the workload management by providing the following: Kubernetes provides three types of QoS: Guaranteed/Burstable/BestEffort, of which Guaranteed/Burstable is widely used and BestEffort is rarely used. Koordinator is compatible with Kubernetes QoS and has numerous enhancements on each type. In order to avoid interfering with the native QoS semantics, Koordinator introduces an independent field koordinator.sh/qosClass to describe the co-location QoS. This QoS describes the service quality of the Pod running on the node in the co-location scenario. It is the most critical semantics of the mixed system. Koordinator is compatible with Kubernetes QoS and has numerous enhancements on each type. Koordinator scheduler is not designed to replace kube-scheduler, but to make co-located workloads run better on kubernetes. Koordinator scheduler is developed based on schedule-framework, adding scheduling plugins related to co-location and priority preemption on top of native scheduling capabilities. Koordinator will be committed to promoting related enhancements into the upstream community of kubernetes and promoting the standardization of co-location technology. Here are some recommended next steps:" } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Koordinator", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Welcome to Koordinator! Koordinator is a QoS-based scheduling for efficient orchestration of microservices, AI, and big data workloads on Kubernetes. It aims to improve the runtime efficiency and reliability of both latency sensitive workloads and batch jobs, simplify the complexity of resource-related configuration tuning, and increase pod deployment density to improve resource utilizations. Koordinator enhances the kubernetes user experiences in the workload management by providing the following: Kubernetes provides three types of QoS: Guaranteed/Burstable/BestEffort, of which Guaranteed/Burstable is widely used and BestEffort is rarely used. Koordinator is compatible with Kubernetes QoS and has numerous enhancements on each type. In order to avoid interfering with the native QoS semantics, Koordinator introduces an independent field koordinator.sh/qosClass to describe the co-location QoS. This QoS describes the service quality of the Pod running on the node in the co-location scenario. It is the most critical semantics of the mixed system. Koordinator is compatible with Kubernetes QoS and has numerous enhancements on each type. Koordinator scheduler is not designed to replace kube-scheduler, but to make co-located workloads run better on kubernetes. Koordinator scheduler is developed based on schedule-framework, adding scheduling plugins related to co-location and priority preemption on top of native scheduling capabilities. Koordinator will be committed to promoting related enhancements into the upstream community of kubernetes and promoting the standardization of co-location technology. Here are some recommended next steps:" } ]
{ "category": "Orchestration & Management", "file_name": "overview.md", "project_name": "Koordinator", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Koordinator requires Kubernetes version >= 1.18. Koordinator need collect metrics from kubelet read-only port(default is disabled). you can get more info form here. For the best experience, koordinator recommands linux kernel 4.19 or higher. Koordinator can be simply installed by helm v3.5+, which is a simple command-line tool and you can get it from here. ``` ``` Note that: If you have problem with connecting to https://koordinator-sh.github.io/charts/ in production, you might need to download the chart from here manually and install or upgrade with it. ``` $ helm install/upgrade koordinator /PATH/TO/CHART``` NRI mode resource management is Enabled by default. You can use it without any modification on the koordlet config. You can also disable it to set enable-nri-runtime-hook=false in koordlet start args. It doesn't matter if all prerequisites are not meet. You can use all other features as expected. Note that installing this chart directly means it will use the default template values for Koordinator. You may have to set your specific configurations if it is deployed into a production cluster, or you want to configure feature-gates. The following table lists the configurable parameters of the chart and their default values. | Parameter | Description | Default | |:-|:--|:| | featureGates | Feature gates for Koordinator, empty string means all by default | nan | | installation.namespace | namespace for Koordinator installation | koordinator-system | | installation.createNamespace | Whether to create the installation.namespace | true | | imageRepositoryHost | Image repository host | ghcr.io | | manager.log.level | Log level that koord-manager printed | 4 | | manager.replicas | Replicas of koord-manager deployment | 2 | | manager.image.repository | Repository for koord-manager image | koordinatorsh/koord-manager | | manager.image.tag | Tag for koord-manager image | v1.4.0 | | manager.resources.limits.cpu | CPU resource limit of koord-manager container | 1000m | | manager.resources.limits.memory | Memory resource limit of koord-manager container | 1Gi | | manager.resources.requests.cpu | CPU resource request of koord-manager container | 500m | | manager.resources.requests.memory | Memory resource request of koord-manager container | 256Mi | | manager.metrics.port | Port of metrics served | 8080 | | manager.webhook.port | Port of webhook served | 9443 | | manager.nodeAffinity | Node affinity policy for koord-manager pod | {} | | manager.nodeSelector | Node labels for koord-manager pod | {} | | manager.tolerations | Tolerations for koord-manager pod | [] | | manager.resyncPeriod | Resync period of informer koord-manager, defaults no resync | 0 | | manager.hostNetwork | Whether koord-manager pod should run with hostnetwork | false | | scheduler.log.level | Log level that koord-scheduler printed | 4 | | scheduler.replicas | Replicas of koord-scheduler deployment | 2 | | scheduler.image.repository | Repository for koord-scheduler image | koordinatorsh/koord-scheduler | | scheduler.image.tag | Tag for koord-scheduler image | v1.4.0 | | scheduler.resources.limits.cpu | CPU resource limit of koord-scheduler container | 1000m | | scheduler.resources.limits.memory | Memory resource limit of koord-scheduler container | 1Gi | | scheduler.resources.requests.cpu | CPU resource request of koord-scheduler container | 500m | |" }, { "data": "| Memory resource request of koord-scheduler container | 256Mi | | scheduler.port | Port of metrics served | 10251 | | scheduler.nodeAffinity | Node 
affinity policy for koord-scheduler pod | {} | | scheduler.nodeSelector | Node labels for koord-scheduler pod | {} | | scheduler.tolerations | Tolerations for koord-scheduler pod | [] | | scheduler.hostNetwork | Whether koord-scheduler pod should run with hostnetwork | false | | koordlet.log.level | Log level that koordlet printed | 4 | | koordlet.image.repository | Repository for koordlet image | koordinatorsh/koordlet | | koordlet.image.tag | Tag for koordlet image | v1.4.0 | | koordlet.resources.limits.cpu | CPU resource limit of koordlet container | 500m | | koordlet.resources.limits.memory | Memory resource limit of koordlet container | 256Mi | | koordlet.resources.requests.cpu | CPU resource request of koordlet container | 0 | | koordlet.resources.requests.memory | Memory resource request of koordlet container | 0 | | koordlet.enableServiceMonitor | Whether to enable ServiceMonitor for koordlet | false | | webhookConfiguration.failurePolicy.pods | The failurePolicy for pods in mutating webhook configuration | Ignore | | webhookConfiguration.timeoutSeconds | The timeoutSeconds for all webhook configuration | 30 | | crds.managed | Koordinator will not install CRDs with chart if this is false | true | | imagePullSecrets | The list of image pull secrets for koordinator image | false | Specify each parameter using the --set key=value[,key=value] argument to helm install or helm upgrade. Feature-gate controls some influential features in Koordinator: | Name | Description | Default | Effect (if closed) | |:|:--|:-|:--| | PodMutatingWebhook | Whether to open a mutating webhook for Pod create | True | Don't inject koordinator.sh/qosClass, koordinator.sh/priority and don't replace koordinator extend resources ad so on | | PodValidatingWebhook | Whether to open a validating webhook for Pod create/update | True | It is possible to create some Pods that do not conform to the Koordinator specification, causing some unpredictable problems | If you want to configure the feature-gate, just set the parameter when install or upgrade. Such as: ``` $ helm install koordinator https://... --set featureGates=\"PodMutatingWebhook=true\\,PodValidatingWebhook=true\"``` If you want to enable all feature-gates, set the parameter as featureGates=AllAlpha=true. If you are in China and have problem to pull image from official DockerHub, you can use the registry hosted on Alibaba Cloud: ``` $ helm install koordinator https://... --set imageRepositoryHost=registry.cn-beijing.aliyuncs.com``` When using a custom CNI (such as Weave or Calico) on EKS, the webhook cannot be reached by default. This happens because the control plane cannot be configured to run on a custom CNI on EKS, so the CNIs differ between control plane and worker nodes. To address this, the webhook can be run in the host network so it can be reached, by setting --set manager.hostNetwork=true when use helm install or upgrade. Note that this will lead to all resources created by Koordinator, including webhook configurations, services, namespace, CRDs and CR instances managed by Koordinator controller, to be deleted! Please do this ONLY when you fully understand the consequence. To uninstall koordinator if it is installed with helm charts: ``` $ helm uninstall koordinatorrelease \"koordinator\" uninstalled```" } ]
{ "category": "Orchestration & Management", "file_name": "installation.md", "project_name": "Koordinator", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Koordinator requires Kubernetes version >= 1.18. Koordinator need collect metrics from kubelet read-only port(default is disabled). you can get more info form here. For the best experience, koordinator recommands linux kernel 4.19 or higher. Koordinator can be simply installed by helm v3.5+, which is a simple command-line tool and you can get it from here. ``` ``` Note that: If you have problem with connecting to https://koordinator-sh.github.io/charts/ in production, you might need to download the chart from here manually and install or upgrade with it. ``` $ helm install/upgrade koordinator /PATH/TO/CHART``` NRI mode resource management is Enabled by default. You can use it without any modification on the koordlet config. You can also disable it to set enable-nri-runtime-hook=false in koordlet start args. It doesn't matter if all prerequisites are not meet. You can use all other features as expected. Note that installing this chart directly means it will use the default template values for Koordinator. You may have to set your specific configurations if it is deployed into a production cluster, or you want to configure feature-gates. The following table lists the configurable parameters of the chart and their default values. | Parameter | Description | Default | |:-|:--|:| | featureGates | Feature gates for Koordinator, empty string means all by default | nan | | installation.namespace | namespace for Koordinator installation | koordinator-system | | installation.createNamespace | Whether to create the installation.namespace | true | | imageRepositoryHost | Image repository host | ghcr.io | | manager.log.level | Log level that koord-manager printed | 4 | | manager.replicas | Replicas of koord-manager deployment | 2 | | manager.image.repository | Repository for koord-manager image | koordinatorsh/koord-manager | | manager.image.tag | Tag for koord-manager image | v1.4.0 | | manager.resources.limits.cpu | CPU resource limit of koord-manager container | 1000m | | manager.resources.limits.memory | Memory resource limit of koord-manager container | 1Gi | | manager.resources.requests.cpu | CPU resource request of koord-manager container | 500m | | manager.resources.requests.memory | Memory resource request of koord-manager container | 256Mi | | manager.metrics.port | Port of metrics served | 8080 | | manager.webhook.port | Port of webhook served | 9443 | | manager.nodeAffinity | Node affinity policy for koord-manager pod | {} | | manager.nodeSelector | Node labels for koord-manager pod | {} | | manager.tolerations | Tolerations for koord-manager pod | [] | | manager.resyncPeriod | Resync period of informer koord-manager, defaults no resync | 0 | | manager.hostNetwork | Whether koord-manager pod should run with hostnetwork | false | | scheduler.log.level | Log level that koord-scheduler printed | 4 | | scheduler.replicas | Replicas of koord-scheduler deployment | 2 | | scheduler.image.repository | Repository for koord-scheduler image | koordinatorsh/koord-scheduler | | scheduler.image.tag | Tag for koord-scheduler image | v1.4.0 | | scheduler.resources.limits.cpu | CPU resource limit of koord-scheduler container | 1000m | | scheduler.resources.limits.memory | Memory resource limit of koord-scheduler container | 1Gi | | scheduler.resources.requests.cpu | CPU resource request of koord-scheduler container | 500m | |" }, { "data": "| Memory resource request of koord-scheduler container | 256Mi | | scheduler.port | Port of metrics served | 10251 | | scheduler.nodeAffinity | Node 
affinity policy for koord-scheduler pod | {} | | scheduler.nodeSelector | Node labels for koord-scheduler pod | {} | | scheduler.tolerations | Tolerations for koord-scheduler pod | [] | | scheduler.hostNetwork | Whether koord-scheduler pod should run with hostnetwork | false | | koordlet.log.level | Log level that koordlet printed | 4 | | koordlet.image.repository | Repository for koordlet image | koordinatorsh/koordlet | | koordlet.image.tag | Tag for koordlet image | v1.4.0 | | koordlet.resources.limits.cpu | CPU resource limit of koordlet container | 500m | | koordlet.resources.limits.memory | Memory resource limit of koordlet container | 256Mi | | koordlet.resources.requests.cpu | CPU resource request of koordlet container | 0 | | koordlet.resources.requests.memory | Memory resource request of koordlet container | 0 | | koordlet.enableServiceMonitor | Whether to enable ServiceMonitor for koordlet | false | | webhookConfiguration.failurePolicy.pods | The failurePolicy for pods in mutating webhook configuration | Ignore | | webhookConfiguration.timeoutSeconds | The timeoutSeconds for all webhook configuration | 30 | | crds.managed | Koordinator will not install CRDs with chart if this is false | true | | imagePullSecrets | The list of image pull secrets for koordinator image | false | Specify each parameter using the --set key=value[,key=value] argument to helm install or helm upgrade. Feature-gate controls some influential features in Koordinator: | Name | Description | Default | Effect (if closed) | |:|:--|:-|:--| | PodMutatingWebhook | Whether to open a mutating webhook for Pod create | True | Don't inject koordinator.sh/qosClass, koordinator.sh/priority and don't replace koordinator extend resources ad so on | | PodValidatingWebhook | Whether to open a validating webhook for Pod create/update | True | It is possible to create some Pods that do not conform to the Koordinator specification, causing some unpredictable problems | If you want to configure the feature-gate, just set the parameter when install or upgrade. Such as: ``` $ helm install koordinator https://... --set featureGates=\"PodMutatingWebhook=true\\,PodValidatingWebhook=true\"``` If you want to enable all feature-gates, set the parameter as featureGates=AllAlpha=true. If you are in China and have problem to pull image from official DockerHub, you can use the registry hosted on Alibaba Cloud: ``` $ helm install koordinator https://... --set imageRepositoryHost=registry.cn-beijing.aliyuncs.com``` When using a custom CNI (such as Weave or Calico) on EKS, the webhook cannot be reached by default. This happens because the control plane cannot be configured to run on a custom CNI on EKS, so the CNIs differ between control plane and worker nodes. To address this, the webhook can be run in the host network so it can be reached, by setting --set manager.hostNetwork=true when use helm install or upgrade. Note that this will lead to all resources created by Koordinator, including webhook configurations, services, namespace, CRDs and CR instances managed by Koordinator controller, to be deleted! Please do this ONLY when you fully understand the consequence. To uninstall koordinator if it is installed with helm charts: ``` $ helm uninstall koordinatorrelease \"koordinator\" uninstalled```" } ]
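The install commands referenced near the top of this installation record did not survive extraction (the empty code fence), so here is a sketch of the usual sequence using the chart repository URL quoted in the text; the release name, repository alias and value overrides are illustrative:

```
# Add the chart repository referenced above, then install or upgrade Koordinator
helm repo add koordinator-sh https://koordinator-sh.github.io/charts/
helm repo update

# Install the chart (defaults target the koordinator-system namespace)
helm install koordinator koordinator-sh/koordinator

# Or upgrade an existing release, optionally overriding values
helm upgrade koordinator koordinator-sh/koordinator --set manager.replicas=2
```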
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "kube-green", "subcategory": "Scheduling & Orchestration" }
[ { "data": "How many of your dev/preview pods stay on during weekends? Or at night? It's a waste of resources! And money! But fear not, kube-green is here to the rescue. kube-green is a simple k8s addon that automatically shuts down (some of) your resources when you don't need them. How many CO2 produces yearly a pod? By our assumption, it's about 11 Kg CO2eq per year per pod (here the calculation). Use this tool to calculate it: Keep reading to find out how to use it, and if you have ideas on how to improve kube-green, open an issue or start a discussion, we'd love to hear them! Try our tutorials to get started. Are available here. To start using kube-green, you need to install it in a kubernetes cluster. Click here to see how to install. You can take a look at example configuration available here, or create it with the docs here. And that's it! Now, let kube-green to sleep your pods and to save CO2! To see the real use case example, check here." } ]
{ "category": "Orchestration & Management", "file_name": "kube.md", "project_name": "kube-rs", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Kube is an umbrella-crate for interacting with Kubernetes in Rust. Kube contains a Kubernetes client, a controller runtime, a custom resource derive, and various tooling required for building applications or controllers that interact with Kubernetes. The main modules are: You can use each of these as you need with the help of the exported features. ``` use futures::{StreamExt, TryStreamExt}; use kube::{Client, api::{Api, ResourceExt, ListParams, PostParams}}; use k8s_openapi::api::core::v1::Pod; async fn main() -> Result<(), Box<dyn std::error::Error>> { // Infer the runtime environment and try to create a Kubernetes Client let client = Client::try_default().await?; // Read pods in the configured namespace into the typed interface from k8s-openapi let pods: Api<Pod> = Api::default_namespaced(client); for p in pods.list(&ListParams::default()).await? { println!(\"found pod {}\", p.name_any()); } Ok(()) }``` For details, see: ``` use schemars::JsonSchema; use serde::{Deserialize, Serialize}; use serde_json::json; use futures::{StreamExt, TryStreamExt}; use k8sopenapi::apiextensionsapiserver::pkg::apis::apiextensions::v1::CustomResourceDefinition; use kube::{ api::{Api, DeleteParams, PatchParams, Patch, ResourceExt}, core::CustomResourceExt, Client, CustomResource, runtime::{watcher, WatchStreamExt, wait::{conditions, await_condition}}, }; // Our custom resource pub struct FooSpec { info: String, name: String, replicas: i32, } async fn main() -> Result<(), Box<dyn std::error::Error>> { let client = Client::try_default().await?; let crds: Api<CustomResourceDefinition> = Api::all(client.clone()); // Apply the CRD so users can create Foo instances in Kubernetes crds.patch(\"foos.clux.dev\", &PatchParams::apply(\"my_manager\"), &Patch::Apply(Foo::crd()) ).await?; // Wait for the CRD to be ready tokio::time::timeout( std::time::Duration::from_secs(10), awaitcondition(crds, \"foos.clux.dev\", conditions::iscrd_established()) ).await?; // Watch for changes to foos in the configured namespace let foos: Api<Foo> = Api::default_namespaced(client.clone()); let wc = watcher::Config::default(); let mut applystream = watcher(foos, wc).appliedobjects().boxed(); while let Some(f) = applystream.trynext().await? { println!(\"saw apply to {}\", f.name_any()); } Ok(()) }``` For details, see: A large list of complete, runnable examples with explainations are available in the examples folder." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "kube-rs", "subcategory": "Scheduling & Orchestration" }
[ { "data": "The Kubernetes API is a resource-based (RESTful) programmatic interface provided via HTTP. It supports retrieving, creating, updating, and deleting primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE, GET). For some resources, the API includes additional subresources that allow fine grained authorization (such as separate views for Pod details and log retrievals), and can accept and serve those resources in different representations for convenience or efficiency. Kubernetes supports efficient change notifications on resources via watches. Kubernetes also provides consistent list operations so that API clients can effectively cache, track, and synchronize the state of resources. You can view the API reference online, or read on to learn about the API in general. Kubernetes generally leverages common RESTful terminology to describe the API concepts: Most Kubernetes API resource types are objects they represent a concrete instance of a concept on the cluster, like a pod or namespace. A smaller number of API resource types are virtual in that they often represent operations on objects, rather than objects, such as a permission check (use a POST with a JSON-encoded body of SubjectAccessReview to the subjectaccessreviews resource), or the eviction sub-resource of a Pod (used to trigger API-initiated eviction). All objects you can create via the API have a unique object name to allow idempotent creation and retrieval, except that virtual resource types may not have unique names if they are not retrievable, or do not rely on idempotency. Within a namespace, only one object of a given kind can have a given name at a time. However, if you delete the object, you can make a new object with the same name. Some objects are not namespaced (for example: Nodes), and so their names must be unique across the whole cluster. Almost all object resource types support the standard HTTP verbs - GET, POST, PUT, PATCH, and DELETE. Kubernetes also uses its own verbs, which are often written lowercase to distinguish them from HTTP verbs. Kubernetes uses the term list to describe returning a collection of resources to distinguish from retrieving a single resource which is usually called a get. If you sent an HTTP GET request with the ?watch query parameter, Kubernetes calls this a watch and not a get (see Efficient detection of changes for more details). For PUT requests, Kubernetes internally classifies these as either create or update based on the state of the existing object. An update is different from a patch; the HTTP verb for a patch is PATCH. All resource types are either scoped by the cluster (/apis/GROUP/VERSION/*) or to a namespace (/apis/GROUP/VERSION/namespaces/NAMESPACE/*). A namespace-scoped resource type will be deleted when its namespace is deleted and access to that resource type is controlled by authorization checks on the namespace scope. Note: core resources use /api instead of /apis and omit the GROUP path segment. Examples: You can also access collections of resources (for example: listing all Nodes). The following paths are used to retrieve collections and resources: Cluster-scoped resources: Namespace-scoped resources: Since a namespace is a cluster-scoped resource type, you can retrieve the list (collection) of all namespaces with GET /api/v1/namespaces and details about a particular namespace with GET /api/v1/namespaces/NAME. 
The verbs supported for each subresource will differ depending on the object - see the API reference for more" }, { "data": "It is not possible to access sub-resources across multiple resources - generally a new virtual resource type would be used if that becomes necessary. The Kubernetes API allows clients to make an initial request for an object or a collection, and then to track changes since that initial request: a watch. Clients can send a list or a get and then make a follow-up watch request. To make this change tracking possible, every Kubernetes object has a resourceVersion field representing the version of that resource as stored in the underlying persistence layer. When retrieving a collection of resources (either namespace or cluster scoped), the response from the API server contains a resourceVersion value. The client can use that resourceVersion to initiate a watch against the API server. When you send a watch request, the API server responds with a stream of changes. These changes itemize the outcome of operations (such as create, delete, and update) that occurred after the resourceVersion you specified as a parameter to the watch request. The overall watch mechanism allows a client to fetch the current state and then subscribe to subsequent changes, without missing any events. If a client watch is disconnected then that client can start a new watch from the last returned resourceVersion; the client could also perform a fresh get / list request and begin again. See Resource Version Semantics for more detail. For example: List all of the pods in a given namespace. ``` GET /api/v1/namespaces/test/pods 200 OK Content-Type: application/json { \"kind\": \"PodList\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\":\"10245\"}, \"items\": [...] } ``` Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. Each change notification is a JSON document. The HTTP response body (served as application/json) consists a series of JSON documents. ``` GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245 200 OK Transfer-Encoding: chunked Content-Type: application/json { \"type\": \"ADDED\", \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...} } { \"type\": \"MODIFIED\", \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...} } ... ``` A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code 410 Gone, clearing their local cache, performing a new get or list operation, and starting the watch from the resourceVersion that was returned. For subscribing to collections, Kubernetes client libraries typically offer some form of standard tool for this list-then-watch logic. (In the Go client library, this is called a Reflector and is located in the k8s.io/client-go/tools/cache package.) To mitigate the impact of short history window, the Kubernetes API provides a watch event named BOOKMARK. It is a special kind of event to mark that all changes up to a given resourceVersion the client is requesting have already been sent. 
The document representing the BOOKMARK event is of the type requested by the request, but only includes a .metadata.resourceVersion field. For example: ``` GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true 200 OK Transfer-Encoding: chunked Content-Type: application/json { \"type\": \"ADDED\", \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...} }" }, { "data": "{ \"type\": \"BOOKMARK\", \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} } } ``` As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor can clients assume that the API server will send any BOOKMARK event even when requested. On large clusters, retrieving the collection of some resource types may result in a significant increase of resource usage (primarily RAM) on the control plane. In order to alleviate its impact and simplify the user experience of the list + watch pattern, Kubernetes v1.27 introduces as an alpha feature the support for requesting the initial state (previously requested via the list request) as part of the watch request. Provided that the WatchList feature gate is enabled, this can be achieved by specifying sendInitialEvents=true as query string parameter in a watch request. If set, the API server starts the watch stream with synthetic init events (of type ADDED) to build the whole state of all existing objects followed by a BOOKMARK event (if requested via allowWatchBookmarks=true option). The bookmark event includes the resource version to which is synced. After sending the bookmark event, the API server continues as for any other watch request. When you set sendInitialEvents=true in the query string, Kubernetes also requires that you set resourceVersionMatch to NotOlderThan value. If you provided resourceVersion in the query string without providing a value or don't provide it at all, this is interpreted as a request for consistent read; the bookmark event is sent when the state is synced at least to the moment of a consistent read from when the request started to be processed. If you specify resourceVersion (in the query string), the bookmark event is sent when the state is synced at least to the provided resource version. An example: you want to watch a collection of Pods. For that collection, the current resource version is 10245 and there are two pods: foo and bar. Then sending the following request (explicitly requesting consistent read by setting empty resource version using resourceVersion=) could result in the following sequence of events: ``` GET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan 200 OK Transfer-Encoding: chunked Content-Type: application/json { \"type\": \"ADDED\", \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...} } { \"type\": \"ADDED\", \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...} } { \"type\": \"BOOKMARK\", \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} } } ... 
<followed by regular watch stream starting from resourceVersion=\"10245\"> ``` APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters. It is enabled by default since Kubernetes 1.16 and it can be disabled by including APIResponseCompression=false in the --feature-gates flag on the API server. API response compression can significantly reduce the size of the response, especially for large resources or collections. For example, a list request for pods can return hundreds of kilobytes or even megabytes of data, depending on the number of pods and their attributes. By compressing the response, the network bandwidth can be saved and the latency can be reduced. To verify if APIResponseCompression is working, you can send a get or list request to the API server with an Accept-Encoding header, and check the response size and headers. For example: ``` GET /api/v1/pods Accept-Encoding: gzip 200 OK Content-Type: application/json content-encoding: gzip ... ``` The content-encoding header indicates that the response is compressed with" }, { "data": "On large clusters, retrieving the collection of some resource types may result in very large responses that can impact the server and client. For instance, a cluster may have tens of thousands of Pods, each of which is equivalent to roughly 2 KiB of encoded JSON. Retrieving all pods across all namespaces may result in a very large response (10-20MB) and consume a large amount of server resources. The Kubernetes API server supports the ability to break a single large collection request into many smaller chunks while preserving the consistency of the total request. Each chunk can be returned sequentially which reduces both the total size of the request and allows user-oriented clients to display results incrementally to improve responsiveness. You can request that the API server handles a list by serving single collection using pages (which Kubernetes calls chunks). To retrieve a single collection in chunks, two query parameters limit and continue are supported on requests against collections, and a response field continue is returned from all list operations in the collection's metadata field. A client should specify the maximum results they wish to receive in each chunk with limit and the server will return up to limit resources in the result and include a continue value if there are more resources in the collection. As an API client, you can then pass this continue value to the API server on the next request, to instruct the server to return the next page (chunk) of results. By continuing until the server returns an empty continue value, you can retrieve the entire collection. Like a watch operation, a continue token will expire after a short amount of time (by default 5 minutes) and return a 410 Gone if more results cannot be returned. In this case, the client will need to start from the beginning or omit the limit parameter. For example, if there are 1,253 pods on the cluster and you want to receive chunks of 500 pods at a time, request those chunks as follows: List all of the pods on a cluster, retrieving up to 500 pods each time. ``` GET /api/v1/pods?limit=500 200 OK Content-Type: application/json { \"kind\": \"PodList\", \"apiVersion\": \"v1\", \"metadata\": { \"resourceVersion\":\"10245\", \"continue\": \"ENCODEDCONTINUETOKEN\", \"remainingItemCount\": 753, ... }, \"items\": [...] 
// returns pods 1-500 } ``` Continue the previous call, retrieving the next set of 500 pods. ``` GET /api/v1/pods?limit=500&continue=ENCODEDCONTINUETOKEN 200 OK Content-Type: application/json { \"kind\": \"PodList\", \"apiVersion\": \"v1\", \"metadata\": { \"resourceVersion\":\"10245\", \"continue\": \"ENCODEDCONTINUETOKEN_2\", \"remainingItemCount\": 253, ... }, \"items\": [...] // returns pods 501-1000 } ``` Continue the previous call, retrieving the last 253 pods. ``` GET /api/v1/pods?limit=500&continue=ENCODEDCONTINUETOKEN_2 200 OK Content-Type: application/json { \"kind\": \"PodList\", \"apiVersion\": \"v1\", \"metadata\": { \"resourceVersion\":\"10245\", \"continue\": \"\", // continue token is empty because we have reached the end of the list ... }, \"items\": [...] // returns pods 1001-1253 } ``` Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. Pods that are created, updated, or deleted after version 10245 would not be shown unless you make a separate list request without the continue token. This allows you to break large requests into smaller chunks and then perform a watch operation on the full set without missing any updates. remainingItemCount is the number of subsequent items in the collection that are not included in this" }, { "data": "If the list request contained label or field selectors then the number of remaining items is unknown and the API server does not include a remainingItemCount field in its response. If the list is complete (either because it is not chunking, or because this is the last chunk), then there are no more remaining items and the API server does not include a remainingItemCount field in its response. The intended use of the remainingItemCount is estimating the size of a collection. In Kubernetes terminology, the response you get from a list is a collection. However, Kubernetes defines concrete kinds for collections of different types of resource. Collections have a kind named for the resource kind, with List appended. When you query the API for a particular type, all items returned by that query are of that type. For example, when you list Services, the collection response has kind set to ServiceList; each item in that collection represents a single Service. For example: ``` GET /api/v1/services ``` ``` { \"kind\": \"ServiceList\", \"apiVersion\": \"v1\", \"metadata\": { \"resourceVersion\": \"2947301\" }, \"items\": [ { \"metadata\": { \"name\": \"kubernetes\", \"namespace\": \"default\", ... \"metadata\": { \"name\": \"kube-dns\", \"namespace\": \"kube-system\", ... ``` There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. You can get more information about each collection type from the Kubernetes API documentation. Some tools, such as kubectl, represent the Kubernetes collection mechanism slightly differently from the Kubernetes API itself. Because the output of kubectl might include the response from multiple list operations at the API level, kubectl represents a list of items using kind: List. For example: ``` kubectl get services -A -o yaml ``` ``` apiVersion: v1 kind: List metadata: resourceVersion: \"\" selfLink: \"\" items: apiVersion: v1 kind: Service metadata: creationTimestamp: \"2021-06-03T14:54:12Z\" labels: component: apiserver provider: kubernetes name: kubernetes namespace: default ... 
apiVersion: v1 kind: Service metadata: annotations: prometheus.io/port: \"9153\" prometheus.io/scrape: \"true\" creationTimestamp: \"2021-06-03T14:54:14Z\" labels: k8s-app: kube-dns kubernetes.io/cluster-service: \"true\" kubernetes.io/name: CoreDNS name: kube-dns namespace: kube-system ``` Keep in mind that the Kubernetes API does not have a kind named List. kind: List is a client-side, internal implementation detail for processing collections that might be of different kinds of object. Avoid depending on kind: List in automation or other code. When you run kubectl get, the default output format is a simple tabular representation of one or more instances of a particular resource type. In the past, clients were required to reproduce the tabular and describe output implemented in kubectl to perform simple lists of objects. A few limitations of that approach include non-trivial logic when dealing with certain objects. Additionally, types provided by API aggregation or third party resources are not known at compile time. This means that generic implementations had to be in place for types unrecognized by a client. In order to avoid potential limitations as described above, clients may request the Table representation of objects, delegating specific details of printing to the server. The Kubernetes API implements standard HTTP content type negotiation: passing an Accept header containing a value of application/json;as=Table;g=meta.k8s.io;v=v1 with a GET call will request that the server return objects in the Table content type. For example, list all of the pods on a cluster in the Table format. ``` GET /api/v1/pods Accept: application/json;as=Table;g=meta.k8s.io;v=v1 200 OK Content-Type: application/json { \"kind\": \"Table\", \"apiVersion\": \"meta.k8s.io/v1\", ... \"columnDefinitions\": [" }, { "data": "] } ``` For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp fields. ``` GET /apis/crd.example.com/v1alpha1/namespaces/default/resources 200 OK Content-Type: application/json ... { \"kind\": \"Table\", \"apiVersion\": \"meta.k8s.io/v1\", ... \"columnDefinitions\": [ { \"name\": \"Name\", \"type\": \"string\", ... }, { \"name\": \"Created At\", \"type\": \"date\", ... } ] } ``` Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not serve Table responses at all. If you are implementing a client that uses the Table information and must work against all resource types, including extensions, you should make requests that specify multiple content types in the Accept header. For example: ``` Accept: application/json;as=Table;g=meta.k8s.io;v=v1, application/json ``` By default, Kubernetes returns objects serialized to JSON with content type application/json. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation: passing an Accept header with a GET call will request that the server tries to return a response in your preferred media type, while sending an object in Protobuf to the server for a PUT or POST call means that you must set the Content-Type header appropriately. 
The server will return a response with a Content-Type header if the requested format is supported, or the 406 Not acceptable error if none of the media types you requested are supported. All built-in resource types support the application/json media type. See the Kubernetes API reference for a list of supported content types for each API. For example: List all of the pods on a cluster in Protobuf format. ``` GET /api/v1/pods Accept: application/vnd.kubernetes.protobuf 200 OK Content-Type: application/vnd.kubernetes.protobuf ... binary encoded PodList object ``` Create a pod by sending Protobuf encoded data to the server, but request a response in JSON. ``` POST /api/v1/namespaces/test/pods Content-Type: application/vnd.kubernetes.protobuf Accept: application/json ... binary encoded Pod object 200 OK Content-Type: application/json { \"kind\": \"Pod\", \"apiVersion\": \"v1\", ... } ``` Not all API resource types support Protobuf; specifically, Protobuf isn't available for resources that are defined as CustomResourceDefinitions or are served via the aggregation layer. As a client, if you might need to work with extension types you should specify multiple content types in the request Accept header to support fallback to JSON. For example: ``` Accept: application/vnd.kubernetes.protobuf, application/json ``` Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON), and then is followed by a Protobuf encoded wrapper message, which describes the encoding and type of the underlying object and then contains the object. The wrapper format is: ``` A four byte magic number prefix: Bytes 0-3: \"k8s\\x00\" [0x6b, 0x38, 0x73, 0x00] An encoded Protobuf message with the following IDL: message Unknown { // typeMeta should have the string values for \"kind\" and \"apiVersion\" as set on the JSON object optional TypeMeta typeMeta = 1; // raw will hold the complete serialized object in protobuf. See the protobuf definitions in the client libraries for a given kind. optional bytes raw = 2; // contentEncoding is encoding used for the raw data. Unspecified means no encoding. optional string contentEncoding = 3; // contentType is the serialization method used to serialize 'raw'. Unspecified means application/vnd.kubernetes.protobuf and is usually //" }, { "data": "optional string contentType = 4; } message TypeMeta { // apiVersion is the group/version for this type optional string apiVersion = 1; // kind is the name of the object schema. A protobuf definition should exist for this object. optional string kind = 2; } ``` When you delete a resource this takes place in two phases. ``` { \"kind\": \"ConfigMap\", \"apiVersion\": \"v1\", \"metadata\": { \"finalizers\": [\"url.io/neat-finalization\", \"other-url.io/my-finalizer\"], \"deletionTimestamp\": nil, } } ``` When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. Once the .metadata.deletionTimestamp is set, external controllers that act on finalizers may start performing their cleanup work at any time, in any order. Order is not enforced between finalizers because it would introduce significant risk of stuck .metadata.finalizers. The .metadata.finalizers field is shared: any actor with permission can reorder it. 
If the finalizer list were processed in order, then this might lead to a situation in which the component responsible for the first finalizer in the list is waiting for some signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering, finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. Once the last finalizer is removed, the resource is actually removed from etcd. The Kubernetes API verbs get, create, update, patch, delete and proxy support single resources only. These verbs with single resource support have no support for submitting multiple resources together in an ordered or unordered list or transaction. When clients (including kubectl) act on a set of resources, the client makes a series of single-resource API requests, then aggregates the responses if needed. By contrast, the Kubernetes API verbs list and watch allow getting multiple resources, and deletecollection allows deleting multiple resources. Kubernetes always validates the type of fields. For example, if a field in the API is defined as a number, you cannot set the field to a text value. If a field is defined as an array of strings, you can only provide an array. Some fields allow you to omit them, other fields are required. Omitting a required field from an API request is an error. If you make a request with an extra field, one that the cluster's control plane does not recognize, then the behavior of the API server is more complicated. By default, the API server drops fields that it does not recognize from an input that it receives (for example, the JSON body of a PUT request). There are two situations where the API server drops fields that you supplied in an HTTP request. These situations are: From 1.25 onward, unrecognized or duplicate fields in an object are detected via validation on the server when you use HTTP verbs that can submit data (POST, PUT, and PATCH). Possible levels of validation are Ignore, Warn (default), and Strict. The field validation level is set by the fieldValidation query" }, { "data": "If you submit a request that specifies an unrecognized field, and that is also invalid for a different reason (for example, the request provides a string value where the API expects an integer for a known field), then the API server responds with a 400 Bad Request error, but will not provide any information on unknown or duplicate fields (only which fatal error it encountered first). You always receive an error response in this case, no matter what field validation level you requested. Tools that submit requests to the server (such as kubectl), might set their own defaults that are different from the Warn validation level that the API server uses by default. The kubectl tool uses the --validate flag to set the level of field validation. It accepts the values ignore, warn, and strict while also accepting the values true (equivalent to strict) and false (equivalent to ignore). The default validation setting for kubectl is --validate=true, which means strict server-side field validation. When kubectl cannot connect to an API server with field validation (API servers prior to Kubernetes 1.27), it will fall back to using client-side validation. Client-side validation will be removed entirely in a future version of kubectl. When you use HTTP verbs that can modify resources (POST, PUT, PATCH, and DELETE), you can submit your request in a dry run mode. 
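Before moving on to dry-run, here is a minimal sketch of the field validation levels described above. It assumes a cluster reachable with kubectl, plus `kubectl proxy` for the raw API variant; the manifest names (deployment-with-typo.yaml, pod-with-typo.json) are illustrative placeholders for files that contain a misspelled field, not files referenced elsewhere on this page.

```
# A sketch of the field validation levels described above, assuming a manifest
# that contains a misspelled field (the file names here are illustrative).

# Warn (the server-side default): the object is admitted, but the unknown field
# is reported back to the client as a warning.
kubectl apply --validate=warn -f deployment-with-typo.yaml

# Strict: the request is rejected and the response names the unrecognized field.
kubectl apply --validate=strict -f deployment-with-typo.yaml

# The same behaviour through the raw API, using the fieldValidation query parameter.
kubectl proxy --port=8001 &
curl -X POST 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?fieldValidation=Strict' \
  -H 'Content-Type: application/json' \
  --data @pod-with-typo.json
```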
Dry run mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. The response body for the request is as close as possible to a non-dry-run response. Kubernetes guarantees that dry-run requests will not be persisted in storage or have any other side effects. Dry-run is triggered by setting the dryRun query parameter. This parameter is a string, working as an enum, and the only accepted values are: When you set ?dryRun=All, any relevant admission controllers are run, validating admission controllers check the request post-mutation, merge is performed on PATCH, fields are defaulted, and schema validation occurs. The changes are not persisted to the underlying storage, but the final object which would have been persisted is still returned to the user, along with the normal status code. If the non-dry-run version of a request would trigger an admission controller that has side effects, the request will be failed rather than risk an unwanted side effect. All built in admission control plugins support dry-run. Additionally, admission webhooks can declare in their configuration object that they do not have side effects, by setting their sideEffects field to None. Here is an example dry-run request that uses ?dryRun=All: ``` POST /api/v1/namespaces/test/pods?dryRun=All Content-Type: application/json Accept: application/json ``` The response would look the same as for non-dry-run request, but the values of some generated fields may differ. Some values of an object are typically generated before the object is persisted. It is important not to rely upon the values of these fields set by a dry-run request, since these values will likely be different in dry-run mode from when the real request is made. Some of these fields are: Authorization for dry-run and non-dry-run requests is identical. Thus, to make a dry-run request, you must be authorized to make the non-dry-run request. For example, to run a dry-run patch for a Deployment, you must be authorized to perform that patch. Here is an example of a rule for Kubernetes RBAC that allows patching Deployments: ``` rules: apiGroups: [\"apps\"] resources: [\"deployments\"] verbs: [\"patch\"] ``` See Authorization Overview. Kubernetes provides several ways to update existing objects. You can read choosing an update mechanism to learn about which approach might be best for your use case. You can overwrite (update) an existing resource - for example, a ConfigMap - using an HTTP" }, { "data": "For a PUT request, it is the client's responsibility to specify the resourceVersion (taking this from the object being updated). Kubernetes uses that resourceVersion information so that the API server can detect lost updates and reject requests made by a client that is out of date with the cluster. In the event that the resource has changed (the resourceVersion the client provided is stale), the API server returns a 409 Conflict error response. Instead of sending a PUT request, the client can send an instruction to the API server to patch an existing resource. A patch is typically appropriate if the change that the client wants to make isn't conditional on the existing data. Clients that need effective detection of lost updates should consider making their request conditional on the existing resourceVersion (either HTTP PUT or HTTP PATCH), and then handle any retries that are needed in case there is a conflict. 
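To make the PUT-with-resourceVersion and patch flows above concrete, here is a hedged sketch using curl through `kubectl proxy` on 127.0.0.1:8001. The ConfigMap name (game-config), the namespace, and the data keys are made-up examples, not taken from this page.

```
# A sketch of an optimistic-concurrency update, assuming `kubectl proxy` is
# listening on 127.0.0.1:8001 and a ConfigMap named "game-config" exists in the
# "default" namespace (the name and data keys are illustrative).

# 1. Read the object and remember its current resourceVersion.
RV=$(kubectl get configmap game-config -o jsonpath='{.metadata.resourceVersion}')

# 2. PUT the full object back, carrying that resourceVersion. If another writer
#    updated the object in the meantime, the API server answers 409 Conflict.
curl -X PUT 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps/game-config' \
  -H 'Content-Type: application/json' \
  --data "{\"apiVersion\": \"v1\", \"kind\": \"ConfigMap\",
           \"metadata\": {\"name\": \"game-config\", \"resourceVersion\": \"$RV\"},
           \"data\": {\"player-lives\": \"5\"}}"

# 3. An unconditional change can be sent as a patch instead; this JSON Merge Patch
#    touches a single key and does not need the resourceVersion at all.
curl -X PATCH 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps/game-config' \
  -H 'Content-Type: application/merge-patch+json' \
  --data '{"data": {"player-lives": "3"}}'
```

If the PUT comes back with 409 Conflict, the client should re-read the object, reapply its change, and retry; the merge patch needs no such bookkeeping because it is unconditional.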
The Kubernetes API supports four different PATCH operations, determined by their corresponding HTTP Content-Type header: A patch using application/json-patch+json can include conditions to validate consistency, allowing the operation to fail if those conditions are not met (for example, to avoid a lost update). Kubernetes' Server Side Apply feature allows the control plane to track managed fields for newly created objects. Server Side Apply provides a clear pattern for managing field conflicts, offers server-side apply and update operations, and replaces the client-side functionality of kubectl apply. For Server-Side Apply, Kubernetes treats the request as a create if the object does not yet exist, and a patch otherwise. For other requests that use PATCH at the HTTP level, the logical Kubernetes operation is always patch. See Server Side Apply for more details. The update (HTTP PUT) operation is simple to implement and flexible, but has drawbacks: A patch update is helpful, because: However: Server-Side Apply has some clear benefits: However: Resource versions are strings that identify the server's internal version of an object. Resource versions can be used by clients to determine when objects have changed, or to express data consistency requirements when getting, listing and watching resources. Resource versions must be treated as opaque by clients and passed unmodified back to the server. You must not assume resource versions are numeric or collatable. API clients may only compare two resource versions for equality (this means that you must not compare resource versions for greater-than or less-than relationships). Clients find resource versions in resources, including the resources from the response stream for a watch, or when using list to enumerate resources. v1.meta/ObjectMeta - The metadata.resourceVersion of a resource instance identifies the resource version the instance was last modified at. v1.meta/ListMeta - The metadata.resourceVersion of a resource collection (the response to a list) identifies the resource version at which the collection was constructed. The get, list, and watch operations support the resourceVersion parameter. From version v1.19, Kubernetes API servers also support the resourceVersionMatch parameter on list requests. The API server interprets the resourceVersion parameter differently depending on the operation you request, and on the value of resourceVersion. If you set resourceVersionMatch then this also affects the way matching happens. For get and list, the semantics of resourceVersion are: get: | resourceVersion unset | resourceVersion=\"0\" | resourceVersion=\"{value other than 0}\" | |:|:-|:--| | Most Recent | Any | Not older than | list: From version v1.19, Kubernetes API servers support the resourceVersionMatch parameter on list" }, { "data": "If you set both resourceVersion and resourceVersionMatch, the resourceVersionMatch parameter determines how the API server interprets resourceVersion. You should always set the resourceVersionMatch parameter when setting resourceVersion on a list request. However, be prepared to handle the case where the API server that responds is unaware of resourceVersionMatch and ignores it. Unless you have strong consistency requirements, using resourceVersionMatch=NotOlderThan and a known resourceVersion is preferable since it can achieve better performance and scalability of your cluster than leaving resourceVersion and resourceVersionMatch unset, which requires quorum read to be served. 
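To illustrate the recommendation above, the following sketch shows list requests with explicit resource version semantics, again assuming `kubectl proxy` is serving the API on 127.0.0.1:8001; the resourceVersion value 10245 is only an example.

```
# A sketch of list requests with explicit resource version semantics, assuming
# `kubectl proxy` is serving the API on 127.0.0.1:8001; 10245 is just an example value.

# NotOlderThan: serve from the watch cache as long as it has caught up to the given
# resourceVersion, avoiding a quorum read from etcd.
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?resourceVersion=10245&resourceVersionMatch=NotOlderThan'

# resourceVersion=0 means "any" reasonably recent state, typically whatever the
# cache currently holds; it is the cheapest option for a periodic relist.
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?resourceVersion=0'
```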
Setting the resourceVersionMatch parameter without setting resourceVersion is not valid. This table explains the behavior of list requests with various combinations of resourceVersion and resourceVersionMatch: | resourceVersionMatch param | paging params | resourceVersion not set | resourceVersion=\"0\" | resourceVersion=\"{value other than 0}\" | |:-|:-|:--|:|:--| | unset | limit unset | Most Recent | Any | Not older than | | unset | limit=<n>, continue unset | Most Recent | Any | Exact | | unset | limit=<n>, continue=<token> | Continue Token, Exact | Invalid, treated as Continue Token, Exact | Invalid, HTTP 400 Bad Request | | resourceVersionMatch=Exact | limit unset | Invalid | Invalid | Exact | | resourceVersionMatch=Exact | limit=<n>, continue unset | Invalid | Invalid | Exact | | resourceVersionMatch=NotOlderThan | limit unset | Invalid | Any | Not older than | | resourceVersionMatch=NotOlderThan | limit=<n>, continue unset | Invalid | Any | Not older than | The meanings of the get and list semantics are: When using resourceVersionMatch=NotOlderThan and limit is set, clients must handle HTTP 410 \"Gone\" responses. For example, the client might retry with a newer resourceVersion or fall back to resourceVersion=\"\". When using resourceVersionMatch=Exact and limit is unset, clients must verify that the collection's .metadata.resourceVersion matches the requested resourceVersion, and handle the case where it does not. For example, the client might fall back to a request with limit set. For watch, the semantics of resource version are: watch: | resourceVersion unset | resourceVersion=\"0\" | resourceVersion=\"{value other than 0}\" | |:--|:|:--| | Get State and Start at Most Recent | Get State and Start at Any | Start at Exact | The meanings of those watch semantics are: Servers are not required to serve all older resource versions and may return an HTTP 410 (Gone) status code if a client requests a resourceVersion older than the server has retained. Clients must be able to tolerate 410 (Gone) responses. See Efficient detection of changes for details on how to handle 410 (Gone) responses when watching resources. If you request a resourceVersion outside the applicable limit then, depending on whether a request is served from cache or not, the API server may reply with a 410 Gone HTTP response. Servers are not required to serve unrecognized resource versions. If you request a list or get for a resource version that the API server does not recognize, then the API server may either: If you request a resource version that an API server does not recognize, the kube-apiserver additionally identifies its error responses with a \"Too large resource version\" message. If you make a watch request for an unrecognized resource version, the API server may wait indefinitely (until the request timeout) for the resource version to become available." } ]
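Putting the pagination and resource version rules together, here is a rough sketch of a chunked list loop that tolerates expired continue tokens. It assumes `kubectl proxy` on 127.0.0.1:8001 and the jq tool, and reduces error handling to the single 410 case discussed above.

```
# A rough sketch of a chunked list that tolerates expired continue tokens,
# assuming `kubectl proxy` on 127.0.0.1:8001 and jq for extracting fields.
CONTINUE=""
while : ; do
  RESP=$(curl -s "http://127.0.0.1:8001/api/v1/pods?limit=500&continue=${CONTINUE}")

  # An expired token (or too-old resourceVersion) comes back as a Status object
  # with code 410; the safe fallback is to restart the list from the beginning.
  if [ "$(echo "$RESP" | jq -r '.code // empty')" = "410" ]; then
    CONTINUE=""
    continue
  fi

  echo "$RESP" | jq -r '.items[].metadata.name'

  # An empty continue value in the collection metadata means this was the last chunk.
  CONTINUE=$(echo "$RESP" | jq -r '.metadata.continue // empty')
  [ -z "$CONTINUE" ] && break
done
```

For day-to-day use, kubectl performs this loop itself (see its --chunk-size flag), so a hand-rolled loop like this is mostly relevant for custom clients.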
[ { "data": "A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again. A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel. If you want to run a Job (either a single task, or several in parallel) on a schedule, see CronJob. Here is an example Job config. It computes to 2000 places and prints it out. It takes around 10s to complete. ``` apiVersion: batch/v1 kind: Job metadata: name: pi spec: template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never backoffLimit: 4 ``` You can run the example with this command: ``` kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml ``` The output is similar to this: ``` job.batch/pi created ``` Check on the status of the Job with kubectl: ``` Name: pi Namespace: default Selector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c batch.kubernetes.io/job-name=pi ... Annotations: batch.kubernetes.io/job-tracking: \"\" Parallelism: 1 Completions: 1 Start Time: Mon, 02 Dec 2019 15:20:11 +0200 Completed At: Mon, 02 Dec 2019 15:21:16 +0200 Duration: 65s Pods Statuses: 0 Running / 1 Succeeded / 0 Failed Pod Template: Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c batch.kubernetes.io/job-name=pi Containers: pi: Image: perl:5.34.0 Port: <none> Host Port: <none> Command: perl -Mbignum=bpi -wle print bpi(2000) Environment: <none> Mounts: <none> Volumes: <none> Events: Type Reason Age From Message - - - Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4 Normal Completed 18s job-controller Job completed ``` ``` apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 batch.kubernetes.io/job-name: pi name: pi namespace: default resourceVersion: \"4751\" uid: 204fb678-040b-497f-9266-35ffa8716d14 spec: backoffLimit: 4 completionMode: NonIndexed completions: 1 parallelism: 1 selector: matchLabels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 suspend: false template: metadata: creationTimestamp: null labels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 batch.kubernetes.io/job-name: pi spec: containers: command: perl -Mbignum=bpi -wle print bpi(2000) image: perl:5.34.0 imagePullPolicy: IfNotPresent name: pi resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Never schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: active: 1 ready: 0 startTime: \"2022-11-10T17:53:57Z\" uncountedTerminatedPods: {} ``` To view completed Pods of a Job, use kubectl get pods. 
To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: ``` pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}') echo $pods ``` The output is similar to this: ``` pi-5rwd7 ``` Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression with the name from each Pod in the returned list. View the standard output of one of the pods: ``` kubectl logs $pods ``` Another way to view the logs of a Job: ``` kubectl logs jobs/pi ``` The output is similar to this: ``` 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 ``` As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields. When the control plane creates new Pods for a Job, the .metadata.name of the Job is part of the basis for naming those Pods. The name of a Job must be a valid DNS subdomain value, but this can produce unexpected results for the Pod" }, { "data": "For best compatibility, the name should follow the more restrictive rules for a DNS label. Even when the name is a DNS subdomain, the name must be no longer than 63 characters. A Job also needs a .spec section. Job labels will have batch.kubernetes.io/ prefix for job-name and controller-uid. The .spec.template is the only required field of the .spec. The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind. 
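To make the required fields concrete, here is a minimal sketch of a Job that sets only what the text above calls out as mandatory; the name hello and the busybox image are illustrative choices, not anything prescribed by the Job API:

```
kubectl apply -f - <<'EOF'
apiVersion: batch/v1          # as with all Kubernetes config: apiVersion, kind, metadata
kind: Job
metadata:
  name: hello                 # keep it a valid DNS label for best compatibility
spec:
  template:                   # .spec.template is the only required field of .spec
    spec:
      containers:
      - name: hello
        image: busybox:1.36
        command: ["sh", "-c", "echo Hello from a Job"]
      restartPolicy: Never    # only Never or OnFailure are allowed for Jobs
EOF
```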
In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy. Only a RestartPolicy equal to Never or OnFailure is allowed. The .spec.selector field is optional. In almost all cases you should not specify it. See section specifying your own pod selector. There are three main types of task suitable to run as a Job: For a non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both are defaulted to 1. For a fixed completion count Job, you should set .spec.completions to the number of completions needed. You can set .spec.parallelism, or leave it unset and it will default to 1. For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer. For more information about how to make use of the different types of job, see the job patterns section. The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased. Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons: Jobs with fixed completion count - that is, jobs that have non null .spec.completions - can have a completion mode that is specified in .spec.completionMode: NonIndexed (default): the Job is considered complete when there have been .spec.completions successfully completed Pods. In other words, each Pod completion is homologous to each other. Note that Jobs that have null .spec.completions are implicitly NonIndexed. Indexed: the Pods of a Job get an associated completion index from 0 to .spec.completions-1. The index is available through four mechanisms: The Job is considered complete when there is one successfully completed Pod for each index. For more information about how to use this mode, see Indexed Job for Parallel Processing with Static Work Assignment. A container in a Pod may fail for a number of reasons, such as because the process in it exited with a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this happens, and the .spec.template.spec.restartPolicy = \"OnFailure\", then the Pod stays on the node, but the container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify .spec.template.spec.restartPolicy = \"Never\". See pod lifecycle for more information on restartPolicy. An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the .spec.template.spec.restartPolicy = \"Never\". When a Pod fails, then the Job controller starts a new Pod. This means that your application needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous" }, { "data": "By default, each pod failure is counted towards the .spec.backoffLimit limit, see pod backoff failure policy. However, you can customize handling of pod failures by setting the Job's pod failure policy. Additionally, you can choose to count the pod failures independently for each index of an Indexed Job by setting the .spec.backoffLimitPerIndex field (for more information, see backoff limit per index). 
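As a hedged illustration of the Indexed completion mode described above (the job name and image are placeholders), each Pod can read its own completion index from the JOB_COMPLETION_INDEX environment variable that Kubernetes injects for Indexed Jobs:

```
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 5              # five work items, indexes 0..4
  parallelism: 2              # at most two Pods run at once
  completionMode: Indexed     # one successful Pod required per index
  template:
    spec:
      containers:
      - name: worker
        image: busybox:1.36
        # JOB_COMPLETION_INDEX is set automatically for Indexed Jobs.
        command: ["sh", "-c", "echo processing work item $JOB_COMPLETION_INDEX"]
      restartPolicy: Never
EOF
```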
Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = \"Never\", the same program may sometimes be started twice. If you do specify .spec.parallelism and .spec.completions both greater than 1, then there may be multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. When the feature gates PodDisruptionConditions and JobPodFailurePolicy are both enabled, and the .spec.podFailurePolicy field is set, the Job controller does not consider a terminating Pod (a pod that has a .metadata.deletionTimestamp field set) as a failure until that Pod is terminal (its .status.phase is Failed or Succeeded). However, the Job controller creates a replacement Pod as soon as the termination becomes apparent. Once the pod terminates, the Job controller evaluates .backoffLimit and .podFailurePolicy for the relevant Job, taking this now-terminated Pod into consideration. If either of these requirements is not satisfied, the Job controller counts a terminating Pod as an immediate failure, even if that Pod later terminates with phase: \"Succeeded\". There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set .spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The number of retries is calculated in two ways: If either of the calculations reaches the .spec.backoffLimit, the Job is considered failed. When you run an indexed Job, you can choose to handle retries for pod failures independently for each index. To do so, set the .spec.backoffLimitPerIndex to specify the maximal number of pod failures per index. When the per-index backoff limit is exceeded for an index, Kubernetes considers the index as failed and adds it to the .status.failedIndexes field. The succeeded indexes, those with a successfully executed pods, are recorded in the .status.completedIndexes field, regardless of whether you set the backoffLimitPerIndex field. Note that a failing index does not interrupt execution of other indexes. Once all indexes finish for a Job where you specified a backoff limit per index, if at least one of those indexes did fail, the Job controller marks the overall Job as failed, by setting the Failed condition in the status. The Job gets marked as failed even if some, potentially nearly all, of the indexes were processed successfully. You can additionally limit the maximal number of indexes marked failed by setting the .spec.maxFailedIndexes field. When the number of failed indexes exceeds the maxFailedIndexes field, the Job controller triggers termination of all remaining running Pods for that Job. Once all pods are terminated, the entire Job is marked failed by the Job controller, by setting the Failed condition in the Job status. 
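To see how failures are being counted against these limits on a live Job, a couple of hedged kubectl one-liners (using the pi Job name from earlier as a stand-in) read the relevant status fields directly:

```
# Number of Pod failures counted so far (compared against .spec.backoffLimit).
kubectl get job pi -o jsonpath='{.status.failed}{"\n"}'

# If the Job has already failed, the terminal condition carries the reason,
# for example BackoffLimitExceeded or DeadlineExceeded.
kubectl get job pi -o jsonpath='{.status.conditions[?(@.type=="Failed")].reason}{"\n"}'
```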
Here is an example manifest for a Job that defines a backoffLimitPerIndex: ``` apiVersion: batch/v1 kind: Job metadata: name: job-backoff-limit-per-index-example spec: completions: 10 parallelism: 3 completionMode: Indexed # required for the feature backoffLimitPerIndex: 1 # maximal number of failures per index maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution template: spec: restartPolicy:" }, { "data": "# required for the feature containers: name: example image: python command: # The jobs fails as there is at least one failed index python3 -c | import os, sys print(\"Hello world\") if int(os.environ.get(\"JOBCOMPLETIONINDEX\")) % 2 == 0: sys.exit(1) ``` In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated. Once the job is finished, the Job status looks as follows: ``` kubectl get -o yaml job job-backoff-limit-per-index-example ``` ``` status: completedIndexes: 1,3,5,7,9 failedIndexes: 0,2,4,6,8 succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes conditions: message: Job has failed indexes reason: FailedIndexes status: \"True\" type: Failed ``` Additionally, you may want to use the per-index backoff along with a pod failure policy. When using per-index backoff, there is a new FailIndex action available which allows you to avoid unnecessary retries within an index. A Pod failure policy, defined with the .spec.podFailurePolicy field, enables your cluster to handle Pod failures based on the container exit codes and the Pod conditions. In some situations, you may want to have a better control when handling Pod failures than the control provided by the Pod backoff failure policy, which is based on the Job's .spec.backoffLimit. These are some examples of use cases: You can configure a Pod failure policy, in the .spec.podFailurePolicy field, to meet the above use cases. This policy can handle Pod failures based on the container exit codes and the Pod conditions. Here is a manifest for a Job that defines a podFailurePolicy: ``` apiVersion: batch/v1 kind: Job metadata: name: job-pod-failure-policy-example spec: completions: 12 parallelism: 3 template: spec: restartPolicy: Never containers: name: main image: docker.io/library/bash:5 command: [\"bash\"] # example command simulating a bug which triggers the FailJob action args: -c echo \"Hello world!\" && sleep 5 && exit 42 backoffLimit: 6 podFailurePolicy: rules: action: FailJob onExitCodes: containerName: main # optional operator: In # one of: In, NotIn values: [42] action: Ignore # one of: Ignore, FailJob, Count onPodConditions: type: DisruptionTarget # indicates Pod disruption ``` In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the main container specifically: The second rule of the Pod failure policy, specifying the Ignore action for failed Pods with condition DisruptionTarget excludes Pod disruptions from being counted towards the .spec.backoffLimit limit of retries. These are some requirements and semantics of the API: When creating an Indexed Job, you can define when a Job can be declared as succeeded using a .spec.successPolicy, based on the pods that succeeded. By default, a Job succeeds when the number of succeeded Pods equals .spec.completions. 
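Before moving on to success policies, here is a hedged sketch of the FailIndex action mentioned above. It assumes a cluster version where per-index backoff and the FailIndex pod failure policy action are available (both are feature-gated in older releases); the job name, image, and exit code 13 are illustrative only:

```
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: failindex-demo
spec:
  completions: 4
  parallelism: 2
  completionMode: Indexed     # required for per-index backoff
  backoffLimitPerIndex: 2
  template:
    spec:
      restartPolicy: Never    # required when a podFailurePolicy is used
      containers:
      - name: main
        image: busybox:1.36
        command: ["sh", "-c", "exit 13"]   # always fails, to exercise the rule
  podFailurePolicy:
    rules:
    - action: FailIndex       # mark this index failed immediately, skipping retries
      onExitCodes:
        containerName: main   # optional
        operator: In
        values: [13]
EOF
```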
These are some situations where you might want additional control for declaring a Job succeeded: You can configure a success policy, in the .spec.successPolicy field, to meet the above use cases. This policy can handle Job success based on the succeeded pods. After the Job meets the success policy, the job controller terminates the lingering Pods. A success policy is defined by rules. Each rule can take one of the following forms: Note that when you specify multiple rules in the .spec.successPolicy.rules, the job controller evaluates the rules in order. Once the Job meets a rule, the job controller ignores remaining" }, { "data": "Here is a manifest for a Job with successPolicy: ``` apiVersion: batch/v1 kind: Job metadata: name: job-success spec: parallelism: 10 completions: 10 completionMode: Indexed # Required for the success policy successPolicy: rules: succeededIndexes: 0,2-3 succeededCount: 1 template: spec: containers: name: main image: python command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded, python3 -c | import os, sys if os.environ.get(\"JOBCOMPLETIONINDEX\") == \"2\": sys.exit(0) else: sys.exit(1) restartPolicy: Never ``` In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the specified indexes, 0, 2, or 3, succeed. The Job that meets the success policy gets the SuccessCriteriaMet condition. After the removal of the lingering Pods is issued, the Job gets the Complete condition. Note that the succeededIndexes is represented as intervals separated by a hyphen. The number are listed in represented by the first and last element of the series, separated by a hyphen. When a Job completes, no more Pods are created, but the Pods are usually not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml). When you delete the job using kubectl, all the pods it created are deleted too. By default, a Job will run uninterrupted unless a Pod fails (restartPolicy=Never) or a Container exits in error (restartPolicy=OnFailure), at which point the Job defers to the .spec.backoffLimit described above. Once .spec.backoffLimit has been reached the Job will be marked as failed and any running Pods will be terminated. Another way to terminate a Job is by setting an active deadline. Do this by setting the .spec.activeDeadlineSeconds field of the Job to a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded. Note that a Job's .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached. 
Example: ``` apiVersion: batch/v1 kind: Job metadata: name: pi-with-timeout spec: backoffLimit: 5 activeDeadlineSeconds: 100 template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never ``` Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level. Keep in mind that the restartPolicy applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is type: Failed. That is, the Job termination mechanisms activated with .spec.activeDeadlineSeconds and .spec.backoffLimit result in a permanent Job failure that requires manual intervention to resolve. Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy. Another way to clean up finished Jobs (either Complete or Failed) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the" }, { "data": "field of the Job. When the TTL controller cleans up the Job, it will delete the Job cascadingly, i.e. delete its dependent objects, such as Pods, together with the Job. Note that when the Job is deleted, its lifecycle guarantees, such as finalizers, will be honored. For example: ``` apiVersion: batch/v1 kind: Job metadata: name: pi-with-ttl spec: ttlSecondsAfterFinished: 100 template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never ``` The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes. If the field is set to 0, the Job will be eligible to be automatically deleted immediately after it finishes. If the field is unset, this Job won't be cleaned up by the TTL controller after it finishes. It is recommended to set ttlSecondsAfterFinished field because unmanaged jobs (Jobs that you created directly, and not indirectly through other workload APIs such as CronJob) have a default deletion policy of orphanDependents causing Pods created by an unmanaged Job to be left around after that Job is fully deleted. Even though the control plane eventually garbage collects the Pods from a deleted Job after they either fail or complete, sometimes those lingering pods may cause cluster performance degradation or in worst case cause the cluster to go offline due to this degradation. You can use LimitRanges and ResourceQuotas to place a cap on the amount of resources that a particular namespace can consume. The Job object can be used to process a set of independent but related work items. These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a NoSQL database to scan, and so on. In a complex system, there may be multiple different sets of work items. Here we are just considering one set of work items that the user wants to manage together a batch job. There are several different patterns for parallel computation, each with strengths and weaknesses. The tradeoffs are: The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. The pattern names are also links to examples and more detailed description. | Pattern | Single Job object | Fewer pods than work items? 
| Use app unmodified? | |:-|:--|:|:-| | Queue with Pod Per Work Item | | nan | sometimes | | Queue with Variable Pod Count | | | nan | | Indexed Job with Static Work Assignment | | nan | | | Job with Pod-to-Pod Communication | | sometimes | sometimes | | Job Template Expansion | nan | nan | | When you specify completions with .spec.completions, each Pod created by the Job controller has an identical spec. This means that all pods for a task will have the same command line and the same image, the same volumes, and (almost) the same environment variables. These patterns are different ways to arrange for pods to work on different things. This table shows the required settings for .spec.parallelism and .spec.completions for each of the patterns. Here, W is the number of work items. | Pattern | .spec.completions |" }, { "data": "| |:-|:--|:--| | Queue with Pod Per Work Item | W | any | | Queue with Variable Pod Count | nan | any | | Indexed Job with Static Work Assignment | W | any | | Job with Pod-to-Pod Communication | W | W | | Job Template Expansion | 1 | should be 1 | When a Job is created, the Job controller will immediately begin creating Pods to satisfy the Job's requirements and will continue to do so until the Job is complete. However, you may want to temporarily suspend a Job's execution and resume it later, or start Jobs in suspended state and have a custom controller decide later when to start them. To suspend a Job, you can update the .spec.suspend field of the Job to true; later, when you want to resume it again, update it to false. Creating a Job with .spec.suspend set to true will create it in the suspended state. When a Job is resumed from suspension, its .status.startTime field will be reset to the current time. This means that the .spec.activeDeadlineSeconds timer will be stopped and reset when a Job is suspended and resumed. When you suspend a Job, any running Pods that don't have a status of Completed will be terminated with a SIGTERM signal. The Pod's graceful termination period will be honored and your Pod must handle this signal in this period. This may involve saving progress for later or undoing changes. Pods terminated this way will not count towards the Job's completions count. An example Job definition in the suspended state can be like so: ``` kubectl get job myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job metadata: name: myjob spec: suspend: true parallelism: 1 completions: 5 template: spec: ... ``` You can also toggle Job suspension by patching the Job using the command line. Suspend an active Job: ``` kubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}' ``` Resume a suspended Job: ``` kubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}' ``` The Job's status can be used to determine if a Job is suspended or has been suspended in the past: ``` kubectl get jobs/myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job status: conditions: lastProbeTime: \"2021-02-05T13:14:33Z\" lastTransitionTime: \"2021-02-05T13:14:33Z\" status: \"True\" type: Suspended startTime: \"2021-02-05T13:13:48Z\" ``` The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of that condition is \"False\", then the Job was previously suspended and is now running. If such a condition does not exist in the Job's status, the Job has never been stopped. 
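To pull just the suspension state out of that status rather than reading the full YAML, a hedged pair of jsonpath queries against the myjob example above looks like this:

```
# "True" means the Job is currently suspended; "False" means it was resumed.
kubectl get job myjob -o jsonpath='{.status.conditions[?(@.type=="Suspended")].status}{"\n"}'

# When the suspension state last changed, useful for judging how long it has been suspended.
kubectl get job myjob -o jsonpath='{.status.conditions[?(@.type=="Suspended")].lastTransitionTime}{"\n"}'
```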
Events are also created when the Job is suspended and resumed: ``` kubectl describe jobs/myjob ``` ``` Name: myjob ... Events: Type Reason Age From Message - - - Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl Normal Suspended 11m job-controller Job suspended Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44 Normal Resumed 3s job-controller Job resumed ``` The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were created, but Pod creation restarted as soon as the Job was resumed. In most cases, a parallel job will want the pods to run with constraints, like all in the same zone, or all either on GPU model x or y but not a mix of both. The suspend field is the first step towards achieving those semantics. Suspend allows a custom queue controller to decide when a job should start; However, once a job is unsuspended, a custom queue controller has no influence on where the pods of a job will actually land. This feature allows updating a Job's scheduling directives before it starts, which gives custom queue controllers the ability to influence pod placement while at the same time offloading actual pod-to-node assignment to" }, { "data": "This is allowed only for suspended Jobs that have never been unsuspended before. The fields in a Job's pod template that can be updated are node affinity, node selector, tolerations, labels, annotations and scheduling gates. Normally, when you create a Job object, you do not specify .spec.selector. The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs. However, in some cases, you might need to override this automatically set selector. To do this, you can specify the .spec.selector of the Job. Be very careful when doing this. If you specify a label selector which is not unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated job may be deleted, or this Job may count other Pods as completing it, or one or both Jobs may refuse to create Pods or run to completion. If a non-unique selector is chosen, then other controllers (e.g. ReplicationController) and their Pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake when specifying .spec.selector. Here is an example of a case when you might want to use this feature. Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=orphan. Before deleting it, you make a note of what selector it uses: ``` kubectl get job old -o yaml ``` The output is similar to this: ``` kind: Job metadata: name: old ... spec: selector: matchLabels: batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 ... ``` Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they are controlled by Job new as well. 
You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically. ``` kind: Job metadata: name: new ... spec: manualSelector: true selector: matchLabels: batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 ... ``` The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch. The control plane keeps track of the Pods that belong to any Job and notices if any such Pod is removed from the API server. To do that, the Job controller creates Pods with the finalizer batch.kubernetes.io/job-tracking. The controller removes the finalizer only after the Pod has been accounted for in the Job status, allowing the Pod to be removed by other controllers or users. You can scale Indexed Jobs up or down by mutating both .spec.parallelism and .spec.completions together such that .spec.parallelism == .spec.completions. When the ElasticIndexedJobfeature gate on the API server is disabled, .spec.completions is immutable. Use cases for elastic Indexed Jobs include batch workloads which require scaling an indexed Job, such as MPI, Horovord, Ray, and PyTorch training jobs. By default, the Job controller recreates Pods as soon they either fail or are terminating (have a deletion timestamp). This means that, at a given time, when some of the Pods are terminating, the number of running Pods for a Job can be greater than parallelism or greater than one Pod per index (if you are using an Indexed" }, { "data": "You may choose to create replacement Pods only when the terminating Pod is fully terminal (has status.phase: Failed). To do this, set the .spec.podReplacementPolicy: Failed. The default replacement policy depends on whether the Job has a podFailurePolicy set. With no Pod failure policy defined for a Job, omitting the podReplacementPolicy field selects the TerminatingOrFailed replacement policy: the control plane creates replacement Pods immediately upon Pod deletion (as soon as the control plane sees that a Pod for this Job has deletionTimestamp set). For Jobs with a Pod failure policy set, the default podReplacementPolicy is Failed, and no other value is permitted. See Pod failure policy to learn more about Pod failure policies for Jobs. ``` kind: Job metadata: name: new ... spec: podReplacementPolicy: Failed ... ``` Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating. ``` kubectl get jobs/myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job status: terminating: 3 # three Pods are terminating and have not yet reached the Failed phase ``` This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller. You indicate the controller that reconciles the Job by setting a custom value for the spec.managedBy field - any value other than kubernetes.io/job-controller. The value of the field is immutable. When developing an external Job controller be aware that your controller needs to operate in a fashion conformant with the definitions of the API spec and status fields of the Job object. Please review these in detail in the Job API. We also recommend that you run the e2e conformance tests for the Job object to verify your implementation. 
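As a hedged sketch of the delegation described above (the field is feature-gated on current releases, and the example.com/custom-job-controller value below is a made-up placeholder rather than a real controller), a Job handed to an external controller simply sets spec.managedBy to something other than kubernetes.io/job-controller:

```
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: delegated-job
spec:
  # Any value other than kubernetes.io/job-controller tells the built-in
  # controller to leave this Job alone; the value is immutable after creation.
  managedBy: example.com/custom-job-controller
  template:
    spec:
      containers:
      - name: main
        image: busybox:1.36
        command: ["sh", "-c", "echo reconciled by an external controller"]
      restartPolicy: Never
EOF
```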
Finally, when developing an external Job controller make sure it does not use the batch.kubernetes.io/job-tracking finalizer, reserved for the built-in controller. When the node that a Pod is running on reboots or fails, the pod is terminated and will not be restarted. However, a Job will create new Pods to replace terminated ones. For this reason, we recommend that you use a Job rather than a bare Pod, even if your application requires only a single Pod. Jobs are complementary to Replication Controllers. A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job manages Pods that are expected to terminate (e.g. batch tasks). As discussed in Pod Lifecycle, Job is only appropriate for pods with RestartPolicy equal to OnFailure or Never. (Note: If RestartPolicy is not set, the default value is Always.) Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort of custom controller for those Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes. One example of this pattern would be a Job which starts a Pod which runs a script that in turn starts a Spark master controller (see spark example), runs a spark driver, and then cleans up. An advantage of this approach is that the overall process gets the completion guarantee of a Job object, but maintains complete control over what Pods are created and how work is assigned to them." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Kubernetes", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (CNCF). Learn about Kubernetes and its fundamental concepts. Follow tutorials to learn how to deploy applications in Kubernetes. Get Kubernetes running based on your resources and needs. Look up common tasks and how to perform them using a short sequence of steps. Browse terminology, command line syntax, API resource types, and setup tool documentation. Find out how you can help make Kubernetes better. Get certified in Kubernetes and make your cloud native projects successful! Install Kubernetes or upgrade to the newest version. This website contains documentation for the current and previous 4 versions of Kubernetes." } ]
{ "category": "Orchestration & Management", "file_name": "api-docs.md", "project_name": "Nomad", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Nomad exposes a RESTful HTTP API to control almost every aspect of the Nomad agent. The main interface to Nomad is a RESTful HTTP API. The API can query the current state of the system as well as modify the state of the system. The Nomad CLI actually invokes Nomad's HTTP for many commands. All API routes are prefixed with /v1/. Nomad binds to a specific set of addresses and ports. The HTTP API is served via the http address and port. This address:port must be accessible locally. If you bind to 127.0.0.1:4646, the API is only available from that host. If you bind to a private internal IP, the API will be available from within that network. If you bind to a public IP, the API will be available from the public Internet (not recommended). The default port for the Nomad HTTP API is 4646. This can be overridden via the Nomad configuration block. Here is an example curl request to query a Nomad server with the default configuration: ``` $ curl http://127.0.0.1:4646/v1/agent/members``` The conventions used in the API documentation do not list a port and use the standard URL localhost:4646. Be sure to replace this with your Nomad agent URL when using the examples. There are five primary nouns in Nomad: Jobs are submitted by users and represent a desired state. A job is a declarative description of tasks to run which are bounded by constraints and require resources. Jobs can also have affinities which are used to express placement preferences. Nodes are the servers in the clusters that tasks can be scheduled on. The mapping of tasks in a job to nodes is done using allocations. An allocation is used to declare that a set of tasks in a job should be run on a particular node. Scheduling is the process of determining the appropriate allocations and is done as part of an evaluation. Deployments are objects to track a rolling update of allocations between two versions of a job. The API is modeled closely on the underlying data model. Use the links to the left for documentation about specific endpoints. There are also \"Agent\" APIs which interact with a specific agent and not the broader cluster used for administration. Several endpoints in Nomad use or require ACL tokens to operate. The token are used to authenticate the request and determine if the request is allowed based on the associated authorizations. Tokens are specified per-request by using the X-Nomad-Token request header or with the Bearer scheme in the authorization header set to the SecretID of an ACL Token. For more details about ACLs, please see the ACL Guide. When ACLs are enabled, a Nomad token should be provided to API requests using the X-Nomad-Token header or with the Bearer scheme in the authorization header. When using authentication, clients should communicate via TLS. Here is an example using curl with X-Nomad-Token: ``` $ curl \\ --header \"X-Nomad-Token: aa534e09-6a07-0a45-2295-a7f77063d429\" \\ https://localhost:4646/v1/jobs``` Below is an example using curl with a RFC6750 Bearer token: ``` $ curl \\ --header \"Authorization: Bearer <token>\" \\ http://localhost:4646/v1/jobs``` Nomad has support for namespaces, which allow jobs and their associated objects to be segmented from each other and other users of the cluster. When using non-default namespace, the API request must pass the target namespace as an API query parameter. 
Prior to Nomad 1.0 namespaces were" }, { "data": "Here is an example using curl to query the qa namespace: ``` $ curl 'localhost:4646/v1/jobs?namespace=qa'``` Use a wildcard (*) to query all namespaces: ``` $ curl 'localhost:4646/v1/jobs?namespace=*'``` Filter expressions refine data queries for some API listing endpoints, as notated in the individual API endpoints documentation. To create a filter expression, you will write one or more expressions. Each expression has matching operators composed of selectors and values. Filtering is executed on the Nomad server, before data is returned, reducing the network load. To pass a filter expression to Nomad, use the filter query parameter with the URL encoded expression when sending requests to HTTP API endpoints that support it. ``` $ curl --get https://localhost:4646/v1/<path> \\ --data-urlencode 'filter=<filter expression>'``` The filter expression can also be specified in the -filter flag of the nomad operator api command. ``` $ nomad operator api -filter '<filter expression>' /v1/<path>``` Some endpoints may have other query parameters that are used for filtering, but they can't be used with the filter query parameter. Doing so will result in a 400 status error response. These query parameters are usually backed by a database index, so they may be prefereable over an equivalent simple filter expression due to better resource usage and performance. Some list endpoints return a reduced version of the resource being queried. This smaller version is called a stub and may have different fields than the full resource definition. To allow more expressive filtering operations, the filter is applied to the full version, not the stub. If a request returns an error such as error finding value in datum the field used in filter expression may need to be adjusted. For example, filtering on node addresses should use the HTTPAddr field of the full node definition instead of Address field present in the stub. ``` $ nomad operator api -filter 'HTTPAddr matches \"10.0.0..+\"' /v1/nodes``` A single expression is a matching operator with a selector and value and they are written in plain text format. Boolean logic and parenthesization are supported. In general, whitespace is ignored, except within literal strings. All matching operators use a selector or value to choose what data should be matched. Each endpoint that supports filtering accepts a potentially different list of selectors and is detailed in the API documentation for those endpoints. ``` // Equality & Inequality checks<Selector> == \"<Value>\"<Selector> != \"<Value>\"// Emptiness checks<Selector> is empty<Selector> is not empty// Contains checks or Substring Matching\"<Value>\" in <Selector>\"<Value>\" not in <Selector><Selector> contains \"<Value>\"<Selector> not contains \"<Value>\"// Regular Expression Matching<Selector> matches \"<Value>\"<Selector> not matches \"<Value>\"``` Selectors are used by matching operators to create an expression. They are defined by a . separated list of names. Each name must start with an ASCII letter and can contain ASCII letters, numbers, and underscores. When part of the selector references a map value it may be expressed using the form [\"<map key name>\"] instead of .<map key name>. This allows the possibility of using map keys that are not valid selectors in and of themselves. 
``` // selects the `cache` key within the `TaskGroups` mapping for the// /v1/deployments endpointTaskGroups.cache// Also selects the `cache` key for the same endpointTaskGroups[\"cache\"]``` Values are used by matching operators to create an expression. Values can be any valid selector, a number, or a string. It is best practice to quote values. Numbers can be base 10 integers or floating point numbers. When quoting strings, they may either be enclosed in double quotes or" }, { "data": "When enclosed in backticks they are treated as raw strings and escape sequences such as \\n will not be expanded. There are several methods for connecting expressions, including: ``` // Logical Or - evaluates to true if either sub-expression does<Expression 1> or <Expression 2>// Logical And - evaluates to true if both sub-expressions do<Expression 1 > and <Expression 2>// Logical Not - evaluates to true if the sub-expression does notnot <Expression 1>// Grouping - Overrides normal precedence rules( <Expression 1> )// Inspects data to check for a match<Matching Expression 1>``` Standard operator precedence can be expected for the various forms. For example, the following two expressions would be equivalent. ``` <Expression 1> and not <Expression 2> or <Expression 3>( <Expression 1> and (not <Expression 2> )) or <Expression 3>``` Generally, only the main object is filtered. When filtering for an item within an array that is not at the top level, the entire array that contains the item will be returned. This is usually the outermost object of a response, but in some cases the filtering is performed on a object embedded within the results. Filters are executed on the servers and therefore will consume some amount of CPU time on the server. For non-stale queries this means that the filter is executed on the" }, { "data": "Command (Unfiltered) ``` $ nomad operator api /v1/jobs``` Response (Unfiltered) ``` [ { \"CreateIndex\": 52, \"Datacenters\": [ \"dc1\", \"dc2\" ], \"ID\": \"countdash\", \"JobModifyIndex\": 56, \"JobSummary\": { \"Children\": { \"Dead\": 0, \"Pending\": 0, \"Running\": 0 }, \"CreateIndex\": 52, \"JobID\": \"countdash\", \"ModifyIndex\": 55, \"Namespace\": \"default\", \"Summary\": { \"api\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 }, \"dashboard\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 } } }, \"ModifyIndex\": 56, \"Multiregion\": null, \"Name\": \"countdash\", \"Namespace\": \"default\", \"ParameterizedJob\": false, \"ParentID\": \"\", \"Periodic\": false, \"Priority\": 50, \"Status\": \"pending\", \"StatusDescription\": \"\", \"Stop\": false, \"SubmitTime\": 1645230445788556000, \"Type\": \"service\" }, { \"CreateIndex\": 42, \"Datacenters\": [ \"dc1\" ], \"ID\": \"example\", \"JobModifyIndex\": 42, \"JobSummary\": { \"Children\": { \"Dead\": 0, \"Pending\": 0, \"Running\": 0 }, \"CreateIndex\": 42, \"JobID\": \"example\", \"ModifyIndex\": 46, \"Namespace\": \"default\", \"Summary\": { \"cache\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 0, \"Running\": 1, \"Starting\": 0 } } }, \"ModifyIndex\": 49, \"Multiregion\": null, \"Name\": \"example\", \"Namespace\": \"default\", \"ParameterizedJob\": false, \"ParentID\": \"\", \"Periodic\": false, \"Priority\": 50, \"Status\": \"running\", \"StatusDescription\": \"\", \"Stop\": false, \"SubmitTime\": 1645230403921889000, \"Type\": \"service\" }]``` Command (Filtered) ``` $ nomad operator api -filter 'Datacenters contains 
\"dc2\"' /v1/jobs``` Response (Filtered) ``` [ { \"CreateIndex\": 52, \"Datacenters\": [ \"dc1\", \"dc2\" ], \"ID\": \"countdash\", \"JobModifyIndex\": 56, \"JobSummary\": { \"Children\": { \"Dead\": 0, \"Pending\": 0, \"Running\": 0 }, \"CreateIndex\": 52, \"JobID\": \"countdash\", \"ModifyIndex\": 55, \"Namespace\": \"default\", \"Summary\": { \"api\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 }, \"dashboard\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 } } }, \"ModifyIndex\": 56, \"Multiregion\": null, \"Name\": \"countdash\", \"Namespace\": \"default\", \"ParameterizedJob\": false, \"ParentID\": \"\", \"Periodic\": false, \"Priority\": 50, \"Status\": \"pending\", \"StatusDescription\": \"\", \"Stop\": false, \"SubmitTime\": 1645230445788556000, \"Type\": \"service\" }]``` Command (Unfiltered) ``` $ nomad operator api /v1/deployments``` Response (Unfiltered) ``` [ { \"CreateIndex\": 54, \"EvalPriority\": 50, \"ID\": \"58fd0616-ce64-d14b-6917-03d0ab5af67e\", \"IsMultiregion\": false, \"JobCreateIndex\": 52, \"JobID\": \"countdash\", \"JobModifyIndex\": 52, \"JobSpecModifyIndex\": 52, \"JobVersion\": 0, \"ModifyIndex\": 59, \"Namespace\": \"default\", \"Status\": \"cancelled\", \"StatusDescription\": \"Cancelled due to newer version of job\", \"TaskGroups\": { \"dashboard\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 }, \"api\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 } } }, { \"CreateIndex\": 43, \"EvalPriority\": 50, \"ID\": \"1f18b48c-b33b-8e96-5640-71e3f3000242\", \"IsMultiregion\": false, \"JobCreateIndex\": 42, \"JobID\": \"example\", \"JobModifyIndex\": 42, \"JobSpecModifyIndex\": 42, \"JobVersion\": 0, \"ModifyIndex\": 49, \"Namespace\": \"default\", \"Status\": \"successful\", \"StatusDescription\": \"Deployment completed successfully\", \"TaskGroups\": { \"cache\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 1, \"PlacedAllocs\": 1, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": \"2022-02-18T19:36:54.421823-05:00\", \"UnhealthyAllocs\": 0 } } }]``` Command (Filtered) ``` $ nomad operator api -filter 'Status != \"successful\"' /v1/deployments``` Response (Filtered) ``` [ { \"CreateIndex\": 54, \"EvalPriority\": 50, \"ID\": \"58fd0616-ce64-d14b-6917-03d0ab5af67e\", \"IsMultiregion\": false, \"JobCreateIndex\": 52, \"JobID\": \"countdash\", \"JobModifyIndex\": 52, \"JobSpecModifyIndex\": 52, \"JobVersion\": 0, \"ModifyIndex\": 59, \"Namespace\": \"default\", \"Status\": \"cancelled\", \"StatusDescription\": \"Cancelled due to newer version of job\", \"TaskGroups\": { \"dashboard\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 }, \"api\": { \"AutoPromote\": false, \"AutoRevert\": 
false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 } } }]``` Some list endpoints support partial results to limit the amount of data retrieved. The returned list is split into pages and the page size can be set using the per_page query parameter with a positive integer value. If more data is available past the page requested, the response will contain an HTTP header named X-Nomad-Nexttoken with the value of the next item to be retrieved. This value can then be set as a query parameter called next_token in a follow-up request to retrieve the next page. When the last page is reached, the X-Nomad-Nexttoken HTTP header will not be present in the response, indicating that there is nothing more to return. List results are usually returned in ascending order by their internal key, such as their ID. Some endpoints may return data sorted by their CreateIndex value, which roughly corelates to their creation order. The result order may be reversed using the reverse=true query parameter when supported by the endpoint. Many endpoints in Nomad support a feature known as \"blocking queries\". A blocking query is used to wait for a potential change using long polling. Not all endpoints support blocking, but each endpoint uniquely documents its support for blocking queries in the documentation. Endpoints that support blocking queries return an HTTP header named X-Nomad-Index. This is a unique identifier representing the current state of the requested resource. On a new Nomad cluster the value of this index starts at 1. On subsequent requests for this resource, the client can set the index query string parameter to the value of X-Nomad-Index, indicating that the client wishes to wait for any changes subsequent to that index. When this is provided, the HTTP request will \"hang\" until a change in the system occurs, or the maximum timeout is reached. A critical note is that the return of a blocking request is no guarantee of a change. It is possible that the timeout was reached or that there was an idempotent write that does not affect the result of the query. In addition to index, endpoints that support blocking will also honor a wait parameter specifying a maximum duration for the blocking request. This is limited to 10 minutes. If not set, the wait time defaults to 5" }, { "data": "This value can be specified in the form of \"10s\" or \"5m\" (i.e., 10 seconds or 5 minutes, respectively). A small random amount of additional wait time is added to the supplied maximum wait time to spread out the wake up time of any concurrent requests. This adds up to wait / 16 additional time to the maximum duration. Most of the read query endpoints support multiple levels of consistency. Since no policy will suit all clients' needs, these consistency modes allow the user to have the ultimate say in how to balance the trade-offs inherent in a distributed system. The two read modes are: default - If not specified, the default is strongly consistent in almost all cases. However, there is a small window in which a new leader may be elected during which the old leader may service stale values. The trade-off is fast reads but potentially stale values. The condition resulting in stale reads is hard to trigger, and most clients should not need to worry about this case. Also, note that this race condition only applies to reads, not writes. 
stale - This mode allows any server to service the read regardless of whether it is the leader. This means reads can be arbitrarily stale; however, results are generally consistent to within 50 milliseconds of the leader. The trade-off is very fast and scalable reads with a higher likelihood of stale values. Since this mode allows reads without a leader, a cluster that is unavailable will still be able to respond to queries. To switch these modes, use the stale query parameter on requests. To support bounding the acceptable staleness of data, responses provide the X-Nomad-LastContact header containing the time in milliseconds that a server was last contacted by the leader node. The X-Nomad-KnownLeader header also indicates if there is a known leader. These can be used by clients to gauge the staleness of a result and take appropriate action. By default, any request to the HTTP API will default to the region on which the machine is servicing the request. If the agent runs in \"region1\", the request will query the region \"region1\". A target region can be explicitly request using the ?region query parameter. The request will be transparently forwarded and serviced by a server in the requested region. The HTTP API will gzip the response if the HTTP request denotes that the client accepts gzip compression. This is achieved by passing the accept encoding: ``` $ curl \\ --header \"Accept-Encoding: gzip\" \\ https://localhost:4646/v1/...``` By default, the output of all HTTP API requests is minimized JSON. If the client passes pretty on the query string, formatted JSON will be returned. In general, clients should prefer a client-side parser like jq instead of server-formatted data. Asking the server to format the data takes away processing cycles from more important tasks. ``` $ curl https://localhost:4646/v1/page?pretty``` Nomad's API aims to be RESTful, although there are some exceptions. The API responds to the standard HTTP verbs GET, PUT, and DELETE. Each API method will clearly document the verb(s) it responds to and the generated response. The same path with different verbs may trigger different behavior. For example: ``` PUT /v1/jobsGET /v1/jobs``` Even though these share a path, the PUT operation creates a new job whereas the GET operation reads all jobs. Individual API's will contain further documentation in the case that more specific response codes are returned but all clients should handle the following: On this page:" } ]
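Tying the blocking-query and consistency parameters above together, a hedged sketch (the endpoint and index value are illustrative) first captures the current X-Nomad-Index and then issues a long poll that a non-leader server is allowed to answer:

```
# Include response headers so the X-Nomad-Index value is visible.
curl --include "http://localhost:4646/v1/jobs"

# Long poll: hang until the jobs list changes past index 42, waiting at most
# 2 minutes, and pass the stale flag so any server may respond.
curl "http://localhost:4646/v1/jobs?index=42&wait=2m&stale"
```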
{ "category": "Orchestration & Management", "file_name": "alternative.md", "project_name": "Nomad", "subcategory": "Scheduling & Orchestration" }
[ { "data": "While known for its goal of automating and simplifying application deployment, an orchestrator itself can be extremely complex to implement and manage. Kubernetes requires significant time and deep understanding to deploy, operate, and troubleshoot. Teams and organizations choose Nomad as an alternative to Kubernetes for its two core strengths: Simplicity in usage and maintainability Flexibility to deploy and manage containerized and non-containerized applications Operating as a single lightweight binary, Nomad excels on-premises and at the edge, providing the same ease-of-use as it does in the cloud. Its architectural simplicity, native federation capabilities, and operator-friendly design enable companies to scale and manage an orchestrator with little operational overhead. Our customer interviews show that no matter company size, teams and organizations gain benefits from simplicity and flexibility, including Fast time to production: Average 1-3 weeks to get Nomad from a technical proof of concept into production Rapid adoption: From 2 hours to less than 30 minutes to onboard a developer to directly deploy applications on Nomad Great operational efficiency: Allows operation teams to stay lean (1-4 people) to service hundreds of developers and applications, and achieve high uptime with a self-hosted orchestrator Smooth path to migration: Allows teams to incrementally migrate or containerize existing applications at their own pace with a single, unified deployment workflow Nomad is easy to cluster up. We converted our Kubernetes deployment manifest to Nomad job files, then tested it. And since its a single binary, its simple to configure to our specific needs, which eliminates much of the complexity we faced with Kubernetes. More importantly, Nomads agnostic infrastructure resource pool and automated workflows let us deploy and manage our containers and apps across on-prem and any private or public cloud environment, which dramatically expands our datacenter options while still meeting our data residency obligations. AmpleOrganics, Canada's #1 cannabis software company migrated off of Kubernetes to Nomad Im a complete beginner when it comes to distributed computing and" }, { "data": "Nomad virtually eliminates barriers to entry for developers who dont have cloud computing expertise and makes it really easy to connect to the cluster, configure it, and run my jobs while having full visibility into the jobs status so I can restart them if need be. Autodesk, Autodesk Research built a scalable, maintenance-free, and multi-cloud orchestration workflow with Nomad We have people who are first-time system administrators deploying applications, building containers, maintaining Nomad. There is a guy on our team who worked in the IT help desk for eight years just today he upgraded an entire cluster himself. Thats the value proposition that I hope people understand. People seem to get stuck on I need to run Kubernetes because my friend runs it but do you really use it? Can you operate it at the level thats needed? Roblox, the top online gaming company built a global gaming platform with Nomad serving more than 150 million players A large portion of our applications are Windows-based, so we need both Windows and Linux support. Although we do prefer running containers, we dont necessarily want a hard requirement to have to use them and we like the idea of directly running applications on VMs if the use case calls for it. 
We wanted to make improvements to the current workflow without massive and time-consuming application rewrites. Ultimately we chose Nomad because it met all of our requirements and made the most sense for our environments. Q2, the e-banking platform that serves 10% of the digital banking customers in America, transformed its deployment workflow with Nomad While Nomad's strengths lie in simplicity and flexibility in core scheduling, Kubernetes excels over Nomad in terms of ecosystem. The goal for the Nomad ecosystem is to build a simpler, leaner, and more prescriptive path to that ecosystem. This is an area of improvement for Nomad that we invest in significantly today and plan to continue investing in. Read more about Nomad's ecosystem." } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Nomad", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Nomad exposes a RESTful HTTP API to control almost every aspect of the Nomad agent. The main interface to Nomad is a RESTful HTTP API. The API can query the current state of the system as well as modify the state of the system. The Nomad CLI actually invokes Nomad's HTTP for many commands. All API routes are prefixed with /v1/. Nomad binds to a specific set of addresses and ports. The HTTP API is served via the http address and port. This address:port must be accessible locally. If you bind to 127.0.0.1:4646, the API is only available from that host. If you bind to a private internal IP, the API will be available from within that network. If you bind to a public IP, the API will be available from the public Internet (not recommended). The default port for the Nomad HTTP API is 4646. This can be overridden via the Nomad configuration block. Here is an example curl request to query a Nomad server with the default configuration: ``` $ curl http://127.0.0.1:4646/v1/agent/members``` The conventions used in the API documentation do not list a port and use the standard URL localhost:4646. Be sure to replace this with your Nomad agent URL when using the examples. There are five primary nouns in Nomad: Jobs are submitted by users and represent a desired state. A job is a declarative description of tasks to run which are bounded by constraints and require resources. Jobs can also have affinities which are used to express placement preferences. Nodes are the servers in the clusters that tasks can be scheduled on. The mapping of tasks in a job to nodes is done using allocations. An allocation is used to declare that a set of tasks in a job should be run on a particular node. Scheduling is the process of determining the appropriate allocations and is done as part of an evaluation. Deployments are objects to track a rolling update of allocations between two versions of a job. The API is modeled closely on the underlying data model. Use the links to the left for documentation about specific endpoints. There are also \"Agent\" APIs which interact with a specific agent and not the broader cluster used for administration. Several endpoints in Nomad use or require ACL tokens to operate. The token are used to authenticate the request and determine if the request is allowed based on the associated authorizations. Tokens are specified per-request by using the X-Nomad-Token request header or with the Bearer scheme in the authorization header set to the SecretID of an ACL Token. For more details about ACLs, please see the ACL Guide. When ACLs are enabled, a Nomad token should be provided to API requests using the X-Nomad-Token header or with the Bearer scheme in the authorization header. When using authentication, clients should communicate via TLS. Here is an example using curl with X-Nomad-Token: ``` $ curl \\ --header \"X-Nomad-Token: aa534e09-6a07-0a45-2295-a7f77063d429\" \\ https://localhost:4646/v1/jobs``` Below is an example using curl with a RFC6750 Bearer token: ``` $ curl \\ --header \"Authorization: Bearer <token>\" \\ http://localhost:4646/v1/jobs``` Nomad has support for namespaces, which allow jobs and their associated objects to be segmented from each other and other users of the cluster. When using non-default namespace, the API request must pass the target namespace as an API query parameter. 
Prior to Nomad 1.0 namespaces were" }, { "data": "Here is an example using curl to query the qa namespace: ``` $ curl 'localhost:4646/v1/jobs?namespace=qa'``` Use a wildcard (*) to query all namespaces: ``` $ curl 'localhost:4646/v1/jobs?namespace=*'``` Filter expressions refine data queries for some API listing endpoints, as noted in the individual API endpoint documentation. To create a filter expression, you write one or more expressions, each composed of matching operators, selectors, and values. Filtering is executed on the Nomad server, before data is returned, reducing the network load. To pass a filter expression to Nomad, use the filter query parameter with the URL-encoded expression when sending requests to HTTP API endpoints that support it. ``` $ curl --get https://localhost:4646/v1/<path> \\ --data-urlencode 'filter=<filter expression>'``` The filter expression can also be specified in the -filter flag of the nomad operator api command. ``` $ nomad operator api -filter '<filter expression>' /v1/<path>``` Some endpoints may have other query parameters that are used for filtering, but they can't be used with the filter query parameter. Doing so will result in a 400 status error response. These query parameters are usually backed by a database index, so they may be preferable to an equivalent simple filter expression due to better resource usage and performance. Some list endpoints return a reduced version of the resource being queried. This smaller version is called a stub and may have different fields than the full resource definition. To allow more expressive filtering operations, the filter is applied to the full version, not the stub. If a request returns an error such as error finding value in datum, the field used in the filter expression may need to be adjusted. For example, filtering on node addresses should use the HTTPAddr field of the full node definition instead of the Address field present in the stub. ``` $ nomad operator api -filter 'HTTPAddr matches \"10.0.0..+\"' /v1/nodes``` A single expression is a matching operator with a selector and value, written in plain text format. Boolean logic and parenthesization are supported. In general, whitespace is ignored, except within literal strings. All matching operators use a selector or value to choose what data should be matched. Each endpoint that supports filtering accepts a potentially different list of selectors and is detailed in the API documentation for those endpoints. ``` // Equality & Inequality checks<Selector> == \"<Value>\"<Selector> != \"<Value>\"// Emptiness checks<Selector> is empty<Selector> is not empty// Contains checks or Substring Matching\"<Value>\" in <Selector>\"<Value>\" not in <Selector><Selector> contains \"<Value>\"<Selector> not contains \"<Value>\"// Regular Expression Matching<Selector> matches \"<Value>\"<Selector> not matches \"<Value>\"``` Selectors are used by matching operators to create an expression. They are defined by a .-separated list of names. Each name must start with an ASCII letter and can contain ASCII letters, numbers, and underscores. When part of the selector references a map value it may be expressed using the form [\"<map key name>\"] instead of .<map key name>. This allows the possibility of using map keys that are not valid selectors in and of themselves.
``` // selects the `cache` key within the `TaskGroups` mapping for the// /v1/deployments endpointTaskGroups.cache// Also selects the `cache` key for the same endpointTaskGroups[\"cache\"]``` Values are used by matching operators to create an expression. Values can be any valid selector, a number, or a string. It is best practice to quote values. Numbers can be base 10 integers or floating point numbers. When quoting strings, they may either be enclosed in double quotes or" }, { "data": "When enclosed in backticks they are treated as raw strings and escape sequences such as \\n will not be expanded. There are several methods for connecting expressions, including: ``` // Logical Or - evaluates to true if either sub-expression does<Expression 1> or <Expression 2>// Logical And - evaluates to true if both sub-expressions do<Expression 1 > and <Expression 2>// Logical Not - evaluates to true if the sub-expression does notnot <Expression 1>// Grouping - Overrides normal precedence rules( <Expression 1> )// Inspects data to check for a match<Matching Expression 1>``` Standard operator precedence can be expected for the various forms. For example, the following two expressions would be equivalent. ``` <Expression 1> and not <Expression 2> or <Expression 3>( <Expression 1> and (not <Expression 2> )) or <Expression 3>``` Generally, only the main object is filtered. When filtering for an item within an array that is not at the top level, the entire array that contains the item will be returned. This is usually the outermost object of a response, but in some cases the filtering is performed on a object embedded within the results. Filters are executed on the servers and therefore will consume some amount of CPU time on the server. For non-stale queries this means that the filter is executed on the" }, { "data": "Command (Unfiltered) ``` $ nomad operator api /v1/jobs``` Response (Unfiltered) ``` [ { \"CreateIndex\": 52, \"Datacenters\": [ \"dc1\", \"dc2\" ], \"ID\": \"countdash\", \"JobModifyIndex\": 56, \"JobSummary\": { \"Children\": { \"Dead\": 0, \"Pending\": 0, \"Running\": 0 }, \"CreateIndex\": 52, \"JobID\": \"countdash\", \"ModifyIndex\": 55, \"Namespace\": \"default\", \"Summary\": { \"api\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 }, \"dashboard\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 } } }, \"ModifyIndex\": 56, \"Multiregion\": null, \"Name\": \"countdash\", \"Namespace\": \"default\", \"ParameterizedJob\": false, \"ParentID\": \"\", \"Periodic\": false, \"Priority\": 50, \"Status\": \"pending\", \"StatusDescription\": \"\", \"Stop\": false, \"SubmitTime\": 1645230445788556000, \"Type\": \"service\" }, { \"CreateIndex\": 42, \"Datacenters\": [ \"dc1\" ], \"ID\": \"example\", \"JobModifyIndex\": 42, \"JobSummary\": { \"Children\": { \"Dead\": 0, \"Pending\": 0, \"Running\": 0 }, \"CreateIndex\": 42, \"JobID\": \"example\", \"ModifyIndex\": 46, \"Namespace\": \"default\", \"Summary\": { \"cache\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 0, \"Running\": 1, \"Starting\": 0 } } }, \"ModifyIndex\": 49, \"Multiregion\": null, \"Name\": \"example\", \"Namespace\": \"default\", \"ParameterizedJob\": false, \"ParentID\": \"\", \"Periodic\": false, \"Priority\": 50, \"Status\": \"running\", \"StatusDescription\": \"\", \"Stop\": false, \"SubmitTime\": 1645230403921889000, \"Type\": \"service\" }]``` Command (Filtered) ``` $ nomad operator api -filter 'Datacenters contains 
\"dc2\"' /v1/jobs``` Response (Filtered) ``` [ { \"CreateIndex\": 52, \"Datacenters\": [ \"dc1\", \"dc2\" ], \"ID\": \"countdash\", \"JobModifyIndex\": 56, \"JobSummary\": { \"Children\": { \"Dead\": 0, \"Pending\": 0, \"Running\": 0 }, \"CreateIndex\": 52, \"JobID\": \"countdash\", \"ModifyIndex\": 55, \"Namespace\": \"default\", \"Summary\": { \"api\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 }, \"dashboard\": { \"Complete\": 0, \"Failed\": 0, \"Lost\": 0, \"Queued\": 1, \"Running\": 0, \"Starting\": 0 } } }, \"ModifyIndex\": 56, \"Multiregion\": null, \"Name\": \"countdash\", \"Namespace\": \"default\", \"ParameterizedJob\": false, \"ParentID\": \"\", \"Periodic\": false, \"Priority\": 50, \"Status\": \"pending\", \"StatusDescription\": \"\", \"Stop\": false, \"SubmitTime\": 1645230445788556000, \"Type\": \"service\" }]``` Command (Unfiltered) ``` $ nomad operator api /v1/deployments``` Response (Unfiltered) ``` [ { \"CreateIndex\": 54, \"EvalPriority\": 50, \"ID\": \"58fd0616-ce64-d14b-6917-03d0ab5af67e\", \"IsMultiregion\": false, \"JobCreateIndex\": 52, \"JobID\": \"countdash\", \"JobModifyIndex\": 52, \"JobSpecModifyIndex\": 52, \"JobVersion\": 0, \"ModifyIndex\": 59, \"Namespace\": \"default\", \"Status\": \"cancelled\", \"StatusDescription\": \"Cancelled due to newer version of job\", \"TaskGroups\": { \"dashboard\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 }, \"api\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 } } }, { \"CreateIndex\": 43, \"EvalPriority\": 50, \"ID\": \"1f18b48c-b33b-8e96-5640-71e3f3000242\", \"IsMultiregion\": false, \"JobCreateIndex\": 42, \"JobID\": \"example\", \"JobModifyIndex\": 42, \"JobSpecModifyIndex\": 42, \"JobVersion\": 0, \"ModifyIndex\": 49, \"Namespace\": \"default\", \"Status\": \"successful\", \"StatusDescription\": \"Deployment completed successfully\", \"TaskGroups\": { \"cache\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 1, \"PlacedAllocs\": 1, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": \"2022-02-18T19:36:54.421823-05:00\", \"UnhealthyAllocs\": 0 } } }]``` Command (Filtered) ``` $ nomad operator api -filter 'Status != \"successful\"' /v1/deployments``` Response (Filtered) ``` [ { \"CreateIndex\": 54, \"EvalPriority\": 50, \"ID\": \"58fd0616-ce64-d14b-6917-03d0ab5af67e\", \"IsMultiregion\": false, \"JobCreateIndex\": 52, \"JobID\": \"countdash\", \"JobModifyIndex\": 52, \"JobSpecModifyIndex\": 52, \"JobVersion\": 0, \"ModifyIndex\": 59, \"Namespace\": \"default\", \"Status\": \"cancelled\", \"StatusDescription\": \"Cancelled due to newer version of job\", \"TaskGroups\": { \"dashboard\": { \"AutoPromote\": false, \"AutoRevert\": false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 }, \"api\": { \"AutoPromote\": false, \"AutoRevert\": 
false, \"DesiredCanaries\": 0, \"DesiredTotal\": 1, \"HealthyAllocs\": 0, \"PlacedAllocs\": 0, \"PlacedCanaries\": null, \"ProgressDeadline\": 600000000000, \"Promoted\": false, \"RequireProgressBy\": null, \"UnhealthyAllocs\": 0 } } }]``` Some list endpoints support partial results to limit the amount of data retrieved. The returned list is split into pages and the page size can be set using the per_page query parameter with a positive integer value. If more data is available past the page requested, the response will contain an HTTP header named X-Nomad-Nexttoken with the value of the next item to be retrieved. This value can then be set as a query parameter called next_token in a follow-up request to retrieve the next page. When the last page is reached, the X-Nomad-Nexttoken HTTP header will not be present in the response, indicating that there is nothing more to return. List results are usually returned in ascending order by their internal key, such as their ID. Some endpoints may return data sorted by their CreateIndex value, which roughly corelates to their creation order. The result order may be reversed using the reverse=true query parameter when supported by the endpoint. Many endpoints in Nomad support a feature known as \"blocking queries\". A blocking query is used to wait for a potential change using long polling. Not all endpoints support blocking, but each endpoint uniquely documents its support for blocking queries in the documentation. Endpoints that support blocking queries return an HTTP header named X-Nomad-Index. This is a unique identifier representing the current state of the requested resource. On a new Nomad cluster the value of this index starts at 1. On subsequent requests for this resource, the client can set the index query string parameter to the value of X-Nomad-Index, indicating that the client wishes to wait for any changes subsequent to that index. When this is provided, the HTTP request will \"hang\" until a change in the system occurs, or the maximum timeout is reached. A critical note is that the return of a blocking request is no guarantee of a change. It is possible that the timeout was reached or that there was an idempotent write that does not affect the result of the query. In addition to index, endpoints that support blocking will also honor a wait parameter specifying a maximum duration for the blocking request. This is limited to 10 minutes. If not set, the wait time defaults to 5" }, { "data": "This value can be specified in the form of \"10s\" or \"5m\" (i.e., 10 seconds or 5 minutes, respectively). A small random amount of additional wait time is added to the supplied maximum wait time to spread out the wake up time of any concurrent requests. This adds up to wait / 16 additional time to the maximum duration. Most of the read query endpoints support multiple levels of consistency. Since no policy will suit all clients' needs, these consistency modes allow the user to have the ultimate say in how to balance the trade-offs inherent in a distributed system. The two read modes are: default - If not specified, the default is strongly consistent in almost all cases. However, there is a small window in which a new leader may be elected during which the old leader may service stale values. The trade-off is fast reads but potentially stale values. The condition resulting in stale reads is hard to trigger, and most clients should not need to worry about this case. Also, note that this race condition only applies to reads, not writes. 
stale - This mode allows any server to service the read regardless of whether it is the leader. This means reads can be arbitrarily stale; however, results are generally consistent to within 50 milliseconds of the leader. The trade-off is very fast and scalable reads with a higher likelihood of stale values. Since this mode allows reads without a leader, a cluster that is unavailable will still be able to respond to queries. To switch these modes, use the stale query parameter on requests. To support bounding the acceptable staleness of data, responses provide the X-Nomad-LastContact header containing the time in milliseconds that a server was last contacted by the leader node. The X-Nomad-KnownLeader header also indicates if there is a known leader. These can be used by clients to gauge the staleness of a result and take appropriate action. By default, any request to the HTTP API is handled by the region of the agent servicing the request. If the agent runs in \"region1\", the request will query the region \"region1\". A target region can be explicitly requested using the ?region query parameter. The request will be transparently forwarded to and serviced by a server in the requested region. The HTTP API will gzip the response if the HTTP request denotes that the client accepts gzip compression. This is achieved by passing the Accept-Encoding header: ``` $ curl \\ --header \"Accept-Encoding: gzip\" \\ https://localhost:4646/v1/...``` By default, the output of all HTTP API requests is minimized JSON. If the client passes pretty on the query string, formatted JSON will be returned. In general, clients should prefer a client-side parser like jq instead of server-formatted data. Asking the server to format the data takes away processing cycles from more important tasks. ``` $ curl https://localhost:4646/v1/page?pretty``` Nomad's API aims to be RESTful, although there are some exceptions. The API responds to the standard HTTP verbs GET, PUT, and DELETE. Each API method will clearly document the verb(s) it responds to and the generated response. The same path with different verbs may trigger different behavior. For example: ``` PUT /v1/jobs GET /v1/jobs``` Even though these share a path, the PUT operation creates a new job whereas the GET operation reads all jobs. Individual APIs will contain further documentation in the case that more specific response codes are returned, but all clients should handle the standard HTTP response codes." } ]
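To make the pagination flow described above concrete, here is a minimal curl sketch. It assumes the evaluations list endpoint supports pagination in your Nomad version and uses a placeholder token value; substitute the real value returned in the X-Nomad-NextToken response header.
```
# Request the first page of up to 25 evaluations; -i prints response headers,
# including X-Nomad-NextToken when more results are available.
$ curl -i 'http://localhost:4646/v1/evaluations?per_page=25'

# Fetch the next page by echoing that header value back as next_token
# (the token below is a placeholder, not a real value).
$ curl 'http://localhost:4646/v1/evaluations?per_page=25&next_token=<X-Nomad-NextToken value>'
```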
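A minimal sketch of a blocking query follows, assuming the jobs list endpoint supports blocking in your cluster; the index value shown is illustrative and should be taken from the X-Nomad-Index header of a previous response.
```
# Initial request; note the X-Nomad-Index header in the response.
$ curl -i 'http://localhost:4646/v1/jobs'

# Long-poll: this request hangs until something changes after index 150,
# or until the 60s wait elapses (whichever comes first).
$ curl 'http://localhost:4646/v1/jobs?index=150&wait=60s'
```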
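The consistency and cross-region behavior described above can be exercised with plain query parameters; a small sketch follows, where region2 is an example region name rather than a default.
```
# Allow any server, not just the leader, to answer; gauge staleness via the
# X-Nomad-LastContact and X-Nomad-KnownLeader response headers.
$ curl -i 'http://localhost:4646/v1/nodes?stale'

# Explicitly target another region; the request is transparently forwarded
# to a server in that region (region name is an example).
$ curl 'http://localhost:4646/v1/jobs?region=region2'
```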
{ "category": "Orchestration & Management", "file_name": "supplement.md", "project_name": "Nomad", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Enterprises are comprised of multiple groups of people (business units) with different projects, infrastructure environments, technical competencies, team sizes, budgets, and SLAs. Each group has different requirements and leverages technologies based on their particular needs and constraints. Medium to large scale enterprises run into challenges when trying to standardize hundreds to thousands of software developers and administrators onto one single orchestrator (Kubernetes, Nomad, Mesos) as no scheduler today fits all applications, environments, projects, and teams. Companies in the Global 2000 today such as Intel, Autodesk and GitHub with multiple products and business units organically run Nomad and Kubernetes to supplement each other. They leverage each scheduler to its strengths with Kubernetes for its cutting edge ecosystem and Nomad for simple maintenance and flexibility in core scheduling. These are the characteristics we see in teams that typically adopt self-hosted Kubernetes: Greenfield use-cases such as machine learning (ML), serverless, and big data that require the Kubernetes ecosystem and Helm chart High budget and full-time staffing to maintain Kubernetes High-profile projects with significant investment and long-term timeline (multi-year) Deploying and managing new, cloud-native applications Public cloud environment such as AWS, GCP, Azure Characteristics of teams that typically adopt Nomad: Run a mix of containerized and non-containerized workloads (Windows, Java) Small/medium-sized teams with limited capacity to maintain an orchestrator Deploying and managing core, existing applications On-premises environment, or hybrid environments Require simplicity to move fast and fulfill business needs with hard deadlines We continue to see small enterprises continue to standardize on a single orchestrator given the natural staffing and organizational constraints. There are not enough DevOps members to maintain more than one orchestrator, not enough developers to warrant diverging workflows, or simply not enough workload diversity to require more than one orchestrator." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "OpenNebula", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Learn how to design, deploy, and operate an OpenNebula Cloud. Build an OpenNebula Cloud on top of your existing VMware vCenter infrastructure. Our evaluation tool for building an OpenNebula Cloud using KVM, LXC, or Firecracker microVMs. Get a quick overview of OpenNebulas main features and key benefits. In-depth technical guides for IT architects, cloud admins, and consultants. Browse our ever-growing catalog of screencasts and tutorial videos. A guide to our public catalog of certified virtual appliances. SVG file PNG file PNG file About UsInnovationPartner ProgramPress ReleasesAcknowledgmentsLegal NoticeCareersContact Us Copyright 2002-2023 OpenNebula Systems (OpenNebula.io) Unless otherwise stated, all content is distributed under CC BY-NC-SA 4.0" } ]
{ "category": "Orchestration & Management", "file_name": "index.html#qs.md", "project_name": "OpenNebula", "subcategory": "Scheduling & Orchestration" }
[ { "data": "The Quick Start Guide provides an example of a learning, development or OpenNebula test installation. The guide will walk you through the steps to set up an OpenNebula Front-end and to automatically deploy a simple Edge Cluster on AWS for true hybrid and multi-cloud computing. To install a production-ready environment, go to the Installation and Configuration Guide after completing this guide. To learn about OpenNebula and try its main features, see the Get Started Guide." } ]