Source: https://github.com/kubernetes/ingress-nginx/blob/main//docs/e2e-tests.md

- [enable the http2-push-preload directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/http2pushpreload.go#L34)

### [allowlist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L27)

- [should set valid ip allowlist range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L34)

### [denylist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L28)

- [only deny explicitly denied IPs, allow all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L35)
- [only allow explicitly allowed IPs, deny all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L86)

### [Annotation - limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L31)

- [should limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L38)

### [limit-rate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L29)

- [Check limit-rate annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L37)

### [enable-access-log enable-rewrite-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L27)

- [set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L34)
- [set rewrite_log on](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L49)

### [mirror-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L28)

- [should set mirror-target to http://localhost/mirror](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L36)
- [should set mirror-target to https://test.env.com/$request_uri](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L51)
- [should disable mirror-request-body](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L67)

### [modsecurity owasp](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L39)

- [should enable modsecurity](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L46)
- [should enable modsecurity with transaction ID and OWASP rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L64)
- [should disable modsecurity](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L85)
- [should enable modsecurity with snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L102)
- [should enable modsecurity without using 'modsecurity on;'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L124)
- [should disable modsecurity using 'modsecurity off;'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L147)
- [should enable modsecurity with snippet and block requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L169)
- [should enable modsecurity globally and with modsecurity-snippet block requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L202)
- [should enable modsecurity when enable-owasp-modsecurity-crs is set to true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L235)
- [should enable modsecurity through the config map](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L269)
- [should enable modsecurity through the config map but ignore snippet as disabled by admin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L309)
- [should disable default modsecurity conf setting when modsecurity-snippet is specified](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L354)

### [preserve-trailing-slash](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/preservetrailingslash.go#L27)

- [should allow preservation of trailing slashes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/preservetrailingslash.go#L34)

### [proxy-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L30)

- [should set proxy_redirect to off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L38)
- [should set proxy_redirect to default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L54)
- [should set proxy_redirect to hello.com goodbye.com](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L70)
- [should set proxy client-max-body-size to 8m](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L87)
- [should not set proxy client-max-body-size to incorrect value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L102)
- [should set valid proxy timeouts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L117)
- [should not set invalid proxy timeouts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L138)
- [should turn on proxy-buffering](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L159)
- [should turn off proxy-request-buffering](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L184)
- [should build proxy next upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L199)
- [should setup proxy cookies](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L220)
- [should change the default proxy HTTP version](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L238)

### [proxy-ssl-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L32)

- [should set valid proxy-ssl-secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L39)
- [should set valid proxy-ssl-secret, proxy-ssl-verify to on, proxy-ssl-verify-depth to 2, and proxy-ssl-server-name to on](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L66)
- [should set valid proxy-ssl-secret, proxy-ssl-ciphers to HIGH:!AES](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L96)
- [should set valid proxy-ssl-secret, proxy-ssl-protocols](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L124)
- [proxy-ssl-location-only flag should change the nginx config server part](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L152)

### [permanent-redirect permanent-redirect-code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L30)

- [should respond with a standard redirect code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L33)
- [should respond with a custom redirect code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L61)

### [relative-redirects](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/relativeredirects.go#L35)

- [configures Nginx correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/relativeredirects.go#L43)
- [should respond with absolute URL in Location](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/relativeredirects.go#L61)
- [should respond with relative URL in Location](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/relativeredirects.go#L85)

### [rewrite-target use-regex enable-rewrite-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L32)

- [should write rewrite logs](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L39)
- [should use correct longest path match](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L68)
- [should use ~* location modifier if regex annotation is present](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L113)
- [should fail to use longest match for documented warning](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L160)
- [should allow for custom rewrite parameters](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L192)

### [satisfy](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L33)

- [should configure satisfy directive correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L40)
- [should allow multiple auth with satisfy any](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L82)

### [server-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serversnippet.go#L28)

### [service-upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L32)

- [should use the Service Cluster IP and Port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L41)
- [should use the Service Cluster IP and Port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L69)
- [should not use the Service Cluster IP and Port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L97)

### [configuration-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L28)

- [set snippet more_set_headers in all locations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L34)
- [drops snippet more_set_header in all locations if disabled by admin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L66)

### [ssl-ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L28)

- [should change ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L35)
- [should keep ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L58)

### [stream-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L34)

- [should add value of stream-snippet to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L41)
- [should add stream-snippet and drop annotations per admin config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L88)

### [upstream-hash-by-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L79)

- [should connect to the same pod](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L86)
- [should connect to the same subset of pods](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L95)

### [upstream-vhost](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamvhost.go#L27)

- [set host to upstreamvhost.bar.com](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamvhost.go#L34)

### [x-forwarded-prefix](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L28)

- [should set the X-Forwarded-Prefix to the annotation value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L35)
- [should not add X-Forwarded-Prefix if the annotation value is empty](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L57)

### [[CGroups] cgroups](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L32)

- [detects cgroups version v1](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L40)
- [detect cgroups version v2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L83)

### [Debug CLI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L29)

- [should list the backend servers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L37)
- [should get information for a specific backend server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L56)
- [should produce valid JSON for /dbg general](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L85)

### [[Default Backend] custom service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/custom_default_backend.go#L33)

- [uses custom default backend that returns 200 as status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/custom_default_backend.go#L36)

### [[Default Backend]](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L30)

- [should return 404 sending requests when only a default backend is running](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L33)
- [enables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L88)
- [disables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L102)

### [[Default Backend] SSL](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/ssl.go#L26)

- [should return a self generated SSL certificate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/ssl.go#L29)

### [[Default Backend] change default settings](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/with_hosts.go#L30)

- [should apply the annotation to the default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/with_hosts.go#L38)

### [[Disable Leader] Routing works when leader election was disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/disableleaderelection/disable_leader.go#L28)

- [should create multiple ingress routings rules when leader election has disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/disableleaderelection/disable_leader.go#L35)

### [[Endpointslices] long service name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/longname.go#L29)

- [should return 200 when service name has max allowed number of characters 63](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/longname.go#L38)

### [[TopologyHints] topology aware routing](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/topology.go#L34)

- [should return 200 when service has topology hints](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/topology.go#L42)

### [[Shutdown] Grace period shutdown](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/grace_period.go#L32)

- [/healthz should return status code 500 during shutdown grace period](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/grace_period.go#L35)

### [[Shutdown] ingress controller](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/shutdown.go#L30)

- [should shutdown in less than 60 seconds without pending connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/shutdown.go#L40)

### [[Shutdown] Graceful shutdown with pending request](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/slow_requests.go#L25)

- [should let slow requests finish before shutting down](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/slow_requests.go#L33)

### [[Ingress] DeepInspection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/deep_inspection.go#L27)

- [should drop whole ingress if one path matches invalid regex](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/deep_inspection.go#L34)

### [single ingress - multiple hosts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/multiple_rules.go#L30)

- [should set the correct $service_name NGINX variable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/multiple_rules.go#L38)

### [[Ingress] [PathType] exact](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_exact.go#L30)

- [should choose exact location for /exact](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_exact.go#L37)

### [[Ingress] [PathType] mix Exact and Prefix paths](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_mixed.go#L30)

- [should choose the correct location](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_mixed.go#L39)

### [[Ingress] [PathType] prefix checks](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L28)

- [should return 404 when prefix /aaa does not match request /aaaccc](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L35)
- [should test prefix path using simple regex pattern for /id/{int}](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L72)
- [should test prefix path using regex pattern for /id/{int} ignoring non-digits characters at end of string](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L113)
- [should test prefix path using fixed path size regex pattern /id/{int}{3}](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L142)
- [should correctly route multi-segment path patterns](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L177)

### [[Ingress] definition without host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L31)

- [should set ingress details variables for ingresses without a host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L34)
- [should set ingress details variables for ingresses with host without IngressRuleValue, only Backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L55)

### [[Memory Leak] Dynamic Certificates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/leaks/lua_ssl.go#L35)

- [should not leak memory from ingress SSL certificates or configuration updates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/leaks/lua_ssl.go#L42)

### [[Load Balancer] load-balance](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/configmap.go#L30)

- [should apply the configmap load-balance setting](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/configmap.go#L37)

### [[Load Balancer] EWMA](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/ewma.go#L31)

- [does not fail requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/ewma.go#L43)

### [[Load Balancer] round-robin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/round_robin.go#L31)

- [should evenly distribute requests with round-robin (default algorithm)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/round_robin.go#L39)

### [[Lua] dynamic certificates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L37)

- [picks up the certificate when we add TLS spec to existing ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L45)
- [picks up the previously missing secret for a given ingress without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L70)
- [supports requests with domain with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L145)
- [picks up the updated certificate without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L149)
- [falls back to using default certificate when secret gets deleted without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L185)
- [picks up a non-certificate only change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L218)
- [removes HTTPS configuration when we delete TLS spec](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L233)

### [[Lua] dynamic configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L41)

- [configures balancer Lua middleware correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L49)
- [handles endpoints only changes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L56)
- [handles endpoints only changes (down scaling of replicas)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L81)
- [handles endpoints only changes consistently (down scaling of replicas vs. empty service)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L119)
- [handles an annotation change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L165)

### [[metrics] exported prometheus metrics](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L36)

- [exclude socket request metrics are absent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L51)
- [exclude socket request metrics are present](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L73)
- [request metrics per undefined host are present when flag is set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L95)
- [request metrics per undefined host are not present when flag is not set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L128)

### [nginx-configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L99)

- [start nginx with default configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L102)
- [fails when using alias directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L114)
- [fails when using root directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L121)

### [[Security] request smuggling](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/security/request_smuggling.go#L32)

- [should not return body content from error_page](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/security/request_smuggling.go#L39)

### [[Service] backend status code 503](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L34)

- [should return 503 when backend service does not exist](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L37)
- [should return 503 when all backend service endpoints are unavailable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L55)

### [[Service] Type ExternalName](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L38)

- [works with external name set to incomplete fqdn](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L41)
- [should return 200 for service type=ExternalName without a port defined](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L78)
- [should return 200 for service type=ExternalName with a port defined](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L118)
- [should return status 502 for service type=ExternalName with an invalid host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L148)
- [should return 200 for service type=ExternalName using a port name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L184)
- [should return 200 for service type=ExternalName using FQDN with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L225)
- [should update the external name after a service update](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L261)
- [should sync ingress on external name service addition/deletion](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L344)

### [[Service] Nil Service Backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_nil_backend.go#L31)

- [should return 404 when backend service is nil](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_nil_backend.go#L38)

### [access-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L27)

- [use the default configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L31)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L41)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L52)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L64)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L76)

### [aio-write](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L27)

- [should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L30)
- [should be enabled when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L37)
- [should be disabled when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L46)

### [Bad annotation values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L29)

- [[BAD_ANNOTATIONS] should drop an ingress if there is an invalid character in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L36)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a forbidden word in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L68)
- [[BAD_ANNOTATIONS] should allow an ingress if there is a default blocklist config in place](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L105)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a custom blocklist config in place and allow others to pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L138)

### [brotli](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/brotli.go#L30)

- [should only compress responses that meet the `brotli-min-length` condition](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/brotli.go#L38)

### [Configmap change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/configmap_change.go#L29)

- [should reload after an update in the configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/configmap_change.go#L36)

### [add-headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L30)

- [Add a custom header](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L40)
- [Add multiple custom headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L65)

### [[SSL] [Flag] default-ssl-certificate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L35)

- [uses default ssl certificate for catch-all ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L66)
- [uses default ssl certificate for host based ingress when configured certificate does not match host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L82)

### [[Flag] disable-catch-all](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L33)

- [should ignore catch all Ingress with backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L50)
- [should ignore catch all Ingress with backend and rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L69)
- [should delete Ingress updated to catch-all](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L81)
- [should allow Ingress with rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L123)

### [[Flag] disable-service-external-name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_service_external_name.go#L35)

- [should ignore services of external-name type](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_service_external_name.go#L55)

### [[Flag] disable-sync-events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L32)

- [should create sync events (default)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L35)
- [should create sync events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L55)
- [should not create sync events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L83)

### [enable-real-ip](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L30)

- [trusts X-Forwarded-For header only when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L40)
- [should not trust X-Forwarded-For header when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L80)

### [use-forwarded-headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L31)

- [should trust X-Forwarded headers when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L41)
- [should not trust X-Forwarded headers when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L95)

### [Geoip2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L36)

- [should include geoip2 line in config when enabled and db file exists](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L45)
- [should only allow requests from specific countries](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L69)
- [should up and running nginx controller using autoreload flag](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L122)

### [[Security] block-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L28)

- [should block CIDRs defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L38)
- [should block User-Agents defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L55)
- [should block Referers defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L88)

### [[Security] global-auth-url](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L39)

- [should return status code 401 when request any protected service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L91)
- [should return status code 200 when request whitelisted (via no-auth-locations) service and 401 when request protected service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L107)
- [should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L130)
- [should still return status code 200 after auth backend is deleted using cache](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_external_auth.go#L158)
[user retains cookie by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global\_external\_auth.go#L322) - [user does not retain cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global\_external\_auth.go#L333) - [user with global-auth-always-set-cookie key in configmap retains cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global\_external\_auth.go#L344) ### [global-options](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global\_options.go#L28) - [should have worker\_rlimit\_nofile option](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global\_options.go#L31) - [should have worker\_rlimit\_nofile option and be independent on amount of worker processes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global\_options.go#L37) ### [GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/grpc.go#L39) - [should set the correct GRPC Buffer Size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/grpc.go#L42) ### [gzip](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L30) - [should be disabled by 
default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L40) - [should be enabled with default settings](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L56) - [should set gzip\_comp\_level to 4](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L82) - [should set gzip\_disable to msie6](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L102) - [should set gzip\_min\_length to 100](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L132) - [should set gzip\_types to text/html](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/gzip.go#L164) ### [hash size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L27) - [should set server\_names\_hash\_bucket\_size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L39) - [should set server\_names\_hash\_max\_size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L47) - [should set proxy-headers-hash-bucket-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L57) - [should set proxy-headers-hash-max-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L65) - [should set variables-hash-bucket-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L75) - [should set variables-hash-max-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L83) - [should set vmap-hash-bucket-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/hash-size.go#L93) ### [[Flag] ingress-class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L41) - [should ignore Ingress with a different class 
annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L70) - [should ignore Ingress with different controller class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L106) - [should accept both Ingresses with default IngressClassName and IngressClass annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L134) - [should ignore Ingress without IngressClass configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L166) - [should delete Ingress when class is removed](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L194) - [should serve Ingress when class is added](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L259) - [should serve Ingress when class is updated between annotation and ingressClassName](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L325) - [should ignore Ingress with no class and accept the correctly configured Ingresses](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L414) - [should watch Ingress with no class and ignore ingress with a different class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L482) - [should watch Ingress that uses the class name even if spec is different](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L538) - [should watch Ingress with correct annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L628) - [should ignore Ingress with only IngressClassName](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ingress\_class.go#L648) ### [keep-alive 
keep-alive-requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L28) - [should set keepalive\_timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L40) - [should set keepalive\_requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L48) - [should set keepalive connection to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L58) - [should set keep alive connection timeout to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L68) - [should set keepalive time to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L78) - [should set the request count to upstream server through one keep alive connection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/keep-alive.go#L88) ### [Configmap - limit-rate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/limit\_rate.go#L28) - [Check limit-rate config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/limit\_rate.go#L36) ### [[Flag] custom HTTP and HTTPS ports](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen\_nondefault\_ports.go#L30) - [should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen\_nondefault\_ports.go#L45) - [should set X-Forwarded-Port header to 443](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen\_nondefault\_ports.go#L65) - [should set the X-Forwarded-Port header to 443](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/listen\_nondefault\_ports.go#L93) ### 
[log-format-\*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L28) - [should not configure log-format escape by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L39) - [should enable the log-format-escape-json](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L46) - [should disable the log-format-escape-json](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L54) - [should enable the log-format-escape-none](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L62) - [should disable the log-format-escape-none](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L70) - [log-format-escape-json enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L80) - [log-format default escape](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L103) - [log-format-escape-none enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/log-format.go#L126) ### [[Lua] lua-shared-dicts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/lua\_shared\_dicts.go#L26) - [configures lua shared dicts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/lua\_shared\_dicts.go#L29) ### [main-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/main\_snippet.go#L27) - [should add value of main-snippet setting to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/main\_snippet.go#L31) ### [[Security] modsecurity-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/modsecurity/modsecurity\_snippet.go#L27) - [should add value of modsecurity-snippet setting to nginx 
config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/modsecurity/modsecurity\_snippet.go#L30) ### [enable-multi-accept](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi\_accept.go#L27) - [should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi\_accept.go#L31) - [should be enabled when set to true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi\_accept.go#L39) - [should be disabled when set to false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/multi\_accept.go#L49) ### [[Flag] watch namespace selector](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/namespace\_selector.go#L30) - [should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/namespace\_selector.go#L62) ### [[Security] no-auth-locations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no\_auth\_locations.go#L33) - [should return status code 401 when accessing '/' unauthentication](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no\_auth\_locations.go#L54) - [should return status code 200 when accessing '/' authentication](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no\_auth\_locations.go#L68) - [should return status code 200 when accessing '/noauth' unauthenticated](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no\_auth\_locations.go#L82) ### [Add no tls redirect locations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no\_tls\_redirect\_locations.go#L27) - [Check no tls redirect locations config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/no\_tls\_redirect\_locations.go#L30) ### 
[OCSP](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ocsp/ocsp.go#L43) - [should enable OCSP and contain stapling information in the connection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ocsp/ocsp.go#L50) ### [Configure Opentelemetry](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L39) - [should not exists opentelemetry directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L49) - [should exists opentelemetry directive when is enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L62) - [should include opentelemetry\_trust\_incoming\_spans on directive when enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L76)
- [should not exists opentelemetry\_operation\_name directive when is empty](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L91) - [should exists opentelemetry\_operation\_name directive when is configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/opentelemetry.go#L106) ### [proxy-connect-timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_connect\_timeout.go#L29) - [should set valid proxy timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_connect\_timeout.go#L37) - [should not set invalid proxy timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_connect\_timeout.go#L53) ### [Dynamic 
$proxy\_host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_host.go#L28) - [should exist a proxy\_host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_host.go#L36) - [should exist a proxy\_host using the upstream-vhost annotation value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_host.go#L60) ### [proxy-next-upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_next\_upstream.go#L28) - [should build proxy next upstream using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_next\_upstream.go#L36) ### [use-proxy-protocol](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_protocol.go#L38) - [should respect port passed by the PROXY Protocol](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_protocol.go#L49) - [should respect proto passed by the PROXY Protocol server port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_protocol.go#L86) - [should enable PROXY Protocol for HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_protocol.go#L122) - [should enable PROXY Protocol for TCP](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_protocol.go#L165) - [should not trust X-Forwarded headers when the client IP address is not trusted](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_protocol.go#L238) - [should trust X-Forwarded headers when the client IP address is trusted](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_protocol.go#L274) ### [proxy-read-timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_read\_timeout.go#L29) - [should set valid proxy read timeouts using configmap 
values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_read\_timeout.go#L37) - [should not set invalid proxy read timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_read\_timeout.go#L53) ### [proxy-send-timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_send\_timeout.go#L29) - [should set valid proxy send timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_send\_timeout.go#L37) - [should not set invalid proxy send timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/proxy\_send\_timeout.go#L53) ### [reuse-port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L27) - [reuse port should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L38) - [reuse port should be disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L44) - [reuse port should be enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/reuse-port.go#L52) ### [configmap server-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server\_snippet.go#L28) - [should add value of server-snippet setting to all ingress config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server\_snippet.go#L35) - [should add global server-snippet and drop annotations per admin config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server\_snippet.go#L100) ### [server-tokens](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server\_tokens.go#L29) - [should not exists Server header in the response](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server\_tokens.go#L38) - [should exists Server 
header in the response when is enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/server\_tokens.go#L50) ### [ssl-ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_ciphers.go#L28) - [Add ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_ciphers.go#L31) ### [[Flag] enable-ssl-passthrough](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_passthrough.go#L36) ### [With enable-ssl-passthrough enabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_passthrough.go#L55) - [should enable ssl-passthrough-proxy-port on a different port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_passthrough.go#L56) - [should pass unknown traffic to default backend and handle known traffic](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_passthrough.go#L78) ### [ssl-session-cache](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_cache.go#L27) - [should have default ssl\_session\_cache and ssl\_session\_timeout values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_cache.go#L30) - [should disable ssl\_session\_cache](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_cache.go#L37) - [should set ssl\_session\_cache value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_cache.go#L45) - [should set ssl\_session\_timeout value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_cache.go#L53) ### [ssl-session-tickets](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_tickets.go#L27) - [should have default ssl\_session\_tickets value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_tickets.go#L30) - [should set 
ssl\_session\_tickets value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_tickets.go#L36) - [should set ssl\_session\_tickets and ssl\_session\_ticket\_key values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/ssl\_session\_tickets.go#L44) ### [configmap stream-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/stream\_snippet.go#L35) - [should add value of stream-snippet via config map to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/stream\_snippet.go#L42) ### [[SSL] TLS protocols, ciphers and headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L32) - [setting cipher suite](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L66) - [setting max-age parameter](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L110) - [setting includeSubDomains parameter](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L127) - [setting preload parameter](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L147) - [overriding what's set from the upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L168) - [should not use ports during the HTTP to HTTPS redirection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L190) - [should not use ports or X-Forwarded-Host during the HTTP to HTTPS redirection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/tls.go#L208) ### [annotation validations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/validations/validations.go#L30) - [should allow ingress based on their risk on webhooks](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/validations/validations.go#L33) - [should allow ingress based on their risk on 
webhooks](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/validations/validations.go#L68) ### [[SSL] redirect to HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/http\_redirect.go#L29) - [should redirect from HTTP to HTTPS when secret is missing](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/http\_redirect.go#L36) ### [[SSL] secret update](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/secret\_update.go#L33) - [should not appear references to secret updates not used in ingress rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/secret\_update.go#L40) - [should return the fake SSL certificate if the secret is invalid](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ssl/secret\_update.go#L83) ### [[Status] status update](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/status/update.go#L38) - [should update status field after client-go reconnection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/status/update.go#L43) ### [[TCP] tcp-services](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L38) - [should expose a TCP service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L46) - [should expose an ExternalName TCP service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L80) - [should reload after an update in the configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L168)
Source: https://github.com/kubernetes/ingress-nginx/blob/main//docs/e2e-tests.md
# FAQ

## Multi-tenant Kubernetes

Do not use ingress-nginx in multi-tenant Kubernetes production installations. This project assumes that users who can create Ingress objects are administrators of the cluster. The Ingress-NGINX control plane has global and per-Ingress configuration options that, if enabled, make it insecure in a multi-tenant environment. For example, enabling snippets, a global configuration, allows any Ingress object to run arbitrary Lua code that could affect the security of all Ingress objects that a controller is serving. We changed the default for allowing snippets to `false` in https://github.com/kubernetes/ingress-nginx/pull/10393.

## Multiple controllers in one cluster

Question - How can I easily install multiple instances of the ingress-nginx controller in the same cluster?

You can install them in different namespaces.

- Create a new namespace:

```
kubectl create namespace ingress-nginx-2
```

- Use Helm to install the additional instance of the ingress controller.
- Ensure you have Helm working (refer to the [Helm documentation](https://helm.sh/docs/)).
- We assume that you already have the helm repo for the ingress-nginx controller added to your Helm config.
If you have not added the helm repo yet, add it:

```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
```

- Make sure your helm repo data is up to date:

```
helm repo update
```

- Now install an additional instance of the ingress-nginx controller:

```
helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
  --namespace ingress-nginx-2 \
  --set controller.ingressClassResource.name=nginx-two \
  --set controller.ingressClass=nginx-two \
  --set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-2" \
  --set controller.ingressClassResource.enabled=true \
  --set controller.ingressClassByName=true
```

If you need to install yet another instance, repeat the procedure: create a new namespace and change the names and namespaces accordingly (for example from "-2" to "-3"), or anything else that meets your needs. Note that `controller.ingressClassResource.name` and `controller.ingressClass` have to be set correctly: the first creates the IngressClass object and the second modifies the deployment of the actual ingress controller pod.

### I can't use multiple namespaces, what should I do?

If you need to install all instances in the same namespace, you need to specify a different **election id**, like this:

```
helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
  --namespace kube-system \
  --set controller.electionID=nginx-two-leader \
  --set controller.ingressClassResource.name=nginx-two \
  --set controller.ingressClass=nginx-two \
  --set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-2" \
  --set controller.ingressClassResource.enabled=true \
  --set controller.ingressClassByName=true
```

## Retaining the client IP address

Question - How do I obtain the real client IP address?

The go-to solution for retaining the real client IP address is to enable PROXY protocol.
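On the controller side, this amounts to a one-key change in the controller's ConfigMap. A minimal sketch, assuming the Helm-default ConfigMap name `ingress-nginx-controller` in the `ingress-nginx` namespace (adjust both to your installation):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```

Keep in mind that this only works once the load balancer in front of the controller also speaks PROXY protocol; with only one side enabled, requests will fail to parse.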
Enabling PROXY protocol has to be done on both the Ingress NGINX controller and the L4 load balancer in front of the controller. The real client IP address is lost by default when traffic is forwarded over the network, but enabling PROXY protocol ensures that the connection details are retained, so the real client IP address doesn't get lost.

Enabling proxy-protocol on the controller is documented [here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol). For enabling proxy-protocol on the load balancer, please refer to the documentation of your infrastructure provider, because that is where the LB is provisioned.

Some more info is available [here](https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address), and some more on proxy-protocol [here](https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol).

### Client IP address on a single-node cluster

Single-node clusters are created for dev & test use with tools like "kind" or "minikube". A trick to simulate a real network with these clusters is to install MetalLB and configure the IP address of the kind container or the minikube VM/container as both the start and the end of the pool for MetalLB in L2 mode. Then the host IP becomes a real client IP address for curl requests sent from the host.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/faq.md
After installing the ingress-nginx controller on a kind or minikube cluster with Helm, you can configure it for the real client IP with a simple change to the service that the controller creates. The service object of `--type LoadBalancer` has a field `service.spec.externalTrafficPolicy`. If you set the value of this field to "Local", the real IP address of a client is visible to the controller.

```
% kubectl explain service.spec.externalTrafficPolicy
KIND:       Service
VERSION:    v1

FIELD:      externalTrafficPolicy

DESCRIPTION:
    externalTrafficPolicy describes how nodes distribute service traffic they
    receive on one of the Service's "externally-facing" addresses (NodePorts,
    ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will
    configure the service in a way that assumes that external load balancers
    will take care of balancing the service traffic between nodes, and so each
    node will deliver traffic only to the node-local endpoints of the service,
    without masquerading the client source IP. (Traffic mistakenly sent to a
    node with no endpoints will be dropped.) The default value, "Cluster",
    uses the standard behavior of routing to all endpoints evenly (possibly
    modified by topology and other features). Note that traffic sent to an
    External IP or LoadBalancer IP from within the cluster will always get
    "Cluster" semantics, but clients sending to a NodePort from within the
    cluster may need to take traffic policy into account when picking a node.

    Possible enum values:
     - `"Cluster"` routes traffic to all endpoints.
     - `"Local"` preserves the source IP of the traffic by routing only to
       endpoints on the same node as the traffic was received on (dropping the
       traffic if there are no local endpoints).
```
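The change itself can be sketched as a fragment of the controller Service; only the relevant fields are shown, and the name/namespace assume a default Helm chart install:

```yaml
# Sketch: controller Service with externalTrafficPolicy set to "Local".
# Name/namespace are assumptions from a default install; adjust to yours.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
```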
### Client IP address, L7

The solution is to get the real client IP address from the ["X-Forwarded-For" HTTP header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For).

Example: If your application pod behind the Ingress NGINX controller uses the NGINX webserver and its reverse proxy, you can do the following to preserve the remote client IP.

- First you need to make sure that the X-Forwarded-For header reaches the backend pod. This is done with an Ingress NGINX controller ConfigMap key, documented [here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers).
- Next, edit the `nginx.conf` file inside your app pod to contain the directives shown below:

```
set_real_ip_from 0.0.0.0/0; # Trust all IPs (use your VPC CIDR block in production)
real_ip_header X-Forwarded-For;
real_ip_recursive on;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" '
                'host=$host x-forwarded-for=$http_x_forwarded_for';

access_log /var/log/nginx/access.log main;
```

## Kubernetes v1.22 Migration

If you are using Ingress objects in your cluster (running Kubernetes older than version 1.22), and you plan to upgrade to Kubernetes v1.22 or above, please read [the migration guide here](./user-guide/k8s-122-migration.md).

## Validation of **`path`**

- To improve security and follow the desired standards of the Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new, optional feature that validates the value of the key `ingress.spec.rules.http.paths.path`.
- This behavior will be disabled by default in the 1.8.0 release and enabled by default in the next breaking-change release, set for 2.0.0.
- When "`ingress.spec.rules.http.pathType=Exact`" or "`pathType=Prefix`", this validation will limit the characters accepted in the field "`ingress.spec.rules.http.paths.path`" to alphanumeric characters and "`/`", "`_`", "`-`". Also, in this case, the path must start with "`/`".
- When the ingress resource path contains other characters (as in rewrite configurations), the pathType value should be "`ImplementationSpecific`".
- The API spec on pathType is documented [here](https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types)
- When this option is enabled, the validation happens in the admission webhook. So if any new ingress object contains characters other than alphanumeric characters and "`/`", "`_`", "`-`" in the `path` field, but does not use the `pathType` value `ImplementationSpecific`, the ingress object will be denied admission.
- The cluster admin should establish validation rules using mechanisms like "`Open Policy Agent`" to validate that only authorized users can use the ImplementationSpecific pathType and that only the authorized characters are used. [The configmap value is here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#strict-validate-path-type)
- A complete example of an Open Policy Agent Gatekeeper rule is available [here](https://kubernetes.github.io/ingress-nginx/examples/openpolicyagent/)
- If you have any issues or concerns, please do one of the following:
  - Open a GitHub issue
  - Comment in our Dev Slack Channel
  - Open a thread in our Google Group

## Why is chunking not working since controller v1.10?

- If your code is setting the HTTP header `"Transfer-Encoding: chunked"` and the controller log messages show an error about a duplicate header, it is because of this change
- More details are available in this issue
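The strict path validation described above is toggled through the controller ConfigMap key linked earlier. A minimal sketch — the ConfigMap name and namespace are assumptions from a default Helm install:

```yaml
# Sketch: enabling strict validation of ingress.spec.rules.http.paths.path
# via the controller ConfigMap (key name from the linked documentation).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed default install name
  namespace: ingress-nginx
data:
  strict-validate-path-type: "true"
```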
# Overview

This is the documentation for the Ingress NGINX Controller.

It is built around the [Kubernetes Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/), using a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) to store the controller configuration.

You can learn more about using [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) in the official [Kubernetes documentation](https://docs.k8s.io).

# Getting Started

See [Deployment](./deploy/index.md) for a whirlwind tour that will get you started.
# Miscellaneous

## Source IP address

By default NGINX uses the content of the header `X-Forwarded-For` as the source of truth to get information about the client IP address. This works without issues in L7 **if we configure the setting `proxy-real-ip-cidr`** with the correct information of the IP/network address of the trusted external load balancer. This setting can be enabled/disabled by setting [`use-forwarded-headers`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers). If the ingress controller is running in AWS, we need to use the VPC IPv4 CIDR.

Another option is to enable the **PROXY protocol** using [`use-proxy-protocol: "true"`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol). In this mode, NGINX uses the PROXY protocol TCP header to retrieve the source IP address of the connection. This works in most cases, but if you have a Layer 7 proxy (e.g., Cloudflare) in front of a TCP load balancer, it may not work correctly: the HTTP proxy IP address might appear as the client IP address. In this case, you should also enable the `use-forwarded-headers` setting in addition to `use-proxy-protocol`, and properly configure `proxy-real-ip-cidr` to trust all intermediate proxies (both within the private network and any external proxies).

Example configmap for setups with multiple proxies:

```yaml
use-proxy-protocol: "true"
use-forwarded-headers: "true"
proxy-real-ip-cidr: "10.0.0.0/8,131.0.72.0/22,172.64.0.0/13,104.24.0.0/14,104.16.0.0/13,162.158.0.0/15,198.41.128.0/17"
```

**Note:** Be sure to use real CIDRs that match your exact environment.

## Path types

Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation.
By default the NGINX path type is Prefix, so as not to break existing definitions.

## Proxy Protocol

If you are using an L4 proxy to forward traffic to the Ingress NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the [PROXY Protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt) for forwarding traffic; this sends the connection details before forwarding the actual TCP connection itself.

Amongst others, [ELBs in AWS](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) and [HAProxy](http://www.haproxy.org/) support the PROXY protocol.

## Websockets

Support for websockets is provided by NGINX out of the box. No special configuration is required. The only requirement to avoid closed connections is to increase the values of `proxy-read-timeout` and `proxy-send-timeout`. The default value of these settings is `60 seconds`. A more adequate value to support websockets is a value higher than one hour (`3600`).

!!! Important
    If the Ingress-Nginx Controller is exposed with a service `type=LoadBalancer`, make sure the protocol between the load balancer and NGINX is TCP.

## Optimizing TLS Time To First Byte (TTTFB)

NGINX provides the configuration option [ssl_buffer_size](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [TLS Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (NGINX default is `16k`).

## Retries in non-idempotent methods

Since 1.9.13, NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using `retry-non-idempotent=true` in the configuration ConfigMap.
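The websocket timeouts and the retry setting above are plain ConfigMap keys. A minimal sketch, assuming the ConfigMap name and namespace of a default Helm install:

```yaml
# Sketch: raising websocket timeouts and restoring retries for
# non-idempotent methods via the controller ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed default install name
  namespace: ingress-nginx
data:
  proxy-read-timeout: "3600"    # > 1 hour, as suggested for websockets
  proxy-send-timeout: "3600"
  retry-non-idempotent: "true"  # restore pre-1.9.13 retry behavior
```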
## Limitations

- Ingress rules for TLS require the definition of the field `host`

## Why endpoints and not services

The Ingress-Nginx Controller does not use [Services](http://kubernetes.io/docs/user-guide/services) to route traffic to the pods. Instead it uses the Endpoints API in order to bypass [kube-proxy](http://kubernetes.io/docs/admin/kube-proxy/) to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
# Command line arguments

The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the `ingress-nginx-controller` Deployment manifest.

| Argument | Description |
|----------|-------------|
| `--annotations-prefix` | Prefix of the Ingress annotations specific to the NGINX controller. (default "nginx.ingress.kubernetes.io") |
| `--apiserver-host` | Address of the Kubernetes API server. Takes the form "protocol://address:port". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. |
| `--bucket-factor` | Bucket factor for native histograms. Value must be > 1 for enabling native histograms. (default 0) |
| `--certificate-authority` | Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified. |
| `--configmap` | Name of the ConfigMap containing custom global configurations for the controller. |
| `--controller-class` | Ingress Class Controller value this Ingress satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.19.0 or higher. The .spec.controller value of the IngressClass referenced in an Ingress object should be the same value specified here to make this object be watched. |
| `--deep-inspect` | Enables ingress object security deep inspector. (default true) |
| `--default-backend-service` | Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form "namespace/name". The controller configures NGINX to forward requests to the first port of this Service. |
| `--default-server-port` | Port to use for exposing the default server (catch-all). (default 8181) |
| `--default-ssl-certificate` | Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form "namespace/name".
|
| `--enable-annotation-validation` | If true, will enable the annotation validation feature. Defaults to true |
| `--disable-catch-all` | Disable support for catch-all Ingresses. (default false) |
| `--disable-full-test` | Disable full test of all merged ingresses at the admission stage and tests the template of the ingress being created or updated (full test of all ingresses is enabled by default). |
| `--disable-svc-external-name` | Disable support for Services of type ExternalName. (default false) |
| `--disable-sync-events` | Disables the creation of 'Sync' Event resources, but still logs them |
| `--dynamic-configuration-retries` | Number of times to retry failed dynamic configuration before failing to sync an ingress. (default 15) |
| `--election-id` | Election id to use for Ingress status updates. (default "ingress-controller-leader") |
| `--election-ttl` | Duration a leader election is valid before it's getting re-elected, e.g. `15s`, `10m` or `1h`. (Default: 30s) |
| `--enable-metrics` | Enables the collection of NGINX metrics. (Default: false) |
| `--enable-ssl-chain-completion` | Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the "Authority Information Access" X.509 v3 extension for this to succeed. (default false) |
| `--enable-ssl-passthrough` | Enable SSL Passthrough. (default false) |
| `--disable-leader-election` | Disable Leader Election on Nginx Controller. (default false) |
| `--enable-topology-aware-routing` | Enable topology aware routing feature, needs service object annotation service.kubernetes.io/topology-mode set to auto. (default false) |
| `--exclude-socket-metrics` | Set of socket request metrics to exclude which won't be exported nor calculated. The possible socket request metrics to exclude are documented in the monitoring guide, e.g.
'nginx_ingress_controller_request_duration_seconds,nginx_ingress_controller_response_size' |
| `--health-check-path` | URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default "/healthz") |
| `--health-check-timeout` | Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) |
| `--healthz-port` | Port to use for the healthz endpoint. (default 10254) |
| `--healthz-host` | Address to bind the healthz endpoint. |
| `--http-port` | Port to use for servicing HTTP traffic. (default 80) |
| `--https-port` | Port to use for servicing HTTPS traffic. (default 443) |
| `--ingress-class` | Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher or the annotation "kubernetes.io/ingress.class" (deprecated). If this parameter is not set, or set to the default value of "nginx", it will handle ingresses with either an empty or "nginx" class name. |
| `--ingress-class-by-name` | Define if Ingress Controller should watch for Ingress Class by Name together with Controller Class. (default false) |
| `--internal-logger-address` | Address to be used when binding internal syslogger. (default 127.0.0.1:11514) |
| `--kubeconfig` | Path to a kubeconfig file containing authorization and API server information. |
| `--length-buckets` | Set of buckets which will be used for prometheus histogram metrics such as RequestLength, ResponseLength. (default `[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]`) |
| `--max-buckets` | Maximum number of buckets for native histograms. (default 100) |
| `--maxmind-edition-ids` | Maxmind edition ids to download GeoLite2 Databases. (default "GeoLite2-City,GeoLite2-ASN") |
| `--maxmind-retries-timeout` | Maxmind downloading delay between 1st and 2nd attempt, 0s - do not retry to download if something went wrong. (default 0s) |
| `--maxmind-retries-count` | Number of attempts to download the GeoIP DB.
(default 1) |
| `--maxmind-license-key` | Maxmind license key to download GeoLite2 Databases. https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/ |
| `--maxmind-mirror` | Maxmind mirror url (example: http://geoip.local/databases). |
| `--metrics-per-host` | Export metrics per-host. (default true) |
| `--metrics-per-undefined-host` | Export metrics per-host even if the host is not defined in an ingress. Requires --metrics-per-host to be set to true. (default false) |
| `--monitor-max-batch-size` | Max batch size of NGINX metrics. (default 10000) |
| `--post-shutdown-grace-period` | Additional delay in seconds before controller container exits. (default 10) |
| `--profiler-port` | Port to use to expose the ingress controller Go profiler when it is enabled. (default 10245) |
| `--profiling` | Enable profiling via web interface host:port/debug/pprof/ (default true) |
| `--publish-service` | Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. |
| `--publish-status-address` | Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. |
| `--report-node-internal-ip-address` | Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. (default false) |
| `--report-status-classes` | If true, report status classes in metrics (2xx, 3xx, 4xx and 5xx) instead of full status codes. (default false) |
| `--ssl-passthrough-proxy-port` | Port to use internally for SSL Passthrough. (default 442) |
| `--status-port` | Port to use for the lua HTTP endpoint configuration.
(default 10246) |
| `--status-update-interval` | Time interval in seconds in which the status should check if an update is required. (default 60) |
| `--stream-port` | Port to use for the lua TCP/UDP endpoint configuration. (default 10247) |
| `--sync-period` | Period at which the controller forces the repopulation of its local object stores. Disabled by default. |
| `--sync-rate-limit` | Define the sync frequency upper limit. (default 0.3) |
| `--tcp-services-configmap` | Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. |
| `--time-buckets` | Set of buckets which will be used for prometheus histogram metrics such as RequestTime, ResponseTime. (default `[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`) |
| `--udp-services-configmap` | Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port name or number. |
| `--update-status` | Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) |
| `--update-status-on-shutdown` | Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) |
| `--shutdown-grace-period` | Seconds to wait after receiving the shutdown signal, before stopping the nginx process. (default 0) |
| `--size-buckets` | Set of buckets which will be used for prometheus histogram metrics such as BytesSent.
(default `[10, 100, 1000, 10000, 100000, 1e+06, 1e+07]`) |
| `-v, --v Level` | Number for the log level verbosity |
| `--validating-webhook` | The address to start an admission controller on to validate incoming ingresses. Takes the form ":port". If not provided, no admission controller is started. |
| `--validating-webhook-certificate` | The path of the validating webhook certificate PEM. |
| `--validating-webhook-key` | The path of the validating webhook key PEM. |
| `--version` | Show release information about the Ingress-Nginx Controller and exit. |
| `--watch-ingress-without-class` | Define if Ingress Controller should also watch for Ingresses without an IngressClass or the annotation specified. (default false) |
| `--watch-namespace` | Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty. |
| `--watch-namespace-selector` | The controller will watch namespaces whose labels match the given selector. This flag only takes effect when `--watch-namespace` is empty. |
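As noted at the top of this page, these flags are passed in the container spec of the controller Deployment. A minimal sketch of that fragment — the image tag and the particular flags chosen here are illustrative assumptions, not defaults you must use:

```yaml
# Fragment of the ingress-nginx-controller Deployment showing how the
# command line arguments above are supplied to the container.
spec:
  template:
    spec:
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.11.0  # illustrative tag
          args:
            - /nginx-ingress-controller
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --enable-metrics=true
            - --healthz-port=10254
```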
# Basic usage - host based routing

ingress-nginx can be used for many use cases, inside various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx routes traffic to 2 different HTTP backend services based on the host name.

First of all follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed, `myServiceA`, `myServiceB`, and configured as `type: ClusterIP`.

Let's say that you want to expose the first at `myServiceA.foo.org` and the second at `myServiceB.foo.org`.

If the cluster version is < 1.19, you can create two **ingress** resources like this:

```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-myservicea
spec:
  ingressClassName: nginx
  rules:
  - host: myservicea.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservicea
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-myserviceb
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myserviceb.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myserviceb
          servicePort: 80
```

If the cluster uses Kubernetes version >= 1.19.x, then it's suggested to create 2 ingress resources using the yaml examples shown below. These examples conform to the `networking.k8s.io/v1` API.
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myservicea
spec:
  rules:
  - host: myservicea.foo.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myservicea
            port:
              number: 80
  ingressClassName: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myserviceb
spec:
  rules:
  - host: myserviceb.foo.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myserviceb
            port:
              number: 80
  ingressClassName: nginx
```

When you apply this yaml, 2 ingress resources will be created, managed by the **ingress-nginx** instance. Nginx is configured to automatically discover all ingresses with the `kubernetes.io/ingress.class: "nginx"` annotation or where `ingressClassName: nginx` is present.

Please note that the ingress resource should be placed inside the same namespace as the backend resource.

On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS `A record` inside your DNS provider that points myservicea.foo.org and myserviceb.foo.org to the nginx external IP.

Get the external IP by running:

```
kubectl get services -n ingress-nginx
```

To test inside minikube refer to this documentation: [Set up Ingress on Minikube with the NGINX Ingress Controller](https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/)
# TLS/HTTPS

## TLS Secrets

Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

!!! warning
    Ensure that the certificate order is leaf->intermediate->root, otherwise the controller will not be able to import the certificate, and you'll see this error in the logs:
    ```
    W1012 09:15:45.920000       6 backend_ssl.go:46] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: private key does not match public key
    ```

You can generate a self-signed certificate and private key with:

```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}" -addext "subjectAltName = DNS:${HOST}"
```

Then create the secret in the cluster via:

```bash
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
```

The resulting secret will be of type `kubernetes.io/tls`.

## Host names

Ensure that the relevant [ingress rules specify a matching hostname](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls).

## Default SSL Certificate

NGINX provides the option to configure a server as a catch-all with [server_name](https://nginx.org/en/docs/http/server_names.html) for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.

For this reason the Ingress controller provides the flag `--default-ssl-certificate`. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.

For instance, if you have a TLS secret `foo-tls` in the `default` namespace, add `--default-ssl-certificate=default/foo-tls` in the `nginx-controller` deployment.

If the `tls:` section is not set, NGINX will provide the default certificate but will not force HTTPS redirect.
On the other hand, if the `tls:` section is set - even without specifying a `secretName` option - NGINX will force HTTPS redirect.

To force redirects for Ingresses that do not specify a TLS-block at all, take a look at `force-ssl-redirect` in [ConfigMap][ConfigMap].

## SSL Passthrough

The [`--enable-ssl-passthrough`](cli-arguments.md) flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.

!!! warning
    This feature is implemented by intercepting **all traffic** on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

SSL Passthrough leverages [SNI][SNI] and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.

If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.

!!! note
    Unlike HTTP backends, traffic to Passthrough backends is sent to the *clusterIP* of the backing Service instead of individual Endpoints.

## HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

HSTS is enabled by default. To disable this behavior use `hsts: "false"` in the configuration [ConfigMap][ConfigMap].
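Once the controller runs with `--enable-ssl-passthrough`, individual Ingresses opt in via the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation. The following is only a sketch; the host, Ingress, and Service names are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: passthrough-demo        # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: secure.example.com  # routed by SNI, not by HTTP Host header
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: secure-app   # this backend must terminate TLS itself
                port:
                  number: 443
```

Because the TLS stream is piped through unmodified, path-based routing and other HTTP-level annotations have no effect for such a backend; only the SNI hostname is used for routing.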
## Server-side HTTPS enforcement through redirect

By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

This can be disabled globally using `ssl-redirect: "false"` in the NGINX [config map][ConfigMap], or per-Ingress with the `nginx.ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/tls.md
main
ingress-nginx
!!! tip
    When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the `nginx.ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.

## Automated Certificate Management with cert-manager

[cert-manager] automatically requests missing or expired certificates from a range of [supported issuers][cert-manager-issuer-config] (including [Let's Encrypt]) by monitoring ingress resources.

To set up cert-manager you should take a look at this [full example][full-cert-manager-example].

To enable it for an ingress resource you have to deploy cert-manager, configure a certificate issuer, and update the manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo
  annotations:
    cert-manager.io/issuer: "letsencrypt-staging" # Replace this with a production issuer once you've tested it
    [..]
spec:
  tls:
    - hosts:
        - ingress-demo.example.com
      secretName: ingress-demo-tls
[...]
```

## Default TLS Version and Ciphers

To provide the most secure baseline configuration possible, ingress-nginx defaults to using TLS 1.2 and 1.3 only, with a [secure set of TLS ciphers][ssl-ciphers].

### Legacy TLS

The default configuration, though secure, does not support some older browsers and operating systems. For instance, TLS 1.1+ is only enabled by default from Android 5.0 on.
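The tip above can be sketched as a manifest. This is a hypothetical example; the Ingress, host, and Service names are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elb-offload-demo   # hypothetical name
  annotations:
    # Force the HTTPS redirect even though no tls: block is present;
    # TLS is terminated upstream (e.g. at an AWS ELB).
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```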
At the time of writing (May 2018), [approximately 15% of Android devices](https://developer.android.com/about/dashboards/#Platform) are not compatible with ingress-nginx's default configuration.

To change this default behavior, use a [ConfigMap][ConfigMap]. A sample ConfigMap fragment to allow these older clients to connect could look something like the following (generated using the [Mozilla SSL Configuration Generator][mozilla-ssl-config-old]):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
  ssl-protocols: "TLSv1.2 TLSv1.3"
```

[Let's Encrypt]: https://letsencrypt.org
[ConfigMap]: ./nginx-configuration/configmap.md
[ssl-ciphers]: ./nginx-configuration/configmap.md#ssl-ciphers
[SNI]: https://en.wikipedia.org/wiki/Server_Name_Indication
[mozilla-ssl-config-old]: https://ssl-config.mozilla.org/#server=nginx&config=old
[cert-manager]: https://github.com/jetstack/cert-manager/
[full-cert-manager-example]: https://cert-manager.io/docs/tutorials/acme/nginx-ingress/
[cert-manager-issuer-config]: https://cert-manager.io/docs/configuration/
# FAQ - Migration to Kubernetes 1.22 and apiVersion `networking.k8s.io/v1`

If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetes v1.22, this page is relevant to you.

- Please read this [official blog on deprecated Ingress API versions](https://kubernetes.io/blog/2021/07/26/update-with-ingress-nginx/)
- Please read this [official documentation on the IngressClass object](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class)

## What is an IngressClass and why is it important for users of ingress-nginx controller now?

IngressClass is a Kubernetes resource. See the description below. It's important because until now, a default install of the ingress-nginx controller did not require an IngressClass object. From version 1.0.0 of the ingress-nginx controller, an IngressClass object is required.

On clusters with more than one instance of the ingress-nginx controller, all instances of the controllers must be aware of which Ingress objects they serve. The `ingressClassName` field of an Ingress is the way to let the controller know about that.

```console
kubectl explain ingressclass
```

```
KIND:     IngressClass
VERSION:  networking.k8s.io/v1

DESCRIPTION:
     IngressClass represents the class of the Ingress, referenced by the
     Ingress Spec. The `ingressclass.kubernetes.io/is-default-class` annotation
     can be used to indicate that an IngressClass should be considered default.
     When a single IngressClass resource has this annotation set to true, new
     Ingress resources without a class specified will be assigned this default
     class.

FIELDS:
   apiVersion
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec
     Spec is the desired state of the IngressClass. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
```

## What has caused this change in behavior?

There are 2 primary reasons.

### Reason 1

Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:

- `extensions/v1beta1`
- `networking.k8s.io/v1beta1`

You would get a message about deprecation, but the Ingress resource would get created.

From K8s version 1.22 onwards, you can **only** access the Ingress API via the stable, `networking.k8s.io/v1` API. The reason is explained in the [official blog on deprecated ingress API versions](https://kubernetes.io/blog/2021/07/26/update-with-ingress-nginx/).

### Reason 2

If you are already using the ingress-nginx controller and then upgrade to Kubernetes 1.22, there are several scenarios where your existing Ingress objects will not work how you expect. Read this FAQ to check which scenario matches your use case.

## What is the `ingressClassName` field?

`ingressClassName` is a field in the spec of an Ingress object.

```shell
kubectl explain ingress.spec.ingressClassName
```

```console
KIND:     Ingress
VERSION:  networking.k8s.io/v1

FIELD:    ingressClassName

DESCRIPTION:
     IngressClassName is the name of the IngressClass cluster resource. The
     associated IngressClass defines which controller will implement the
     resource. This replaces the deprecated `kubernetes.io/ingress.class`
     annotation. For backwards compatibility, when that annotation is set, it
     must be given precedence over this field. The controller may emit a
     warning if the field and annotation have different values. Implementations
     of this API should ignore Ingresses without a class specified. An
     IngressClass resource may be marked as default, which can be used to set a
     default value for this field. For more information, refer to the
     IngressClass documentation.
```

The `.spec.ingressClassName` behavior has precedence over the deprecated `kubernetes.io/ingress.class` annotation.

## I have only one ingress controller in my cluster. What should I do?

If a single instance of the ingress-nginx controller is the sole Ingress controller running in your cluster, you should add the annotation `ingressclass.kubernetes.io/is-default-class` in your IngressClass, so any new Ingress objects will have this one as default IngressClass.
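As a minimal sketch of using the field instead of the annotation (the Ingress, host, and Service names below are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress          # hypothetical name
spec:
  ingressClassName: nginx     # replaces the deprecated kubernetes.io/ingress.class annotation
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```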
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/k8s-122-migration.md
When using Helm, you can enable this annotation by setting `.controller.ingressClassResource.default: true` in your Helm chart installation's values file.

If you have any old Ingress objects remaining without an IngressClass set, you can do one or more of the following to make the ingress-nginx controller aware of the old objects:

- You can manually set the [`.spec.ingressClassName`](https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec) field in the manifest of your own Ingress resources.
- You can re-create them after setting the `ingressclass.kubernetes.io/is-default-class` annotation to `true` on the IngressClass.
- Alternatively you can make the ingress-nginx controller watch Ingress objects without the ingressClassName field set by starting your ingress-nginx with the flag [--watch-ingress-without-class=true](#what-is-the-flag-watch-ingress-without-class). When using Helm, you can configure your Helm chart installation's values file with `.controller.watchIngressWithoutClass: true`.

We recommend that you create the IngressClass as shown below:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```

and add the value `spec.ingressClassName=nginx` in your Ingress objects.

## I have many ingress objects in my cluster. What should I do?
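Pulling the two Helm values mentioned above together, a values-file fragment might look like the following. This is only a sketch of the keys named in the text, not a complete values file:

```yaml
# values.yaml fragment for the ingress-nginx Helm chart
controller:
  ingressClassResource:
    default: true                  # marks the created IngressClass as the cluster default
  watchIngressWithoutClass: true   # also reconcile Ingresses that have no class set
```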
If you have a lot of ingress objects without ingressClass configuration, you can run the ingress controller with the flag `--watch-ingress-without-class=true`.

### What is the flag `--watch-ingress-without-class`?

It's a flag that is passed, as an argument, to the `nginx-ingress-controller` executable. In the configuration, it looks like this:

```yaml
# ...
args:
  - /nginx-ingress-controller
  - --watch-ingress-without-class=true
  - --controller-class=k8s.io/ingress-nginx
# ...
```

## I have more than one controller in my cluster, and I'm already using the annotation

No problem. This should still keep working, but we highly recommend you to test! Even though `kubernetes.io/ingress.class` is deprecated, the ingress-nginx controller still understands that annotation. If you want to follow good practice, you should consider migrating to use IngressClass and `.spec.ingressClassName`.

## I have more than one controller running in my cluster, and I want to use the new API

In this scenario, you need to create multiple IngressClasses (see the example above).

Be aware that IngressClass works in a very specific way: you will need to change the `.spec.controller` value in your IngressClass and configure the controller to expect the exact same value.

Let's see an example, supposing that you have three IngressClasses:

- IngressClass `ingress-nginx-one`, with `.spec.controller` equal to `example.com/ingress-nginx1`
- IngressClass `ingress-nginx-two`, with `.spec.controller` equal to `example.com/ingress-nginx2`
- IngressClass `ingress-nginx-three`, with `.spec.controller` equal to `example.com/ingress-nginx1`

For private use, you can also use a controller name that doesn't contain a `/`, e.g. `ingress-nginx1`.
When deploying your ingress controllers, you will have to change the `--controller-class` field as follows:

- Ingress-Nginx A, configured to use controller class name `example.com/ingress-nginx1`
- Ingress-Nginx B, configured to use controller class name `example.com/ingress-nginx2`

When you create an Ingress object with its `ingressClassName` set to `ingress-nginx-two`, only controllers looking for the `example.com/ingress-nginx2` controller class pay attention to the new object. Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.

Bear in mind that if you start Ingress-Nginx B with the command line argument `--watch-ingress-without-class=true`, it will serve:

1. Ingresses without any `ingressClassName` set
2. Ingresses where the deprecated annotation (`kubernetes.io/ingress.class`) matches the value set in the command line argument `--ingress-class`
3. Ingresses that refer to any IngressClass that has the same `spec.controller` as configured in `--controller-class`
Running Ingress-Nginx B with `--watch-ingress-without-class=true` while Ingress-Nginx A runs with `--watch-ingress-without-class=false` is a supported configuration. If you have two ingress-nginx controllers for the same cluster, both running with `--watch-ingress-without-class=true`, then there is likely to be a conflict.

## Why am I seeing "ingress class annotation is not equal to the expected by Ingress Controller" in my controller logs?

It is highly likely that you will also see the name of the ingress resource in the same error message. This error message has been observed when the deprecated annotation (`kubernetes.io/ingress.class`) is used in an Ingress resource manifest. It is recommended to use the `.spec.ingressClassName` field of the Ingress resource, to specify the name of the IngressClass of the Ingress you are defining.
# Ingress Path Matching

## Regular Expression Support

!!! important
    Regular expressions are not supported in the `spec.rules.host` field. The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == "*").

!!! note
    Please see the [FAQ](../faq.md#validation-of-path) for validation of __`path`__.

The ingress controller supports **case insensitive** regular expressions in the `spec.rules.http.paths.path` field. This can be enabled by setting the `nginx.ingress.kubernetes.io/use-regex` annotation to `true` (the default is `false`).

See the [description](./nginx-configuration/annotations.md#use-regex) of the `use-regex` annotation for more details.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: test.com
      http:
        paths:
          - path: /foo/.*
            pathType: ImplementationSpecific
            backend:
              service:
                name: test
                port:
                  number: 80
```

The preceding ingress definition would translate to the following location block within the NGINX configuration for the `test.com` server:

```txt
location ~* "^/foo/.*" {
  ...
}
```

## Path Priority

In NGINX, regular expressions follow a **first match** policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.
**Please read the [warning](#warning) before using regular expressions in your ingress definitions.**

### Example

Let the following two ingress definitions be created:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-1
spec:
  ingressClassName: nginx
  rules:
    - host: test.com
      http:
        paths:
          - path: /foo/bar
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
          - path: /foo/bar/
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 80
```

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
    - host: test.com
      http:
        paths:
          - path: /foo/bar/(.+)
            pathType: ImplementationSpecific
            backend:
              service:
                name: service3
                port:
                  number: 80
```

The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the `test.com` server:

```txt
location ~* ^/foo/bar/.+ {
  ...
}

location ~* "^/foo/bar/" {
  ...
}

location ~* "^/foo/bar" {
  ...
}
```

The following request URIs would match the corresponding location blocks:

- `test.com/foo/bar/1` matches `~* ^/foo/bar/.+` and will go to service 3.
- `test.com/foo/bar/` matches `~* ^/foo/bar/` and will go to service 2.
- `test.com/foo/bar` matches `~* ^/foo/bar` and will go to service 1.

**IMPORTANT NOTES**:

- If the `use-regex` OR `rewrite-target` annotation is used on any Ingress for a given host, then the case insensitive regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

## Warning

The following example describes a case that may inflict unwanted path matching behavior.
This case is expected and is a result of NGINX's first match policy for paths that use the regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location). For more information about how a path is chosen, please read the following article: ["Understanding Nginx Server and Location Block Selection Algorithms"](https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms).

### Example

Let the following ingress be defined:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-3
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: test.com
      http:
        paths:
          - path: /foo/bar/bar
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
          - path: /foo/bar/[A-Z0-9]{3}
            pathType: ImplementationSpecific
            backend:
              service:
                name: test
                port:
                  number: 80
```

The ingress controller would define the following location blocks (in this order) within the NGINX template for the `test.com` server:

```txt
location ~* "^/foo/bar/[A-Z0-9]{3}" {
  ...
}

location ~* "^/foo/bar/bar" {
  ...
}
```

A request to `test.com/foo/bar/bar` would match the `^/foo/bar/[A-Z0-9]{3}` location block instead of the longest EXACT matching path.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/ingress-path-matching.md
# Exposing TCP and UDP services

While the Kubernetes Ingress resource only officially supports routing external HTTP(S) traffic to services, ingress-nginx can be configured to receive external TCP/UDP traffic from non-HTTP protocols and route them to internal services using TCP/UDP port mappings that are specified within a ConfigMap.

To support this, the `--tcp-services-configmap` and `--udp-services-configmap` flags can be used to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format:

`<namespace/service name>:<service port>:[PROXY]:[PROXY]`

It is also possible to use a number or the name of the port. The last two fields are optional. Adding `PROXY` in either or both of the last two fields enables [Proxy Protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol) decoding (listen) and/or encoding (proxy_pass) in a TCP service. The first `PROXY` controls the decoding of the proxy protocol and the second `PROXY` controls the encoding using proxy protocol. This allows an incoming connection to be decoded or an outgoing connection to be encoded. It is also possible to arbitrate between two different proxies by turning on both decoding and encoding on a TCP service.

The next example shows how to expose the service `example-go` running in the namespace `default` on the port `8080` using the port `9000`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"
```

Since 1.9.13 NGINX provides [UDP Load Balancing](https://www.nginx.com/blog/announcing-udp-load-balancing/). The next example shows how to expose the service `kube-dns` running in the namespace `kube-system` on the port `53` using the port `53`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
```

If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```

Then, the configmap should be added into the ingress controller's deployment args:

```
args:
  - /nginx-ingress-controller
  - --tcp-services-configmap=ingress-nginx/tcp-services
```
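The UDP ConfigMap is wired up the same way. As a sketch, assuming the `udp-services` ConfigMap from the earlier example, the args would carry both flags:

```
args:
  - /nginx-ingress-controller
  - --tcp-services-configmap=ingress-nginx/tcp-services
  - --udp-services-configmap=ingress-nginx/udp-services
```

For the UDP case, remember that the corresponding Service port (e.g. `53`) would need `protocol: UDP` rather than `TCP` in the LoadBalancer Service shown above.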
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/exposing-tcp-udp-services.md
# Exposing FastCGI Servers

> **FastCGI** is a [binary protocol](https://en.wikipedia.org/wiki/Binary_protocol "Binary protocol") for interfacing interactive programs with a [web server](https://en.wikipedia.org/wiki/Web_server "Web server"). [...] (Its) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.
>
> — Wikipedia

The _ingress-nginx_ ingress controller can be used to directly expose [FastCGI](https://en.wikipedia.org/wiki/FastCGI) servers. Enabling FastCGI in your Ingress only requires setting the _backend-protocol_ annotation to `FCGI`, and with a couple more annotations you can customize the way _ingress-nginx_ handles the communication with your FastCGI _server_.

For most practical use-cases, PHP applications are a good example. PHP is not HTML, so a FastCGI server like php-fpm processes an index.php script for the response to a request. See a working example below.

This [post in a FastCGI feature issue](https://github.com/kubernetes/ingress-nginx/issues/8207#issuecomment-2161405468) describes a test for the FastCGI feature. The same test is described below.

## Example Objects to expose a FastCGI server pod

### The FastCGI server pod

The _Pod_ object example below exposes port `9000`, which is the conventional FastCGI port.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  containers:
    - name: example-app
      image: php:fpm-alpine
      ports:
        - containerPort: 9000
          name: fastcgi
```

- For this example to work, an HTML response should be received from the FastCGI server being exposed
- An HTTP request should be sent to the FastCGI server pod
- The response should be generated by a PHP script, as that is what we are demonstrating here

The image we are using here, `php:fpm-alpine`, does not ship with a ready-to-use PHP script inside it.
So we need to provide the image with a simple PHP script for this example to work.

- Use `kubectl exec` to get into the example-app pod
- You will land at the path `/var/www/html`
- Create a simple PHP script there, at the path `/var/www/html/index.php`
- Make the index.php file look like this:

```php
<!DOCTYPE html>
<html>
<head>
  <title>PHP Test</title>
</head>
<body>
  <?php echo '<p>FastCGI Test Worked!</p>'; ?>
</body>
</html>
```

- Save and exit from the shell in the pod
- If you delete the pod, then you will have to recreate the file, as this method is not persistent

### The FastCGI service

The _Service_ object example below matches port `9000` from the _Pod_ object above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example-app
  ports:
    - port: 9000
      targetPort: 9000
      name: fastcgi
```

### The ConfigMap object and the Ingress object

The _Ingress_ and _ConfigMap_ objects below demonstrate the supported _FastCGI_ specific annotations.

!!! Important
    NGINX actually has 50 [FastCGI directives](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#directives). Not all of the NGINX directives have been exposed in the ingress yet.

### The ConfigMap object

This ConfigMap object is required to set the parameters of [FastCGI directives](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#directives).

!!! Attention
    - The _ConfigMap_ **must** be created before creating the Ingress object
    - The _Ingress Controller_ needs to find the ConfigMap when the _Ingress_ object with the FastCGI annotations is created, so create the ConfigMap before the Ingress
    - If the ConfigMap is created after the Ingress is created, then you will need to restart the _Ingress Controller_ pods

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-cm
data:
  SCRIPT_FILENAME: "/var/www/html/index.php"
```

### The Ingress object

- Do not create the ingress shown below until you have created the ConfigMap seen above.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/fcgi-services.md
- You can see that this ingress matches the service `example-service`, and the port named `fastcgi` from above.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
    nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
    nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
  name: example-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  name: fastcgi
```

## Send a request to the exposed FastCGI server

You will have to look at the external IP of the ingress, or you can send the HTTP request to the ClusterIP address of the ingress-nginx controller pod.

```
% curl 172.19.0.2 -H "Host: app.example.com" -vik
*   Trying 172.19.0.2:80...
* Connected to 172.19.0.2 (172.19.0.2) port 80
> GET / HTTP/1.1
> Host: app.example.com
> User-Agent: curl/8.6.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 12 Jun 2024 07:11:59 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: PHP/8.3.8
<
PHP Test
FastCGI Test Worked
```

## FastCGI Ingress Annotations

To enable FastCGI, the `nginx.ingress.kubernetes.io/backend-protocol` annotation needs to be set to `FCGI`, which overrides the default `HTTP` value.

> `nginx.ingress.kubernetes.io/backend-protocol: "FCGI"`

**This enables the _FastCGI_ mode for all paths defined in the _Ingress_ object**

### The `nginx.ingress.kubernetes.io/fastcgi-index` Annotation

To specify an index file, the `fastcgi-index` annotation value can optionally be set. In the example below, the value is set to `index.php`.
This annotation corresponds to [the _NGINX_ `fastcgi_index` directive](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_index).

> `nginx.ingress.kubernetes.io/fastcgi-index: "index.php"`

### The `nginx.ingress.kubernetes.io/fastcgi-params-configmap` Annotation

To specify [_NGINX_ `fastcgi_param` directives](https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_param), the `fastcgi-params-configmap` annotation is used; it must reference a _ConfigMap_ object containing the _NGINX_ `fastcgi_param` directives as key/value pairs.

> `nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-configmap"`

The _ConfigMap_ object specifying the `SCRIPT_FILENAME` and `HTTP_PROXY` `fastcgi_param` directives would look like the following:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  SCRIPT_FILENAME: "/example/index.php"
  HTTP_PROXY: ""
```

Using the _namespace/_ prefix is also supported, for example:

> `nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-namespace/example-configmap"`
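The same ConfigMap can also be created imperatively; a minimal sketch (the name `example-configmap` and the parameter values follow the example above, and the command assumes access to a running cluster):

```shell
# Create the ConfigMap holding the fastcgi_param key/value pairs
kubectl create configmap example-configmap \
  --from-literal=SCRIPT_FILENAME=/example/index.php \
  --from-literal=HTTP_PROXY=""
```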
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/fcgi-services.md
main
ingress-nginx
# Multiple Ingress controllers

By default, deploying multiple Ingress controllers (e.g., `ingress-nginx` & `gce`) will result in all controllers simultaneously racing to update Ingress status fields in confusing ways. To fix this problem, use [IngressClasses](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class). The `kubernetes.io/ingress.class` annotation is not recommended, as it may be deprecated in the future; prefer the field `ingress.spec.ingressClassName`. Note, however, that when the controller is deployed with `scope.enabled`, the IngressClass resource field is not used.

## Using IngressClasses

If all ingress controllers respect IngressClasses (e.g. multiple instances of ingress-nginx v1.0), you can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with `ingressClassName`. First, ensure that `--controller-class=` and `--ingress-class` are set to something different on each ingress controller. If your additional ingress controller is to be installed in a namespace where one or more ingress-nginx controllers are already installed, you need to specify a different, unique `--election-id` for the new instance of the controller.

```yaml
# ingress-nginx Deployment/Statefulset
spec:
  template:
    spec:
      containers:
        - name: ingress-nginx-internal-controller
          args:
            - /nginx-ingress-controller
            - '--election-id=ingress-controller-leader'
            - '--controller-class=k8s.io/internal-ingress-nginx'
            - '--ingress-class=k8s.io/internal-nginx'
          ...
```

Then use the same value in the IngressClass:

```yaml
# ingress-nginx IngressClass
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal-nginx
spec:
  controller: k8s.io/internal-ingress-nginx
  ...
```

And refer to that IngressClass in your Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: internal-nginx
  ...
```

or if installing with Helm:

```yaml
controller:
  electionID: ingress-controller-leader
  ingressClass: internal-nginx  # default: nginx
  ingressClassResource:
    name: internal-nginx  # default: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/internal-ingress-nginx"  # default: k8s.io/ingress-nginx
```

!!! important
    When running multiple ingress-nginx controllers, it will only process an unset class annotation if one of the controllers uses the default `--controller-class` value (see `IsValid` method in `internal/ingress/annotations/class/main.go`), otherwise the class annotation becomes required. If `--controller-class` is set to the default value of `k8s.io/ingress-nginx`, the controller will monitor Ingresses with no class annotation *and* Ingresses with annotation class set to `nginx`. Use a non-default value for `--controller-class` to ensure that the controller only satisfies the specific class of Ingresses.

## Using the kubernetes.io/ingress.class annotation (in deprecation)

If you're running multiple ingress controllers where one or more do not support IngressClasses, you must specify the annotation `kubernetes.io/ingress.class: "nginx"` in all ingresses that you would like ingress-nginx to claim. For instance,

```yaml
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"
```

will target the GCE controller, forcing the Ingress-NGINX controller to ignore it, while an annotation like:

```yaml
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
```

will target the Ingress-NGINX controller, forcing the GCE controller to ignore it. You can change the value "nginx" to something else by setting the `--ingress-class` flag:

```yaml
spec:
  template:
    spec:
      containers:
        - name: ingress-nginx-internal-controller
          args:
            - /nginx-ingress-controller
            - --ingress-class=internal-nginx
```

then setting the corresponding `kubernetes.io/ingress.class: "internal-nginx"` annotation on your Ingresses.
To reiterate, setting the annotation to any value which does not match a valid ingress class will force the Ingress-Nginx Controller to ignore your Ingress. If you are only running a single Ingress-Nginx Controller, this can be achieved by setting the annotation to any value except "nginx" or an empty string. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
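To confirm which controller claims which class, you can inspect the IngressClass resources in the cluster; a quick sketch (the class name `internal-nginx` follows the examples above, and the commands assume access to a running cluster):

```shell
# List all IngressClasses and the controller each one belongs to
kubectl get ingressclass
# Show only the controller value of one class
kubectl get ingressclass internal-nginx -o jsonpath='{.spec.controller}'
```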
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/multiple-ingress.md
main
ingress-nginx
# External Articles - [Pain(less) NGINX Ingress](http://danielfm.me/posts/painless-nginx-ingress.html) - [Accessing Kubernetes Pods from Outside of the Cluster](http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster) - [Kubernetes - Redirect HTTP to HTTPS with ELB and the Ingress-Nginx Controller](https://dev.to/tomhoule/kubernetes---redirect-http-to-https-with-elb-and-the-nginx-ingress-controller) - [Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure](https://blogs.technet.microsoft.com/livedevopsinjapan/2017/02/28/configure-nginx-ingress-controller-for-tls-termination-on-kubernetes-on-azure-2/) - [Secure your Nginx Ingress controller behind Google Cloud Armor or Identity-Aware Proxy (IAP)](https://medium.com/google-cloud/secure-your-nginx-ingress-controller-behind-cloud-armor-805d6109af86?sk=f64029bb5624b4ad5cd2828f4c358af3)
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/external-articles.md
main
ingress-nginx
# Custom errors

When the [`custom-http-errors`][cm-custom-http-errors] option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its `default-backend` in case of error:

| Header           | Value                                                               |
| ---------------- | ------------------------------------------------------------------- |
| `X-Code`         | HTTP status code returned by the request                            |
| `X-Format`       | Value of the `Accept` header sent by the client                     |
| `X-Original-URI` | URI that caused the error                                           |
| `X-Namespace`    | Namespace where the backend Service is located                      |
| `X-Ingress-Name` | Name of the Ingress where the backend is defined                    |
| `X-Service-Name` | Name of the Service backing the backend                             |
| `X-Service-Port` | Port number of the Service backing the backend                      |
| `X-Request-ID`   | Unique ID that identifies the request - same as for backend service |

A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the `Accept` header sent by the client was `application/json`, a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML.

!!! Important
    The custom backend is expected to return the correct HTTP status code instead of `200`. NGINX does not change the response from the custom default backend.

An example of such a custom backend is available inside the source repository at [images/custom-error-pages][img-custom-error-pages]. See also the [Custom errors][example-custom-errors] example.

[cm-custom-http-errors]: ./nginx-configuration/configmap.md#custom-http-errors
[img-custom-error-pages]: https://github.com/kubernetes/ingress-nginx/tree/main/images/custom-error-pages
[example-custom-errors]: ../examples/customization/custom-errors/README.md
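For reference, `custom-http-errors` is set in the controller ConfigMap; a minimal sketch, assuming the controller reads a ConfigMap named `ingress-nginx-controller` in the `ingress-nginx` namespace (adjust both to your installation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Status codes for which NGINX forwards the request to the default-backend,
  # passing along the X-Code, X-Format, etc. headers described above
  custom-http-errors: "404,500,502,503"
```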
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/custom-errors.md
main
ingress-nginx
# Monitoring

Two different methods to install and configure Prometheus and Grafana are described in this doc:

* Prometheus and Grafana installation using Pod Annotations. This installs Prometheus and Grafana in the same namespace as NGINX Ingress.
* Prometheus and Grafana installation using Service Monitors. This installs Prometheus and Grafana in two different namespaces. This is the preferred method, and the Helm chart supports it by default.

## Prometheus and Grafana installation using Pod Annotations

This tutorial will show you how to install [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/) for scraping the metrics of the Ingress-Nginx Controller.

!!! important
    This example uses `emptyDir` volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.

### Before You Begin

- The Ingress-Nginx Controller should already be deployed according to the deployment instructions [here](../deploy/index.md).
- The controller should be configured for exporting metrics. This requires 3 configurations to the controller:
  1. `controller.metrics.enabled=true`
  2. `controller.podAnnotations."prometheus.io/scrape"="true"`
  3. `controller.podAnnotations."prometheus.io/port"="10254"`
- The easiest way to configure the controller for metrics is via helm upgrade.
Assuming you have installed the ingress-nginx controller as a Helm release named `ingress-nginx`, you can simply run the command shown below:

```
helm upgrade ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.metrics.enabled=true \
  --set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
  --set-string controller.podAnnotations."prometheus\.io/port"="10254"
```

- You can validate that the controller is configured for metrics by looking at the values of the installed release, like this:

```
helm get values ingress-nginx --namespace ingress-nginx
```

- You should be able to see the values shown below:

```
..
controller:
  metrics:
    enabled: true
  podAnnotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
..
```

- If you are **not using Helm**, you will have to edit your manifests like this:
  - Service manifest:

    ```
    apiVersion: v1
    kind: Service
    ..
    spec:
      ports:
        - name: prometheus
          port: 10254
          targetPort: prometheus
    ..
    ```

  - Deployment manifest:

    ```
    apiVersion: apps/v1
    kind: Deployment
    ..
    spec:
      template:
        metadata:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"
        spec:
          containers:
            - name: controller
              args:
                ..
                - '--enable-metrics=true'
              ports:
                - name: prometheus
                  containerPort: 10254
    ..
    ```

### Deploy and configure Prometheus Server

Note that the kustomize bases used in this tutorial are stored in the [deploy](https://github.com/kubernetes/ingress-nginx/tree/main/deploy) folder of the GitHub repository [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx).

- The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.
- If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.
- Running the following command deploys Prometheus in Kubernetes:

```
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
```

#### Prometheus Dashboard

- Open the Prometheus dashboard in a web browser:

```console
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.103.59.201 80/TCP 3d
ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h
prometheus-server NodePort 10.98.233.86 9090:32630/TCP 1m
```

- Obtain the IP address of the nodes in the running cluster:

```console
kubectl get nodes -o wide
```

- In some cases where the nodes only have internal IP addresses, we need to execute:

```
kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\(@.type==\"InternalIP\"\)].address}
10.192.0.2 10.192.0.3 10.192.0.4
```

- Open your browser and visit the following URL: _http://{node IP address}:{prometheus-svc-nodeport}_ to load the Prometheus Dashboard.
- According to the above example, this URL will be http://10.192.0.3:32630

![Prometheus Dashboard](../images/prometheus-dashboard.png)
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/monitoring.md
main
ingress-nginx
#### Grafana

- Install Grafana using the command below:

```
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
```

- Look at the services:

```
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.103.59.201 80/TCP 3d
ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h
prometheus-server NodePort 10.98.233.86 9090:32630/TCP 10m
grafana NodePort 10.98.233.87 3000:31086/TCP 10m
```

- Open your browser and visit the following URL: _http://{node IP address}:{grafana-svc-nodeport}_ to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086. The default username and password are `admin`.
- After logging in you can import the Grafana dashboard from [official dashboards](https://github.com/kubernetes/ingress-nginx/tree/main/deploy/grafana/dashboards) by following the steps given below:
  - Navigate to the left-hand panel of Grafana
  - Hover on the gearwheel icon for Configuration and click "Data Sources"
  - Click "Add data source"
  - Select "Prometheus"
  - Enter the details (note: I used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
  - Left menu (hover over +) -> Dashboard
  - Click "Import"
  - Enter the JSON copied from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
  - Click Import JSON
  - Select the Prometheus data source
  - Click "Import"

![Grafana Dashboard](../images/grafana.png)

### Caveats

#### Wildcard ingresses

- By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality).
To get metrics in this case you have two options:

- Run the ingress controller with `--metrics-per-host=false`. You will lose labeling by hostname, but still have labeling by ingress.
- Run the ingress controller with `--metrics-per-undefined-host=true --metrics-per-host=true`. You will get labeling by hostname even if the hostname is not explicitly defined on an ingress. Be warned that cardinality could explode due to many hostnames, and CPU usage could also increase.

### Grafana dashboard using ingress resource

- If you want to expose the Grafana dashboard using an ingress resource, then you can:
  - change the service type of the prometheus-server service and the grafana service to "ClusterIP" like this:

    ```
    kubectl -n ingress-nginx edit svc grafana
    ```

  - This will open the currently deployed service grafana in the default editor configured in your shell (vi/nvim/nano/other)
  - scroll down to the line that looks like "type: NodePort"
  - change it to look like "type: ClusterIP". Save and exit.
  - create an ingress resource with backend as "grafana" and port as "3000"
- Similarly, you can edit the service "prometheus-server" and add an ingress resource.

## Prometheus and Grafana installation using Service Monitors

This document assumes you're using Helm and the kube-prometheus-stack package to install Prometheus and Grafana.

### Verify Ingress-Nginx Controller is installed

- The Ingress-Nginx Controller should already be deployed according to the deployment instructions [here](../deploy/index.md).
- To check if the Ingress controller is deployed:

```
kubectl get pods -n ingress-nginx
```

- The result should look something like:

```
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-7c489dc7b7-ccrf6 1/1 Running 0 19h
```

### Verify Prometheus is installed

- To check if Prometheus is already deployed, run the following command:

```
helm ls -A
```

```
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ingress-nginx ingress-nginx 10 2022-01-20 18:08:55.267373 -0800 PST deployed ingress-nginx-4.0.16 1.1.1
prometheus prometheus 1 2022-01-20 16:07:25.086828 -0800 PST deployed kube-prometheus-stack-30.1.0 0.53.1
```

- Notice that Prometheus is installed in a different namespace than ingress-nginx
- If Prometheus is not installed, then you can install it from [here](https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack)
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/monitoring.md
main
ingress-nginx
### Re-configure Ingress-Nginx Controller

- The Ingress NGINX controller needs to be reconfigured for exporting metrics. This requires 3 additional configurations to the controller:

```
controller.metrics.enabled=true
controller.metrics.serviceMonitor.enabled=true
controller.metrics.serviceMonitor.additionalLabels.release="prometheus"
```

- The easiest way of doing this is via helm upgrade:

```
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.metrics.enabled=true \
  --set controller.metrics.serviceMonitor.enabled=true \
  --set controller.metrics.serviceMonitor.additionalLabels.release="prometheus"
```

- Here `controller.metrics.serviceMonitor.additionalLabels.release="prometheus"` should match the name of the Helm release of the `kube-prometheus-stack`
- You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release, like this:

```
helm get values ingress-nginx --namespace ingress-nginx
```

```
controller:
  metrics:
    enabled: true
    serviceMonitor:
      additionalLabels:
        release: prometheus
      enabled: true
```

### Configure Prometheus

- Since Prometheus is running in a different namespace and not in the ingress-nginx namespace, it would not be able to discover ServiceMonitors in other namespaces when installed. Reconfigure your kube-prometheus-stack Helm installation to set the `serviceMonitorSelectorNilUsesHelmValues` flag to false. By default, Prometheus only discovers PodMonitors within its own namespace.
This should be disabled by setting `podMonitorSelectorNilUsesHelmValues` to false.

- The configurations required are:

```
prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false
prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
```

- The easiest way of doing this is to use `helm upgrade ...`:

```
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
  --namespace prometheus \
  --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
```

- You can validate that Prometheus has been reconfigured by looking at the values of the installed release, like this:

```
helm get values prometheus --namespace prometheus
```

- You should be able to see the values shown below:

```
prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
```

### Connect and view Prometheus dashboard

- Port forward to the Prometheus service. Find out the name of the Prometheus service by using the following command:

```
kubectl get svc -n prometheus
```

The result of this command would look like:

```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None 9093/TCP,9094/TCP,9094/UDP 7h46m
prometheus-grafana ClusterIP 10.106.28.162 80/TCP 7h46m
prometheus-kube-prometheus-alertmanager ClusterIP 10.108.125.245 9093/TCP 7h46m
prometheus-kube-prometheus-operator ClusterIP 10.110.220.1 443/TCP 7h46m
prometheus-kube-prometheus-prometheus ClusterIP 10.102.72.134 9090/TCP 7h46m
prometheus-kube-state-metrics ClusterIP 10.104.231.181 8080/TCP 7h46m
prometheus-operated ClusterIP None 9090/TCP 7h46m
prometheus-prometheus-node-exporter ClusterIP 10.96.247.128 9100/TCP 7h46m
```

prometheus-kube-prometheus-prometheus is the service we want to port forward to.
We can do so using the following command:

```
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n prometheus 9090:9090
```

When you run the above command, you should see something like:

```
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
```

- Open your browser and visit http://localhost:{port-forwarded-port}; according to the above example it would be http://localhost:9090

![Prometheus Dashboard](../images/prometheus-dashboard1.png)

### Connect and view Grafana dashboard

- Port forward to the Grafana service. Find out the name of the Grafana service by using the following command:

```
kubectl get svc -n prometheus
```

The result of this command would look like:

```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None 9093/TCP,9094/TCP,9094/UDP 7h46m
prometheus-grafana ClusterIP 10.106.28.162 80/TCP 7h46m
prometheus-kube-prometheus-alertmanager ClusterIP 10.108.125.245 9093/TCP 7h46m
prometheus-kube-prometheus-operator ClusterIP 10.110.220.1 443/TCP 7h46m
prometheus-kube-prometheus-prometheus ClusterIP 10.102.72.134 9090/TCP 7h46m
prometheus-kube-state-metrics ClusterIP 10.104.231.181 8080/TCP 7h46m
prometheus-operated ClusterIP None 9090/TCP 7h46m
prometheus-prometheus-node-exporter ClusterIP 10.96.247.128 9100/TCP 7h46m
```

prometheus-grafana is the service we want to port forward to.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/monitoring.md
main
ingress-nginx
We can do so using the following command:

```
kubectl port-forward svc/prometheus-grafana 3000:80 -n prometheus
```

When you run the above command, you should see something like:

```
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
```

- Open your browser and visit http://localhost:{port-forwarded-port}; according to the above example it would be http://localhost:3000. The default username/password is admin/prom-operator.
- After logging in you can import the Grafana dashboard from [official dashboards](https://github.com/kubernetes/ingress-nginx/tree/main/deploy/grafana/dashboards) by following the steps given below:
  - Navigate to the left-hand panel of Grafana
  - Hover on the gearwheel icon for Configuration and click "Data Sources"
  - Click "Add data source"
  - Select "Prometheus"
  - Enter the details (note: I used http://10.102.72.134:9090, which is the CLUSTER-IP for the Prometheus service)
  - Left menu (hover over +) -> Dashboard
  - Click "Import"
  - Enter the JSON copied from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
  - Click Import JSON
  - Select the Prometheus data source
  - Click "Import"

![Grafana Dashboard](../images/grafana-dashboard1.png)

## Exposed metrics

Prometheus metrics are exposed on port 10254.

### Request metrics

* `nginx_ingress_controller_request_duration_seconds` Histogram\
  The request processing time in seconds (time elapsed between the first bytes being read from the client and the log write after the last bytes were sent to the client), affected by client speed.\
  nginx var: `request_time`
* `nginx_ingress_controller_response_duration_seconds` Histogram\
  The time spent on receiving the response from the upstream server in seconds (affected by client speed when the response is bigger than proxy buffers).\
  Note: can be up to several milliseconds bigger than `nginx_ingress_controller_request_duration_seconds` because of the different measuring method.
  nginx var: `upstream_response_time`
* `nginx_ingress_controller_header_duration_seconds` Histogram\
  The time spent on receiving the first header from the upstream server\
  nginx var: `upstream_header_time`
* `nginx_ingress_controller_connect_duration_seconds` Histogram\
  The time spent on establishing a connection with the upstream server\
  nginx var: `upstream_connect_time`
* `nginx_ingress_controller_response_size` Histogram\
  The response length (including request line, header, and request body)\
  nginx var: `bytes_sent`
* `nginx_ingress_controller_request_size` Histogram\
  The request length (including request line, header, and request body)\
  nginx var: `request_length`
* `nginx_ingress_controller_requests` Counter\
  The total number of client requests
* `nginx_ingress_controller_bytes_sent` Histogram\
  The number of bytes sent to a client. **Deprecated**, use `nginx_ingress_controller_response_size`\
  nginx var: `bytes_sent`

```
# HELP nginx_ingress_controller_bytes_sent The number of bytes sent to a client. DEPRECATED!
Use nginx_ingress_controller_response_size
# TYPE nginx_ingress_controller_bytes_sent histogram
# HELP nginx_ingress_controller_connect_duration_seconds The time spent on establishing a connection with the upstream server
# TYPE nginx_ingress_controller_connect_duration_seconds histogram
# HELP nginx_ingress_controller_header_duration_seconds The time spent on receiving first header from the upstream server
# TYPE nginx_ingress_controller_header_duration_seconds histogram
# HELP nginx_ingress_controller_request_duration_seconds The request processing time in milliseconds
# TYPE nginx_ingress_controller_request_duration_seconds histogram
# HELP nginx_ingress_controller_request_size The request length (including request line, header, and request body)
# TYPE nginx_ingress_controller_request_size histogram
# HELP nginx_ingress_controller_requests The total number of client requests.
# TYPE nginx_ingress_controller_requests counter
# HELP nginx_ingress_controller_response_duration_seconds The time spent on receiving the response from the upstream server
# TYPE nginx_ingress_controller_response_duration_seconds histogram
# HELP nginx_ingress_controller_response_size The response length (including request line, header, and request body)
# TYPE nginx_ingress_controller_response_size histogram
```

### Nginx process metrics

```
# HELP nginx_ingress_controller_nginx_process_connections current number of client connections with state {active, reading, writing, waiting}
# TYPE nginx_ingress_controller_nginx_process_connections gauge
# HELP nginx_ingress_controller_nginx_process_connections_total total number of connections with state {accepted, handled}
# TYPE nginx_ingress_controller_nginx_process_connections_total counter
# HELP nginx_ingress_controller_nginx_process_cpu_seconds_total Cpu usage in seconds
# TYPE
nginx_ingress_controller_nginx_process_cpu_seconds_total counter
# HELP nginx_ingress_controller_nginx_process_num_procs number of processes
# TYPE nginx_ingress_controller_nginx_process_num_procs gauge
# HELP nginx_ingress_controller_nginx_process_oldest_start_time_seconds start time in seconds since 1970/01/01
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/monitoring.md
main
ingress-nginx
### Controller metrics

```
# HELP nginx_ingress_controller_build_info A metric with a constant '1' labeled with information about the build.
# TYPE nginx_ingress_controller_build_info gauge
# HELP nginx_ingress_controller_check_success Cumulative number of Ingress controller syntax check operations
# TYPE nginx_ingress_controller_check_success counter
# HELP nginx_ingress_controller_config_hash Running configuration hash actually running
# TYPE nginx_ingress_controller_config_hash gauge
# HELP nginx_ingress_controller_config_last_reload_successful Whether the last configuration reload attempt was successful
# TYPE nginx_ingress_controller_config_last_reload_successful gauge
# HELP nginx_ingress_controller_config_last_reload_successful_timestamp_seconds Timestamp of the last successful configuration reload.
# TYPE nginx_ingress_controller_config_last_reload_successful_timestamp_seconds gauge
# HELP nginx_ingress_controller_ssl_certificate_info Hold all labels associated to a certificate
# TYPE nginx_ingress_controller_ssl_certificate_info gauge
# HELP nginx_ingress_controller_success Cumulative number of Ingress controller reload operations
# TYPE nginx_ingress_controller_success counter
# HELP nginx_ingress_controller_orphan_ingress Gauge reporting status of ingress orphanity, 1 indicates orphaned ingress. 'namespace' is the string used to identify namespace of ingress, 'ingress' for ingress name and 'type' for 'no-service' or 'no-endpoint' of orphanity
# TYPE nginx_ingress_controller_orphan_ingress gauge
```

### Admission metrics

```
# HELP nginx_ingress_controller_admission_config_size The size of the tested configuration
# TYPE nginx_ingress_controller_admission_config_size gauge
# HELP nginx_ingress_controller_admission_render_duration The processing duration of ingresses rendering by the admission controller (float seconds)
# TYPE nginx_ingress_controller_admission_render_duration gauge
# HELP nginx_ingress_controller_admission_render_ingresses The length of ingresses rendered by the admission controller
# TYPE nginx_ingress_controller_admission_render_ingresses gauge
# HELP nginx_ingress_controller_admission_roundtrip_duration The complete duration of the admission controller at the time to process a new event (float seconds)
# TYPE nginx_ingress_controller_admission_roundtrip_duration gauge
# HELP nginx_ingress_controller_admission_tested_duration The processing duration of the admission controller tests (float seconds)
# TYPE nginx_ingress_controller_admission_tested_duration gauge
# HELP nginx_ingress_controller_admission_tested_ingresses The length of ingresses processed by the admission controller
# TYPE nginx_ingress_controller_admission_tested_ingresses gauge
```
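All of the metric families above are exported in the Prometheus exposition format on the controller's metrics endpoint (port 10254 by default). As a sketch, a Prometheus scrape job for controller pods could look like the following; the job name, namespace, and reliance on the `prometheus.io/scrape` pod annotation are illustrative assumptions, not required configuration:

```yaml
# Illustrative Prometheus scrape job for ingress-nginx controller pods.
# Adjust the namespace and selection criteria to match your cluster.
scrape_configs:
  - job_name: ingress-nginx
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [ingress-nginx]
    relabel_configs:
      # Keep only pods that opted in via the prometheus.io/scrape annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```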
### Histogram buckets

You can configure buckets for histogram metrics using these command line options (here are their default values):

* `--time-buckets=[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`
* `--length-buckets=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]`
* `--size-buckets=[10, 100, 1000, 10000, 100000, 1e+06, 1e+07]`
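These options are passed as arguments to the controller binary. A sketch of overriding them in the controller Deployment follows; the container name `controller` matches the Helm chart convention, and the bucket values and comma-separated slice syntax are illustrative assumptions:

```yaml
# Fragment of the ingress-nginx controller Deployment spec.
# Only the relevant container arguments are shown.
spec:
  containers:
    - name: controller
      args:
        - /nginx-ingress-controller
        # Finer low-latency resolution than the default --time-buckets.
        - --time-buckets=0.001,0.005,0.01,0.05,0.1,0.5,1,5
        - --size-buckets=100,1000,10000,100000,1e+06
```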
# ConfigMaps

ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller.

In order to overwrite nginx-controller configuration values as seen in [config.go](https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/controller/config/config.go), you can add key-value pairs to the data section of the config-map. For example:

```yaml
data:
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2
```

!!! important
    The key and values in a ConfigMap can only be strings. This means that if we want a value with boolean values we need to quote the values, like "true" or "false". The same applies to numbers, like "100". "Slice" types (defined below as `[]string` or `[]int`) can be provided as a comma-delimited string.

## Configuration options

The following table shows a configuration option's name, type, and the default value:

| name | type | default | notes |
|:-----|:-----|:--------|:------|
| [add-headers](#add-headers) | string | "" | |
| [allow-backend-server-header](#allow-backend-server-header) | bool | "false" | |
| [allow-cross-namespace-resources](#allow-cross-namespace-resources) | bool | "false" | |
| [allow-snippet-annotations](#allow-snippet-annotations) | bool | "false" | |
| [annotations-risk-level](#annotations-risk-level) | string | High | |
| [annotation-value-word-blocklist](#annotation-value-word-blocklist) | string array | "" | |
| [hide-headers](#hide-headers) | string array | empty | |
| [access-log-params](#access-log-params) | string | "" | |
| [access-log-path](#access-log-path) | string | "/var/log/nginx/access.log" | |
| [http-access-log-path](#http-access-log-path) | string | "" | |
| [stream-access-log-path](#stream-access-log-path) | string | "" | |
| [enable-access-log-for-default-backend](#enable-access-log-for-default-backend) | bool | "false" | |
| [error-log-path](#error-log-path) | string | "/var/log/nginx/error.log" | |
| [enable-modsecurity](#enable-modsecurity) | bool | "false" | |
| [modsecurity-snippet](#modsecurity-snippet) | string | "" | |
| [enable-owasp-modsecurity-crs](#enable-owasp-modsecurity-crs) | bool | "false" | |
| [client-header-buffer-size](#client-header-buffer-size) | string | "1k" | |
| [client-header-timeout](#client-header-timeout) | int | 60 | |
| [client-body-buffer-size](#client-body-buffer-size) | string | "8k" | |
| [client-body-timeout](#client-body-timeout) | int | 60 | |
| [disable-access-log](#disable-access-log) | bool | "false" | |
| [disable-ipv6](#disable-ipv6) | bool | "false" | |
| [disable-ipv6-dns](#disable-ipv6-dns) | bool | "false" | |
| [enable-underscores-in-headers](#enable-underscores-in-headers) | bool | "false" | |
| [enable-ocsp](#enable-ocsp) | bool | "false" | |
| [ignore-invalid-headers](#ignore-invalid-headers) | bool | "true" | |
| [retry-non-idempotent](#retry-non-idempotent) | bool | "false" | |
| [error-log-level](#error-log-level) | string | "notice" | |
| [http2-max-field-size](#http2-max-field-size) | string | "" | DEPRECATED in favour of [large_client_header_buffers](#large-client-header-buffers) |
| [http2-max-header-size](#http2-max-header-size) | string | "" | DEPRECATED in favour of [large_client_header_buffers](#large-client-header-buffers) |
| [http2-max-requests](#http2-max-requests) | int | 0 | DEPRECATED in favour of [keepalive_requests](#keep-alive-requests) |
| [http2-max-concurrent-streams](#http2-max-concurrent-streams) | int | 128 | |
| [hsts](#hsts) | bool | "true" | |
| [hsts-include-subdomains](#hsts-include-subdomains) | bool | "true" | |
| [hsts-max-age](#hsts-max-age) | string | "31536000" | |
| [hsts-preload](#hsts-preload) | bool | "false" | |
| [keep-alive](#keep-alive) | int | 75 | |
| [keep-alive-requests](#keep-alive-requests) | int | 1000 | |
| [large-client-header-buffers](#large-client-header-buffers) | string | "4 8k" | |
| [log-format-escape-none](#log-format-escape-none) | bool | "false" | |
| [log-format-escape-json](#log-format-escape-json) | bool | "false" | |
| [log-format-upstream](#log-format-upstream) | string | `$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id` | |
| [log-format-stream](#log-format-stream) | string | `[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time` | |
| [enable-multi-accept](#enable-multi-accept) | bool | "true" | |
| [max-worker-connections](#max-worker-connections) | int | 16384 | |
| [max-worker-open-files](#max-worker-open-files) | int | 0 | |
| [map-hash-bucket-size](#map-hash-bucket-size) | int | 64 | |
| [nginx-status-ipv4-whitelist](#nginx-status-ipv4-whitelist) | []string | "127.0.0.1" | |
| [nginx-status-ipv6-whitelist](#nginx-status-ipv6-whitelist) | []string | "::1" | |
| [proxy-real-ip-cidr](#proxy-real-ip-cidr) | []string | "0.0.0.0/0" | |
| [proxy-set-headers](#proxy-set-headers) | string | "" | |
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/nginx-configuration/configmap.md
| [server-name-hash-max-size](#server-name-hash-max-size) | int | 1024 | |
| [server-name-hash-bucket-size](#server-name-hash-bucket-size) | int | `` | |
| [proxy-headers-hash-max-size](#proxy-headers-hash-max-size) | int | 512 | |
| [proxy-headers-hash-bucket-size](#proxy-headers-hash-bucket-size) | int | 64 | |
| [reuse-port](#reuse-port) | bool | "true" | |
| [server-tokens](#server-tokens) | bool | "false" | |
| [ssl-ciphers](#ssl-ciphers) | string | "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256" | |
| [ssl-ecdh-curve](#ssl-ecdh-curve) | string | "auto" | |
| [ssl-dh-param](#ssl-dh-param) | string | "" | |
| [ssl-protocols](#ssl-protocols) | string | "TLSv1.2 TLSv1.3" | |
| [ssl-session-cache](#ssl-session-cache) | bool | "true" | |
| [ssl-session-cache-size](#ssl-session-cache-size) | string | "10m" | |
| [ssl-session-tickets](#ssl-session-tickets) | bool | "false" | |
| [ssl-session-ticket-key](#ssl-session-ticket-key) | string | `` | |
| [ssl-session-timeout](#ssl-session-timeout) | string | "10m" | |
| [ssl-buffer-size](#ssl-buffer-size) | string | "4k" | |
| [use-proxy-protocol](#use-proxy-protocol) | bool | "false" | |
| [proxy-protocol-header-timeout](#proxy-protocol-header-timeout) | string | "5s" | |
| [enable-aio-write](#enable-aio-write) | bool | "true" | |
| [use-gzip](#use-gzip) | bool | "false" | |
| [use-geoip](#use-geoip) | bool | "true" | |
| [use-geoip2](#use-geoip2) | bool | "false" | |
| [geoip2-autoreload-in-minutes](#geoip2-autoreload-in-minutes) | int | "0" | |
| [enable-brotli](#enable-brotli) | bool | "false" | |
| [brotli-level](#brotli-level) | int | 4 | |
| [brotli-min-length](#brotli-min-length) | int | 20 | |
| [brotli-types](#brotli-types) | string | "application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component" | |
| [use-http2](#use-http2) | bool | "true" | |
| [gzip-disable](#gzip-disable) | string | "" | |
| [gzip-level](#gzip-level) | int | 1 | |
| [gzip-min-length](#gzip-min-length) | int | 256 | |
| [gzip-types](#gzip-types) | string | "application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component" | |
| [worker-processes](#worker-processes) | string | `` | |
| [worker-cpu-affinity](#worker-cpu-affinity) | string | "" | |
| [worker-shutdown-timeout](#worker-shutdown-timeout) | string | "240s" | |
| [enable-serial-reloads](#enable-serial-reloads) | bool | "false" | |
| [load-balance](#load-balance) | string | "round_robin" | |
| [variables-hash-bucket-size](#variables-hash-bucket-size) | int | 256 | |
| [variables-hash-max-size](#variables-hash-max-size) | int | 2048 | |
| [upstream-keepalive-connections](#upstream-keepalive-connections) | int | 320 | |
| [upstream-keepalive-time](#upstream-keepalive-time) | string | "1h" | |
| [upstream-keepalive-timeout](#upstream-keepalive-timeout) | int | 60 | |
| [upstream-keepalive-requests](#upstream-keepalive-requests) | int | 10000 | |
| [limit-conn-zone-variable](#limit-conn-zone-variable) | string | "$binary_remote_addr" | |
| [proxy-stream-timeout](#proxy-stream-timeout) | string | "600s" | |
| [proxy-stream-next-upstream](#proxy-stream-next-upstream) | bool | "true" | |
| [proxy-stream-next-upstream-timeout](#proxy-stream-next-upstream-timeout) | string | "600s" | |
| [proxy-stream-next-upstream-tries](#proxy-stream-next-upstream-tries) | int | 3 | |
| [proxy-stream-responses](#proxy-stream-responses) | int | 1 | |
| [bind-address](#bind-address) | []string | "" | |
| [use-forwarded-headers](#use-forwarded-headers) | bool | "false" | |
| [enable-real-ip](#enable-real-ip) | bool | "false" | |
| [forwarded-for-header](#forwarded-for-header) | string | "X-Forwarded-For" | |
| [forwarded-for-proxy-protocol-header](#forwarded-for-proxy-protocol-header) | string | "X-Forwarded-For-Proxy-Protocol" | |
| [compute-full-forwarded-for](#compute-full-forwarded-for) | bool | "false" | |
| [proxy-add-original-uri-header](#proxy-add-original-uri-header) | bool | "false" | |
| [generate-request-id](#generate-request-id) | bool | "true" | |
| [jaeger-collector-host](#jaeger-collector-host) | string | "" | |
| [jaeger-collector-port](#jaeger-collector-port) | int | 6831 | |
| [jaeger-endpoint](#jaeger-endpoint) | string | "" | |
| [jaeger-service-name](#jaeger-service-name) | string | "nginx" | |
| [jaeger-propagation-format](#jaeger-propagation-format) | string | "jaeger" | |
| [jaeger-sampler-type](#jaeger-sampler-type) | string | "const" | |
| [jaeger-sampler-param](#jaeger-sampler-param) | string | "1" | |
| [jaeger-sampler-host](#jaeger-sampler-host) | string | "http://127.0.0.1" | |
| [jaeger-sampler-port](#jaeger-sampler-port) | int | 5778 | |
| [jaeger-trace-context-header-name](#jaeger-trace-context-header-name) | string | uber-trace-id | |
| [jaeger-debug-header](#jaeger-debug-header) | string | uber-debug-id | |
| [jaeger-baggage-header](#jaeger-baggage-header) | string | jaeger-baggage | |
| [jaeger-trace-baggage-header-prefix](#jaeger-trace-baggage-header-prefix) | string | uberctx- | |
| [enable-opentelemetry](#enable-opentelemetry) | bool | "false" | |
| [opentelemetry-trust-incoming-span](#opentelemetry-trust-incoming-span) | bool | "true" | |
| [opentelemetry-operation-name](#opentelemetry-operation-name) | string | "" | |
| [opentelemetry-config](#opentelemetry-config) | string | "/etc/ingress-controller/telemetry/opentelemetry.toml" | |
| [otlp-collector-host](#otlp-collector-host) | string | "" | |
| [otlp-collector-port](#otlp-collector-port) | int | 4317 | |
| [otel-max-queuesize](#otel-max-queuesize) | int | 2048 | |
| [otel-schedule-delay-millis](#otel-schedule-delay-millis) | int | 5000 | |
| [otel-max-export-batch-size](#otel-max-export-batch-size) | int | 512 | |
| [otel-service-name](#otel-service-name) | string | "nginx" | |
| [otel-sampler](#otel-sampler) | string | "AlwaysOn" | |
| [otel-sampler-parent-based](#otel-sampler-parent-based) | bool | "true" | |
| [otel-sampler-ratio](#otel-sampler-ratio) | float | 0.01 | |
| [main-snippet](#main-snippet) | string | "" | |
| [http-snippet](#http-snippet) | string | "" | |
| [server-snippet](#server-snippet) | string | "" | |
| [stream-snippet](#stream-snippet) | string | "" | |
| [location-snippet](#location-snippet) | string | "" | |
| [custom-http-errors](#custom-http-errors) | []int | []int{} | |
| [proxy-body-size](#proxy-body-size) | string | "1m" | |
| [proxy-connect-timeout](#proxy-connect-timeout) | int | 5 | |
| [proxy-read-timeout](#proxy-read-timeout) | int | 60 | |
| [proxy-send-timeout](#proxy-send-timeout) | int | 60 | |
| [proxy-buffers-number](#proxy-buffers-number) | int | 4 | |
| [proxy-buffer-size](#proxy-buffer-size) | string | "4k" | |
| [proxy-busy-buffers-size](#proxy-busy-buffers-size) | string | "" | |
| [proxy-cookie-path](#proxy-cookie-path) | string | "off" | |
| [proxy-cookie-domain](#proxy-cookie-domain) | string | "off" | |
| [proxy-next-upstream](#proxy-next-upstream) | string | "error timeout" | |
| [proxy-next-upstream-timeout](#proxy-next-upstream-timeout) | int | 0 | |
| [proxy-next-upstream-tries](#proxy-next-upstream-tries) | int | 3 | |
| [proxy-redirect-from](#proxy-redirect-from) | string | "off" | |
| [proxy-request-buffering](#proxy-request-buffering) | string | "on" | |
| [ssl-redirect](#ssl-redirect) | bool | "true" | |
| [force-ssl-redirect](#force-ssl-redirect) | bool | "false" | |
| [denylist-source-range](#denylist-source-range) | []string | []string{} | |
| [whitelist-source-range](#whitelist-source-range) | []string | []string{} | |
| [skip-access-log-urls](#skip-access-log-urls) | []string | []string{} | |
| [limit-rate](#limit-rate) | int | 0 | |
| [limit-rate-after](#limit-rate-after) | int | 0 | |
| [lua-shared-dicts](#lua-shared-dicts) | string | "" | |
| [http-redirect-code](#http-redirect-code) | int | 308 | |
| [proxy-buffering](#proxy-buffering) | string | "off" | |
| [limit-req-status-code](#limit-req-status-code) | int | 503 | |
| [limit-conn-status-code](#limit-conn-status-code) | int | 503 | |
| [enable-syslog](#enable-syslog) | bool | "false" | |
| [syslog-host](#syslog-host) | string | "" | |
| [syslog-port](#syslog-port) | int | 514 | |
| [no-tls-redirect-locations](#no-tls-redirect-locations) | string | "/.well-known/acme-challenge" | |
| [global-allowed-response-headers](#global-allowed-response-headers) | string | "" | |
| [global-auth-url](#global-auth-url) | string | "" | |
| [global-auth-method](#global-auth-method) | string | "" | |
| [global-auth-signin](#global-auth-signin) | string | "" | |
| [global-auth-signin-redirect-param](#global-auth-signin-redirect-param) | string | "rd" | |
| [global-auth-response-headers](#global-auth-response-headers) | string | "" | |
| [global-auth-request-redirect](#global-auth-request-redirect) | string | "" | |
| [global-auth-snippet](#global-auth-snippet) | string | "" | |
| [global-auth-cache-key](#global-auth-cache-key) | string | "" | |
| [global-auth-cache-duration](#global-auth-cache-duration) | string | "200 202 401 5m" | |
| [no-auth-locations](#no-auth-locations) | string | "/.well-known/acme-challenge" | |
| [block-cidrs](#block-cidrs) | []string | "" | |
| [block-user-agents](#block-user-agents) | []string | "" | |
| [block-referers](#block-referers) | []string | "" | |
| [proxy-ssl-location-only](#proxy-ssl-location-only) | bool | "false" | |
| [default-type](#default-type) | string | "text/html" | |
| [service-upstream](#service-upstream) | bool | "false" | |
| [ssl-reject-handshake](#ssl-reject-handshake) | bool | "false" | |
| [debug-connections](#debug-connections) | []string | "" | |
| [strict-validate-path-type](#strict-validate-path-type) | bool | "true" | |
| [grpc-buffer-size-kb](#grpc-buffer-size-kb) | int | 0 | |
| [relative-redirects](#relative-redirects) | bool | false | |

## add-headers

Sets custom headers from a named configmap before sending traffic to the client. See [proxy-set-headers](#proxy-set-headers). [example](https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers)

## allow-backend-server-header

Enables the return of the header Server from the backend instead of the generic nginx string. _**default:**_ is disabled

## allow-cross-namespace-resources

Enables users to consume cross-namespace resources in annotations, where this was previously enabled. _**default:**_ false

**Annotations that may be impacted with this change**:

* `auth-secret`
* `auth-proxy-set-header`
* `auth-tls-secret`
* `fastcgi-params-configmap`
* `proxy-ssl-secret`

## allow-snippet-annotations

Enables Ingress to parse and add *-snippet annotations/directives created by the user. _**default:**_ `false`

Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file.

## annotations-risk-level

Represents the risk accepted on an annotation. If the risk is, for instance, `Medium`, annotations with risk High and Critical will not be accepted. Accepted values are `Critical`, `High`, `Medium` and `Low`. _**default:**_ `High`

## annotation-value-word-blocklist

Contains a comma-separated list of chars/words that are well known for being used to abuse Ingress configuration and must be blocked. Related to [CVE-2021-25742](https://github.com/kubernetes/ingress-nginx/issues/7837). When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured. _**default:**_ `""`

When setting this, the default blocklist is overridden, which means that the Ingress admin should add all the words that should be blocked; here is a suggested block list. _**suggested:**_ `"load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',\""`

## hide-headers

Sets additional headers that will not be passed from the upstream server to the client response. _**default:**_ empty

_References:_ [https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header)

## access-log-params

Additional params for access_log. For example, buffer=16k, gzip, flush=1m

_References:_ [https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log)

## access-log-path

Access log path for both http and stream context. Goes to `/var/log/nginx/access.log` by default.

__Note:__ the file `/var/log/nginx/access.log` is a symlink to `/dev/stdout`

## http-access-log-path

Access log path for http context globally. _**default:**_ ""

__Note:__ If not specified, the `access-log-path` will be used.

## stream-access-log-path

Access log path for stream context globally. _**default:**_ ""

__Note:__ If not specified, the `access-log-path` will be used.

## enable-access-log-for-default-backend

Enables logging access to the default backend. _**default:**_ is disabled.

## error-log-path

Error log path. Goes to `/var/log/nginx/error.log` by default.

__Note:__ the file `/var/log/nginx/error.log` is a symlink to `/dev/stderr`

_References:_ [https://nginx.org/en/docs/ngx_core_module.html#error_log](https://nginx.org/en/docs/ngx_core_module.html#error_log)

## enable-modsecurity

Enables the modsecurity module for NGINX. _**default:**_ is disabled

## enable-owasp-modsecurity-crs

Enables the OWASP ModSecurity Core Rule Set (CRS). _**default:**_ is disabled

## modsecurity-snippet

Adds custom rules to the modsecurity section of the nginx configuration.

## client-header-buffer-size

Configures a custom buffer size for reading the client request header.

_References:_ [https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size)

## client-header-timeout

Defines a timeout for reading the client request header, in seconds.

_References:_ [https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout)

## client-body-buffer-size

Sets the buffer size for reading the client request body.

_References:_ [https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size)

## client-body-timeout

Defines a timeout for reading the client request body, in seconds.

_References:_ [https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout)

## disable-access-log

Disables the Access Log from the entire Ingress Controller. _**default:**_ `false`

_References:_ [https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log)

## disable-ipv6

Disable listening on IPv6. _**default:**_ `false`; IPv6 listening is enabled
Sets buffer size for reading client request body. \_References:\_ [https://nginx.org/en/docs/http/ngx\_http\_core\_module.html#client\_body\_buffer\_size](https://nginx.org/en/docs/http/ngx\_http\_core\_module.html#client\_body\_buffer\_size) ## client-body-timeout Defines a timeout for reading client request body, in seconds. \_References:\_ [https://nginx.org/en/docs/http/ngx\_http\_core\_module.html#client\_body\_timeout](https://nginx.org/en/docs/http/ngx\_http\_core\_module.html#client\_body\_timeout) ## disable-access-log Disables the Access Log from the entire Ingress Controller. \_\*\*default:\*\*\_ `false` \_References:\_ [https://nginx.org/en/docs/http/ngx\_http\_log\_module.html#access\_log](https://nginx.org/en/docs/http/ngx\_http\_log\_module.html#access\_log) ## disable-ipv6 Disable listening on IPV6. \_\*\*default:\*\*\_ `false`; IPv6 listening is enabled ## disable-ipv6-dns Disable IPV6 for nginx DNS resolver. \_\*\*default:\*\*\_ `false`; IPv6 resolving enabled. ## enable-underscores-in-headers Enables underscores in header names. \_\*\*default:\*\*\_ is disabled ## enable-ocsp Enables [Online Certificate Status Protocol stapling](https://en.wikipedia.org/wiki/OCSP\_stapling) (OCSP) support. \_\*\*default:\*\*\_ is disabled ## ignore-invalid-headers Set if header fields with invalid names should be ignored. \_\*\*default:\*\*\_ is enabled ## retry-non-idempotent Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true". ## error-log-level Configures the logging level of errors. Log levels above are listed in the order of increasing severity. \_References:\_ [https://nginx.org/en/docs/ngx\_core\_module.html#error\_log](https://nginx.org/en/docs/ngx\_core\_module.html#error\_log) ## http2-max-field-size !!! warning This feature was deprecated in 1.1.3 and will be removed in 1.3.0. 
Use [large-client-header-buffers](#large-client-header-buffers) instead. Limits the maximum size of an HPACK-compressed request header field. \_References:\_ [https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_field\_size](https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_field\_size) ## http2-max-header-size !!! warning This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [large-client-header-buffers](#large-client-header-buffers) instead. Limits the maximum size of the entire request header list after HPACK decompression. \_References:\_ [https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_header\_size](https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_header\_size) ## http2-max-requests !!! warning This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [upstream-keepalive-requests](#upstream-keepalive-requests) instead. Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection. \_References:\_ [https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_requests](https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_requests) ## http2-max-concurrent-streams Sets the maximum number of concurrent HTTP/2 streams in a connection. \_References:\_ [https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_concurrent\_streams](https://nginx.org/en/docs/http/ngx\_http\_v2\_module.html#http2\_max\_concurrent\_streams) ## hsts Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft. 
_References:_

- [https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security](https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security)
- [https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server](https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server)

## hsts-include-subdomains

Enables or disables the use of HSTS in all the subdomains of the server-name.

## hsts-max-age

Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.

## hsts-preload

Enables or disables the preload attribute in the HSTS feature (when it is enabled).

## keep-alive

Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.

_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout)

!!! important
    Setting `keep-alive: '0'` will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7

    ```
    Changes with nginx 1.19.7                    16 Feb 2021

        *) Change: connections handling in HTTP/2 has been changed to better
           match HTTP/1.x; the "http2_recv_timeout", "http2_idle_timeout",
           and "http2_max_requests" directives have been removed, the
           "keepalive_timeout" and "keepalive_requests" directives should be
           used instead.
    ```

_References:_

- [nginx change log](https://nginx.org/en/CHANGES)
- [nginx issue tracker](https://trac.nginx.org/nginx/ticket/2155)
- [nginx mailing list](https://mailman.nginx.org/pipermail/nginx/2021-May/060697.html)

## keep-alive-requests

Sets the maximum number of requests that can be served through one keep-alive connection.
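These keep-alive keys are set as string values in the controller's ConfigMap. A minimal sketch — the ConfigMap name and namespace below are assumptions, use whatever your deployment created:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name; deployment-specific
  namespace: ingress-nginx         # assumed namespace
data:
  # all ConfigMap data values must be strings, hence the quotes
  keep-alive: "75"            # seconds an idle keep-alive connection stays open
  keep-alive-requests: "1000" # requests served per connection before it closes
```

As noted above, avoid `keep-alive: '0'` when clients use HTTP/2.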
_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests)

## large-client-header-buffers

Sets the maximum number and size of buffers used for reading large client request headers.

_**default:**_ 4 8k

_References:_
[https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers](https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers)

## log-format-escape-none

Sets if the escape parameter is disabled entirely for character escaping in variables ("true") or controlled by log-format-escape-json ("false") when setting the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).

## log-format-escape-json

Sets if the escape parameter allows JSON ("true") or default character escaping in variables ("false") when setting the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).

## log-format-upstream

Sets the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).
Example for json output:
```json
log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x_forwarded_for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent" }'
```

Please check the [log-format](log-format.md) for definition of each field.

## log-format-stream

Sets the nginx [stream format](https://nginx.org/en/docs/stream/ngx_stream_log_module.html#log_format).

## enable-multi-accept

If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time.

_**default:**_ true

_References:_
[https://nginx.org/en/docs/ngx_core_module.html#multi_accept](https://nginx.org/en/docs/ngx_core_module.html#multi_accept)

## max-worker-connections

Sets the [maximum number of simultaneous connections](https://nginx.org/en/docs/ngx_core_module.html#worker_connections) that can be opened by each worker process. 0 will use the value of [max-worker-open-files](#max-worker-open-files).

_**default:**_ 16384

!!! tip
    Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle).

## max-worker-open-files

Sets the [maximum number of files](https://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile) that can be opened by each worker process. The default of 0 means "max open files (system's limit) - 1024".

_**default:**_ 0

## map-hash-bucket-size

Sets the bucket size for the [map variables hash tables](https://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size). The details of setting up hash tables are provided in a separate [document](https://nginx.org/en/docs/hash.html).

## proxy-real-ip-cidr

If `use-forwarded-headers` or `use-proxy-protocol` is enabled, `proxy-real-ip-cidr` defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks.

_**default:**_ "0.0.0.0/0"

## proxy-set-headers

Sets custom headers from a named ConfigMap before sending traffic to backends. The value format is namespace/name. See [example](https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/).

## server-name-hash-max-size

Sets the maximum size of the [server names hash tables](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used in server names, map directive's values, MIME types, names of request header strings, etc.

_References:_
[https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)

## server-name-hash-bucket-size

Sets the size of the bucket for the server names hash tables.

_References:_

- [https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)
- [https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size)

## proxy-headers-hash-max-size

Sets the maximum size of the proxy headers hash tables.
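To illustrate the `namespace/name` format that `proxy-set-headers` expects, here is a sketch with two ConfigMaps; the names `custom-headers` and `ingress-nginx-controller`, the namespace, and the header values are all illustrative assumptions:

```yaml
# ConfigMap holding the headers to inject before traffic reaches the backends
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers           # illustrative name
  namespace: ingress-nginx
data:
  X-Using-Nginx-Controller: "true"
---
# Controller ConfigMap referencing it as namespace/name
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller # assumed controller ConfigMap name
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
```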
_References:_

- [https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)
- [https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size)

## reuse-port

Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes.

_**default:**_ true

## proxy-headers-hash-bucket-size

Sets the size of the bucket for the proxy headers hash tables.

_References:_

- [https://nginx.org/en/docs/hash.html](https://nginx.org/en/docs/hash.html)
- [https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size)

## server-tokens

Send the NGINX Server header in responses and display the NGINX version in error pages.

_**default:**_ is disabled

## ssl-ciphers

Sets the [ciphers](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library.

The default cipher list is:
`ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256`.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy).

DHE-based ciphers will not be available until a DH parameter is configured: [Custom DH parameters for perfect forward secrecy](https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param)

Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/).

__Note:__ The ssl_prefer_server_ciphers directive will be enabled by default for the http context.

## ssl-ecdh-curve

Specifies a curve for ECDHE ciphers.

_References:_
[https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve)

## ssl-dh-param

Sets the name of the secret that contains the Diffie-Hellman key to help with "Perfect Forward Secrecy".

_References:_

- [https://wiki.openssl.org/index.php/Diffie-Hellman_parameters](https://wiki.openssl.org/index.php/Diffie-Hellman_parameters)
- [https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam](https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam)
- [https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam)

## ssl-protocols

Sets the [SSL protocols](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use. The default is: `TLSv1.2 TLSv1.3`.

Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`.

## ssl-early-data

Enables or disables TLS 1.3 [early data](https://tools.ietf.org/html/rfc8446#section-2.3), also known as Zero Round Trip Time Resumption (0-RTT).

This requires `ssl-protocols` to have `TLSv1.3` enabled. Enable this with caution, because requests sent within
early data are subject to [replay attacks](https://tools.ietf.org/html/rfc8470).

[ssl_early_data](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data). The default is: `false`.

## ssl-session-cache

Enables or disables the use of a shared [SSL cache](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes.

## ssl-session-cache-size

Sets the size of the [SSL shared session cache](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes.

## ssl-session-tickets

Enables or disables session resumption through [TLS session tickets](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets).

## ssl-session-ticket-key

Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: `openssl rand 80 | openssl enc -A -base64`

[TLS session ticket-key](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets); by default, a randomly generated key is used.

## ssl-session-timeout

Sets the time during which a client may [reuse the session](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache.

## ssl-buffer-size

Sets the size of the [SSL buffer](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).
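Several of the TLS keys above are commonly tuned together. A sketch of the controller ConfigMap, with values chosen for illustration only (the name and namespace are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed controller ConfigMap name
  namespace: ingress-nginx
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-early-data: "false"        # enable only if the 0-RTT replay risk is acceptable
  ssl-session-tickets: "false"   # consider disabling unless ssl-session-ticket-key is rotated
  ssl-buffer-size: "4k"          # default; smaller favors TTFB, larger favors throughput
```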
_References:_
[https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/)

## use-proxy-protocol

Enables or disables the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

## proxy-protocol-header-timeout

Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection.

_**default:**_ 5s

## enable-aio-write

Enables or disables the directive [aio_write](https://nginx.org/en/docs/http/ngx_http_core_module.html#aio_write) that writes files asynchronously.

_**default:**_ true

## use-gzip

Enables or disables compression of HTTP responses using the ["gzip" module](https://nginx.org/en/docs/http/ngx_http_gzip_module.html). MIME types to compress are controlled by [gzip-types](#gzip-types).

_**default:**_ false

## use-geoip

Enables or disables the ["geoip" module](https://nginx.org/en/docs/http/ngx_http_geoip_module.html) that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.

_**default:**_ true

> __Note:__ MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. [discontinuation notice](https://support.maxmind.com/geolite-legacy-discontinuation-notice/). Consider [use-geoip2](#use-geoip2) below.

## use-geoip2

Enables the [geoip2 module](https://github.com/leev/ngx_http_geoip2_module) for NGINX. Since `0.27.0`, and due to a [change in the MaxMind databases](https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/), a license is required to have access to the databases.
For this reason, it is required to define a new flag `--maxmind-license-key` in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files `/etc/ingress-controller/geoip/GeoLite2-City.mmdb` and `/etc/ingress-controller/geoip/GeoLite2-ASN.mmdb`, avoiding the overhead of the download.

!!! important
    If the feature is enabled but the files are missing, GeoIP2 will not be enabled.

_**default:**_ false

## geoip2-autoreload-in-minutes

Enables autoreload of the MaxMind databases in the [geoip2 module](https://github.com/leev/ngx_http_geoip2_module), setting the interval in minutes.

_**default:**_ 0

## enable-brotli

Enables or disables compression of HTTP responses using the ["brotli" module](https://github.com/google/ngx_brotli). The default MIME type list to compress is: `application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`.

_**default:**_ false

> __Note:__ Brotli does not work in Safari < 11. For more information see [https://caniuse.com/#feat=brotli](https://caniuse.com/#feat=brotli)

## brotli-level

Sets the Brotli Compression Level that will be used.

_**default:**_ 4

## brotli-min-length

Minimum length of responses, in bytes, that will be eligible for brotli compression.

_**default:**_ 20

## brotli-types

Sets the MIME Types that will be compressed on-the-fly by brotli.
_**default:**_ `application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`

## use-http2

Enables or disables [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections.

## gzip-disable

Disables [gzipping](https://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_disable) of responses for requests with "User-Agent" header fields matching any of the specified regular expressions.

## gzip-level
Sets the gzip Compression Level that will be used.

_**default:**_ 1

## gzip-min-length

Minimum length of responses to be returned to the client before it is eligible for gzip compression, in bytes.

_**default:**_ 256

## gzip-types

Sets the MIME types in addition to "text/html" to compress. The special value "*" matches any MIME type. Responses with the "text/html" type are always compressed if [`use-gzip`](#use-gzip) is enabled.

_**default:**_ `application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`.

## worker-processes

Sets the number of [worker processes](https://nginx.org/en/docs/ngx_core_module.html#worker_processes). The default of "auto" means number of available CPU cores.

## worker-cpu-affinity

Binds worker processes to the sets of CPUs. [worker_cpu_affinity](https://nginx.org/en/docs/ngx_core_module.html#worker_cpu_affinity). By default worker processes are not bound to any specific CPUs. The value can be:

- "": empty string indicates no affinity is applied.
- cpumask: e.g. `0001 0010 0100 1000` to bind processes to specific cpus.
- auto: binding worker processes automatically to available CPUs.

## worker-shutdown-timeout

Sets a timeout for Nginx to [wait for worker to gracefully shutdown](https://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout).

_**default:**_ "240s"

## load-balance

Sets the algorithm to use for load balancing. The value can either be:

- round_robin: to use the default round robin loadbalancer
- ewma: to use the Peak EWMA method for routing ([implementation](https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/lua/balancer/ewma.lua))

The default is `round_robin`.

- To load balance using consistent hashing of IP or other variables, consider the `nginx.ingress.kubernetes.io/upstream-hash-by` annotation.
- To load balance using session cookies, consider the `nginx.ingress.kubernetes.io/affinity` annotation.

_References:_
[https://nginx.org/en/docs/http/load_balancing.html](https://nginx.org/en/docs/http/load_balancing.html)

## variables-hash-bucket-size

Sets the bucket size for the variables hash table.

_References:_
[https://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size](https://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size)

## variables-hash-max-size

Sets the maximum size of the variables hash table.

_References:_
[https://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_max_size](https://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_max_size)

## upstream-keepalive-connections

Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.
_**default:**_ 320

_References:_
[https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive)

## upstream-keepalive-time

Sets the maximum time during which requests can be processed through one keepalive connection.

_**default:**_ "1h"

_References:_
[https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time)

## upstream-keepalive-timeout

Sets a timeout during which an idle keepalive connection to an upstream server will stay open.

_**default:**_ 60

_References:_
[https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout)

## upstream-keepalive-requests

Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed.

_**default:**_ 10000

_References:_
[https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests)

## limit-conn-zone-variable

Sets parameters for a shared memory zone that will keep states for various keys of [limit_conn_zone](https://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_zone). The default of "$binary_remote_addr" variable's size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.

## proxy-stream-timeout

Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.
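The upstream keepalive keys above work as a unit; a sketch using the documented defaults as a starting point (the ConfigMap name and namespace are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed controller ConfigMap name
  namespace: ingress-nginx
data:
  upstream-keepalive-connections: "320"  # idle connections cached per worker
  upstream-keepalive-time: "1h"          # max lifetime of one keepalive connection
  upstream-keepalive-timeout: "60"       # seconds an idle connection stays open
  upstream-keepalive-requests: "10000"   # requests per connection before close
```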
_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout)

## proxy-stream-next-upstream

When a connection to the proxied server cannot be established, determines whether a client connection will be passed to the next server.

_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream)

## proxy-stream-next-upstream-timeout

Limits the time allowed to pass a connection to the next server. The 0 value turns off this limitation.

_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_timeout](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_timeout)

## proxy-stream-next-upstream-tries

Limits the number of possible tries a request should be passed to the next server. The 0 value turns off this limitation.

_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_tries](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_tries)

## proxy-stream-responses

Sets the number of datagrams expected from the proxied server in response
to the client request if the UDP protocol is used.

_References:_
[https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses)

## bind-address

Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.

## use-forwarded-headers

If true, NGINX passes the incoming `X-Forwarded-*` headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

If false, NGINX ignores incoming `X-Forwarded-*` headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.

## enable-real-ip

`enable-real-ip` enables the configuration of [https://nginx.org/en/docs/http/ngx_http_realip_module.html](https://nginx.org/en/docs/http/ngx_http_realip_module.html). Specific attributes of the module can be configured further by using `forwarded-for-header` and `proxy-real-ip-cidr` settings.
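For example, when the controller sits behind a trusted L7 load balancer, the options above might be combined as follows; the ConfigMap name, namespace, and CIDR are placeholders for your own deployment's values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller      # assumed controller ConfigMap name
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"       # trust incoming X-Forwarded-* headers
  enable-real-ip: "true"              # enable ngx_http_realip_module
  proxy-real-ip-cidr: "10.0.0.0/8"    # placeholder: your load balancer's source range
  compute-full-forwarded-for: "true"  # append to, rather than replace, X-Forwarded-For
```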
## forwarded-for-header

Sets the header field for identifying the originating IP address of a client.

_**default:**_ X-Forwarded-For

## forwarded-for-proxy-protocol-header

Sets the name of the intermediate header used to determine the client's originating IP when both `use-proxy-protocol` and `use-forwarded-headers` are enabled. This doesn't impact functionality and should not typically be modified.

_**default:**_ X-Forwarded-For-Proxy-Protocol

## compute-full-forwarded-for

Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.

## proxy-add-original-uri-header

Adds an X-Original-Uri header with the original request URI to the backend request.

## generate-request-id

Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request.

## jaeger-collector-host

Specifies the host to use when uploading traces. It must be a valid URL.

## jaeger-collector-port

Specifies the port to use when uploading traces.

_**default:**_ 6831

## jaeger-endpoint

Specifies the endpoint to use when uploading traces to a collector. This takes priority over `jaeger-collector-host` if both are specified.

## jaeger-service-name

Specifies the service name to use for any traces created.

_**default:**_ nginx

## jaeger-propagation-format

Specifies the traceparent/tracestate propagation format.

_**default:**_ jaeger

## jaeger-sampler-type

Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote.

_**default:**_ const

## jaeger-sampler-param

Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample.
_**default:**_ 1

## jaeger-sampler-host

Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use the default value (localhost).

_**default:**_ http://127.0.0.1

## jaeger-sampler-port

Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number.

_**default:**_ 5778

## jaeger-trace-context-header-name

Specifies the header name used for passing trace context.

_**default:**_ uber-trace-id

## jaeger-debug-header

Specifies the header name used for force sampling.

_**default:**_ jaeger-debug-id

## jaeger-baggage-header

Specifies the header name used to submit baggage if there is no root span.

_**default:**_ jaeger-baggage

## jaeger-tracer-baggage-header-prefix

Specifies the header prefix used to propagate baggage.

_**default:**_ uberctx-

## enable-opentelemetry

Enables the nginx OpenTelemetry extension.

_**default:**_ is disabled

_References:_
[https://github.com/open-telemetry/opentelemetry-cpp-contrib](https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/nginx)

## opentelemetry-operation-name

Specifies a custom name for the server span.

_**default:**_ is empty

For example, set to "HTTP $request_method $uri".

## opentelemetry-config

Sets the opentelemetry config file.

_**default:**_ /etc/ingress-controller/telemetry/opentelemetry.toml

## otlp-collector-host

Specifies the host to use when uploading traces. It must be a
valid URL.

## otlp-collector-port

Specifies the port to use when uploading traces.

_**default:**_ 4317

## otel-service-name

Specifies the service name to use for any traces created.

_**default:**_ nginx

## opentelemetry-trust-incoming-span

Enables or disables using spans from incoming requests as parents for created ones.

_**default:**_ true

## otel-sampler-parent-based

Uses a sampler implementation which by default will take a sample if the parent Activity is sampled.

_**default:**_ true

## otel-sampler-ratio

Specifies the sample rate for any traces created.

_**default:**_ 0.01

## otel-sampler

Specifies the sampler to be used when sampling traces. The available samplers are: AlwaysOff, AlwaysOn, TraceIdRatioBased, remote.

_**default:**_ AlwaysOn

## main-snippet

Adds custom configuration to the main section of the nginx configuration.

## http-snippet

Adds custom configuration to the http section of the nginx configuration.

## server-snippet

Adds custom configuration to all the servers in the nginx configuration.

## stream-snippet

Adds custom configuration to the stream section of the nginx configuration.

## location-snippet

Adds custom configuration to all the locations in the nginx configuration.

You cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions.
If you want to add custom locations you will have to [provide your own nginx.tmpl](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/).

## custom-http-errors

Enables which HTTP codes should be passed for processing with the [error_page directive](https://nginx.org/en/docs/http/ngx_http_core_module.html#error_page).

Setting at least one code also enables [proxy_intercept_errors](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors), which is required to process error_page.

Example usage: `custom-http-errors: 404,415`

## proxy-body-size

Sets the maximum allowed size of the client request body. See NGINX [client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).

## proxy-connect-timeout

Sets the timeout for [establishing a connection with a proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds.

It will also set the [grpc_connect_timeout](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout) for gRPC connections.

## proxy-read-timeout

Sets the timeout in seconds for [reading a response from the proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response.

It will also set the [grpc_read_timeout](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_read_timeout) for gRPC connections.

## proxy-send-timeout

Sets the timeout in seconds for [transmitting a request to the proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout). The timeout is set only between two successive write operations, not for the transmission of the whole request.
It will also set the [grpc_send_timeout](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_send_timeout) for gRPC connections.

## proxy-buffers-number

Sets the number of the buffers used for [reading the first part of the response](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) received from the proxied server. This part usually contains a small response header.

## proxy-buffer-size

Sets the size of the buffer used for [reading the first part of the response](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) received from the proxied server. This part usually contains a small response header.

## proxy-busy-buffers-size

[Limits the total size of buffers that can be busy](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) sending a response to the client while the response is not yet fully read.

## proxy-cookie-path

Sets a text that [should be changed in the path attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the "Set-Cookie" header fields of a proxied server response.

## proxy-cookie-domain

Sets a text that [should be changed in the domain attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the "Set-Cookie" header fields of a proxied server response.

## proxy-next-upstream

Specifies in [which cases](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream) a request should be passed to the next server.

## proxy-next-upstream-timeout

[Limits the time](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_timeout) in seconds during which a request can be passed to the next server.

## proxy-next-upstream-tries

Limits the number of [possible tries](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_tries) a request should be passed to the next server.

## proxy-redirect-from

Sets the original text that should be changed in the "Location" and "Refresh" header fields of a proxied server response. _**default:**_ off

_References:_ [https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect)

## proxy-request-buffering

Enables or disables [buffering of a client request body](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering).

## ssl-redirect

Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). _**default:**_ "true"

## force-ssl-redirect

Sets the global value of redirects (308) to HTTPS if the server has a default TLS certificate (defined in extra-args). _**default:**_ "false"

## denylist-source-range

Sets the default denylisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule. See [ngx_http_access_module](https://nginx.org/en/docs/http/ngx_http_access_module.html).

## whitelist-source-range

Sets the default whitelisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule. See [ngx_http_access_module](https://nginx.org/en/docs/http/ngx_http_access_module.html).
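Several of the keys above are applied through the controller's ConfigMap. A minimal sketch, assuming a stock deployment where the ConfigMap is named `ingress-nginx-controller` in the `ingress-nginx` namespace (both names depend on how the controller was installed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  # ConfigMap values are always strings
  proxy-body-size: "8m"
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-next-upstream: "error timeout http_502"
  whitelist-source-range: "10.0.0.0/8"
```

The controller watches this ConfigMap and reloads NGINX when its contents change, so no pod restart is needed.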
## skip-access-log-urls

Sets a list of URLs that should not appear in the NGINX access log. This is useful with urls like `/health` or `health-check` that make the logs harder to read. _**default:**_ is empty

## limit-rate

Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.

_References:_ [https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate](https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate)

## limit-rate-after

Sets the initial amount after which the further transmission of a response to a client will be rate limited.

_References:_ [https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after](https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after)

## lua-shared-dicts

Customize default Lua shared dictionaries or define more. You can use the following syntax to do so:

```
lua-shared-dicts: "<my dict name>: <my dict size>, [<my dict name>: <my dict size>], ..."
```

For example, the following will set the default `certificate_data` dictionary to `100M` and will introduce a new dictionary called `my_custom_plugin`:

```
lua-shared-dicts: "certificate_data: 100, my_custom_plugin: 5"
```

You can optionally set a size unit to allow for kilobyte-granularity. Allowed units are 'm' or 'k' (case-insensitive), and it defaults to MB if no unit is provided. Here is a similar example, but the `my_custom_plugin` dict is only 512KB:

```
lua-shared-dicts: "certificate_data: 100, my_custom_plugin: 512k"
```

## http-redirect-code

Sets the HTTP status code to be used in redirects.
Supported codes are [301](https://developer.mozilla.org/docs/Web/HTTP/Status/301), [302](https://developer.mozilla.org/docs/Web/HTTP/Status/302), [307](https://developer.mozilla.org/docs/Web/HTTP/Status/307) and [308](https://developer.mozilla.org/docs/Web/HTTP/Status/308). _**default:**_ 308

> __Why is the default code 308?__
> [RFC 7238](https://tools.ietf.org/html/rfc7238) was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect with methods like POST.

## proxy-buffering

Enables or disables [buffering of responses from the proxied server](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering).

## limit-req-status-code

Sets the [status code to return in response to rejected requests](https://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_status). _**default:**_ 503

## limit-conn-status-code

Sets the [status code to return in response to rejected connections](https://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_status). _**default:**_ 503

## enable-syslog

Enables the [syslog](https://nginx.org/en/docs/syslog.html) feature for the access log and error log. _**default:**_ false

## syslog-host

Sets the address of the syslog server. The address can be specified as a domain name or IP address.

## syslog-port

Sets the port of the syslog server. _**default:**_ 514

## no-tls-redirect-locations

A comma-separated list of locations on which http requests will never get redirected to their https counterpart.
_**default:**_ "/.well-known/acme-challenge"

## global-allowed-response-headers

A comma-separated list of allowed response headers inside the [custom headers annotations](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#custom-headers).

## global-auth-url

A url to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-url`. Locations that should not get authenticated can be listed using `no-auth-locations`. See [no-auth-locations](#no-auth-locations). In addition, each service can be excluded from authentication via the annotation `enable-global-auth` set to "false". _**default:**_ ""

_References:_ [https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#external-authentication](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#external-authentication)

## global-auth-method

An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-method`. _**default:**_ ""

## global-auth-signin

Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-signin`. _**default:**_ ""

## global-auth-signin-redirect-param

Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-signin-redirect-param`. _**default:**_ "rd"

## global-auth-response-headers

Sets the headers to pass to the backend once the authentication request completes.
Applied to all the locations. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-response-headers`. _**default:**_ ""

## global-auth-request-redirect

Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-request-redirect`. _**default:**_ ""

## global-auth-snippet

Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/auth-snippet`. _**default:**_ ""

## global-auth-cache-key

Enables caching for global auth requests. Specify a lookup key for auth responses, e.g. `$remote_user$http_authorization`.

## global-auth-cache-duration

Set a caching time for auth responses based on their response codes, e.g. `200 202 30m`. See [proxy_cache_valid](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid) for details. You may specify multiple, comma-separated values: `200 202 10m, 401 5m`. Defaults to `200 202 401 5m`.

## global-auth-always-set-cookie

Always set a cookie returned by the auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308. _**default:**_ false

## no-auth-locations

A comma-separated list of locations that should not get authenticated. _**default:**_ "/.well-known/acme-challenge"

## block-cidrs

A comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally.

_References:_ [https://nginx.org/en/docs/http/ngx_http_access_module.html#deny](https://nginx.org/en/docs/http/ngx_http_access_module.html#deny)

## block-user-agents

A comma-separated list of User-Agents, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the `map` Nginx directive documentation.
_References:_ [https://nginx.org/en/docs/http/ngx_http_map_module.html#map](https://nginx.org/en/docs/http/ngx_http_map_module.html#map)

## block-referers

A comma-separated list of Referers, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the `map` Nginx directive documentation.

_References:_ [https://nginx.org/en/docs/http/ngx_http_map_module.html#map](https://nginx.org/en/docs/http/ngx_http_map_module.html#map)

## proxy-ssl-location-only

Set if proxy-ssl parameters should be applied only on locations and not on servers. _**default:**_ is disabled

## default-type

Sets the default MIME type of a response. _**default:**_ text/html

_References:_ [https://nginx.org/en/docs/http/ngx_http_core_module.html#default_type](https://nginx.org/en/docs/http/ngx_http_core_module.html#default_type)

## service-upstream

Set if the service's Cluster IP and port should be used instead of a list of all endpoints. This can be overwritten by an annotation on an Ingress rule. _**default:**_ "false"

## ssl-reject-handshake

Set to reject SSL handshakes to an unknown virtualhost. This parameter helps to mitigate fingerprinting using the default certificate of the ingress. _**default:**_ "false"

_References:_ [https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_reject_handshake](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_reject_handshake)

## debug-connections

Enables debugging log for selected client connections. _**default:**_ ""

_References:_ [https://nginx.org/en/docs/ngx_core_module.html#debug_connection](https://nginx.org/en/docs/ngx_core_module.html#debug_connection)

## strict-validate-path-type

Ingress objects contain a field called pathType that defines the proxy behavior. It can be `Exact`, `Prefix` and `ImplementationSpecific`.

When pathType is configured as `Exact` or `Prefix`, there should be a stricter validation, allowing only paths starting with "/" and containing only alphanumeric characters and "-", "_" and additional "/".

When this option is enabled, the validation happens in the Admission Webhook, and any Ingress not using pathType `ImplementationSpecific` but containing invalid characters will be denied. This means that Ingress objects that rely on paths containing regex characters should use the `ImplementationSpecific` pathType.

The cluster admin should establish validation rules using mechanisms like [Open Policy Agent](https://www.openpolicyagent.org/) to validate that only authorized users can use the `ImplementationSpecific` pathType and that only the authorized characters can be used. _**default:**_ "true"

## grpc-buffer-size-kb

Sets the configuration for the gRPC Buffer Size parameter. If not set it will use the default from NGINX.

_References:_ [https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size)

## relative-redirects

Use relative redirects instead of absolute redirects. Absolute redirects are the default in nginx. RFC 7231 allows relative redirects since 2014. Similar to the Ingress rule annotation `nginx.ingress.kubernetes.io/relative-redirects`.
_**default:**_ "false"

_References:_

- [https://nginx.org/en/docs/http/ngx_http_core_module.html#absolute_redirect](https://nginx.org/en/docs/http/ngx_http_core_module.html#absolute_redirect)
- [https://datatracker.ietf.org/doc/html/rfc7231#section-7.1.2](https://datatracker.ietf.org/doc/html/rfc7231#section-7.1.2)
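Putting a few of the global options above together, a hedged sketch of a ConfigMap `data` fragment (the auth URL, CIDR, and User-Agent pattern are placeholders, not recommendations):

```yaml
data:
  # external auth applied to every location (placeholder service URL)
  global-auth-url: "http://auth-service.auth.svc.cluster.local/verify"
  no-auth-locations: "/.well-known/acme-challenge,/healthz"
  # block traffic globally (placeholder CIDR and regex pattern)
  block-cidrs: "192.0.2.0/24"
  block-user-agents: "~*badbot"
  # reject TLS handshakes for hostnames without a matching certificate
  ssl-reject-handshake: "true"
```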
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/nginx-configuration/configmap.md
# Annotations Scope and Risk

|Group |Annotation | Risk | Scope |
|--------|------------------|------|-------|
| Aliases | server-alias | High | ingress |
| Allowlist | allowlist-source-range | Medium | location |
| BackendProtocol | backend-protocol | Low | location |
| BasicDigestAuth | auth-realm | Medium | location |
| BasicDigestAuth | auth-secret | Medium | location |
| BasicDigestAuth | auth-secret-type | Low | location |
| BasicDigestAuth | auth-type | Low | location |
| Canary | canary | Low | ingress |
| Canary | canary-by-cookie | Medium | ingress |
| Canary | canary-by-header | Medium | ingress |
| Canary | canary-by-header-pattern | Medium | ingress |
| Canary | canary-by-header-value | Medium | ingress |
| Canary | canary-weight | Low | ingress |
| Canary | canary-weight-total | Low | ingress |
| CertificateAuth | auth-tls-error-page | High | location |
| CertificateAuth | auth-tls-match-cn | High | location |
| CertificateAuth | auth-tls-pass-certificate-to-upstream | Low | location |
| CertificateAuth | auth-tls-secret | Medium | location |
| CertificateAuth | auth-tls-verify-client | Medium | location |
| CertificateAuth | auth-tls-verify-depth | Low | location |
| ClientBodyBufferSize | client-body-buffer-size | Low | location |
| ConfigurationSnippet | configuration-snippet | Critical | location |
| Connection | connection-proxy-header | Low | location |
| CorsConfig | cors-allow-credentials | Low | ingress |
| CorsConfig | cors-allow-headers | Medium | ingress |
| CorsConfig | cors-allow-methods | Medium | ingress |
| CorsConfig | cors-allow-origin | Medium | ingress |
| CorsConfig | cors-expose-headers | Medium | ingress |
| CorsConfig | cors-max-age | Low | ingress |
| CorsConfig | enable-cors | Low | ingress |
| CustomHTTPErrors | custom-http-errors | Low | location |
| CustomHeaders | custom-headers | Medium | location |
| DefaultBackend | default-backend | Low | location |
| Denylist | denylist-source-range | Medium | location |
| DisableProxyInterceptErrors | disable-proxy-intercept-errors | Low | location |
| EnableGlobalAuth | enable-global-auth | Low | location |
| ExternalAuth | auth-always-set-cookie | Low | location |
| ExternalAuth | auth-cache-duration | Medium | location |
| ExternalAuth | auth-cache-key | Medium | location |
| ExternalAuth | auth-keepalive | Low | location |
| ExternalAuth | auth-keepalive-requests | Low | location |
| ExternalAuth | auth-keepalive-share-vars | Low | location |
| ExternalAuth | auth-keepalive-timeout | Low | location |
| ExternalAuth | auth-method | Low | location |
| ExternalAuth | auth-proxy-set-headers | Medium | location |
| ExternalAuth | auth-request-redirect | Medium | location |
| ExternalAuth | auth-response-headers | Medium | location |
| ExternalAuth | auth-signin | High | location |
| ExternalAuth | auth-signin-redirect-param | Medium | location |
| ExternalAuth | auth-snippet | Critical | location |
| ExternalAuth | auth-url | High | location |
| FastCGI | fastcgi-index | Medium | location |
| FastCGI | fastcgi-params-configmap | Medium | location |
| HTTP2PushPreload | http2-push-preload | Low | location |
| LoadBalancing | load-balance | Low | location |
| Logs | enable-access-log | Low | location |
| Logs | enable-rewrite-log | Low | location |
| Mirror | mirror-host | High | ingress |
| Mirror | mirror-request-body | Low | ingress |
| Mirror | mirror-target | High | ingress |
| ModSecurity | enable-modsecurity | Low | ingress |
| ModSecurity | enable-owasp-core-rules | Low | ingress |
| ModSecurity | modsecurity-snippet | Critical | ingress |
| ModSecurity | modsecurity-transaction-id | High | ingress |
| Opentelemetry | enable-opentelemetry | Low | location |
| Opentelemetry | opentelemetry-operation-name | Medium | location |
| Opentelemetry | opentelemetry-trust-incoming-span | Low | location |
| Proxy | proxy-body-size | Medium | location |
| Proxy | proxy-buffer-size | Low | location |
| Proxy | proxy-buffering | Low | location |
| Proxy | proxy-buffers-number | Low | location |
| Proxy | proxy-busy-buffers-size | Low | location |
| Proxy | proxy-connect-timeout | Low | location |
| Proxy | proxy-cookie-domain | Medium | location |
| Proxy | proxy-cookie-path | Medium | location |
| Proxy | proxy-http-version | Low | location |
| Proxy | proxy-max-temp-file-size | Low | location |
| Proxy | proxy-next-upstream | Medium | location |
| Proxy | proxy-next-upstream-timeout | Low | location |
| Proxy | proxy-next-upstream-tries | Low | location |
| Proxy | proxy-read-timeout | Low | location |
| Proxy | proxy-redirect-from | Medium | location |
| Proxy | proxy-redirect-to | Medium | location |
| Proxy | proxy-request-buffering | Low | location |
| Proxy | proxy-send-timeout | Low | location |
| ProxySSL | proxy-ssl-ciphers | Medium | ingress |
| ProxySSL | proxy-ssl-name | High | ingress |
| ProxySSL | proxy-ssl-protocols | Low | ingress |
| ProxySSL | proxy-ssl-secret | Medium | ingress |
| ProxySSL | proxy-ssl-server-name | Low | ingress |
| ProxySSL | proxy-ssl-verify | Low | ingress |
| ProxySSL | proxy-ssl-verify-depth | Low | ingress |
| RateLimit | limit-allowlist | Low | location |
| RateLimit | limit-burst-multiplier | Low | location |
| RateLimit | limit-connections | Low | location |
| RateLimit | limit-rate | Low | location |
| RateLimit | limit-rate-after | Low | location |
| RateLimit | limit-rpm | Low | location |
| RateLimit | limit-rps | Low | location |
| Redirect | from-to-www-redirect | Low | location |
| Redirect | permanent-redirect | Medium | location |
| Redirect | permanent-redirect-code | Low | location |
| Redirect | relative-redirects | Low | location |
| Redirect | temporal-redirect | Medium | location |
| Redirect | temporal-redirect-code | Low | location |
| Rewrite | app-root | Medium | location |
| Rewrite | force-ssl-redirect | Medium | location |
| Rewrite | preserve-trailing-slash | Medium | location |
| Rewrite | rewrite-target | Medium | ingress |
| Rewrite | ssl-redirect | Low | location |
| Rewrite | use-regex | Low | location |
| SSLCipher | ssl-ciphers | Low | ingress |
| SSLCipher | ssl-prefer-server-ciphers | Low | ingress |
| SSLPassthrough | ssl-passthrough | Low | ingress |
| Satisfy | satisfy | Low | location |
| ServerSnippet | server-snippet | Critical | ingress |
| ServiceUpstream | service-upstream | Low | ingress |
| SessionAffinity | affinity | Low | ingress |
| SessionAffinity | affinity-canary-behavior | Low | ingress |
| SessionAffinity | affinity-mode | Medium | ingress |
| SessionAffinity | session-cookie-change-on-failure | Low | ingress |
| SessionAffinity | session-cookie-conditional-samesite-none | Low | ingress |
| SessionAffinity | session-cookie-domain | Medium | ingress |
| SessionAffinity | session-cookie-expires | Medium | ingress |
| SessionAffinity | session-cookie-max-age | Medium | ingress |
| SessionAffinity | session-cookie-name | Medium | ingress |
| SessionAffinity | session-cookie-path | Medium | ingress |
| SessionAffinity | session-cookie-samesite | Low | ingress |
| SessionAffinity | session-cookie-secure | Low | ingress |
| StreamSnippet | stream-snippet | Critical | ingress |
| UpstreamHashBy | upstream-hash-by | High | location |
| UpstreamHashBy | upstream-hash-by-subset | Low | location |
| UpstreamHashBy | upstream-hash-by-subset-size | Low | location |
| UpstreamVhost | upstream-vhost | Low | location |
| UsePortInRedirects | use-port-in-redirects | Low | location |
| XForwardedPrefix | x-forwarded-prefix | Medium | location |
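The Risk column matters because the controller can be told to reject annotations above a given risk level. A sketch of a controller Deployment args fragment; the `--annotations-risk-level` flag and its accepted values are an assumption here, so verify them against your controller version's CLI reference before use:

```yaml
# hypothetical controller Deployment fragment
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      # reject Ingress annotations classified above this level
      - --annotations-risk-level=High
```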
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/nginx-configuration/annotations-risk.md
# Annotations

You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.

!!! tip
    Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, i.e. `"true"`, `"false"`, `"100"`.

!!! note
    The annotation prefix can be changed using the [`--annotations-prefix` command line argument](../cli-arguments.md), but the default is `nginx.ingress.kubernetes.io`, as described in the table below.

|Name | type |
|---------------------------|------|
|[nginx.ingress.kubernetes.io/app-root](#rewrite)|string|
|[nginx.ingress.kubernetes.io/affinity](#session-affinity)|cookie|
|[nginx.ingress.kubernetes.io/affinity-mode](#session-affinity)|"balanced" or "persistent"|
|[nginx.ingress.kubernetes.io/affinity-canary-behavior](#session-affinity)|"sticky" or "legacy"|
|[nginx.ingress.kubernetes.io/auth-realm](#authentication)|string|
|[nginx.ingress.kubernetes.io/auth-secret](#authentication)|string|
|[nginx.ingress.kubernetes.io/auth-secret-type](#authentication)|string|
|[nginx.ingress.kubernetes.io/auth-type](#authentication)|"basic" or "digest"|
|[nginx.ingress.kubernetes.io/auth-tls-secret](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-tls-verify-depth](#client-certificate-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-tls-verify-client](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-tls-error-page](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream](#client-certificate-authentication)|"true" or "false"|
|[nginx.ingress.kubernetes.io/auth-tls-match-cn](#client-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-url](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-cache-key](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-cache-duration](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-keepalive](#external-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-keepalive-share-vars](#external-authentication)|"true" or "false"|
|[nginx.ingress.kubernetes.io/auth-keepalive-requests](#external-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-keepalive-timeout](#external-authentication)|number|
|[nginx.ingress.kubernetes.io/auth-proxy-set-headers](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/auth-snippet](#external-authentication)|string|
|[nginx.ingress.kubernetes.io/enable-global-auth](#external-authentication)|"true" or "false"|
|[nginx.ingress.kubernetes.io/backend-protocol](#backend-protocol)|string|
|[nginx.ingress.kubernetes.io/canary](#canary)|"true" or "false"|
|[nginx.ingress.kubernetes.io/canary-by-header](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-by-header-value](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-by-header-pattern](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-by-cookie](#canary)|string|
|[nginx.ingress.kubernetes.io/canary-weight](#canary)|number|
|[nginx.ingress.kubernetes.io/canary-weight-total](#canary)|number|
|[nginx.ingress.kubernetes.io/client-body-buffer-size](#client-body-buffer-size)|string|
|[nginx.ingress.kubernetes.io/configuration-snippet](#configuration-snippet)|string|
|[nginx.ingress.kubernetes.io/custom-http-errors](#custom-http-errors)|[]int|
|[nginx.ingress.kubernetes.io/custom-headers](#custom-headers)|string|
|[nginx.ingress.kubernetes.io/default-backend](#default-backend)|string|
|[nginx.ingress.kubernetes.io/enable-cors](#enable-cors)|"true" or "false"|
|[nginx.ingress.kubernetes.io/cors-allow-origin](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-allow-methods](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-allow-headers](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-expose-headers](#enable-cors)|string|
|[nginx.ingress.kubernetes.io/cors-allow-credentials](#enable-cors)|"true" or "false"|
|[nginx.ingress.kubernetes.io/cors-max-age](#enable-cors)|number|
|[nginx.ingress.kubernetes.io/force-ssl-redirect](#server-side-https-enforcement-through-redirect)|"true" or "false"|
|[nginx.ingress.kubernetes.io/from-to-www-redirect](#redirect-fromto-www)|"true" or "false"|
|[nginx.ingress.kubernetes.io/http2-push-preload](#http2-push-preload)|"true" or "false"|
|[nginx.ingress.kubernetes.io/limit-connections](#rate-limiting)|number|
|[nginx.ingress.kubernetes.io/limit-rps](#rate-limiting)|number|
|[nginx.ingress.kubernetes.io/permanent-redirect](#permanent-redirect)|string|
|[nginx.ingress.kubernetes.io/permanent-redirect-code](#permanent-redirect-code)|number|
|[nginx.ingress.kubernetes.io/temporal-redirect](#temporal-redirect)|string|
|[nginx.ingress.kubernetes.io/temporal-redirect-code](#temporal-redirect-code)|number|
|[nginx.ingress.kubernetes.io/preserve-trailing-slash](#server-side-https-enforcement-through-redirect)|"true" or "false"|
|[nginx.ingress.kubernetes.io/proxy-body-size](#custom-max-body-size)|string|
|[nginx.ingress.kubernetes.io/proxy-cookie-domain](#proxy-cookie-domain)|string|
|[nginx.ingress.kubernetes.io/proxy-cookie-path](#proxy-cookie-path)|string|
|[nginx.ingress.kubernetes.io/proxy-connect-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-send-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-read-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-next-upstream](#custom-timeouts)|string|
|[nginx.ingress.kubernetes.io/proxy-next-upstream-timeout](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-next-upstream-tries](#custom-timeouts)|number|
|[nginx.ingress.kubernetes.io/proxy-request-buffering](#custom-timeouts)|string|
|[nginx.ingress.kubernetes.io/proxy-redirect-from](#proxy-redirect)|string|
|[nginx.ingress.kubernetes.io/proxy-redirect-to](#proxy-redirect)|string|
|[nginx.ingress.kubernetes.io/proxy-http-version](#proxy-http-version)|"1.0" or "1.1"|
|[nginx.ingress.kubernetes.io/proxy-ssl-secret](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-ciphers](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-name](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-protocols](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-verify](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/proxy-ssl-verify-depth](#backend-certificate-authentication)|number|
|[nginx.ingress.kubernetes.io/proxy-ssl-server-name](#backend-certificate-authentication)|string|
|[nginx.ingress.kubernetes.io/enable-rewrite-log](#enable-rewrite-log)|"true" or "false"|
|[nginx.ingress.kubernetes.io/rewrite-target](#rewrite)|URI|
|[nginx.ingress.kubernetes.io/satisfy](#satisfy)|string|
|[nginx.ingress.kubernetes.io/server-alias](#server-alias)|string|
|[nginx.ingress.kubernetes.io/server-snippet](#server-snippet)|string|
|[nginx.ingress.kubernetes.io/service-upstream](#service-upstream)|"true" or "false"|
|[nginx.ingress.kubernetes.io/session-cookie-change-on-failure](#cookie-affinity)|"true" or "false"|
|[nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none](#cookie-affinity)|"true" or "false"|
|[nginx.ingress.kubernetes.io/session-cookie-domain](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-expires](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-max-age](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-name](#cookie-affinity)|string (default "INGRESSCOOKIE")|
|[nginx.ingress.kubernetes.io/session-cookie-path](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/session-cookie-samesite](#cookie-affinity)|string ("None", "Lax" or "Strict")|
|[nginx.ingress.kubernetes.io/session-cookie-secure](#cookie-affinity)|string|
|[nginx.ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|"true" or "false"| |[nginx.ingress.kubernetes.io/ssl-passthrough](#ssl-passthrough)|"true" or "false"| |[nginx.ingress.kubernetes.io/stream-snippet](#stream-snippet)|string| |[nginx.ingress.kubernetes.io/upstream-hash-by](#custom-nginx-upstream-hashing)|string| |[nginx.ingress.kubernetes.io/x-forwarded-prefix](#x-forwarded-prefix-header)|string| |[nginx.ingress.kubernetes.io/load-balance](#custom-nginx-load-balancing)|string| |[nginx.ingress.kubernetes.io/upstream-vhost](#custom-nginx-upstream-vhost)|string| |[nginx.ingress.kubernetes.io/denylist-source-range](#denylist-source-range)|CIDR| |[nginx.ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR| |[nginx.ingress.kubernetes.io/proxy-buffering](#proxy-buffering)|string| |[nginx.ingress.kubernetes.io/proxy-buffers-number](#proxy-buffers-number)|number| |[nginx.ingress.kubernetes.io/proxy-buffer-size](#proxy-buffer-size)|string| |[nginx.ingress.kubernetes.io/proxy-busy-buffers-size](#proxy-busy-buffers-size)|string| |[nginx.ingress.kubernetes.io/proxy-max-temp-file-size](#proxy-max-temp-file-size)|string| |[nginx.ingress.kubernetes.io/ssl-ciphers](#ssl-ciphers)|string| |[nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers](#ssl-ciphers)|"true" or "false"| |[nginx.ingress.kubernetes.io/connection-proxy-header](#connection-proxy-header)|string| |[nginx.ingress.kubernetes.io/enable-access-log](#enable-access-log)|"true" or "false"| |[nginx.ingress.kubernetes.io/enable-opentelemetry](#enable-opentelemetry)|"true" or "false"| |[nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span](#opentelemetry-trust-incoming-span)|"true" or "false"| |[nginx.ingress.kubernetes.io/use-regex](#use-regex)|bool| |[nginx.ingress.kubernetes.io/enable-modsecurity](#modsecurity)|bool| |[nginx.ingress.kubernetes.io/enable-owasp-core-rules](#modsecurity)|bool| 
|[nginx.ingress.kubernetes.io/modsecurity-transaction-id](#modsecurity)|string|
|[nginx.ingress.kubernetes.io/modsecurity-snippet](#modsecurity)|string|
|[nginx.ingress.kubernetes.io/mirror-request-body](#mirror)|string|
|[nginx.ingress.kubernetes.io/mirror-target](#mirror)|string|
|[nginx.ingress.kubernetes.io/mirror-host](#mirror)|string|

### Canary

In some cases, you may want to "canary" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after `nginx.ingress.kubernetes.io/canary: "true"` is set:

* `nginx.ingress.kubernetes.io/canary-by-header`: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to `always`, it will be routed to the canary. When the header is set to `never`, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.

* `nginx.ingress.kubernetes.io/canary-by-header-value`: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with `nginx.ingress.kubernetes.io/canary-by-header`. The annotation is an extension of `nginx.ingress.kubernetes.io/canary-by-header` to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the `nginx.ingress.kubernetes.io/canary-by-header` annotation is not defined.
* `nginx.ingress.kubernetes.io/canary-by-header-pattern`: This works the same way as `canary-by-header-value` except it does PCRE Regex matching. Note that when `canary-by-header-value` is set this annotation will be ignored. When the given Regex causes an error during request processing, the request will be considered as not matching.

* `nginx.ingress.kubernetes.io/canary-by-cookie`: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to `always`, it will be routed to the canary. When the cookie is set to `never`, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.
* `nginx.ingress.kubernetes.io/canary-weight`: The integer based (0 - `<weight-total>`) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of `<weight-total>` implies all requests will be sent to the alternative service specified in the Ingress. `<weight-total>` defaults to 100, and can be increased via `nginx.ingress.kubernetes.io/canary-weight-total`.

* `nginx.ingress.kubernetes.io/canary-weight-total`: The total weight of traffic. If unspecified, it defaults to 100.

Canary rules are evaluated in order of precedence. Precedence is as follows: `canary-by-header -> canary-by-cookie -> canary-weight`

**Note** that when you mark an ingress as canary, all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except `nginx.ingress.kubernetes.io/load-balance`, `nginx.ingress.kubernetes.io/upstream-hash-by`, and [annotations related to session affinity](#session-affinity). If you want to restore the original behavior of canaries when session affinity was ignored, set the `nginx.ingress.kubernetes.io/affinity-canary-behavior` annotation with value `legacy` on the canary ingress definition.

**Known Limitations**

Currently a maximum of one canary ingress can be applied per Ingress rule.

### Rewrite

In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation `nginx.ingress.kubernetes.io/rewrite-target` to the path expected by the service.
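As an illustrative sketch (the host, Ingress name, paths, and service name below are hypothetical), a rewrite that strips a `/something` prefix by capturing the remainder of the path might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-demo            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 refers to the second capture group in the path below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: rewrite.example.com
      http:
        paths:
          - path: /something(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: http-svc  # hypothetical backend service
                port:
                  number: 80
```

With this sketch, a request for `/something/new` would reach the backend as `/new`.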
If the Application Root is exposed in a different path and needs to be redirected, set the annotation `nginx.ingress.kubernetes.io/app-root` to redirect requests for `/`.

!!! example
    Please check the [rewrite](../../examples/rewrite/README.md) example.

### Session Affinity

The annotation `nginx.ingress.kubernetes.io/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is `cookie`.

The annotation `nginx.ingress.kubernetes.io/affinity-mode` defines the stickiness of a session. Setting this to `balanced` (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to `persistent` will not rebalance sessions to new servers, therefore providing maximum stickiness.

The annotation `nginx.ingress.kubernetes.io/affinity-canary-behavior` defines the behavior of canaries when session affinity is enabled. Setting this to `sticky` (default) will ensure that users that were served by canaries will continue to be served by canaries. Setting this to `legacy` will restore the original canary behavior, when session affinity was ignored.

!!! attention
    If more than one Ingress is defined for a host and at least one Ingress uses `nginx.ingress.kubernetes.io/affinity: cookie`, then only paths on the Ingress using `nginx.ingress.kubernetes.io/affinity` will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.

!!! example
    Please check the [affinity](../../examples/affinity/cookie/README.md) example.

#### Cookie affinity

If you use the `cookie` affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation `nginx.ingress.kubernetes.io/session-cookie-name`. The default is to create a cookie named 'INGRESSCOOKIE'.
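As a minimal sketch (the Ingress name, cookie name, and max-age value are illustrative), cookie affinity can be enabled with annotations such as:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: affinity-demo           # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    # route requests via a cookie named "route" instead of the default INGRESSCOOKIE
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    # keep the cookie for two days
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
```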
The NGINX annotation `nginx.ingress.kubernetes.io/session-cookie-path` defines the path that will be set on the cookie. This is optional unless the annotation `nginx.ingress.kubernetes.io/use-regex` is set to true; session cookie paths do not support regex.

Use `nginx.ingress.kubernetes.io/session-cookie-domain` to set the `Domain` attribute of the sticky cookie.

Use `nginx.ingress.kubernetes.io/session-cookie-samesite` to apply a `SameSite` attribute to the sticky cookie. Browser accepted values are `None`, `Lax`, and `Strict`. Some browsers reject cookies with `SameSite=None`, including those created
before the `SameSite=None` specification (e.g. Chrome 5X). Other browsers mistakenly treat `SameSite=None` cookies as `SameSite=Strict` (e.g. Safari running on OSX 14). To omit `SameSite=None` from browsers with these incompatibilities, add the annotation `nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: "true"`.

Use `nginx.ingress.kubernetes.io/session-cookie-expires` to control when the cookie expires; its value is a number of seconds until the cookie expires.

Use `nginx.ingress.kubernetes.io/session-cookie-path` to control the cookie path when use-regex is set to true.

Use `nginx.ingress.kubernetes.io/session-cookie-change-on-failure` to control whether the cookie is changed after a request failure.

### Authentication

It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.

The annotations are:

```
nginx.ingress.kubernetes.io/auth-type: [basic|digest]
```

Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617).

```
nginx.ingress.kubernetes.io/auth-secret: secretName
```

The name of the Secret that contains the usernames and passwords which are granted access to the `path`s defined in the Ingress rules. This annotation also accepts the alternative form "namespace/secretName", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.
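Putting `auth-type` and `auth-secret` together, a minimal basic-auth setup might look like the following sketch (the Ingress name, secret name, host, and service are illustrative; the secret is assumed to hold an htpasswd file under the key `auth`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-auth       # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret containing the htpasswd file under the key "auth"
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message shown to the user when authentication is required
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
    - host: foo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: http-svc  # hypothetical backend service
                port:
                  number: 80
```

The referenced secret could be created with, for example, `htpasswd -c auth myuser` followed by `kubectl create secret generic basic-auth --from-file=auth`.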
```
nginx.ingress.kubernetes.io/auth-secret-type: [auth-file|auth-map]
```

The `auth-secret` can have two forms:

- `auth-file` - default, an htpasswd file in the key `auth` within the secret
- `auth-map` - the keys of the secret are the usernames, and the values are the hashed passwords

```
nginx.ingress.kubernetes.io/auth-realm: "realm string"
```

!!! example
    Please check the [auth](../../examples/auth/basic/README.md) example.

### Custom NGINX upstream hashing

NGINX supports load balancing by client-server mapping based on [consistent hashing](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](https://www.last.fm/user/RJ/journal/2007/04/10/rz_libketama_-_a_consistent_hashing_algo_for_memcache_clients) consistent hashing method will be used, which ensures only a few keys would be remapped to different servers on upstream group changes.

There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution.

To enable consistent hashing for a backend:

`nginx.ingress.kubernetes.io/upstream-hash-by`: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: `nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"` or `nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri$host"` or `nginx.ingress.kubernetes.io/upstream-hash-by: "${request_uri}-text-value"` to consistently hash upstream requests by the current request URI.

"subset" hashing can be enabled by setting `nginx.ingress.kubernetes.io/upstream-hash-by-subset`: "true".
This maps requests to a subset of nodes instead of a single one. `nginx.ingress.kubernetes.io/upstream-hash-by-subset-size` determines the size of each subset (default 3).

Please check the [chashsubset](../../examples/chashsubset/deployment.yaml) example.

### Custom NGINX load balancing

This is similar to [`load-balance` in ConfigMap](./configmap.md#load-balance), but configures the load balancing algorithm per ingress.

> Note that `nginx.ingress.kubernetes.io/upstream-hash-by` takes preference over this. If neither this nor `nginx.ingress.kubernetes.io/upstream-hash-by` is set, then we fall back to the globally configured load balancing algorithm.

### Custom NGINX upstream vhost

This configuration setting allows you to control the value for host in the following statement: `proxy_set_header Host $host`, which forms part of the location block. This is useful if you need to call the upstream server by something other than `$host`.

### Client Certificate Authentication

It is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule. Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.
To enable, add the annotation `nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName`. This secret must have a file named `ca.crt` containing the full Certificate Authority chain that is enabled to authenticate against this Ingress.

You can further customize client certificate authentication and behavior with these annotations:

* `nginx.ingress.kubernetes.io/auth-tls-verify-depth`: The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)
* `nginx.ingress.kubernetes.io/auth-tls-verify-client`: Enables verification of client certificates. Possible values are:
    * `on`: Request a client certificate that must be signed by a certificate that is included in the secret key `ca.crt` of the secret specified by `nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName`. Failed certificate verification will result in a status code 400 (Bad Request). (default)
    * `off`: Don't request client certificates and don't do client certificate verification.
    * `optional`: Do optional client certificate validation against the CAs from `auth-tls-secret`. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
    * `optional_no_ca`: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from `auth-tls-secret`. The certificate verification result is sent to the upstream service.
* `nginx.ingress.kubernetes.io/auth-tls-error-page`: The URL/Page that the user should be redirected to in case of a Certificate Authentication Error.
* `nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream`: Indicates if the received certificates should be passed or not to the upstream server in the header `ssl-client-cert`. Possible values are "true" or "false" (default).
* `nginx.ingress.kubernetes.io/auth-tls-match-cn`: Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with "CN=", example: `"CN=myvalidclient"`. If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: `"CN=(option1|option2|myvalidclient)"`. In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.

The following headers are sent to the upstream service according to the `auth-tls-*` annotations:

* `ssl-client-issuer-dn`: The issuer information of the client certificate. Example: "CN=My CA"
* `ssl-client-subject-dn`: The subject information of the client certificate. Example: "CN=My Client"
* `ssl-client-verify`: The result of the client verification. Possible values: "SUCCESS", "FAILED: "
* `ssl-client-cert`: The full client certificate in PEM format. Will only be sent when `nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream` is set to "true". Example: `-----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A`

!!! example
    Please check the [client-certs](../../examples/auth/client-certs/README.md) example.

!!! attention
    TLS with Client Authentication is **not** possible in Cloudflare and might result in unexpected behavior.
    Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: [https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/](https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/)

    Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: [https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls](https://web.archive.org/web/20200907143649/https://support.cloudflare.com/hc/en-us/articles/204899617-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls#section5)

### Backend Certificate Authentication

It is possible to authenticate to a proxied HTTPS backend with a certificate using additional annotations in the Ingress rule.

* `nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName`: Specifies a Secret with the certificate `tls.crt`, key `tls.key` in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates `ca.crt` in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form "namespace/secretName".
* `nginx.ingress.kubernetes.io/proxy-ssl-verify`: Enables or disables verification of the proxied HTTPS server certificate. (default: off)
* `nginx.ingress.kubernetes.io/proxy-ssl-verify-depth`: Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
* `nginx.ingress.kubernetes.io/proxy-ssl-ciphers`: Specifies the enabled [ciphers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_ciphers) for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
* `nginx.ingress.kubernetes.io/proxy-ssl-name`: Allows setting [proxy_ssl_name](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_name). This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
* `nginx.ingress.kubernetes.io/proxy-ssl-protocols`: Enables the specified [protocols](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_protocols) for requests to a proxied HTTPS server.
* `nginx.ingress.kubernetes.io/proxy-ssl-server-name`: Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.

### Configuration snippet

Using this annotation you can add additional configuration to the NGINX location.
For example:

```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Request-Id: $req_id";
```

Be aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the [related issue on github](https://github.com/kubernetes/ingress-nginx/issues/7837) for more information.

### Custom HTTP Errors

Like the [`custom-http-errors`](./configmap.md#custom-http-errors) value in the ConfigMap, this annotation will set NGINX `proxy-intercept-errors`, but only for the NGINX location associated with this ingress. If a [default backend annotation](#default-backend) is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend).

Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If `custom-http-errors` is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.

Example usage:

```
nginx.ingress.kubernetes.io/custom-http-errors: "404,415"
```

### Custom Headers

This annotation is of the form `nginx.ingress.kubernetes.io/custom-headers: <namespace>/<name>` to specify a namespace and configmap name that contains custom headers. This annotation uses the `more_set_headers` nginx directive.
Example annotation for the following example configmap:

```yaml
nginx.ingress.kubernetes.io/custom-headers: default/custom-headers-configmap
```

Example configmap:

```yaml
apiVersion: v1
data:
  Content-Type: application/json
kind: ConfigMap
metadata:
  name: custom-headers-configmap
  namespace: default
```

!!! attention
    First define the allowed response headers in [global-allowed-response-headers](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#global-allowed-response-headers).

### Default Backend

This annotation is of the form `nginx.ingress.kubernetes.io/default-backend: <svc name>` to specify a custom default backend. This `<svc name>` is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has [multiple ports](https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services), the first one is the one which will receive the backend traffic.

This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the [custom-http-errors annotation](#custom-http-errors) are set.

### Enable CORS

To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation `nginx.ingress.kubernetes.io/enable-cors: "true"`. This will add a section in the server location enabling this functionality.
CORS can be controlled with the following annotations:

* `nginx.ingress.kubernetes.io/cors-allow-methods`: Controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).

    - Default: `GET, PUT, POST, DELETE, PATCH, OPTIONS`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"`

* `nginx.ingress.kubernetes.io/cors-allow-headers`: Controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.

    - Default: `DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO"`

* `nginx.ingress.kubernetes.io/cors-expose-headers`: Controls which headers are exposed to the response. This is a multi-valued field, separated by ',' and accepts letters, numbers, _, - and *.

    - Default: *empty*
    - Example: `nginx.ingress.kubernetes.io/cors-expose-headers: "*, X-CustomResponseHeader"`

* `nginx.ingress.kubernetes.io/cors-allow-origin`: Controls what's the accepted Origin for CORS. This is a multi-valued field, separated by ','.
  It must follow this format: `protocol://origin-site.com` or `protocol://origin-site.com:port`
    - Default: `*`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com:4443, http://origin-site.com, myprotocol://example.org:1199"`

  It also supports single level wildcard subdomains and follows this format: `protocol://*.foo.bar`, `protocol://*.bar.foo:8080` or `protocol://*.abc.bar.foo:9000`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-origin: "https://*.origin-site.com:4443, http://*.origin-site.com, myprotocol://example.org:1199"`

* `nginx.ingress.kubernetes.io/cors-allow-credentials`: Controls if credentials can be passed during CORS operations.
    - Default: `true`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-credentials: "false"`

* `nginx.ingress.kubernetes.io/cors-max-age`: Controls how long preflight requests can be cached.
    - Default: `1728000`
    - Example: `nginx.ingress.kubernetes.io/cors-max-age: 600`

!!! note
    For more information please see [https://enable-cors.org](https://enable-cors.org/server_nginx.html)

### HTTP2 Push Preload

Enables automatic conversion of preload links specified in the "Link" response header fields into push requests.

!!! example
    * `nginx.ingress.kubernetes.io/http2-push-preload: "true"`

### Server Alias

Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation `nginx.ingress.kubernetes.io/server-alias: "<alias 1>,<alias 2>"`. This will create a server with the same configuration, but adding new values to the `server_name` directive.

!!! note
    A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored.
    If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take place over the alias configuration.
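Several of the CORS annotations above are often combined on one Ingress. A minimal sketch, assuming a hypothetical Ingress `api-ingress` fronting a service `api-svc` (both names are illustrative, not from this document):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress                # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    # single allowed origin; multiple origins would be comma separated
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.example.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    # cache preflight responses for 10 minutes
    nginx.ingress.kubernetes.io/cors-max-age: "600"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-svc      # hypothetical backend service
                port:
                  number: 80
```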
For more information please see [the `server_name` documentation](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name).

### Server snippet

Using the annotation `nginx.ingress.kubernetes.io/server-snippet` it is possible to add custom configuration in the server configuration block.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      set $agentflag 0;
      if ($http_user_agent ~* "(Mobile)" ){
        set $agentflag 1;
      }
      if ( $agentflag = 1 ) {
        return 301 https://m.example.com;
      }
```

!!! attention
    This annotation can be used only once per host.

### Client Body Buffer Size

Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.

!!! note
    The annotation value must be given in a format understood by Nginx.

!!! example
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: "1000"` # 1000 bytes
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1k` # 1 kilobyte
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1K` # 1 kilobyte
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1m` # 1 megabyte
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1M` # 1 megabyte

For more information please see [https://nginx.org](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size)

### External Authentication

To use an existing service that provides authentication the Ingress rule can be annotated with `nginx.ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request
should be sent.

```yaml
nginx.ingress.kubernetes.io/auth-url: "URL to the authentication service"
```

Additionally it is possible to set:

* `nginx.ingress.kubernetes.io/auth-keepalive`: to specify the maximum number of keepalive connections to `auth-url`. Only takes effect when no variables are used in the host part of the URL. Defaults to `0` (keepalive disabled).

    > Note: does not work with HTTP/2 listener because of a limitation in Lua [subrequests](https://github.com/openresty/lua-nginx-module#spdy-mode-not-fully-supported).
    > [UseHTTP2](./configmap.md#use-http2) configuration should be disabled!

* `nginx.ingress.kubernetes.io/auth-keepalive-share-vars`: Whether to share Nginx variables among the current request and the auth request. Example use case is to track requests: when set to "true" X-Request-ID HTTP header will be the same for the backend and the auth request. Defaults to "false".

* `nginx.ingress.kubernetes.io/auth-keepalive-requests`: to specify the maximum number of requests that can be served through one keepalive connection. Defaults to `1000` and only applied if `auth-keepalive` is set to higher than `0`.

* `nginx.ingress.kubernetes.io/auth-keepalive-timeout`: to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open. Defaults to `60` and only applied if `auth-keepalive` is set to higher than `0`.
* `nginx.ingress.kubernetes.io/auth-method`: to specify the HTTP method to use.
* `nginx.ingress.kubernetes.io/auth-signin`: to specify the location of the error page.
* `nginx.ingress.kubernetes.io/auth-signin-redirect-param`: to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
* `nginx.ingress.kubernetes.io/auth-response-headers`: to specify headers to pass to backend once authentication request completes.
* `nginx.ingress.kubernetes.io/auth-proxy-set-headers`: the name of a ConfigMap that specifies headers to pass to the authentication service.
* `nginx.ingress.kubernetes.io/auth-request-redirect`: to specify the X-Auth-Request-Redirect header value.
* `nginx.ingress.kubernetes.io/auth-cache-key`: this enables caching for auth requests. Specify a lookup key for auth responses, e.g. `$remote_user$http_authorization`. Each server and location has its own keyspace. Hence a cached response is only valid on a per-server and per-location basis.
* `nginx.ingress.kubernetes.io/auth-cache-duration`: to specify a caching time for auth responses based on their response codes, e.g. `200 202 30m`. See [proxy_cache_valid](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid) for details. You may specify multiple, comma-separated values: `200 202 10m, 401 5m`. Defaults to `200 202 401 5m`.
* `nginx.ingress.kubernetes.io/auth-always-set-cookie`: to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
* `nginx.ingress.kubernetes.io/auth-snippet`: to specify a custom snippet to use with external authentication, e.g.
```yaml
nginx.ingress.kubernetes.io/auth-url: http://foo.com/external-auth
nginx.ingress.kubernetes.io/auth-snippet: |
  proxy_set_header Foo-Header 42;
```

> Note: `nginx.ingress.kubernetes.io/auth-snippet` is an optional annotation. However, it may only be used in conjunction with `nginx.ingress.kubernetes.io/auth-url` and will be ignored if `nginx.ingress.kubernetes.io/auth-url` is not set.

!!! example
    Please check the [external-auth](../../examples/auth/external-auth/README.md) example.

#### Global External Authentication

By default the controller redirects all requests to an existing service that provides authentication if `global-auth-url` is set in the Ingress NGINX ConfigMap. If you want to disable this behavior for that Ingress, you can use the `nginx.ingress.kubernetes.io/enable-global-auth: "false"` annotation.

- `nginx.ingress.kubernetes.io/enable-global-auth`: indicates if GlobalExternalAuth configuration should be applied or not to this Ingress rule. Default value is `"true"`.

!!! note
    For more information please see [global-auth-url](./configmap.md#global-auth-url).

### Rate Limiting

These annotations define limits on connections and transmission rates. These can be used to mitigate [DDoS Attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).

!!! attention
    Rate limits are applied per Ingress NGINX controller replica. If you're running multiple replicas or using a
horizontal pod autoscaler (HPA), the effective rate limit will be multiplied by the number of replicas. When using HPA, the exact rate limit becomes dynamic as the number of replicas may change based on load.

* `nginx.ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address per controller replica. A 503 error is returned when exceeding this limit.
* `nginx.ingress.kubernetes.io/limit-rps`: number of requests accepted from a given IP each second per controller replica. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) ***default:*** 503 is returned.
* `nginx.ingress.kubernetes.io/limit-rpm`: number of requests accepted from a given IP each minute per controller replica. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) ***default:*** 503 is returned.
* `nginx.ingress.kubernetes.io/limit-burst-multiplier`: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier.
  When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) ***default:*** 503 is returned.
* `nginx.ingress.kubernetes.io/limit-rate-after`: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with [proxy-buffering](#proxy-buffering) enabled.
* `nginx.ingress.kubernetes.io/limit-rate`: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with [proxy-buffering](#proxy-buffering) enabled.
* `nginx.ingress.kubernetes.io/limit-whitelist`: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.

If you specify multiple annotations in a single Ingress rule, limits are applied in the order `limit-connections`, `limit-rpm`, `limit-rps`.

To configure settings globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the [NGINX ConfigMap](./configmap.md#limit-rate). The value set in an Ingress annotation will override the global setting.

The client IP address will be set based on the use of [PROXY protocol](./configmap.md#use-proxy-protocol) or from the `X-Forwarded-For` header value when [use-forwarded-headers](./configmap.md#use-forwarded-headers) is enabled.

### Permanent Redirect

This annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example `nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com` would redirect everything to Google.

### Permanent Redirect Code

This annotation allows you to modify the status code used for permanent redirects. For example `nginx.ingress.kubernetes.io/permanent-redirect-code: '308'` would return your permanent-redirect with a 308.
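The rate-limiting annotations above can be combined on one Ingress. A minimal sketch, assuming a hypothetical Ingress `rate-limited` and backend service `web-svc` (both names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rate-limited             # hypothetical name
  annotations:
    # 5 requests/second per client IP, per controller replica
    nginx.ingress.kubernetes.io/limit-rps: "5"
    # burst size = limit-rps * multiplier = 5 * 3 = 15
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"
    # internal clients are exempt from rate limiting
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc    # hypothetical backend service
                port:
                  number: 80
```

Per the attention note above, with e.g. two controller replicas this effectively allows around 10 requests/second per client IP.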
### Temporal Redirect

This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example `nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com` would redirect everything to Google with a Return Code of 302 (Moved Temporarily).

### Temporal Redirect Code

This annotation allows you to modify the status code used for temporal redirects. For example `nginx.ingress.kubernetes.io/temporal-redirect-code: '307'` would return your temporal-redirect with a 307.

### SSL Passthrough

The annotation `nginx.ingress.kubernetes.io/ssl-passthrough` instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also [TLS/HTTPS](../tls.md#ssl-passthrough) in the User guide.

!!! note
    SSL Passthrough is **disabled by default** and requires starting the controller with the [`--enable-ssl-passthrough`](../cli-arguments.md) flag.

!!! attention
    Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.

### Service Upstream

By default the Ingress-Nginx Controller uses a list of all
endpoints (Pod IP/port) in the NGINX upstream configuration.

The `nginx.ingress.kubernetes.io/service-upstream` annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

This can be desirable for things like zero-downtime deployments. See issue [#257](https://github.com/kubernetes/ingress-nginx/issues/257).

#### Known Issues

If the `service-upstream` annotation is specified the following things should be taken into consideration:

* Sticky Sessions will not work as only round-robin load balancing is supported.
* The `proxy_next_upstream` directive will not have any effect, meaning on error the request will not be dispatched to another upstream.

### Server-side HTTPS enforcement through redirect

By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use `ssl-redirect: "false"` in the NGINX [ConfigMap](./configmap.md#ssl-redirect).

To configure this feature for specific ingress resources, you can use the `nginx.ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.

When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the `nginx.ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.

To preserve the trailing slash in the URI with `ssl-redirect`, set `nginx.ingress.kubernetes.io/preserve-trailing-slash: "true"` annotation for that particular resource.
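A minimal sketch of the SSL-offloading case described above, with hypothetical names: TLS terminates at an external load balancer, so the Ingress carries no `tls` section, yet plain-HTTP clients should still be redirected to HTTPS:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: force-https              # hypothetical name
  annotations:
    # no TLS certificate on this Ingress; redirect to HTTPS anyway,
    # since TLS is terminated upstream (e.g. at an AWS ELB)
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-svc   # hypothetical backend service
                port:
                  number: 80
```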
### Redirect from/to www

In some scenarios, it is required to redirect from `www.domain.com` to `domain.com` or vice versa; which way the redirect is performed depends on the configured `host` value in the Ingress object.

For example, if `.spec.rules.host` is configured with a value like `www.example.com`, then this annotation will redirect from `example.com` to `www.example.com`. If `.spec.rules.host` is configured with a value like `example.com`, so without a `www`, then this annotation will redirect from `www.example.com` to `example.com` instead.

To enable this feature use the annotation `nginx.ingress.kubernetes.io/from-to-www-redirect: "true"`

!!! attention
    If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`) the annotation will be omitted.

!!! attention
    For HTTPS to HTTPS redirects it is mandatory that the SSL Certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.

### Denylist source range

You can specify blocked client IP source ranges through the `nginx.ingress.kubernetes.io/denylist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.

To configure this setting globally for all Ingress rules, the `denylist-source-range` value may be set in the [NGINX ConfigMap](./configmap.md#denylist-source-range).

!!! note
    Adding an annotation to an Ingress rule overrides any global restriction.

### Whitelist source range

You can specify allowed client IP source ranges through the `nginx.ingress.kubernetes.io/whitelist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.
To configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the [NGINX ConfigMap](./configmap.md#whitelist-source-range).

!!! note
    Adding an annotation to an Ingress rule overrides any global restriction.

### Custom timeouts

Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this we provide annotations that allow this customization:

- `nginx.ingress.kubernetes.io/proxy-connect-timeout`
- `nginx.ingress.kubernetes.io/proxy-send-timeout`
- `nginx.ingress.kubernetes.io/proxy-read-timeout`
- `nginx.ingress.kubernetes.io/proxy-next-upstream`
- `nginx.ingress.kubernetes.io/proxy-next-upstream-timeout`
- `nginx.ingress.kubernetes.io/proxy-next-upstream-tries`
- `nginx.ingress.kubernetes.io/proxy-request-buffering`

If you indicate [Backend Protocol](#backend-protocol) as `GRPC` or `GRPCS`, the following grpc values will be set and inherited from proxy timeouts:
- [`grpc_connect_timeout=5s`](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout), from `nginx.ingress.kubernetes.io/proxy-connect-timeout`
- [`grpc_send_timeout=60s`](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_send_timeout), from `nginx.ingress.kubernetes.io/proxy-send-timeout`
- [`grpc_read_timeout=60s`](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_read_timeout), from `nginx.ingress.kubernetes.io/proxy-read-timeout`

Note: All timeout values are unitless and in seconds, e.g. `nginx.ingress.kubernetes.io/proxy-read-timeout: "120"` sets a valid 120 second proxy read timeout.

### Proxy redirect

The annotations `nginx.ingress.kubernetes.io/proxy-redirect-from` and `nginx.ingress.kubernetes.io/proxy-redirect-to` will set the first and second parameters of NGINX's `proxy_redirect` directive respectively.

It is possible to set the text that should be changed in the `Location` and `Refresh` header fields of a [proxied server response](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect).

Setting "off" or "default" in the annotation `nginx.ingress.kubernetes.io/proxy-redirect-from` disables `nginx.ingress.kubernetes.io/proxy-redirect-to`; otherwise, both annotations must be used in unison.
Note that each annotation must be a string without spaces. By default the value of each annotation is "off".

### Custom max body size

For NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).

To configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-body-size). To use custom values in an Ingress rule define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-body-size: 8m
```

### Proxy cookie domain

Sets a text that [should be changed in the domain attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the "Set-Cookie" header fields of a proxied server response.

To configure this setting globally for all Ingress rules, the `proxy-cookie-domain` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-cookie-domain).

### Proxy cookie path

Sets a text that [should be changed in the path attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the "Set-Cookie" header fields of a proxied server response.

To configure this setting globally for all Ingress rules, the `proxy-cookie-path` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-cookie-path).

### Proxy buffering

Enable or disable proxy buffering [`proxy_buffering`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering). By default proxy buffering is disabled in the NGINX config.

To configure this setting globally for all Ingress rules, the `proxy-buffering` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-buffering).
To use custom values in an Ingress rule define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-buffering: "on"
```

### Proxy buffers number

Sets the number of the buffers in [`proxy_buffers`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) used for reading the first part of the response received from the proxied server. By default proxy buffers number is set as 4.

To configure this setting globally, set `proxy-buffers-number` in the [NGINX ConfigMap](./configmap.md#proxy-buffers-number). To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
```

### Proxy buffer size

Sets the size of the buffer [`proxy_buffer_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) used for reading the first part of the response received from the proxied server. By default proxy buffer size is set as "4k".

To configure this setting globally, set `proxy-buffer-size` in the [NGINX ConfigMap](./configmap.md#proxy-buffer-size). To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
```

### Proxy busy buffers size

[Limits the total size of buffers that can be busy](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) sending a response to the client while the response is not yet fully read. By default, size is limited by the size of two buffers set by the `proxy_buffer_size` and `proxy_buffers` directives.

To configure this setting globally, set `proxy-busy-buffers-size` in the [ConfigMap](./configmap.md#proxy-busy-buffers-size).
To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-busy-buffers-size: "16k"
```

### Proxy max temp file size

When [`buffering`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the [`proxy_buffer_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) and
[`proxy_buffers`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) directives, a part of the response can be saved to a temporary file. This directive sets the maximum `size` of the temporary file setting the [`proxy_max_temp_file_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size). The size of data written to the temporary file at a time is set by the [`proxy_temp_file_write_size`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_temp_file_write_size) directive. The zero value disables buffering of responses to temporary files.

To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"
```

### Proxy HTTP version

Using this annotation sets the [`proxy_http_version`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) that the Nginx reverse proxy will use to communicate with the backend. By default this is set to "1.1".

```yaml
nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
```

### SSL ciphers

Specifies the [enabled ciphers](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers).

Using this annotation will set the `ssl_ciphers` directive at the server level. This configuration is active for all the paths in the host.
```yaml
nginx.ingress.kubernetes.io/ssl-ciphers: "ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP"
```

The following annotation will set the `ssl_prefer_server_ciphers` directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.

```yaml
nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: "true"
```

### Connection proxy header

Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation:

```yaml
nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
```

### Enable Access Log

Access logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given ingress. To do this, use the annotation:

```yaml
nginx.ingress.kubernetes.io/enable-access-log: "false"
```

### Enable Rewrite Log

Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:

```yaml
nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
```

### Enable Opentelemetry

Opentelemetry can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. to turn off telemetry of external health check endpoints).

```yaml
nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
```

### Opentelemetry Trust Incoming Span

The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. only enable on a private endpoint).

!!! note
    This annotation requires `nginx.ingress.kubernetes.io/enable-opentelemetry` to be set to `"true"`, otherwise it will be ignored.
```yaml
nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "true"
```

### X-Forwarded-Prefix Header

To add the non-standard `X-Forwarded-Prefix` header to the upstream request with a string value, the following annotation can be used:

```yaml
nginx.ingress.kubernetes.io/x-forwarded-prefix: "/path"
```

### ModSecurity

[ModSecurity](http://modsecurity.org/) is an OpenSource Web Application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the [ConfigMap](./configmap.md#enable-modsecurity). Note this will enable ModSecurity for all paths, and each path must be disabled manually.

It can be enabled using the following annotation:

```yaml
nginx.ingress.kubernetes.io/enable-modsecurity: "true"
```

ModSecurity will run in "Detection-Only" mode using the [recommended configuration](https://github.com/owasp-modsecurity/ModSecurity/blob/v3/master/modsecurity.conf-recommended).

You can enable the [OWASP Core Rule Set](https://www.modsecurity.org/CRS/Documentation/) by setting the following annotation:

```yaml
nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
```

You can pass transactionIDs from nginx by setting up the following:

```yaml
nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"
```

You can also add your own set of modsecurity rules via a snippet:

```yaml
nginx.ingress.kubernetes.io/modsecurity-snippet: |
  SecRuleEngine On
  SecDebugLog /tmp/modsec_debug.log
```

Note: If
https://github.com/kubernetes/ingress-nginx/blob/main//docs/user-guide/nginx-configuration/annotations.md
main
ingress-nginx
[ 0.003213146235793829, 0.02850358933210373, 0.04046405479311943, -0.017604494467377663, -0.043675992637872696, -0.07964541018009186, 0.02879290096461773, 0.01160587277263403, 0.03725159540772438, 0.10365352034568787, -0.0471838153898716, -0.021404245868325233, -0.07088227570056915, -0.05158...
0.093057
Core Rule Set](https://www.modsecurity.org/CRS/Documentation/) by setting the following annotation: ```yaml nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true" ``` You can pass transactionIDs from nginx by setting up the following: ```yaml nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request\_id" ``` You can also add your own set of modsecurity rules via a snippet: ```yaml nginx.ingress.kubernetes.io/modsecurity-snippet: | SecRuleEngine On SecDebugLog /tmp/modsec\_debug.log ``` Note: If you use both `enable-owasp-core-rules` and `modsecurity-snippet` annotations together, only the `modsecurity-snippet` will take effect. If you wish to include the [OWASP Core Rule Set](https://www.modsecurity.org/CRS/Documentation/) or [recommended configuration](https://github.com/owasp-modsecurity/ModSecurity/blob/v3/master/modsecurity.conf-recommended) simply use the include statement: nginx 0.24.1 and below ```yaml nginx.ingress.kubernetes.io/modsecurity-snippet: | Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf Include /etc/nginx/modsecurity/modsecurity.conf ``` nginx 0.25.0 and above ```yaml nginx.ingress.kubernetes.io/modsecurity-snippet: | Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf ``` ### Backend Protocol Using `backend-protocol` annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces `secure-backends` in older versions) Valid Values: HTTP, HTTPS, AUTO\_HTTP, GRPC, GRPCS and FCGI By default NGINX uses `HTTP`. Example: ```yaml nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" ``` ### Use Regex !!! attention When using this annotation with the NGINX annotation `nginx.ingress.kubernetes.io/affinity` of type `cookie`, `nginx.ingress.kubernetes.io/session-cookie-path` must be also set; Session cookie paths do not support regex. Using the `nginx.ingress.kubernetes.io/use-regex` annotation will indicate whether or not the paths defined on an Ingress use regular expressions. 
The default value is `false`.

The following will indicate that regular expression paths are being used:

```yaml
nginx.ingress.kubernetes.io/use-regex: "true"
```

The following will indicate that regular expression paths are **not** being used:

```yaml
nginx.ingress.kubernetes.io/use-regex: "false"
```

When this annotation is set to `true`, the case insensitive regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

Additionally, if the [`rewrite-target` annotation](#rewrite) is used on any Ingress for a given host, then the case insensitive regular expression [location modifier](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

Please read about [ingress path matching](../ingress-path-matching.md) before using this modifier.

### Satisfy

By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.

```yaml
nginx.ingress.kubernetes.io/satisfy: "any"
```

### Mirror

Enables a request to be mirrored to a mirror backend. Responses from mirror backends are ignored. This feature is useful for seeing how requests behave in "test" backends.

The mirror backend can be set by applying:

```yaml
nginx.ingress.kubernetes.io/mirror-target: https://test.env.com$request_uri
```

By default the request body is sent to the mirror backend, but this can be turned off by applying:

```yaml
nginx.ingress.kubernetes.io/mirror-request-body: "off"
```

Also, by default the Host header for mirrored requests will be set to the host part of the URI in the "mirror-target" annotation.
You can override it with the "mirror-host" annotation:

```yaml
nginx.ingress.kubernetes.io/mirror-target: https://1.2.3.4$request_uri
nginx.ingress.kubernetes.io/mirror-host: "test.env.com"
```

**Note:** The mirror directive will be applied to all paths within the ingress resource.

The request sent to the mirror is linked to the original request. If you have a slow mirror backend, then the original request will be throttled.

For more information on the mirror module see [ngx_http_mirror_module](https://nginx.org/en/docs/http/ngx_http_mirror_module.html)

### Stream snippet

Using the annotation `nginx.ingress.kubernetes.io/stream-snippet` it is possible to add custom stream configuration.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/stream-snippet: |
      server {
        listen 8000;
        proxy_pass 127.0.0.1:80;
      }
```
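Putting the mirror annotations together, a minimal complete Ingress could look like the following sketch (the hostname, service name, and target IP are hypothetical placeholders, not values from this guide):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app                        # hypothetical name
  annotations:
    # mirror every request to the test environment, keeping the request body
    nginx.ingress.kubernetes.io/mirror-target: https://1.2.3.4$request_uri
    nginx.ingress.kubernetes.io/mirror-host: "test.env.com"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app          # hypothetical service
                port:
                  number: 80
```

Remember that the mirror applies to all paths of this Ingress, and a slow mirror backend throttles the original request.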
# Custom NGINX template

The NGINX template is located in the file `/etc/nginx/template/nginx.tmpl`.

Using a [Volume](https://kubernetes.io/docs/concepts/storage/volumes/) it is possible to use a custom template. This includes using a [Configmap](https://kubernetes.io/docs/concepts/storage/volumes/#example-pod-with-a-secret-a-downward-api-and-a-configmap) as source of the template:

```yaml
volumeMounts:
  - mountPath: /etc/nginx/template
    name: nginx-template-volume
    readOnly: true
volumes:
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl
```

**Please note the template is tied to the Go code. Do not change names in the variable `$cfg`.**

For more information about the template syntax please check the [Go template package](https://golang.org/pkg/text/template/).

In addition to the built-in functions provided by the Go package, the following functions are also available:

- empty: returns true if the specified parameter (string) is empty
- contains: [strings.Contains](https://golang.org/pkg/strings/#Contains)
- hasPrefix: [strings.HasPrefix](https://golang.org/pkg/strings/#HasPrefix)
- hasSuffix: [strings.HasSuffix](https://golang.org/pkg/strings/#HasSuffix)
- toUpper: [strings.ToUpper](https://golang.org/pkg/strings/#ToUpper)
- toLower: [strings.ToLower](https://golang.org/pkg/strings/#ToLower)
- split: [strings.Split](https://golang.org/pkg/strings/#Split)
- quote: wraps a string in double quotes
- buildLocation: helps to build the NGINX Location section in each server
- buildProxyPass: builds the reverse proxy configuration
- buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation

TODO:

- buildAuthLocation:
- buildAuthResponseHeaders:
- buildResolvers:
- buildDenyVariable:
- buildUpstreamName:
- buildForwardedFor:
- buildForwardedHost:
- buildAuthSignURL:
- buildNextUpstream:
- filterRateLimits:
- formatIP:
- getenv:
- getIngressInformation:
- serverConfig:
- isLocationAllowed:
- isValidClientBodyBufferSize:
# Log format

The default configuration uses a custom logging format to add additional information about upstreams, response time and status.

```
log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
    '$request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr '
    '$upstream_response_length $upstream_response_time $upstream_status $req_id';
```

| Placeholder | Description |
|-------------|-------------|
| `$proxy_protocol_addr` | remote address if proxy protocol is enabled |
| `$remote_addr` | the source IP address of the client |
| `$remote_user` | user name supplied with the Basic authentication |
| `$time_local` | local time in the Common Log Format |
| `$request` | full original request line |
| `$status` | response status |
| `$body_bytes_sent` | number of bytes sent to a client, not counting the response header |
| `$http_referer` | value of the Referer header |
| `$http_user_agent` | value of the User-Agent header |
| `$request_length` | request length (including request line, header, and request body) |
| `$request_time` | time elapsed since the first bytes were read from the client |
| `$proxy_upstream_name` | name of the upstream. The format is `upstream-<namespace>-<service name>-<service port>` |
| `$proxy_alternative_upstream_name` | name of the alternative upstream. The format is `upstream-<namespace>-<service name>-<service port>` |
| `$upstream_addr` | the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas. |
| `$upstream_response_length` | the length of the response obtained from the upstream server |
| `$upstream_response_time` | time spent on receiving the response from the upstream server as seconds with millisecond resolution |
| `$upstream_status` | status code of the response obtained from the upstream server |
| `$req_id` | value of the `X-Request-ID` HTTP header. If the header is not set, a randomly generated ID. |

Additional available variables:

| Placeholder | Description |
|-------------|-------------|
| `$namespace` | namespace of the ingress |
| `$ingress_name` | name of the ingress |
| `$service_name` | name of the service |
| `$service_port` | port of the service |

Sources:

- [Upstream variables](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables)
- [Embedded variables](https://nginx.org/en/docs/http/ngx_http_core_module.html#variables)
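The format can be customized globally through the controller ConfigMap. A sketch, assuming the standard `log-format-upstream` ConfigMap key and the default ConfigMap name/namespace used by the helm chart (adjust both to your installation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the ConfigMap the controller watches
  namespace: ingress-nginx
data:
  # A shortened format that appends the ingress metadata variables listed above
  log-format-upstream: >-
    $remote_addr - $remote_user [$time_local] "$request" $status
    $body_bytes_sent "$http_referer" "$http_user_agent" $request_length
    $request_time [$proxy_upstream_name] $upstream_addr
    $namespace $ingress_name $service_name
```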
# NGINX Configuration

There are three ways to customize NGINX:

1. [ConfigMap](./configmap.md): using a Configmap to set global configurations in NGINX.
2. [Annotations](./annotations.md): use this if you want a specific configuration for a particular Ingress rule.
3. [Custom template](./custom-template.md): when more specific settings are required, like [open_file_cache](https://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache), adjusting [listen](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) options such as `rcvbuf`, or when it is not possible to change the configuration through the ConfigMap.
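For orientation, the first two approaches look like this in practice (resource names here are illustrative; the ConfigMap name must match the one your controller watches):

```yaml
# Global: a ConfigMap key applied to every server block
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  keep-alive: "10"
---
# Per-Ingress: an annotation scoped to one Ingress rule
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "false"
spec:
  ingressClassName: nginx
  rules:
    - host: example.local          # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example      # illustrative service
                port:
                  number: 80
```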
# ModSecurity Web Application Firewall

ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by OWASP. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - [https://www.modsecurity.org](https://www.modsecurity.org)

The [ModSecurity-nginx](https://github.com/owasp-modsecurity/ModSecurity-nginx) connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).

The default ModSecurity configuration file is located in `/etc/nginx/modsecurity/modsecurity.conf`. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.

To enable the ModSecurity feature we need to specify `enable-modsecurity: "true"` in the configuration configmap.

> __Note:__ the default configuration uses detection only, because that minimizes the chances of post-installation disruption.

Due to the value of the setting [SecAuditLogType=Concurrent](https://github.com/owasp-modsecurity/ModSecurity/wiki/Reference-Manual-(v2.x)#secauditlogtype) the ModSecurity log is stored in multiple files inside the directory `/var/log/audit`. The default `Serial` value in SecAuditLogType can impact performance.

The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory `/etc/nginx/owasp-modsecurity-crs` contains the [OWASP ModSecurity Core Rule Set repository](https://github.com/coreruleset/coreruleset).

Using `enable-owasp-modsecurity-crs: "true"` we enable the use of the rules.

## Supported annotations

For more info on supported annotations, please see [annotations/#modsecurity](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#modsecurity)

## Example of using ModSecurity with plugins via the helm chart

Suppose you have a ConfigMap that contains the contents of the [nextcloud-rule-exclusions plugin](https://github.com/coreruleset/nextcloud-rule-exclusions-plugin/blob/main/plugins/nextcloud-rule-exclusions-before.conf) like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: modsecurity-plugins
data:
  empty-after.conf: |
    # no data
  empty-before.conf: |
    # no data
  empty-config.conf: |
    # no data
  nextcloud-rule-exclusions-before.conf: |
    # this is just a snippet
    # find the full file at https://github.com/coreruleset/nextcloud-rule-exclusions-plugin
    #
    # [ File Manager ]
    # The web interface uploads files, and interacts with the user.
    SecRule REQUEST_FILENAME "@contains /remote.php/webdav" \
        "id:9508102,\
        phase:1,\
        pass,\
        t:none,\
        nolog,\
        ver:'nextcloud-rule-exclusions-plugin/1.2.0',\
        ctl:ruleRemoveById=920420,\
        ctl:ruleRemoveById=920440,\
        ctl:ruleRemoveById=941000-942999,\
        ctl:ruleRemoveById=951000-951999,\
        ctl:ruleRemoveById=953100-953130,\
        ctl:ruleRemoveByTag=attack-injection-php"
```

If you're using the helm chart, you can pass in the following parameters in your `values.yaml`:

```yaml
controller:
  config:
    # Enables Modsecurity
    enable-modsecurity: "true"

    # Update ModSecurity config and rules
    modsecurity-snippet: |
      # this enables the mod security nextcloud plugin
      Include /etc/nginx/owasp-modsecurity-crs/plugins/nextcloud-rule-exclusions-before.conf

      # this enables the default OWASP Core Rule Set
      Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf

      # Enable prevention mode. Options: DetectionOnly,On,Off (default is DetectionOnly)
      SecRuleEngine On

      # Enable scanning of the request body
      SecRequestBodyAccess On

      # Enable XML and JSON parsing
      SecRule REQUEST_HEADERS:Content-Type "(?:text|application(?:/soap\+|/)|application/xml)/" \
        "id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML"
      SecRule REQUEST_HEADERS:Content-Type "application/json" \
        "id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON"

      # Reject if larger (we could also let it pass with ProcessPartial)
      SecRequestBodyLimitAction Reject

      # Send ModSecurity audit logs to the stdout (only for rejected requests)
      SecAuditLog /dev/stdout

      # format the logs in JSON
      SecAuditLogFormat JSON

      # could be On/Off/RelevantOnly
      SecAuditEngine RelevantOnly

  # Add a volume for the plugins directory
  extraVolumes:
    - name: plugins
      configMap:
        name: modsecurity-plugins

  # override the /etc/nginx/owasp-modsecurity-crs/plugins directory with your ConfigMap
  extraVolumeMounts:
    - name: plugins
      mountPath: /etc/nginx/owasp-modsecurity-crs/plugins
```
# OpenTelemetry

Enables requests served by NGINX for distributed telemetry via The OpenTelemetry Project.

Using the third party module [opentelemetry-cpp-contrib/nginx](https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/nginx) the Ingress-Nginx Controller can configure NGINX to enable [OpenTelemetry](http://opentelemetry.io) instrumentation. By default this feature is disabled.

Check out the demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes.

## Usage

To enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap:

```yaml
data:
  enable-opentelemetry: "true"
```

To enable or disable instrumentation for a single Ingress, use the `enable-opentelemetry` annotation:

```yaml
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
```

We must also set the host to use when uploading traces:

```yaml
otlp-collector-host: "otel-coll-collector.otel.svc"
```

NOTE: While the option is called `otlp-collector-host`, you will need to point this to any backend that receives otlp-grpc.

Next you will need to deploy a distributed telemetry system which uses OpenTelemetry. [opentelemetry-collector](https://github.com/open-telemetry/opentelemetry-collector), [Jaeger](https://www.jaegertracing.io/), [Tempo](https://github.com/grafana/tempo), and [zipkin](https://zipkin.io/) have been tested.

Other optional configuration options:

```yaml
# specifies the name to use for the server span
opentelemetry-operation-name

# sets whether or not to trust incoming telemetry spans, Default: true
opentelemetry-trust-incoming-span

# specifies the port to use when uploading traces, Default: 4317
otlp-collector-port

# specifies the service name to use for any traces created, Default: nginx
otel-service-name

# The maximum queue size. After the size is reached data are dropped, Default: 2048
otel-max-queuesize

# The delay interval in milliseconds between two consecutive exports, Default: 5000
otel-schedule-delay-millis

# The maximum batch size of every export. It must be smaller or equal to maxQueueSize, Default: 512
otel-max-export-batch-size

# specifies sample rate for any traces created, Default: 0.01
otel-sampler-ratio

# specifies the sampler to be used when sampling traces.
# The available samplers are: AlwaysOn, AlwaysOff, TraceIdRatioBased, Default: AlwaysOn
otel-sampler

# Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: true
otel-sampler-parent-based
```

Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following:

```yaml
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "true"
```

## Examples

The following examples show how to deploy and test different distributed telemetry systems. These examples can be performed using Docker Desktop.

In the [esigo/nginx-example](https://github.com/esigo/nginx-example) GitHub repository is an example of a simple hello service:

```mermaid
graph TB
    subgraph Browser
        start["http://esigo.dev/hello/nginx"]
    end

    subgraph app
        sa[service-a]
        sb[service-b]
        sa --> |name: nginx| sb
        sb --> |hello nginx!| sa
    end

    subgraph otel
        otc["Otel Collector"]
    end

    subgraph observability
        tempo["Tempo"]
        grafana["Grafana"]
        backend["Jaeger"]
        zipkin["Zipkin"]
    end

    subgraph ingress-nginx
        ngx[nginx]
    end

    subgraph ngx[nginx]
        ng[nginx]
        om[OpenTelemetry module]
    end

    subgraph Node
        app
        otel
        observability
        ingress-nginx
        om --> |otlp-gRPC| otc --> |jaeger| backend
        otc --> |zipkin| zipkin
        otc --> |otlp-gRPC| tempo --> grafana
        sa --> |otlp-gRPC| otc
        sb --> |otlp-gRPC| otc
        start --> ng --> sa
    end
```

To install the example and collectors run:

1. Enable OpenTelemetry and set the otlp-collector-host:

    ```yaml
    $ echo '
      apiVersion: v1
      kind: ConfigMap
      data:
        enable-opentelemetry: "true"
        opentelemetry-config: "/etc/ingress-controller/telemetry/opentelemetry.toml"
        opentelemetry-operation-name: "HTTP $request_method $service_name $uri"
        opentelemetry-trust-incoming-span: "true"
        otlp-collector-host: "otel-coll-collector.otel.svc"
        otlp-collector-port: "4317"
        otel-max-queuesize: "2048"
        otel-schedule-delay-millis: "5000"
        otel-max-export-batch-size: "512"
        otel-service-name: "nginx-proxy" # Opentelemetry resource name
        otel-sampler: "AlwaysOn" # Also: AlwaysOff, TraceIdRatioBased
        otel-sampler-ratio: "1.0"
        otel-sampler-parent-based: "false"
      metadata:
        name: ingress-nginx-controller
        namespace: ingress-nginx
      ' | kubectl replace -f -
    ```
2. Deploy otel-collector, grafana and Jaeger backend:

    ```bash
    # add helm charts needed for grafana and OpenTelemetry collector
    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update

    # deploy cert-manager needed for OpenTelemetry collector operator
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.3/cert-manager.yaml

    # create observability namespace
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml

    # install OpenTelemetry collector operator
    helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator

    # deploy OpenTelemetry collector
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml

    # deploy Jaeger all-in-one
    kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability

    # deploy zipkin
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability

    # deploy tempo and grafana
    helm upgrade --install tempo grafana/tempo --create-namespace -n observability
    helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability
    ```

3. Build and deploy demo app:

    ```bash
    # build images
    make images

    # deploy demo app:
    make deploy-app
    ```
4. Make a few requests to the Service:

    ```bash
    kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8090:80
    curl http://esigo.dev:8090/hello/nginx

    StatusCode        : 200
    StatusDescription : OK
    Content           : {"v":"hello nginx!"}
    RawContent        : HTTP/1.1 200 OK
                        Connection: keep-alive
                        Content-Length: 21
                        Content-Type: text/plain; charset=utf-8
                        Date: Mon, 10 Oct 2022 17:43:33 GMT

                        {"v":"hello nginx!"}
    Forms             : {}
    Headers           : {[Connection, keep-alive], [Content-Length, 21], [Content-Type, text/plain; charset=utf-8], [Date, Mon, 10 Oct 2022 17:43:33 GMT]}
    Images            : {}
    InputFields       : {}
    Links             : {}
    ParsedHtml        : System.__ComObject
    RawContentLength  : 21
    ```

5. View the Grafana UI:

    ```bash
    kubectl port-forward --namespace=observability service/grafana 3000:80
    ```

    In the Grafana interface we can see the details:
    ![grafana screenshot](../../images/otel-grafana-demo.png "grafana screenshot")

6. View the Jaeger UI:

    ```bash
    kubectl port-forward --namespace=observability service/jaeger-all-in-one-query 16686:16686
    ```

    In the Jaeger interface we can see the details:
    ![Jaeger screenshot](../../images/otel-jaeger-demo.png "Jaeger screenshot")

7. View the Zipkin UI:

    ```bash
    kubectl port-forward --namespace=observability service/zipkin 9411:9411
    ```

    In the Zipkin interface we can see the details:
    ![zipkin screenshot](../../images/otel-zipkin-demo.png "zipkin screenshot")

## Migration from OpenTracing, Jaeger, Zipkin and Datadog

If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations.
Here are the mappings for common annotations and configurations:

### Annotations

| Legacy | OpenTelemetry |
|---------------------------------------------------------------|-----------------------------------------------------------------|
| `nginx.ingress.kubernetes.io/enable-opentracing` | `nginx.ingress.kubernetes.io/enable-opentelemetry` |
| `nginx.ingress.kubernetes.io/opentracing-trust-incoming-span` | `nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span` |

### Configs

| Legacy | OpenTelemetry |
|---------------------------------------|----------------------------------------------|
| `opentracing-operation-name` | `opentelemetry-operation-name` |
| `opentracing-location-operation-name` | `opentelemetry-operation-name` |
| `opentracing-trust-incoming-span` | `opentelemetry-trust-incoming-span` |
| `zipkin-collector-port` | `otlp-collector-port` |
| `zipkin-service-name` | `otel-service-name` |
| `zipkin-sample-rate` | `otel-sampler-ratio` |
| `jaeger-collector-port` | `otlp-collector-port` |
| `jaeger-endpoint` | `otlp-collector-port`, `otlp-collector-host` |
| `jaeger-service-name` | `otel-service-name` |
| `jaeger-propagation-format` | `N/A` |
| `jaeger-sampler-type` | `otel-sampler` |
| `jaeger-sampler-param` | `otel-sampler` |
| `jaeger-sampler-host` | `N/A` |
| `jaeger-sampler-port` | `N/A` |
| `jaeger-trace-context-header-name` | `N/A` |
| `jaeger-debug-header` | `N/A` |
| `jaeger-baggage-header` | `N/A` |
| `jaeger-tracer-baggage-header-prefix` | `N/A` |
| `datadog-collector-port` | `otlp-collector-port` |
| `datadog-service-name` | `otel-service-name` |
| `datadog-environment` | `N/A` |
| `datadog-operation-name-override` | `N/A` |
| `datadog-priority-sampling` | `otel-sampler` |
| `datadog-sample-rate` | `otel-sampler-ratio` |
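As a concrete illustration of the annotation mapping above, a migration would replace the legacy keys with their OpenTelemetry equivalents (the values shown are illustrative):

```yaml
# Before: legacy OpenTracing annotations
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-opentracing: "true"
    nginx.ingress.kubernetes.io/opentracing-trust-incoming-span: "false"
```

```yaml
# After: equivalent OpenTelemetry annotations
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
    nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "false"
```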
# Hardening Guide

Do not use in multi-tenant Kubernetes production installations. This project assumes that users that can create Ingress objects are administrators of the cluster.

## Overview

There are several ways to harden and secure nginx. In this documentation two guides are used; they overlap in some points:

- [nginx CIS Benchmark](https://www.cisecurity.org/benchmark/nginx/)
- [cipherlist.eu](https://cipherlist.eu/) (one of many forks of the now dead project cipherli.st)

This guide describes which of the configurations described in those guides is already implemented as default in the nginx implementation of kubernetes ingress, what needs to be configured, what is obsolete because nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and what is difficult or not possible.

Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may lead to specific clients being unable to reach your site, or similar consequences.

This guide refers to chapters in the CIS Benchmark. For full explanation you should refer to the benchmark document itself.

## Configuration Guide

| Chapter in CIS benchmark | Status | Default | Action to do if not default |
|:-------------------------|:-------|:--------|:----------------------------|
| __1 Initial Setup__ | | | |
| __1.1 Installation__ | | | |
| 1.1.1 Ensure NGINX is installed (Scored) | OK | done through helm charts / following documentation to deploy nginx ingress | |
| 1.1.2 Ensure NGINX is installed from source (Not Scored) | OK | done through helm charts / following documentation to deploy nginx ingress | |
| __1.2 Configure Software Updates__ | | | |
| 1.2.1 Ensure package manager repositories are properly configured (Not Scored) | OK | done via helm, nginx version could be overwritten, however compatibility is not ensured then | |
| 1.2.2 Ensure the latest software package is installed (Not Scored) | ACTION NEEDED | done via helm, nginx version could be overwritten, however compatibility is not ensured then | Plan for periodic updates |
| __2 Basic Configuration__ | | | |
| __2.1 Minimize NGINX Modules__ | | | |
| 2.1.1 Ensure only required modules are installed (Not Scored) | OK | Already only needed modules are installed, however proposals for further reduction are welcome | |
| 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) | OK | | |
| 2.1.3 Ensure modules with gzip functionality are disabled (Scored) | OK | | |
| 2.1.4 Ensure the autoindex module is disabled (Scored) | OK | No autoindex configs so far in ingress defaults | |
| __2.2 Account Security__ | | | |
| 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) | OK | Pod configured as user www-data: [See this line in helm chart values](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L10). Compiled with user www-data: [See this line in build script](https://github.com/kubernetes/ingress-nginx/blob/5d67794f4fbf38ec6575476de46201b068eabf87/images/nginx/rootfs/build.sh#L529) | |
| 2.2.2 Ensure the NGINX service account is locked (Scored) | OK | Docker design ensures this | |
| 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) | OK | Shell is nologin: [see this line in build script](https://github.com/kubernetes/ingress-nginx/blob/5d67794f4fbf38ec6575476de46201b068eabf87/images/nginx/rootfs/build.sh#L613) | |
| __2.3 Permissions and Ownership__ | | | |
| 2.3.1 Ensure NGINX directories and files are owned by root (Scored) | OK | Obsolete through docker-design and ingress controller needs to update the configs dynamically | |
| 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) | OK | See previous answer | |
| 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) | OK | No PID-File due to docker design | |
https://github.com/kubernetes/ingress-nginx/blob/main//docs/deploy/hardening-guide.md
| 2.3.4 Ensure the core dump directory is secured (Not Scored) | OK | No working_directory configured by default | |
| __2.4 Network Configuration__ | | | |
| 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) | OK | Ensured by automatic nginx.conf configuration | |
| 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) | OK | They are not rejected but sent to the "default backend", which delivers appropriate errors (mostly 404) | |
| 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) | ACTION NEEDED | Default is 75s | Configure keep-alive to 10 seconds [according to this documentation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#keep-alive) |
| 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) | RISK TO BE ACCEPTED | Not configured; however, the NGINX default is 60s | Not configurable |
| __2.5 Information Disclosure__ | | | |
| 2.5.1 Ensure server_tokens directive is set to `off` (Scored) | OK | server_tokens is configured to off by default | |
| 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) | ACTION NEEDED | 404 shows no version at all; 503 and 403 show "nginx", which is hardcoded, [see this line in the NGINX source code](https://github.com/nginx/nginx/blob/master/src/http/ngx_http_special_response.c#L36) | Configure custom error pages at least for 403, 404, 500 and 503 |
| 2.5.3 Ensure hidden file serving is disabled (Not Scored) | ACTION NEEDED | Config not set | Configure a config.server-snippet, but beware of .well-known challenges or similar. Please refer to the benchmark |
| 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) | ACTION NEEDED | hide-headers not configured | Configure hide-headers with an array of "X-Powered-By" and "Server": [according to this documentation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#hide-headers) |
| __3 Logging__ | | | |
| 3.1 Ensure detailed logging is enabled (Not Scored) | OK | ingress-nginx has a very detailed log format by default | |
| 3.2 Ensure access logging is enabled (Scored) | OK | Access log is enabled by default | |
| 3.3 Ensure error logging is enabled and set to the info logging level (Scored) | OK | Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway | |
| 3.4 Ensure log files are rotated (Scored) | OBSOLETE | Log file handling is not part of ingress-nginx and should be handled separately | |
| 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) | OBSOLETE | See previous answer | |
| 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) | OBSOLETE | See previous answer | |
| 3.7 Ensure proxies pass source IP information (Scored) | OK | Headers are set by default | |
| __4 Encryption__ | | | |
| __4.1 TLS / SSL Configuration__ | | | |
| 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) | OK | Redirect to TLS is the default | |
| 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) | ACTION NEEDED | There are enough manuals on the web for installing certificates; a good way is to use Let's Encrypt through cert-manager | Install proper certificates or use Let's Encrypt with cert-manager |
| 4.1.3 Ensure private key permissions are restricted (Scored) | ACTION NEEDED | See previous answer | |
| 4.1.4 Ensure only modern TLS protocols are used (Scored) | OK/ACTION NEEDED | Default is TLS 1.2 + 1.3; while this is okay for the CIS benchmark, cipherlist.eu recommends only 1.3. This may cut off old OSes | Set controller.config.ssl-protocols to "TLSv1.3" |
| 4.1.5 Disable weak ciphers (Scored) | ACTION NEEDED | Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers | Set controller.config.ssl-ciphers to "EECDH+AESGCM:EDH+AESGCM" |
| 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) | ACTION NEEDED | No custom DH parameters are generated | Generate DH parameters for each ingress deployment you use: [see here for a how-to](https://kubernetes.github.io/ingress-nginx/examples/customization/ssl-dh-param/) |
| 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) | ACTION NEEDED | Not enabled | Set via [this configuration parameter](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-ocsp) |
| 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) | OK | HSTS is enabled by default | |
| 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) | ACTION NEEDED / RISK TO BE ACCEPTED | HPKP not enabled by default | If Let's Encrypt is not used, set a correct HPKP header. There are several ways to implement this; with the Helm charts it works via controller.add-headers. If Let's Encrypt is used, this is complicated; a solution here is yet unknown |
| 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) | DEPENDS ON BACKEND | Highly dependent on the backends; not every backend allows configuring this. Can also be mitigated via a service mesh | If the backend allows it, [the manual is here](https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/) |
| 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) | DEPENDS ON BACKEND | Highly dependent on the backends; not every backend allows configuring this. Can also be mitigated via a service mesh | If the backend allows it, [see the configuration here](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#backend-certificate-authentication) |
| 4.1.12 Ensure your domain is preloaded (Not Scored) | ACTION NEEDED | Preload is not active by default | Set controller.config.hsts-preload to true |
| 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) | OK | Session tickets are disabled by default | |
| 4.1.14 Ensure HTTP/2.0 is used (Not Scored) | OK | http2 is set by default | |
| __5 Request Filtering and Restrictions__ | | | |
| __5.1 Access Control__ | | | |
| 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) | OK/ACTION NEEDED | Depends on the use case; the GeoIP module is compiled into the Ingress-Nginx Controller and there are several ways to use it | If needed, set IP restrictions via annotations or work with config snippets (be careful with the Let's Encrypt HTTP challenge!) |
| 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) | OK/ACTION NEEDED | Depends on the use case | If required, it can be set via a config snippet |
| __5.2 Request Limits__ | | | |
| 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) | ACTION NEEDED | Default timeout is 60s | Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#client-header-timeout) and the respective body equivalent |
| 5.2.2 Ensure the maximum request body size is set correctly (Scored) | ACTION NEEDED | Default is 1m | Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#proxy-body-size) |
| 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) | ACTION NEEDED | Default is 4 8k | Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#large-client-header-buffers) |
| 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) | OK/ACTION NEEDED | No limit set | Depends on the use case; a limit can be set via [these annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting) |
| 5.2.5 Ensure rate limits by IP address are set (Not Scored) | OK/ACTION NEEDED | No limit set | Depends on the use case; a limit can be set via [these annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting) |
| __5.3 Browser Security__ | | | |
| 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) | ACTION NEEDED | Header not set by default | Several ways to implement this; with the Helm charts it works via controller.add-headers |
| 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) | ACTION NEEDED | See previous answer | See previous answer |
| 5.3.3 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) | ACTION NEEDED | See previous answer | See previous answer |
| 5.3.4 Ensure the Referrer Policy is enabled and configured properly (Not Scored) | ACTION NEEDED | Depends on the application. It should be handled in the application's webserver itself, not in the load-balancing ingress | Check the backend webserver |
| __6 Mandatory Access Control__ | n/a | Too high level; depends on backends | |
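Many of the ACTION NEEDED items above are plain keys in the controller's ConfigMap. As a minimal sketch, assuming a default Helm installation (the ConfigMap name and namespace depend on your setup) and values that you should tune to your own workloads, several recommendations could be combined like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name and namespace assume a default Helm install; adjust to yours.
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  keep-alive: "10"                        # 2.4.3: keepalive of 10s or less
  hide-headers: "Server,X-Powered-By"     # 2.5.4: hide information-disclosing headers
  ssl-protocols: "TLSv1.3"                # 4.1.4: may cut off older clients
  ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"  # 4.1.5: stronger cipher list
  hsts-preload: "true"                    # 4.1.12
  client-header-timeout: "10"             # 5.2.1
  client-body-timeout: "10"               # 5.2.1 (body equivalent)
```

Applying the ConfigMap triggers a reload of the generated nginx.conf; verify the result against your applications before rolling it out broadly.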
# Installation Guide

There are multiple ways to install the Ingress-Nginx Controller:

- with [Helm](https://helm.sh), using the project repository chart;
- with `kubectl apply`, using YAML manifests;
- with specific addons (e.g. for [minikube](#minikube) or [MicroK8s](#microk8s)).

On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the [quick start](#quick-start) instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the [environment-specific instructions](#environment-specific-instructions) for details about optimizing the ingress controller for your particular environment or cloud provider.

## Contents

- [Quick start](#quick-start)
- [Environment-specific instructions](#environment-specific-instructions)
  - [Docker Desktop](#docker-desktop)
  - [Rancher Desktop](#rancher-desktop)
  - [minikube](#minikube)
  - [MicroK8s](#microk8s)
  - [AWS](#aws)
  - [GCE - GKE](#gce-gke)
  - [Azure](#azure)
  - [Digital Ocean](#digital-ocean)
  - [Scaleway](#scaleway)
  - [Exoscale](#exoscale)
  - [Oracle Cloud Infrastructure](#oracle-cloud-infrastructure)
  - [OVHcloud](#ovhcloud)
  - [Bare-metal](#bare-metal-clusters)
- [Miscellaneous](#miscellaneous)

## Quick start

**If you have Helm,** you can deploy the ingress controller with the following command:

```console
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```

It will install the controller in the `ingress-nginx` namespace, creating that namespace if it doesn't already exist.

!!! info
    This command is *idempotent*:

    - if the ingress controller is not installed, it will install it,
    - if the ingress controller is already installed, it will upgrade it.

**If you want a full list of values that you can set, while installing with Helm,** then run:

```console
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
```

!!! attention "Helm install on AWS/GCP/Azure/other providers"
    The *ingress-nginx-controller Helm chart is a generic install out of the box*. The default set of Helm values is **not** configured for installation on any infra provider. The annotations that are applicable to the cloud provider must be customized by the users. See [AWS LB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/).

    Examples of some recommended annotations (the healthcheck ones are required for target-type IP) for the service resource of `--type LoadBalancer` on AWS are below:

    ```yaml
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=270
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "10254"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: 200-299
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"
      service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-something1 sg-something2"
      service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "somebucket"
      service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "ingress-nginx"
      service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
    ```

**If you don't have Helm** or if you prefer to use a YAML manifest, you can run the following command instead:

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/cloud/deploy.yaml
```

!!! info
    The YAML manifest in the command above was generated with `helm template`, so you will end up with almost the same resources as if you had used Helm to install the controller.

!!! attention
    If you are running an old version of Kubernetes (1.18 or earlier), please read [this paragraph](#running-on-kubernetes-versions-older-than-119) for specific instructions. Because of API deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.

### Firewall configuration

To check which ports are used by your installation of ingress-nginx, look at the output of `kubectl -n ingress-nginx get pod -o yaml`. In general, you need:

- Port 8443 open between all hosts on which the Kubernetes nodes are running. This is used for the ingress-nginx [admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/).
- Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the Kubernetes nodes to which the DNS of your apps are pointing.

### Pre-flight check

A few pods should start in the `ingress-nginx` namespace:

```console
kubectl get pods --namespace=ingress-nginx
```

After a while, they should all be running. The following command will wait for the ingress controller pod
to be up, running, and ready:

```console
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```

### Local testing

Let's create a simple web server and the associated service:

```console
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
```

Then create an ingress resource. The following example uses a host that maps to `localhost`:

```console
kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.localdev.me/*=demo:80"
```

Now, forward a local port to the ingress controller:

```console
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
```

!!! info
    A note on DNS & network connection. This documentation assumes that the user is aware of the DNS and network-routing aspects involved in using ingress. The port-forwarding mentioned above is the easiest way to demo the working of ingress. The `kubectl port-forward ...` command above has forwarded port 8080, on the localhost's TCP/IP stack where the command was typed, to port 80 of the service created by the installation of the ingress-nginx controller. So now, traffic sent to port 8080 on localhost will reach port 80 of the ingress controller's service. Port-forwarding is not for a production environment use case. But here we use port-forwarding to simulate an HTTP request, originating from outside the cluster, reaching the service of the ingress-nginx controller, which is exposed to receive traffic from outside the cluster.
[This issue](https://github.com/kubernetes/ingress-nginx/issues/10014#issuecomment-1567791549) shows a typical DNS problem and its solution.

At this point, you can access your deployment using curl:

```console
curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080
```

You should see an HTML response containing text like **"It works!"**.

### Online testing

If your Kubernetes cluster is a "real" cluster that supports services of type `LoadBalancer`, it will have allocated an external IP address or FQDN to the ingress controller. You can see that IP address or FQDN with the following command:

```console
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```

It will be the `EXTERNAL-IP` field. If that field shows ``, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type `LoadBalancer`).

Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for `www.demo.io`:

```console
kubectl create ingress demo --class=nginx \
  --rule="www.demo.io/*=demo:80"
```

Alternatively, the `--rule` argument of the above command can also be written as follows:

```console
kubectl create ingress demo --class=nginx \
  --rule www.demo.io/=demo:80
```

You should then be able to see the "It works!" page when you connect to . Congratulations, you are serving a public website hosted on a Kubernetes cluster!
🎉

## Environment-specific instructions

### Local development clusters

#### minikube

The ingress controller can be installed through minikube's addons system:

```console
minikube addons enable ingress
```

#### MicroK8s

The ingress controller can be installed through MicroK8s's addons system:

```console
microk8s enable ingress
```

Please check the MicroK8s [documentation page](https://microk8s.io/docs/addon-ingress) for details.

#### Docker Desktop

Kubernetes is available in Docker Desktop:

- Mac, from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018)
- Windows, from [version 18.06.0-ce](https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-18060-ce-win70-2018-07-25)

First, make sure that Kubernetes is enabled in the Docker settings. The command `kubectl get nodes` should show a single node called `docker-desktop`. The ingress controller can be installed on Docker Desktop using
the default [quick start](#quick-start) instructions. On most systems, if you don't have any other service of type `LoadBalancer` bound to port 80, the ingress controller will be assigned the `EXTERNAL-IP` of `localhost`, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the `kubectl port-forward` method described in the [local testing section](#local-testing).

#### Rancher Desktop

Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop. Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preference > Kubernetes menu. Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default [quick start](#quick-start) instructions. Follow the instructions described in the [local testing section](#local-testing) to try a sample.

### Cloud deployments

If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the `externalTrafficPolicy` of the ingress controller Service to `Local` (instead of the default `Cluster`) to save an extra hop in some cases.
If you're installing with Helm, this can be done by adding `--set controller.service.externalTrafficPolicy=Local` to the `helm install` or `helm upgrade` command.

Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. `--set controller.config.use-proxy-protocol=true`) and in the cloud provider's load balancer configuration to function correctly.

In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.

#### AWS

In AWS, we use a Network Load Balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of `Type=LoadBalancer`.

!!! info
    The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB. AWS provides documentation on how to use [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html) with the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller).

##### Network Load Balancer (NLB)

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/aws/deploy.yaml
```

##### TLS termination in AWS Load Balancer (NLB)

By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.

1. Download the [deploy.yaml](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml) template:

    ```console
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
    ```

2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:

    ```
    proxy-real-ip-cidr: XXX.XXX.XXX/XX
    ```

3. Change the AWS Certificate Manager (ACM) ID as well:

    ```
    arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
    ```

4. Deploy the manifest:

    ```console
    kubectl apply -f deploy.yaml
    ```

##### NLB Idle Timeouts

The default idle timeout value for TCP flows is 350 seconds and [can be modified to any value between 60-6000 seconds](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout). For this reason, you need to ensure the [keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) value is configured to be less than your configured idle timeout to work as expected. By default, NGINX `keepalive_timeout` is set to `75s`. More information with regard to timeouts can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).

#### GCE-GKE

First, your user needs to have `cluster-admin` permissions on the
cluster. This can be done with the following command:

```console
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
```

Then, the ingress controller can be installed like this:

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/cloud/deploy.yaml
```

!!! warning
    For private clusters, you will need to either add a firewall rule that allows master nodes access to port `8443/tcp` on worker nodes, or change the existing rule that allows access to ports `80/tcp`, `443/tcp` and `10254/tcp` to also allow access to port `8443/tcp`. More information can be found in the [official GCP documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall). See the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) on adding rules and the [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/79739) for more detail.
The PROXY protocol is supported in GCE; check the [official documentation on how to enable it](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#proxy-protocol).

#### Azure

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/cloud/deploy.yaml
```

More information with regard to Azure annotations for the ingress controller can be found in the [official AKS documentation](https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller).

#### Digital Ocean

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/do/deploy.yaml
```

By default, the service object of the ingress-nginx-controller for Digital Ocean only configures one annotation: `service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"`. While this makes the service functional, it was reported that the Digital Ocean load balancer graphs show `no data` unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in [this issue](https://github.com/kubernetes/ingress-nginx/issues/8965). Please refer to the issue to add annotations, with values specific to your setup, to get the graphs of the DO load balancer populated with data.

#### Scaleway

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/scw/deploy.yaml
```

Refer to the [dedicated tutorial](https://www.scaleway.com/en/docs/tutorials/proxy-protocol-v2-load-balancer/#configuring-proxy-protocol-for-ingress-nginx) in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer.
#### Exoscale

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
```

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager [documentation](https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md).

#### Oracle Cloud Infrastructure

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/cloud/deploy.yaml
```

A [complete list of available annotations for Oracle Cloud Infrastructure](https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md) can be found in the [OCI Cloud Controller Manager](https://github.com/oracle/oci-cloud-controller-manager) documentation.

#### OVHcloud

```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace
```

You can find the [complete tutorial](https://docs.ovh.com/gb/en/kubernetes/installing-nginx-ingress/) in the OVHcloud documentation.

### Bare-metal clusters

This section is applicable to Kubernetes clusters deployed on bare-metal servers, as well as "raw" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu, ...).

For quick testing, you can use a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport). This should work on almost every cluster, but it will typically use a port in the range 30000-32767.

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.1/deploy/static/provider/baremetal/deploy.yaml
```

For more information about bare-metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see [bare-metal considerations](./baremetal.md).
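If you need predictable ports instead of randomly assigned ones, a sketch (assuming the default Service name `ingress-nginx-controller` and that 30080/30443 are free within your cluster's NodePort range) is to patch the Service with fixed `nodePort` values:

```yaml
# Hypothetical strategic-merge patch pinning the HTTP/HTTPS NodePorts.
# Apply with:
#   kubectl -n ingress-nginx patch svc ingress-nginx-controller --patch-file patch.yaml
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 30080   # must lie within the cluster's NodePort range
    - name: https
      port: 443
      nodePort: 30443
```

Strategic merge patching matches the list entries by `port`, so the other fields of each entry (protocol, targetPort) are kept from the existing Service.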
## Miscellaneous ### Checking ingress controller version Run `/nginx-ingress-controller --version` within the pod, for instance with `kubectl exec`: ```console POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name) kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version ``` ### Scope By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag `--watch-namespace` or check the Helm chart value `controller.scope` to limit the controller to a single namespace. Although the use of this flag is not popular, one important fact to note is that the
secret containing the default-ssl-certificate needs to also be present in the watched namespace(s). See also [“How to install multiple Ingress controllers in the same cluster”](https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/) for more details. ### Webhook network access !!! warning The controller uses an [admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to validate Ingress definitions. Make sure that you don't have [Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) or additional firewalls preventing connections from the API server to the `ingress-nginx-controller-admission` service. ### Certificate generation !!! attention The first time the ingress controller starts, two [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) create the SSL certificate used by the admission webhook. This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions. You can wait until it is ready to run the next command: ```console kubectl wait --namespace ingress-nginx \ --for=condition=ready pod \ --selector=app.kubernetes.io/component=controller \ --timeout=120s ``` ### Running on Kubernetes versions older than 1.19 Ingress resources evolved over time. They started with `apiVersion: extensions/v1beta1`, then moved to `apiVersion: networking.k8s.io/v1beta1` and more recently to `apiVersion: networking.k8s.io/v1`. 
Here is how these Ingress versions are supported in Kubernetes: - before Kubernetes 1.19, only `v1beta1` Ingress resources are supported - from Kubernetes 1.19 to 1.21, both `v1beta1` and `v1` Ingress resources are supported - in Kubernetes 1.22 and above, only `v1` Ingress resources are supported And here is how these Ingress versions are supported in Ingress-Nginx Controller: - before version 1.0, only `v1beta1` Ingress resources are supported - in version 1.0 and above, only `v1` Ingress resources are supported As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49). The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding `--version='<4'` to the `helm install` command).
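For example, pinning the chart below version 4 for an older cluster might look like this (the release and namespace names are illustrative):

```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --version='<4'
```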
# Role Based Access Control (RBAC) ## Overview This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled. Role Based Access Control comprises four layers: 1. `ClusterRole` - permissions assigned to a role that apply to an entire cluster 2. `ClusterRoleBinding` - binding a ClusterRole to a specific account 3. `Role` - permissions assigned to a role that apply to a specific namespace 4. `RoleBinding` - binding a Role to a specific account In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a `ServiceAccount`. That `ServiceAccount` should be bound to the `Role`s and `ClusterRole`s defined for the ingress-nginx-controller. ## Service Accounts created in this example One ServiceAccount is created in this example, `ingress-nginx`. ## Permissions Granted in this example There are two sets of permissions defined in this example: cluster-wide permissions defined by the `ClusterRole` named `ingress-nginx`, and namespace-specific permissions defined by the `Role` named `ingress-nginx`. ### Cluster Permissions These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. These permissions are granted to the `ClusterRole` named `ingress-nginx`: * `configmaps`, `endpoints`, `nodes`, `pods`, `secrets`: list, watch * `nodes`: get * `services`, `ingresses`, `ingressclasses`, `endpointslices`: get, list, watch * `events`: create, patch * `ingresses/status`: update * `leases`: list, watch ### Namespace Permissions These permissions are granted specific to the ingress-nginx namespace. 
These permissions are granted to the `Role` named `ingress-nginx`: * `configmaps`, `pods`, `secrets`: get * `endpoints`: get Furthermore, to support leader election, the ingress-nginx-controller needs access to a `leases` resource using the resourceName `ingress-nginx-leader` > Note that resourceNames can NOT be used to limit requests using the “create” > verb because authorizers only have access to information that can be obtained > from the request URL, method, and headers (resource names in a “create” request > are part of the request body). * `leases`: get, update (for resourceName `ingress-controller-leader`) * `leases`: create This resourceName is the `election-id` defined by the ingress-controller, which defaults to: * `election-id`: `ingress-controller-leader` * `resourceName` : `` Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller. ### Bindings The ServiceAccount `ingress-nginx` is bound to the Role `ingress-nginx` and the ClusterRole `ingress-nginx`. The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.
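As a sketch, the bindings described above look roughly like the following manifests (the rule lists of the Role/ClusterRole are omitted; names and namespace follow this example):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
```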
# Upgrading !!! important No matter the method you use for upgrading, _if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx_. ## Without Helm To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): ```yaml kind: Deployment metadata: name: ingress-nginx-controller namespace: ingress-nginx spec: replicas: 1 selector: ... template: metadata: ... spec: containers: - name: ingress-nginx-controller image: registry.k8s.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef args: ... ``` simply change the `v1.0.4` tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): ```console kubectl set image deployment/ingress-nginx-controller \ controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \ -n ingress-nginx ``` For interactive editing, use `kubectl edit deployment ingress-nginx-controller -n ingress-nginx`. ## With Helm If you installed ingress-nginx using the Helm command in the deployment docs so its name is `ingress-nginx`, you should be able to upgrade using ```shell helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx ``` ### Migrating from stable/nginx-ingress See detailed steps in the upgrading section of the `ingress-nginx` chart [README](https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md#migrating-from-stablenginx-ingress).
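Whichever method you use, you can watch the rollout complete and then confirm the running version (the deployment and namespace names follow the examples above; adjust them to your installation):

```console
kubectl rollout status deployment/ingress-nginx-controller -n ingress-nginx
kubectl exec -n ingress-nginx deploy/ingress-nginx-controller -- /nginx-ingress-controller --version
```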
# Bare-metal considerations In traditional *cloud* environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. *Bare-metal* environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. ![Cloud environment](../images/baremetal/cloud_overview.jpg) ![Bare-metal environment](../images/baremetal/baremetal_overview.jpg) The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal. ## A pure software solution: MetalLB [MetalLB][metallb] provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the [Layer 2 configuration mode][metallb-l2] of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has **publicly accessible nodes**. In this mode, one node attracts all the traffic for the `ingress-nginx` Service IP. See [Traffic policies][metallb-trafficpolicies] for more details. ![MetalLB in L2 mode](../images/baremetal/metallb.jpg) !!! note The description of other supported configuration modes is off-scope for this document. !!! warning MetalLB is currently in *beta*. Read about the [Project maturity][metallb-maturity] and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. 
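As a sketch, a Helm-based MetalLB installation might look like the following (the chart repository URL is the one published by the MetalLB project; the namespace name is illustrative — always check the MetalLB installation docs for the current procedure):

```console
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
```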
The rest of this example assumes MetalLB was deployed following the [Installation][metallb-install] instructions, and that the Ingress-Nginx Controller was installed using the steps described in the [quickstart section of the installation guide][install-quickstart]. MetalLB requires a pool of IP addresses in order to be able to take ownership of the `ingress-nginx` Service. This pool can be defined through `IPAddressPool` objects in the same namespace as the MetalLB controller. This pool of IPs **must** be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server. !!! example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) ```console $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 ``` After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the *loadBalancer* IP field of the `ingress-nginx` Service accordingly. 
```yaml --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: default namespace: metallb-system spec: addresses: - 203.0.113.10-203.0.113.15 autoAssign: true --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: default namespace: metallb-system spec: ipAddressPools: - default ``` ```console $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP ``` As soon as MetalLB sets the external IP address of the `ingress-nginx` LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: ```console $ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 ``` !!! tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the `Local` traffic policy. Traffic policies are described in more detail in [Traffic policies][metallb-trafficpolicies] as well as in the next section. [metallb]: https://metallb.universe.tf/ [metallb-maturity]: https://metallb.universe.tf/concepts/maturity/ [metallb-l2]: https://metallb.universe.tf/concepts/layer2/ [metallb-install]: https://metallb.universe.tf/installation/ [metallb-trafficpolicies]: https://metallb.universe.tf/usage/#traffic-policies ## Over a NodePort Service Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the [installation guide][install-baremetal]. !!! info A Service of type `NodePort` exposes, via the `kube-proxy` component, the **same unprivileged** port
(default: 30000-32767) on every Kubernetes node, masters included. For more information, see [Services][nodeport-def]. In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the `ingress-nginx` Service to HTTP requests. ![NodePort request flow](../images/baremetal/nodeport.jpg) You can **customize the exposed node port numbers** by setting the `controller.service.nodePorts.*` Helm values, but they still have to be in the 30000-32767 range. !!! 
example Given the NodePort `30100` allocated to the `ingress-nginx` Service ```console $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP ``` and a Kubernetes node with the public IP address `203.0.113.2` (the external IP is added as an example, in most bare-metal environments this value is ) ```console $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 ``` a client would reach an Ingress with `host: myapp.example.com` at `http://myapp.example.com:30100`, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. !!! danger "Impact on the host system" While it may sound tempting to reconfigure the NodePort range using the `--service-node-port-range` API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant `kube-proxy` privileges it may otherwise not require. This practice is therefore **discouraged**. See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: ### Source IP address Services of type NodePort perform [source address translation][nodeport-nat] by default. This means the source IP of an HTTP request is always **the IP address of the Kubernetes node that received the request** from the perspective of NGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the `externalTrafficPolicy` field of the `ingress-nginx` Service spec to `Local` ([example][preserve-ip]). !!! warning This setting effectively **drops packets** sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. 
Consider [assigning NGINX Pods to specific nodes][pod-assign] in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled. !!! example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) ```console $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 ``` with an `ingress-nginx-controller` Deployment composed of 2 replicas ```console $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 ``` Requests sent to `host-2` and `host-3` would be forwarded to NGINX and the original client's IP would be preserved, while requests to `host-1` would get dropped because there is no NGINX replica running on that
node. Other ways to preserve the source IP in a NodePort setup are described here: [Source IP address](https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address). ### Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller **does not update the status of Ingress objects it manages**. ```console $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 ``` Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the `externalIPs` field of the `ingress-nginx` Service. !!! warning There is more to setting `externalIPs` than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the [Services][external-ips] page of the official Kubernetes documentation as well as the section about [External IPs](#external-ips) in this document for more information. !!! 
example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) ```console $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 ``` one could edit the `ingress-nginx` Service and add the following field to the object spec ```yaml spec: externalIPs: - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 ``` which would in turn be reflected on Ingress objects as follows: ```console $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 ``` ### Redirects As NGINX is **not aware of the port translation operated by the NodePort Service**, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. !!! example Redirects generated by NGINX, for instance HTTP to HTTPS or `domain` to `www.domain`, are generated without NodePort: ```console $ curl -D- http://myapp.example.com:30100 HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect ``` [install-baremetal]: ./index.md#bare-metal-clusters [install-quickstart]: ./index.md#quick-start [nodeport-def]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport [nodeport-nat]: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport [pod-assign]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ [preserve-ip]: https://github.com/kubernetes/ingress-nginx/blob/ingress-nginx-3.15.2/deploy/static/provider/aws/deploy.yaml#L290 ## Via the host network In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure `ingress-nginx` Pods to use the network of the host they run on instead of a dedicated network namespace. 
The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. !!! note This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the `ingress-nginx` Service exists in the target cluster, it is **recommended to delete it**. This can be achieved by enabling the `hostNetwork` option in the Pods' spec. ```yaml template: spec: hostNetwork: true ``` !!! danger "Security considerations" Enabling this option **exposes every system daemon to the Ingress-Nginx Controller** on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. !!! example Consider this `ingress-nginx-controller` Deployment composed of 2 replicas; NGINX Pods inherit the IP address of their host instead of an internal Pod IP. ```console $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 ``` One major
limitation of this deployment approach is that only **a single Ingress-Nginx Controller Pod** may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to this situation fail with the following event: ```console $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. ``` One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a *DaemonSet* instead of a traditional Deployment. !!! info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to [repel those Pods][taints]. For more information, see [DaemonSet][daemonset]. Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. ![DaemonSet with hostNetwork flow](../images/baremetal/hostnetwork.jpg) Like with NodePorts, this approach has a few quirks it is important to be aware of. ### DNS resolution Pods configured with `hostNetwork: true` do not use the internal DNS resolver (i.e. *kube-dns* or *CoreDNS*), unless their `dnsPolicy` spec field is set to [`ClusterFirstWithHostNet`][dnspolicy]. Consider using this setting if NGINX is expected to resolve internal names for any reason. 
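Combining the two settings discussed above, the Pod template of a host-network DaemonSet might contain the following (a sketch; all other DaemonSet fields are omitted):

```yaml
template:
  spec:
    hostNetwork: true
    # Without this, Pods on the host network bypass kube-dns/CoreDNS
    dnsPolicy: ClusterFirstWithHostNet
```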
### Ingress status Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default `--publish-service` flag used in standard cloud setups **does not apply** and the status of all Ingress objects remains blank. ```console $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 ``` Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the [`--report-node-internal-ip-address`][cli-args] flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller. !!! example Given an `ingress-nginx-controller` DaemonSet composed of 2 replicas ```console $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 ``` the controller sets the status of all Ingress objects it manages to the following value: ```console $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 ``` !!! note Alternatively, it is possible to override the address written to Ingress objects using the `--publish-status-address` flag. See [Command line arguments][cli-args]. [taints]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ [daemonset]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ [dnspolicy]: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy [cli-args]: ../user-guide/cli-arguments.md ## Using a self-provisioned edge Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. 
_HAProxy_) and is usually managed outside of the Kubernetes landscape by operations teams. Such a deployment builds upon the NodePort Service described above in [Over a NodePort Service](#over-a-nodeport-service), with one significant difference: external clients do not access cluster nodes directly; only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to
the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below: ![User edge](../images/baremetal/user_edge.jpg) ## External IPs !!! danger "Source IP address" This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore **not recommended** to use it despite its apparent simplicity. The `externalIPs` Service option was previously mentioned in the [NodePort](#over-a-nodeport-service) section. As per the [Services][external-ips] page of the official Kubernetes documentation, the `externalIPs` option causes `kube-proxy` to route traffic sent to arbitrary IP addresses **and on the Service ports** to the endpoints of that Service. These IP addresses **must belong to the target node**. !!! 
example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) ```console $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 ``` and the following `ingress-nginx` NodePort Service ```console $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP ``` One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: ```yaml spec: externalIPs: - 203.0.113.2 - 203.0.113.3 ``` ```console $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 ``` We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses. [external-ips]: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
https://github.com/kubernetes/ingress-nginx/blob/main//docs/deploy/baremetal.md
# Prerequisites

Many of the examples in this directory have common prerequisites.

## TLS certificates

Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows

```console
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
................+++
................+++
writing new private key to 'tls.key'
-----

$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret "tls-secret" created
```

Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA.

## Client Certificate Authentication

CA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's identity via a common CA. We have a CA certificate, which we usually obtain from a Certificate Authority, and use it to sign both our server certificate and client certificate. Then every time we want to access our backend, we must pass the client certificate.

These instructions are based on the following [blog](https://medium.com/@awkwardferny/configuring-certificate-based-mutual-authentication-with-kubernetes-ingress-nginx-20e7e38fdfca)

**Generate the CA Key and Certificate:**

```console
openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 365 -nodes -subj '/CN=My Cert Authority'
```

**Generate the Server Key and Certificate, and Sign with the CA Certificate:**

```console
openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=mydomain.com'
openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
```

**Generate the Client Key and Certificate, and Sign with the CA Certificate:**

```console
openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=My Client'
openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt
```

Once this is complete you can continue to follow the instructions [here](./auth/client-certs/README.md#creating-certificate-secrets)

## Test HTTP Service

All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows

```console
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml
service "http-svc" created
replicationcontroller "http-svc" created

$ kubectl get po
NAME             READY     STATUS    RESTARTS   AGE
http-svc-p1t3t   1/1       Running   0          1d

$ kubectl get svc
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
http-svc   10.0.122.116                 80:30301/TCP   1d
```

You can test that the HTTP Service works by exposing it temporarily

```console
$ kubectl patch svc http-svc -p '{"spec":{"type": "LoadBalancer"}}'
"http-svc" patched

$ kubectl get svc http-svc
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
http-svc   10.0.122.116                 80:30301/TCP   1d

$ kubectl describe svc http-svc
Name:                   http-svc
Namespace:              default
Labels:                 app=http-svc
Selector:               app=http-svc
Type:                   LoadBalancer
IP:                     10.0.122.116
LoadBalancer Ingress:   108.59.87.136
Port:                   http    80/TCP
NodePort:               http    30301/TCP
Endpoints:              10.180.1.6:8080
Session Affinity:       None
Events:
  FirstSeen LastSeen Count From                  SubObjectPath Type    Reason                Message
  --------- -------- ----- ----                  ------------- ------  ------                -------
  1m        1m       1     {service-controller }               Normal  Type                  ClusterIP -> LoadBalancer
  1m        1m       1     {service-controller }               Normal  CreatingLoadBalancer  Creating load balancer
  16s       16s      1     {service-controller }               Normal  CreatedLoadBalancer   Created load balancer

$ curl 108.59.87.136
CLIENT VALUES:
client_address=10.240.0.3
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://108.59.87.136:8080/

SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=108.59.87.136
user-agent=curl/7.46.0
BODY:
-no body in request-

$ kubectl patch svc http-svc -p '{"spec":{"type": "NodePort"}}'
"http-svc" patched
```
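The CA-signing steps in the Client Certificate Authentication section above can be sanity-checked locally before any secrets are created. A minimal sketch (2048-bit keys are used here only so it runs quickly; the guide uses rsa:4096):

```shell
set -e

# Throwaway CA
openssl req -x509 -sha256 -newkey rsa:2048 -keyout ca.key -out ca.crt \
    -days 365 -nodes -subj '/CN=My Cert Authority'

# Client key + CSR, signed by the CA
openssl req -new -newkey rsa:2048 -keyout client.key -out client.csr \
    -nodes -subj '/CN=My Client'
openssl x509 -req -sha256 -days 365 -in client.csr \
    -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt

# The client certificate should validate against the CA
openssl verify -CAfile ca.crt client.crt   # prints "client.crt: OK"
```

If `openssl verify` reports an error here, the same pair will also fail the mutual-TLS handshake once loaded into Kubernetes.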
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/PREREQUISITES.md
# Ingress examples

This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the [prerequisites](PREREQUISITES.md) before trying them.

The examples on these pages include the `spec.ingressClassName` field which replaces the deprecated `kubernetes.io/ingress.class: nginx` annotation.

Users of ingress-nginx < 1.0.0 (Helm chart < 4.0.0) should use the [legacy documentation](https://github.com/kubernetes/ingress-nginx/tree/legacy/docs/examples).

For more information, check out the [Migration to apiVersion networking.k8s.io/v1](../user-guide/k8s-122-migration.md) guide.

Category | Name | Description | Complexity Level
---------| ---- | ----------- | ----------------
Apps | [Docker Registry](docker-registry/README.md) | TODO | TODO
Auth | [Basic authentication](auth/basic/README.md) | password protect your website | Intermediate
Auth | [Client certificate authentication](auth/client-certs/README.md) | secure your website with client certificate authentication | Intermediate
Auth | [External authentication plugin](auth/external-auth/README.md) | defer to an external authentication service | Intermediate
Auth | [OAuth external auth](auth/oauth-external-auth/README.md) | TODO | TODO
Customization | [Configuration snippets](customization/configuration-snippets/README.md) | customize nginx location configuration using annotations | Advanced
Customization | [Custom configuration](customization/custom-configuration/README.md) | TODO | TODO
Customization | [Custom DH parameters for perfect forward secrecy](customization/ssl-dh-param/README.md) | TODO | TODO
Customization | [Custom errors](customization/custom-errors/README.md) | serve custom error pages from the default backend | Intermediate
Customization | [Custom headers](customization/custom-headers/README.md) | set custom headers before sending traffic to backends | Advanced
Customization | [External authentication with response header propagation](customization/external-auth-headers/README.md) | TODO | TODO
Customization | [Sysctl tuning](customization/sysctl/README.md) | TODO | TODO
Features | [Rewrite](rewrite/README.md) | TODO | TODO
Features | [Session stickiness](affinity/cookie/README.md) | route requests consistently to the same endpoint | Advanced
Features | [Canary Deployments](canary/README.md) | weighted canary routing to a separate deployment | Intermediate
Scaling | [Static IP](static-ip/README.md) | a single ingress gets a single static IP | Intermediate
TLS | [Multi TLS certificate termination](multi-tls/README.md) | TODO | TODO
TLS | [TLS termination](tls-termination/README.md) | TODO | TODO
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/index.md
# Rewrite

This example demonstrates how to use `Rewrite` annotations.

## Prerequisites

You will need to make sure your Ingress targets exactly one Ingress controller by specifying the [ingress.class annotation](../../user-guide/multiple-ingress.md), and that you have an ingress controller [running](../../deploy/index.md) in your cluster.

## Deployment

Rewriting can be controlled using the following annotations:

|Name|Description|Values|
| --- | --- | --- |
|nginx.ingress.kubernetes.io/rewrite-target|Target URI where the traffic must be redirected|string|
|nginx.ingress.kubernetes.io/ssl-redirect|Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate)|bool|
|nginx.ingress.kubernetes.io/force-ssl-redirect|Forces the redirection to HTTPS even if the Ingress is not TLS Enabled|bool|
|nginx.ingress.kubernetes.io/app-root|Defines the Application Root that the Controller must redirect if it's in `/` context|string|
|nginx.ingress.kubernetes.io/use-regex|Indicates if the paths defined on an Ingress use regular expressions|bool|

## Examples

### Rewrite Target

!!! attention
    Starting in Version 0.22.0, ingress definitions using the annotation `nginx.ingress.kubernetes.io/rewrite-target` are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a [capture group](https://www.regular-expressions.info/refcapture.html).

!!! note
    [Captured groups](https://www.regular-expressions.info/refcapture.html) are saved in numbered placeholders, chronologically, in the form `$1`, `$2` ... `$n`. These placeholders can be used as parameters in the `rewrite-target` annotation.

!!! note
    Please see the [FAQ](../../faq.md#validation-of-path) for Validation Of __`path`__

Create an Ingress rule with a rewrite annotation:

```console
$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80
' | kubectl create -f -
```

In this ingress definition, any characters captured by `(.*)` will be assigned to the placeholder `$2`, which is then used as a parameter in the `rewrite-target` annotation.

For example, the ingress definition above will result in the following rewrites:

- `rewrite.bar.com/something` rewrites to `rewrite.bar.com/`
- `rewrite.bar.com/something/` rewrites to `rewrite.bar.com/`
- `rewrite.bar.com/something/new` rewrites to `rewrite.bar.com/new`

### App Root

Create an Ingress rule with an app-root annotation:

```console
$ echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /app1
  name: approot
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: approot.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
" | kubectl create -f -
```

Check the rewrite is working

```console
$ curl -I -k http://approot.bar.com/
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.11.10
Date: Mon, 13 Mar 2017 14:57:15 GMT
Content-Type: text/html
Content-Length: 162
Location: http://approot.bar.com/app1
Connection: keep-alive
```
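The capture-group rewrites listed above can be sketched outside the cluster with an equivalent regex substitution, where sed stands in for NGINX's rewrite engine purely for illustration (the `rewrite` helper function is hypothetical):

```shell
# $2 in the annotation corresponds to \2, the second capture group.
rewrite() {
  echo "$1" | sed -E 's#^/something(/|$)(.*)#/\2#'
}

rewrite /something       # -> /
rewrite /something/      # -> /
rewrite /something/new   # -> /new
```

The middle case shows why the `(/|$)` group exists: it swallows the separating slash so it is not duplicated in the rewritten path.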
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/rewrite/README.md
# Multi TLS certificate termination

This example uses 2 different certificates to terminate SSL for 2 hostnames.

1. Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml
2. Create [multi-tls.yaml](multi-tls.yaml)

This should generate a segment like:

```console
$ kubectl exec -it ingress-nginx-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep "foo.bar.com" -B 7 -A 35
    server {
        listen 80;
        listen 443 ssl http2;
        ssl_certificate /etc/nginx-ssl/default-foobar.pem;
        ssl_certificate_key /etc/nginx-ssl/default-foobar.pem;

        server_name foo.bar.com;

        if ($scheme = http) {
            return 301 https://$host$request_uri;
        }

        location / {
            proxy_set_header Host $host;

            # Pass Real IP
            proxy_set_header X-Real-IP $remote_addr;

            # Allow websocket connections
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;

            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;

            proxy_redirect off;
            proxy_buffering off;

            proxy_http_version 1.1;

            proxy_pass http://default-http-svc-80;
        }
```

And you should be able to reach your nginx service or http-svc service using a hostname switch:

```console
$ kubectl get ing
NAME      RULE          BACKEND       ADDRESS         AGE
foo-tls   -                           104.154.30.67   13m
          foo.bar.com
          /             http-svc:80
          bar.baz.com
          /             nginx:80

$ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k
CLIENT VALUES:
client_address=10.245.0.6
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/

SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001

HEADERS RECEIVED:
accept=*/*
connection=close
host=foo.bar.com
user-agent=curl/7.35.0
x-forwarded-for=10.245.0.1
x-forwarded-host=foo.bar.com
x-forwarded-proto=https

$ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k
Welcome to nginx on Debian!

$ curl 104.154.30.67
default backend - 404
```
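Step 1 above needs one cert/key pair per hostname. A minimal sketch of generating both pairs with self-signed certificates (file names are illustrative; in practice you would load each pair with `kubectl create secret tls <name> --key ... --cert ...`):

```shell
set -e

# One self-signed pair per hostname this Ingress terminates
for host in foo.bar.com bar.baz.com; do
  openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
      -keyout "$host.key" -out "$host.crt" -subj "/CN=$host/O=$host"
done

# Each certificate carries its own hostname in the subject CN, which is
# what lets NGINX serve the matching cert per server_name block.
openssl x509 -in foo.bar.com.crt -noout -subject
openssl x509 -in bar.baz.com.crt -noout -subject
```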
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/multi-tls/README.md
# Canary

Ingress Nginx has the ability to handle canary routing by setting specific annotations. The following is an example of how to configure a canary deployment with weighted canary routing.

## Create your main deployment and service

This is the main deployment of your application with the service that will be used to route to it

```bash
echo "
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
  labels:
    app: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
      - name: production
        image: registry.k8s.io/ingress-nginx/e2e-test-echo:v1.2.6@sha256:26c266b06ac87920f7665f4a3ba7062834fd249cd63fc7b7f536fcf0c4fe694d
        ports:
        - containerPort: 80
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: production
  labels:
    app: production
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: production
" | kubectl apply -f -
```

## Create the canary deployment and service

This is the canary deployment that will take a weighted amount of requests instead of the main deployment

```bash
echo "
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary
  labels:
    app: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: canary
        image: registry.k8s.io/ingress-nginx/e2e-test-echo:v1.2.6@sha256:26c266b06ac87920f7665f4a3ba7062834fd249cd63fc7b7f536fcf0c4fe694d
        ports:
        - containerPort: 80
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    app: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: canary
" | kubectl apply -f -
```

## Create Ingress Pointing To Your Main Deployment

Next you will need to expose your main deployment with an ingress resource. Note there are no canary-specific annotations on this ingress

```bash
echo "
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production
  annotations:
spec:
  ingressClassName: nginx
  rules:
  - host: echo.prod.mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: production
            port:
              number: 80
" | kubectl apply -f -
```

## Create Ingress Pointing To Your Canary Deployment

You will then create an Ingress that has the canary-specific configuration. Please pay special attention to the following:

- The host name is identical to the main ingress host name
- The `nginx.ingress.kubernetes.io/canary: "true"` annotation is required and defines this as a canary annotation (if you do not have this the Ingresses will clash)
- The `nginx.ingress.kubernetes.io/canary-weight: "50"` annotation dictates the weight of the routing, in this case there is a "50%" chance a request will hit the canary deployment over the main deployment

```bash
echo "
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: \"true\"
    nginx.ingress.kubernetes.io/canary-weight: \"50\"
spec:
  ingressClassName: nginx
  rules:
  - host: echo.prod.mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: canary
            port:
              number: 80
" | kubectl apply -f -
```

## Testing your setup

You can use the following command to test your setup (replacing INGRESS_CONTROLLER_IP with your ingress controller's IP address)

```bash
for i in $(seq 1 10); do curl -s --resolve echo.prod.mydomain.com:80:$INGRESS_CONTROLLER_IP echo.prod.mydomain.com | grep "Hostname"; done
```

You will get output similar to the following, showing that your canary setup is working as expected:

```bash
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
Hostname: production-5c5f65d859-phqzc
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
```
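The effect of `canary-weight` can be sketched with a local simulation. This is an illustration only, not the controller's actual implementation: a deterministic round-robin counter stands in for the per-request random draw NGINX makes against the weight:

```shell
WEIGHT=50   # matches the canary-weight annotation above
canary=0
production=0

# Simulate 200 requests: a request goes to the canary when its
# percentile (i % 100) falls below the weight.
for i in $(seq 1 200); do
  if [ $((i % 100)) -lt "$WEIGHT" ]; then
    canary=$((canary + 1))
  else
    production=$((production + 1))
  fi
done

echo "canary=$canary production=$production"   # prints "canary=100 production=100"
```

With a weight of 50, exactly half of the simulated traffic lands on the canary; in the real controller the split is probabilistic, so short runs like the 10-request test above will only approximate 50/50.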
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/canary/README.md
# Configuration Snippets

## Ingress

The Ingress in [this example](ingress.yaml) adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at [an example of specifying custom headers](../custom-headers/README.md).

```console
kubectl apply -f ingress.yaml
```

## Test

Check if the contents of the annotation are present in the nginx.conf file using:

```console
kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
```
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/configuration-snippets/README.md
# Accommodation for JWT

JWT (short for JSON Web Token) is a widely used authentication method. Basically, an authentication server generates a JWT, and you then send this token with every request you make to a backend service. The JWT can be quite big and is present in the headers of every HTTP request. This means you may have to adapt the max header size of your nginx-ingress in order to support it.

## Symptoms

If you use JWT and you get HTTP 502 errors from your ingress, it may be a sign that the buffer size is not big enough. To be 100% sure, look at the logs of the `ingress-nginx-controller` pod; you should see something like this:

```
upstream sent too big header while reading response header from upstream...
```

## Increase buffer size for headers

In nginx, we want to modify the property `proxy-buffer-size`. The size is arbitrary and depends on your needs. Be aware that a high value can lower the performance of your ingress proxy. In general, a value of 16k should have you covered.

### Using helm

If you're using helm you can simply use the [`config` properties](https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L56).

```yaml
# -- Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
config:
  proxy-buffer-size: 16k
```

### Manually in kubernetes config files

If you use an already generated config from a provider, you will have to change the `controller-configmap.yaml`

```yaml
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
# ...
data:
  # ...
  proxy-buffer-size: "16k"
```

References:

* [Custom Configuration](../custom-configuration/README.md)
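A quick way to see why a large token overflows the default buffer: simulate a token carrying a large claim set and compare its size against the stock NGINX `proxy_buffer_size` of 4k. The 6000-byte payload below is an arbitrary illustration, not a real token:

```shell
# Base64 inflates the payload by ~4/3, just like the encoded segments of a JWT.
payload=$(head -c 6000 /dev/zero | base64 | tr -d '\n')
token_bytes=${#payload}
echo "simulated token size: ${token_bytes} bytes"

if [ "$token_bytes" -gt 4096 ]; then
  echo "larger than the 4k default: raise proxy-buffer-size (e.g. 16k)"
fi
```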
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/jwt/README.md
# Custom Configuration

Using a [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) it is possible to customize the NGINX configuration.

For example, if we want to change the timeouts we need to create a ConfigMap:

```
$ cat configmap.yaml
apiVersion: v1
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
```

```
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-configuration/configmap.yaml \
    | kubectl apply -f -
```

If the ConfigMap is updated, NGINX will be reloaded with the new configuration.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/custom-configuration/README.md
# Custom DH parameters for perfect forward secrecy

This example aims to demonstrate the deployment of an Ingress-Nginx Controller and use a ConfigMap to configure a custom Diffie-Hellman parameters file to help with "Perfect Forward Secrecy".

## Custom configuration

```console
$ cat configmap.yaml
apiVersion: v1
data:
  ssl-dh-param: "ingress-nginx/lb-dhparam"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```

```console
$ kubectl create -f configmap.yaml
```

## Custom DH parameters secret

```console
$ openssl dhparam 4096 2> /dev/null | base64
LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...
```

```console
$ cat ssl-dh-param.yaml
apiVersion: v1
data:
  dhparam.pem: "LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ..."
kind: Secret
metadata:
  name: lb-dhparam
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```

```console
$ kubectl create -f ssl-dh-param.yaml
```

## Test

Check that the contents of the configmap are present in the nginx.conf file using:

```console
$ kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
```
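The two manual steps above (generate parameters, base64-encode them for the Secret's `dhparam.pem` field) can be sketched end to end. A 512-bit group is used here only so the command finishes in well under a second; use 4096 bits in production as shown above:

```shell
set -e

# Generate small DH parameters and base64-encode them on one line,
# ready to paste into the Secret's data.dhparam.pem field.
dh_b64=$(openssl dhparam 512 2> /dev/null | base64 | tr -d '\n')

# PEM output always starts with "-----BEGIN DH PARAMETERS-----",
# so the encoding begins with "LS0tLS1..." as in the doc above.
echo "$dh_b64" | cut -c 1-32
```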
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/ssl-dh-param/README.md
# Sysctl tuning

This example aims to demonstrate the use of an Init Container to adjust sysctl default values using `kubectl patch`.

```console
kubectl patch deployment -n ingress-nginx ingress-nginx-controller \
    --patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/sysctl/patch.json)"
```

**Changes:**

- Backlog Queue setting `net.core.somaxconn` from `128` to `32768`
- Ephemeral Ports setting `net.ipv4.ip_local_port_range` from `32768 60999` to `1024 65000`

A [post from the NGINX blog](https://www.nginx.com/blog/tuning-nginx/) explains the reasoning behind these changes.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/sysctl/README.md
# External authentication, authentication service response headers propagation

This example demonstrates propagation of selected authentication service response headers to a backend service.

Sample configuration includes:

* Sample authentication service producing several response headers
  * Authentication logic is based on HTTP header: requests with header `User` containing string `internal` are considered authenticated
  * After successful authentication service generates response headers `UserID` and `UserRole`
* Sample echo service displaying header information
* Two ingress objects pointing to echo service
  * Public, which allows access from unauthenticated users
  * Private, which allows access from authenticated users only

You can deploy the controller as follows:

```console
$ kubectl create -f deploy/
deployment "demo-auth-service" created
service "demo-auth-service" created
ingress "demo-auth-service" created
deployment "demo-echo-service" created
service "demo-echo-service" created
ingress "public-demo-echo-service" created
ingress "secure-demo-echo-service" created

$ kubectl get po
NAME                                 READY     STATUS    RESTARTS   AGE
demo-auth-service-2769076528-7g9mh   1/1       Running   0          30s
demo-echo-service-3636052215-3vw8c   1/1       Running   0          29s

$ kubectl get ing
NAME                       HOSTS                                 ADDRESS   PORTS     AGE
public-demo-echo-service   public-demo-echo-service.kube.local             80        1m
secure-demo-echo-service   secure-demo-echo-service.kube.local             80        1m
```

## Test 1: public service with no auth header

```console
$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: public-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:19:21 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 20
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: , UserRole:
```

## Test 2: secure service with no auth header

```console
$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: secure-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:18:48 GMT
< Content-Type: text/html
< Content-Length: 170
< Connection: keep-alive
<
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.11.10</center>
</body>
</html>
* Connection #0 to host 192.168.99.100 left intact
```

## Test 3: public service with valid auth header

```console
$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: public-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
> User:internal
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:19:59 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 44
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: 1443635317331776148, UserRole: admin
```

## Test 4: secure service with valid auth header

```console
$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: secure-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
> User:internal
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:17:23 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 43
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: 605394647632969758, UserRole: admin
```
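The sample auth service's decision rule, as described above, reduces to a substring check on the `User` header. A minimal stand-in (the `authenticate` helper is hypothetical, returning only the status code the real service would set):

```shell
# Requests whose User header contains "internal" are authenticated (200);
# anything else is rejected (403), as in Tests 1-4 above.
authenticate() {
  case "$1" in
    *internal*) echo 200 ;;
    *)          echo 403 ;;
  esac
}

authenticate "internal"   # -> 200
authenticate "guest"      # -> 403
```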
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/external-auth-headers/README.md
# Custom Errors This example demonstrates how to use a custom backend to render custom error pages. If you are using the Helm Chart, look at [example values](https://github.com/kubernetes/ingress-nginx/blob/main/docs/examples/customization/custom-errors/custom-default-backend.helm.values.yaml) and don't forget to add the [ConfigMap](https://github.com/kubernetes/ingress-nginx/blob/main/docs/examples/customization/custom-errors/custom-default-backend-error\_pages.configMap.yaml) to your deployment. Otherwise, continue with [Customized default backend](#customized-default-backend) manual deployment. ## Customized default backend First, create the custom `default-backend`. It will be used by the Ingress controller later on. To do that, you can take a look at the [example manifest](https://github.com/kubernetes/ingress-nginx/blob/main/docs/examples/customization/custom-errors/custom-default-backend.yaml) in this project's GitHub repository. ``` $ kubectl create -f custom-default-backend.yaml service "nginx-errors" created deployment.apps "nginx-errors" created ``` This should have created a Deployment and a Service with the name `nginx-errors`. ``` $ kubectl get deploy,svc NAME DESIRED CURRENT READY AGE deployment.apps/nginx-errors 1 1 1 10s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/nginx-errors ClusterIP 10.0.0.12 80/TCP 10s ``` ## Ingress controller configuration If you do not already have an instance of the Ingress-Nginx Controller running, deploy it according to the [deployment guide][deploy], then follow these steps: 1. Edit the `ingress-nginx-controller` Deployment and set the value of the `--default-backend-service` flag to the name of the newly created error backend. 2. Edit the `ingress-nginx-controller` ConfigMap and create the key `custom-http-errors` with a value of `404,503`. 3. Take note of the IP address assigned to the Ingress-Nginx Controller Service. 
```
$ kubectl get svc ingress-nginx
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
ingress-nginx   ClusterIP   10.0.0.13    <none>        80/TCP,443/TCP   10m
```

!!! note
    The `ingress-nginx` Service is of type `ClusterIP` in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.

[deploy]: ../../../deploy/index.md

## Testing error pages

Let us send a couple of HTTP requests using cURL and validate everything is working as expected.

A request to the default backend returns a 404 error with a custom message:

```
$ curl -D- http://10.0.0.13/
HTTP/1.1 404 Not Found
Server: nginx/1.13.12
Date: Tue, 12 Jun 2018 19:11:24 GMT
Content-Type: */*
Transfer-Encoding: chunked
Connection: keep-alive

The page you're looking for could not be found.
```

A request with a custom `Accept` header returns the corresponding document type (JSON):

```
$ curl -D- -H 'Accept: application/json' http://10.0.0.13/
HTTP/1.1 404 Not Found
Server: nginx/1.13.12
Date: Tue, 12 Jun 2018 19:12:36 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding

{ "message": "The page you're looking for could not be found" }
```

To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (e.g. if you scale a Deployment down to 0 replicas).

## Maintenance page

You can also leverage custom error pages to set a **"_Service under maintenance_" page** for the whole cluster, useful to prevent users from accessing your services while you are performing planned scheduled maintenance.

When enabled, the maintenance page is served to clients with an HTTP [**503 Service Unavailable**](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/503) response **status code**.
To do that:

- Enable a **custom error page for the 503 HTTP error**, by following the guide above.
- Set the value of the `--watch-namespace-selector` flag to the name of some non-existent namespace, e.g. `nonexistent-namespace`.
  - This effectively prevents the NGINX Ingress Controller from reading `Ingress` resources from any namespace in the Kubernetes cluster.
- Set your `location-snippet` to `return 503;`, to make the NGINX Ingress Controller always return the 503 HTTP error page for all requests.
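A minimal sketch of the corresponding controller ConfigMap, assuming a standard installation in the `ingress-nginx` namespace (the ConfigMap name and namespace are assumptions; `custom-http-errors` and `location-snippet` are the keys referenced above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Render the custom error page for 503 responses
  custom-http-errors: "503"
  # Always answer 503, so the maintenance page is served for every request
  location-snippet: |
    return 503;
```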
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/custom-errors/README.md
main
ingress-nginx
# Custom Headers

## Caveats

Changes to the custom header ConfigMaps do not force a reload of the ingress-nginx-controllers.

### Workaround

To work around this limitation, perform a rolling restart of the deployment.

## Example

This example demonstrates configuration of the Ingress-Nginx Controller via a ConfigMap to pass a custom list of headers to the upstream server.

[custom-headers.yaml](custom-headers.yaml) defines a ConfigMap in the `ingress-nginx` namespace named `custom-headers`, holding several custom X-prefixed HTTP headers.

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/custom-headers.yaml
```

[configmap.yaml](configmap.yaml) defines a ConfigMap in the `ingress-nginx` namespace named `ingress-nginx-controller`. This controls the [global configuration](../../../user-guide/nginx-configuration/configmap.md) of the ingress controller, and already exists in a standard installation. The key `proxy-set-headers` is set to cite the previously-created `ingress-nginx/custom-headers` ConfigMap.

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap.yaml
```

The Ingress-Nginx Controller will read the `ingress-nginx/ingress-nginx-controller` ConfigMap, find the `proxy-set-headers` key, read the HTTP headers from the `ingress-nginx/custom-headers` ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.

The above example was for passing a custom list of headers to the upstream server.
To pass custom headers before sending response traffic to the client, use the `add-headers` key instead:

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap-client-response.yaml
```

## Test

Check that the contents of the ConfigMaps are present in the `nginx.conf` file using:

```console
kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf
```
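The two ConfigMaps described above can be sketched as follows. The ConfigMap names and the `proxy-set-headers` key are the ones used in this example; the header names and values are illustrative assumptions — substitute the headers you actually need:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  # Illustrative headers - each key/value becomes a proxy_set_header directive
  X-Different-Name: "true"
  X-Request-Start: t=${msec}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Points at the ConfigMap above, in "namespace/name" form
  proxy-set-headers: "ingress-nginx/custom-headers"
```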
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/customization/custom-headers/README.md
# OpenPolicyAgent and pathType enforcing

The Ingress API allows users to specify different [pathType](https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types) values on an Ingress object.

While pathType `Exact` and `Prefix` should allow only a small set of characters, pathType `ImplementationSpecific` allows any characters, as it may contain regexes, variables and other features that may be specific to the Ingress Controller being used.

This means that the Ingress Admins (the persona who deployed the Ingress Controller) should trust the users allowed to use `pathType: ImplementationSpecific`, as this may allow arbitrary configuration, and this configuration may end up in the proxy (aka Nginx) configuration.

## Example

The example in this repo uses [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/) to block the usage of `pathType: ImplementationSpecific`, allowing just a specific list of namespaces to use it.

It is recommended that the admin modifies these rules to enforce a specific set of characters when the usage of `ImplementationSpecific` is allowed, or in whatever way best suits their needs.

First, the `ConstraintTemplate` from [template.yaml](template.yaml) will define a rule that validates whether the Ingress object is being created in an exempted namespace and, if not, will validate its pathType.

Then, the rule `K8sBlockIngressPathType` contained in [rule.yaml](rule.yaml) will define the parameters: what kind of object should be verified (Ingress), what the exempted namespaces are, and what kinds of pathType are blocked.
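A Gatekeeper constraint of this kind might look roughly like the sketch below. The constraint kind `K8sBlockIngressPathType` is the one named above, but the parameter field names and values here are assumptions for illustration — see rule.yaml in this example for the real schema:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockIngressPathType
metadata:
  name: block-implementation-specific-pathtype
spec:
  match:
    kinds:
      - apiGroups: ["networking.k8s.io"]
        kinds: ["Ingress"]
  parameters:
    # Field names below are hypothetical; consult rule.yaml for the actual ones
    blockedTypes:
      - ImplementationSpecific
    exemptedNamespaces:
      - trusted-namespace
```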
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/openpolicyagent/README.md
# Static IPs

This example demonstrates how to assign a static IP to an Ingress through the Ingress-NGINX controller.

## Prerequisites

You need a [TLS cert](../PREREQUISITES.md#tls-certificates) and a [test HTTP service](../PREREQUISITES.md#test-http-service) for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the [ingress.class annotation](../../user-guide/multiple-ingress.md), and that you have an ingress controller [running](../../deploy/index.md) in your cluster.

## Acquiring an IP

Since instances of the ingress nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrades.

To acquire a static IP for the ingress-nginx-controller, simply put it behind a Service of `Type=LoadBalancer`.

First, create a loadbalancer Service and wait for it to acquire an IP:

```console
$ kubectl create -f static-ip-svc.yaml
service "ingress-nginx-lb" created

$ kubectl get svc ingress-nginx-lb
NAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-lb   10.0.138.113   104.154.109.191   80:31457/TCP,443:32240/TCP   15m
```

Then, update the ingress controller so it adopts the static IP of the Service by passing the `--publish-service` flag (the example yaml used in the next step already has it set to "ingress-nginx-lb").

```console
$ kubectl create -f ingress-nginx-controller.yaml
deployment "ingress-nginx-controller" created
```

## Assigning the IP to an Ingress

From here on, every Ingress created with the `ingress.class` annotation set to `nginx` will get the IP allocated in the previous step.
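A minimal sketch of what `static-ip-svc.yaml` might contain. The Service name matches the example above, but the selector and port layout are assumptions — match them to your controller pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Selector is an assumption - it must match your controller pod labels
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```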
```console
$ kubectl create -f ingress-nginx.yaml
ingress "ingress-nginx" created

$ kubectl get ing ingress-nginx
NAME            HOSTS     ADDRESS           PORTS     AGE
ingress-nginx   *         104.154.109.191   80, 443   13m

$ curl 104.154.109.191 -kL
CLIENT VALUES:
client_address=10.180.1.25
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://104.154.109.191:8080/
...
```

## Retaining the IP

You can test retention by deleting the Ingress:

```console
$ kubectl delete ing ingress-nginx
ingress "ingress-nginx" deleted

$ kubectl create -f ingress-nginx.yaml
ingress "ingress-nginx" created

$ kubectl get ing ingress-nginx
NAME            HOSTS     ADDRESS           PORTS     AGE
ingress-nginx   *         104.154.109.191   80, 443   13m
```

> Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all
> Ingresses, because all requests are proxied through the same set of nginx
> controllers.

## Promote ephemeral to static IP

To promote the allocated IP to static, you can update the Service manifest:

```console
$ kubectl patch svc ingress-nginx-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}'
"ingress-nginx-lb" patched
```

... and promote the IP to static (promotion works differently for cloud providers; the provided example is for GKE/GCE):

```console
$ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1
Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb].
---
address: 104.154.109.191
creationTimestamp: '2017-01-31T16:34:50.089-08:00'
description: ''
id: '5208037144487826373'
kind: compute#address
name: ingress-nginx-lb
region: us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb
status: IN_USE
users:
- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000
```

Now even if the Service is deleted, the IP will persist, so you can recreate the Service with `spec.loadBalancerIP` set to `104.154.109.191`.
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/static-ip/README.md
# gRPC

This example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.

## Prerequisites

1. You have a kubernetes cluster running.
2. You have a domain name such as `example.com` that is configured to route traffic to the Ingress-NGINX controller.
3. You have the ingress-nginx-controller installed as per docs.
4. You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use the `go-grpc-greeter-server` app as an example.
5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type `tls`, in the same namespace as the gRPC application.

### Step 1: Create a Kubernetes `Deployment` for the gRPC app

- Make sure your gRPC application pod is running and listening for connections. For example, you can try a kubectl command like the one below:

  ```console
  $ kubectl get po -A -o wide | grep go-grpc-greeter-server
  ```

- If you already have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.
- As an example gRPC application, we can use the `go-grpc-greeter-server` app.
- To create a container image for this app, you can use [this Dockerfile](https://github.com/kubernetes/ingress-nginx/blob/main/images/go-grpc-greeter-server/rootfs/Dockerfile).
- If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary, edit this manifest to suit your needs.

  ```
  cat <<EOF | kubectl apply -f -
  ...
        - image: <reponame>/go-grpc-greeter-server # Edit this for your reponame
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 50Mi
          name: go-grpc-greeter-server
          ports:
          - containerPort: 50051
  EOF
  ```

### Step 2: Create the Kubernetes `Service` for the gRPC app

- You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.
> If you are developing public gRPC endpoints, check out
> https://proto.stack.build, a protocol buffer / gRPC build service that you can
> use to help make it easier for your users to consume your API.

> See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html

### Notes on using response/request streams

> `grpc_read_timeout` and `grpc_send_timeout` will be set as `proxy_read_timeout` and `proxy_send_timeout` when you set the backend protocol to `GRPC` or `GRPCS`.

1. If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the `grpc_read_timeout` to accommodate this.
2. If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the `grpc_send_timeout` and the `client_body_timeout`.
3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: `grpc_read_timeout`, `grpc_send_timeout` and `client_body_timeout`.
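For streams open longer than the 60-second default, the timeouts above are typically raised through per-Ingress annotations. A sketch, assuming the `go-grpc-greeter-server` Service from Step 2; the host, secret name, and timeout values are placeholders, while the annotation keys are standard ingress-nginx annotations:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-example                 # placeholder name
  annotations:
    # Tell nginx to speak gRPC to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    # Raise the stream timeouts past the 60s default (values in seconds)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - grpc.example.com           # placeholder host
      secretName: grpc-example-tls   # your TLS secret, same namespace
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-grpc-greeter-server
                port:
                  number: 50051
```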
https://github.com/kubernetes/ingress-nginx/blob/main//docs/examples/grpc/README.md