{ "category": "Orchestration & Management", "file_name": "overview.html.md", "project_name": "OpenNebula", "subcategory": "Scheduling & Orchestration" }
[ { "data": "So we heard you want to try out OpenNebula? Welcome! You are in the right place. This Quick Start guide will guide you through the process to achieve a fully functional OpenNebula cloud. In this guide, well go through a Front-end OpenNebula environment deployment, where all the OpenNebula services needed to use, manage and run the cloud will be collocated on a single dedicated Host. Afterwards, you can continue to the Operations Basics section to add a remote Cluster based on KVM or LXC to your shiny new OpenNebula cloud! In particular, Deployment Basic will get you an OpenNebula Front-end, ready to rock. First, please choose your fighter: Deploy OpenNebula Front-end on AWS guide. Deploy OpenNebula Front-end on VMware guide. Try OpenNebula Hosted Front-end guide. Afterwards, you can move on to Operations Basics to learn how to add Edge Clusters (i.e., computing nodes) and then finally to Usage Basics to deploy your VMs, containers or multi-tier services on your new cloud!" } ]
{ "category": "Orchestration & Management", "file_name": "overview.html.md", "project_name": "StackStorm", "subcategory": "Scheduling & Orchestration" }
[ { "data": "Getting Started Automation Basics Advanced Topics Release Notes Other StackStorm is a platform for integration and automation across services and tools. It ties together your existing infrastructure and application environment so you can more easily automate that environment. It has a particular focus on taking actions in response to events. StackStorm helps automate common operational patterns. Some examples are: Facilitated Troubleshooting - triggering on system failures captured by Nagios, Sensu, New Relic and other monitoring systems, running a series of diagnostic checks on physical nodes, OpenStack or Amazon instances, and application components, and posting results to a shared communication context, like Slack or JIRA. Automated remediation - identifying and verifying hardware failure on OpenStack compute node, properly evacuating instances and emailing admins about potential downtime, but if anything goes wrong - freezing the workflow and calling PagerDuty to wake up a human. Continuous deployment - build and test with Jenkins, provision a new AWS cluster, turn on some traffic with the load balancer, and roll-forward or roll-back, based on NewRelic app performance data. StackStorm helps you compose these and other operational patterns as rules and workflows or actions. These rules and workflows - the content within the StackStorm platform - are stored as code which means they support the same approach to collaboration that you use today for code development. They can be shared with the broader open source community, for example via the StackStorm community. StackStorm architecture diagram StackStorm plugs into the environment via the extensible set of adapters containing sensors and actions. Sensors are Python plugins for either inbound or outbound integration that receives or watches for events respectively. When an event from external systems occurs and is processed by a sensor, a StackStorm trigger will be emitted into the system. Triggers are StackStorm representations of external events. There are generic triggers (e.g. timers, webhooks) and integration triggers (e.g. Sensu alert, JIRA issue updated). A new trigger type can be defined by writing a sensor" }, { "data": "Actions are StackStorm outbound integrations. There are generic actions (ssh, REST call), integrations (OpenStack, Docker, Puppet), or custom actions. Actions are either Python plugins, or any scripts, consumed into StackStorm by adding a few lines of metadata. Actions can be invoked directly by user via CLI or API, or used and called as part of rules and workflows. Rules map triggers to actions (or to workflows), applying matching criteria and mapping trigger payload to action inputs. Workflows stitch actions together into uber-actions, defining the order, transition conditions, and passing the data. Most automations are more than one-step and thus need more than one action. Workflows, just like atomic actions, are available in the Action library, and can be invoked manually or triggered by rules. Packs are the units of content deployment. They simplify the management and sharing of StackStorm pluggable content by grouping integrations (triggers and actions) and automations (rules and workflows). A growing number of packs are available on StackStorm Exchange. Users can create their own packs, share them on Github, or submit to the StackStorm Exchange. Audit trail of action executions, manual or automated, is recorded and stored with full details of triggering context and execution results. 
It is also captured in audit logs for integrating with external logging and analytical tools: LogStash, Splunk, statsd, syslog. StackStorm is a service with modular architecture. It comprises loosely coupled service components that communicate over the message bus, and scales horizontally to deliver automation at scale. StackStorm has a Web UI, a CLI client, and of course a full REST API. We also ship Python client bindings to make life easier for developers. StackStorm is new and under active development. We are very keen to engage the community, to get feedback and refine our directions. Contributions are always welcome! Install and run - follow Installation Build a simple automation - follow Quick Start Guide Help us with directions - comment on the Roadmap Explore the StackStorm community Copyright 2014 - 2023, StackStorm." } ]
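The concepts above (actions, triggers, rules, and the execution audit trail) map directly onto the st2 command-line client. The following is a minimal sketch, assuming a working StackStorm installation with the default core pack; it is illustrative only, not part of the original overview.

```
# List actions available in the action library (the core pack ships with StackStorm)
st2 action list --pack=core

# Invoke an action directly from the CLI; the execution is recorded in the audit trail
st2 run core.local cmd='date'

# Inspect registered triggers and rules, and review recent executions
st2 trigger list
st2 rule list
st2 execution list
```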
{ "category": "Orchestration & Management", "file_name": "installation.md", "project_name": "wasmCloud", "subcategory": "Scheduling & Orchestration" }
[ { "data": "First, we'll install wash, the WAsmcloud SHell, which we'll use to install, run, and manage our wasmCloud components. Select your preferred installation method below, then run the displayed commands in your favorite terminal. ``` curl -s https://packagecloud.io/install/repositories/wasmcloud/core/script.deb.sh | sudo bash sudo apt install wash ``` Below are the contents of script.deb.sh for your verification. ``` unknown_os () { echo \"Unfortunately, your operating system distribution and version are not supported by this script.\" echo echo \"You can override the OS detection by setting os= and dist= prior to running this script.\" echo \"You can find a list of supported OSes and distributions on our website: https://packagecloud.io/docs#osdistroversion\" echo echo \"For example, to force Ubuntu Trusty: os=ubuntu dist=trusty ./script.sh\" echo echo \"Please email support@packagecloud.io and let us know if you run into any issues.\" exit 1 } gpg_check () { echo \"Checking for gpg...\" if command -v gpg > /dev/null; then echo \"Detected gpg...\" else echo \"Installing gnupg for GPG verification...\" apt-get install -y gnupg if [ \"$?\" -ne \"0\" ]; then echo \"Unable to install GPG! Your base system has a problem; please check your default OS's package repositories because GPG should work.\" echo \"Repository installation aborted.\" exit 1 fi fi } curl_check () { echo \"Checking for curl...\" if command -v curl > /dev/null; then echo \"Detected curl...\" else echo \"Installing curl...\" apt-get install -q -y curl if [ \"$?\" -ne \"0\" ]; then echo \"Unable to install curl! Your base system has a problem; please check your default OS's package repositories because curl should work.\" echo \"Repository installation aborted.\" exit 1 fi fi } installdebiankeyring () { if [ \"${os,,}\" = \"debian\" ]; then echo \"Installing debian-archive-keyring which is needed for installing \" echo \"apt-transport-https on many Debian systems.\" apt-get install -y debian-archive-keyring &> /dev/null fi } detect_os () { if [[ ( -z \"${os}\" ) && ( -z \"${dist}\" ) ]]; then if [ -e /etc/lsb-release ]; then . /etc/lsb-release if [ \"${ID}\" = \"raspbian\" ]; then os=${ID} dist=`cut --delimiter='.' -f1 /etc/debian_version` else os=${DISTRIB_ID} dist=${DISTRIB_CODENAME} if [ -z \"$dist\" ]; then dist=${DISTRIB_RELEASE} fi fi elif [ `which lsb_release 2>/dev/null` ]; then dist=`lsb_release -c | cut -f2` os=`lsb_release -i | cut -f2 | awk '{ print tolower($1) }'` elif [ -e /etc/debian_version ]; then os=`cat /etc/issue | head -1 | awk '{ print tolower($1) }'` if grep -q '/' /etc/debian_version; then dist=`cut --delimiter='/' -f1 /etc/debian_version` else dist=`cut --delimiter='.' -f1 /etc/debian_version` fi else unknown_os fi fi if [ -z \"$dist\" ]; then unknown_os fi os=\"${os// /}\" dist=\"${dist// /}\" echo \"Detected operating system as $os/$dist.\" } detectaptversion () { aptversionfull=`apt-get -v | head -1 | awk '{ print $2 }'` aptversionmajor=`echo $aptversionfull | cut -d. -f1` aptversionminor=`echo $aptversionfull | cut -d. -f2` aptversionmodified=\"${aptversionmajor}${aptversionminor}0\" echo \"Detected apt version as ${aptversionfull}\" } main () { detect_os curl_check gpg_check detectaptversion echo -n \"Running apt-get update... \" apt-get update &> /dev/null echo \"done.\" installdebiankeyring echo -n \"Installing apt-transport-https... 
\" apt-get install -y apt-transport-https &> /dev/null echo \"done.\" gpgkeyurl=\"https://packagecloud.io/wasmCloud/core/gpgkey\" aptconfigurl=\"https://packagecloud.io/install/repositories/wasmCloud/core/config_file.list?os=${os}&dist=${dist}&source=script\" aptsourcepath=\"/etc/apt/sources.list.d/wasmCloud_core.list\" aptkeyringsdir=\"/etc/apt/keyrings\" if [ ! -d \"$aptkeyringsdir\" ]; then mkdir -p \"$aptkeyringsdir\" fi gpgkeyringpath=\"$aptkeyringsdir/wasmCloud_core-archive-keyring.gpg\"" }, { "data": "echo -n \"Installing $aptsourcepath...\" curl -sSf \"${aptconfigurl}\" > $aptsourcepath curlexitcode=$? if [ \"$curlexitcode\" = \"22\" ]; then echo echo echo -n \"Unable to download repo config from: \" echo \"${aptconfigurl}\" echo echo \"This usually happens if your operating system is not supported by \" echo \"packagecloud.io, or this script's OS detection failed.\" echo echo \"You can override the OS detection by setting os= and dist= prior to running this script.\" echo \"You can find a list of supported OSes and distributions on our website: https://packagecloud.io/docs#osdistroversion\" echo echo \"For example, to force Ubuntu Trusty: os=ubuntu dist=trusty ./script.sh\" echo echo \"If you are running a supported OS, please email support@packagecloud.io and report this.\" [ -e $aptsourcepath ] && rm $aptsourcepath exit 1 elif [ \"$curlexitcode\" = \"35\" -o \"$curlexitcode\" = \"60\" ]; then echo \"curl is unable to connect to packagecloud.io over TLS when running: \" echo \" curl ${aptconfigurl}\" echo \"This is usually due to one of two things:\" echo echo \" 1.) Missing CA root certificates (make sure the ca-certificates package is installed)\" echo \" 2.) An old version of libssl. Try upgrading libssl on your system to a more recent version\" echo echo \"Contact support@packagecloud.io with information about your system for help.\" [ -e $aptsourcepath ] && rm $aptsourcepath exit 1 elif [ \"$curlexitcode\" -gt \"0\" ]; then echo echo \"Unable to run: \" echo \" curl ${aptconfigurl}\" echo echo \"Double check your curl installation and try again.\" [ -e $aptsourcepath ] && rm $aptsourcepath exit 1 else echo \"done.\" fi echo -n \"Importing packagecloud gpg key... \" curl -fsSL \"${gpgkeyurl}\" | gpg --dearmor > ${gpgkeyringpath} chmod 0644 \"${gpgkeyringpath}\" if [ \"${aptversionmodified}\" -lt 110 ]; then mv ${gpgkeyringpath} ${gpgkeypath_old} chmod 0644 \"${gpgkeypath_old}\" if ! ls -1qA $aptkeyringsdir | grep -q .;then rm -r $aptkeyringsdir fi echo \"Packagecloud gpg key imported to ${gpgkeypath_old}\" else echo \"Packagecloud gpg key imported to ${gpgkeyringpath}\" fi echo \"done.\" echo -n \"Running apt-get update... \" apt-get update &> /dev/null echo \"done.\" echo echo \"The repository is setup! 
You can now install packages.\" } main ``` ``` curl -s https://packagecloud.io/install/repositories/wasmcloud/core/script.rpm.sh | sudo bash sudo dnf install wash ``` ``` unknown_os () { echo \"Unfortunately, your operating system distribution and version are not supported by this script.\" echo echo \"You can override the OS detection by setting os= and dist= prior to running this script.\" echo \"You can find a list of supported OSes and distributions on our website: https://packagecloud.io/docs#osdistroversion\" echo echo \"For example, to force CentOS 6: os=el dist=6 ./script.sh\" echo echo \"Please email support@packagecloud.io and let us know if you run into any issues.\" exit 1 } curl_check () { echo \"Checking for curl...\" if command -v curl > /dev/null; then echo \"Detected curl...\" else echo \"Installing curl...\" yum install -d0 -e0 -y curl fi } detect_os () { if [[ ( -z \"${os}\" ) && ( -z \"${dist}\" ) ]]; then if [ -e /etc/os-release ]; then" }, { "data": "/etc/os-release os=${ID} if [ \"${os}\" = \"poky\" ]; then dist=`echo ${VERSION_ID}` elif [ \"${os}\" = \"sles\" ]; then dist=`echo ${VERSION_ID}` elif [ \"${os}\" = \"opensuse\" ]; then dist=`echo ${VERSION_ID}` elif [ \"${os}\" = \"opensuse-leap\" ]; then os=opensuse dist=`echo ${VERSION_ID}` elif [ \"${os}\" = \"amzn\" ]; then dist=`echo ${VERSION_ID}` else dist=`echo ${VERSION_ID} | awk -F '.' '{ print $1 }'` fi elif [ `which lsb_release 2>/dev/null` ]; then dist=`lsb_release -r | cut -f2 | awk -F '.' '{ print $1 }'` os=`lsb_release -i | cut -f2 | awk '{ print tolower($1) }'` elif [ -e /etc/oracle-release ]; then dist=`cut -f5 --delimiter=' ' /etc/oracle-release | awk -F '.' '{ print $1 }'` os='ol' elif [ -e /etc/fedora-release ]; then dist=`cut -f3 --delimiter=' ' /etc/fedora-release` os='fedora' elif [ -e /etc/redhat-release ]; then os_hint=`cat /etc/redhat-release | awk '{ print tolower($1) }'` if [ \"${os_hint}\" = \"centos\" ]; then dist=`cat /etc/redhat-release | awk '{ print $3 }' | awk -F '.' '{ print $1 }'` os='centos' elif [ \"${os_hint}\" = \"scientific\" ]; then dist=`cat /etc/redhat-release | awk '{ print $4 }' | awk -F '.' '{ print $1 }'` os='scientific' else dist=`cat /etc/redhat-release | awk '{ print tolower($7) }' | cut -f1 --delimiter='.'` os='redhatenterpriseserver' fi else aws=`grep -q Amazon /etc/issue` if [ \"$?\" = \"0\" ]; then dist='6' os='aws' else unknown_os fi fi fi if [[ ( -z \"${os}\" ) || ( -z \"${dist}\" ) ]]; then unknown_os fi os=\"${os// /}\" dist=\"${dist// /}\" echo \"Detected operating system as ${os}/${dist}.\" if [[ \"$os\" = \"ol\" || \"$os\" = \"el\" ]] && [ $(($dist)) \\> 7 ]; then skippygpgme=1 else skippygpgme=0 fi } finalizeyumrepo () { if [ \"$skippygpgme\" = 0 ]; then echo \"Installing pygpgme to verify GPG signatures...\" yum install -y pygpgme --disablerepo=\"${repoconfigname}\" pypgpme_check=`rpm -qa | grep -qw pygpgme` if [ \"$?\" != \"0\" ]; then echo echo \"WARNING: \" echo \"The pygpgme package could not be installed. This means GPG verification is not possible for any RPM installed on your system. \" echo \"To fix this, add a repository with pygpgme. Usualy, the EPEL repository for your system will have this. 
\" echo \"More information: https://fedoraproject.org/wiki/EPEL#HowcanIusetheseextrapackages.3F\" echo sed -i'' 's/repogpgcheck=1/repogpgcheck=0/' /etc/yum.repos.d/$repoconfigname.repo fi fi echo \"Installing yum-utils...\" yum install -y yum-utils --disablerepo=\"${repoconfigname}\" yumutilscheck=`rpm -qa | grep -qw yum-utils` if [ \"$?\" != \"0\" ]; then echo echo \"WARNING: \" echo \"The yum-utils package could not be installed. This means you may not be able to install source RPMs or use other yum features.\" echo fi echo \"Generating yum cache for ${repoconfigname}...\" yum -q makecache -y --disablerepo='*' --enablerepo=\"${repoconfigname}\" echo \"Generating yum cache for ${repoconfigname}-source...\" yum -q makecache -y --disablerepo='*' --enablerepo=\"${repoconfigname}-source\" } finalizezypperrepo () { zypper --gpg-auto-import-keys refresh $repoconfigname zypper --gpg-auto-import-keys refresh $repoconfigname-source } main () { repoconfigname=wasmCloud_core detect_os curl_check yumrepoconfigurl=\"https://packagecloud.io/install/repositories/wasmCloud/core/configfile.repo?os=${os}&dist=${dist}&source=script\" if [ \"${os}\" = \"sles\" ] || [ \"${os}\" = \"opensuse\" ]; then yumrepopath=/etc/zypp/repos.d/$repoconfigname.repo else yumrepopath=/etc/yum.repos.d/$repoconfigname.repo fi echo \"Downloading repository file: ${yumrepoconfig_url}\" curl -sSf \"${yumrepoconfigurl}\" > $yumrepo_path curlexitcode=$? if [ \"$curlexitcode\" = \"22\" ]; then echo echo echo -n \"Unable to download repo config from: \" echo \"${yumrepoconfig_url}\" echo echo \"This usually happens if your operating system is not supported by \" echo \"packagecloud.io, or this script's OS detection" }, { "data": "echo echo \"You can override the OS detection by setting os= and dist= prior to running this script.\" echo \"You can find a list of supported OSes and distributions on our website: https://packagecloud.io/docs#osdistroversion\" echo echo \"For example, to force CentOS 6: os=el dist=6 ./script.sh\" echo echo \"If you are running a supported OS, please email support@packagecloud.io and report this.\" [ -e $yumrepopath ] && rm $yumrepopath exit 1 elif [ \"$curlexitcode\" = \"35\" -o \"$curlexitcode\" = \"60\" ]; then echo echo \"curl is unable to connect to packagecloud.io over TLS when running: \" echo \" curl ${yumrepoconfig_url}\" echo echo \"This is usually due to one of two things:\" echo echo \" 1.) Missing CA root certificates (make sure the ca-certificates package is installed)\" echo \" 2.) An old version of libssl. Try upgrading libssl on your system to a more recent version\" echo echo \"Contact support@packagecloud.io with information about your system for help.\" [ -e $yumrepopath ] && rm $yumrepopath exit 1 elif [ \"$curlexitcode\" -gt \"0\" ]; then echo echo \"Unable to run: \" echo \" curl ${yumrepoconfig_url}\" echo echo \"Double check your curl installation and try again.\" [ -e $yumrepopath ] && rm $yumrepopath exit 1 else echo \"done.\" fi if [ \"${os}\" = \"sles\" ] || [ \"${os}\" = \"opensuse\" ]; then finalizezypperrepo else finalizeyumrepo fi echo echo \"The repository is setup! You can now install packages.\" } main ``` ``` snap install wash --devmode --edge ``` ``` brew install wasmcloud/wasmcloud/wash ``` ``` choco install wash ``` If your platform isn't listed, wash can be installed with cargo and a Rust toolchain. ``` cargo install wash-cli ``` The wash project is open-source and can be cloned from GitHub and built locally with a Rust toolchain. 
``` git clone https://github.com/wasmCloud/wasmCloud.git cd crates/wash-cli cargo build --release ./target/release/wash ``` You'll also want to add wash to your PATH to easily run it. Refer to instructions specific to your operating system for how to do this. If you only use Docker Compose without installing wash, you'll only be able to interact with wasmCloud via the washboard. Download the sample Docker Compose file and put it into your work directory. This compose file will run NATS, a local OCI registry, Grafana and Tempo for OTEL, and the wasmcloud_host container. In this format it's easy to run all the necessary services for a wasmCloud host with only a docker installation. With the docker-compose.yml file in the current directory, start the processes with ``` docker compose up ``` The host will run until you type Ctrl-C or close the terminal window. To start the docker compose process in the background, add a -d flag: ``` docker compose up -d ``` If the wasmCloud host is running in Docker in the background, you can view its logs (live) with ``` docker logs -f wasmcloud ``` Verify that wash is properly installed with: ``` wash --help ``` If wash is installed correctly, you will see a printout of all available commands and short descriptions for each. If you ever need more detail on a specific command or sub-command just run wash <command> --help. Now that wash is installed, let's get started." } ]
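As an alternative to the Docker Compose workflow described above, wash itself can start a local host. This is a hedged sketch assuming a recent wash release; exact subcommand names can differ between versions, so treat it as a starting point rather than a definitive reference.

```
# Confirm the CLI is installed and on your PATH
wash --version

# Start a local wasmCloud host together with a NATS server (runs in the foreground)
wash up

# From another terminal, list the hosts known to the local lattice
wash get hosts
```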
{ "category": "Orchestration & Management", "file_name": "intro.md", "project_name": "wasmCloud", "subcategory": "Scheduling & Orchestration" }
[ { "data": "wasmCloud is a universal application platform that helps you build and run globally distributed WebAssembly applications on any cloud and any edge. Our goal is to make development more joyful and efficient by giving developers the tools to write only the code that mattersand making it easy to run that code anywhere. wasmCloud leverages WebAssembly's security, portability, and performance to compose applications from tiny, independent building blocks. These building blocks are managed declaratively and reconfigurable at runtime. You shouldn't need to recompile your whole app to upgrade a database client or patch a vulnerability. You shouldn't need to recompile anything to move your app from development to production. wasmCloud is designed around the following core tenets: Move from concept to production without changing your design, architecture, or your programming environment. Check out our FAQ." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Aeraki Mesh", "subcategory": "Service Mesh" }
[ { "data": "Before installing Aeraki, please check the supported Istio versions and the corresponding Proxy version: | Aeraki | MetaProtocol Proxy | Istio | |:|:|:--| | 1.4.x | 1.4.x | 1.18.x | | 1.3.x | 1.3.x | 1.16.x | | 1.2.x | 1.2.x | 1.14.x | | 1.1.x | 1.1.x | 1.12.x | | 1.0.x | 1.0.x | 1.10.x | Please modify the istio ConfigMap to add the following content. ``` kubectl edit cm istio -n istio-system ``` ``` apiVersion: v1 data: mesh: |- defaultConfig: proxyMetadata: ISTIOMETADNS_CAPTURE: \"true\" proxyStatsMatcher: inclusionPrefixes: thrift dubbo kafka meta_protocol inclusionRegexps: .dubbo. .thrift. .kafka. .zookeeper. .meta_protocol. ``` ``` git clone https://github.com/aeraki-mesh/aeraki.git cd aeraki export AERAKI_TAG=1.3.0 make install ``` You can choose to install aerakictl tool for debug purpose. ``` git clone https://github.com/aeraki-mesh/aerakictl.git ~/aerakictl;source ~/aerakictl/aerakictl.sh ``` If you want to use Aeraki with Tencent Cloud Mesh TCM, please contact TCMs sales team or business advisors. Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve." } ]
{ "category": "Orchestration & Management", "file_name": "quickstart.md", "project_name": "Aeraki Mesh", "subcategory": "Service Mesh" }
[ { "data": "Follow these instructions to install, run, and test Aeraki: Download Aeraki from the github. ``` git clone https://github.com/aeraki-mesh/aeraki.git ``` Install Istio, Aeraki and demo applications. ``` make demo ``` Note: Aeraki requires to enable Istio DNS proxying. Please turn on DNS proxying if you are installing Aeraki with an existing Istio deployment, or you can use make demo command to install Aeraki and Istio from scratch, make demo will take care of the Istio configuration. Open the following URLs in your browser to play with Aeraki and view service metrics Learn about more ways to configure and use Aeraki from the following pages: Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve." } ]
{ "category": "Orchestration & Management", "file_name": "api-docs.md", "project_name": "Consul", "subcategory": "Service Mesh" }
[ { "data": "Learn about the Consul REST API, which is the primary interface to all functionality available in Consul. The Consul HTTP API is a RESTful interface that allows you to leverage Consul functionality in your network. This topic provides guidance about the essential API endpoints for different workstreams. Refer to the HTTP API structure docs to learn how to interact with and authenticate against the Consul HTTP API. Use the following API endpoints to configure and connect your services. The following endpoints are specific to service mesh: The following API endpoints give you control over access to services in your network and access to the Consul API. Use the following API endpoints enable network observability. The following API endpoints help you manage Consul operations. The following API endpoints enable you to dynamically configure your services. On this page:" } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Consul", "subcategory": "Service Mesh" }
[ { "data": "Consul is a multi-networking tool that offers a fully-featured service mesh solution. It solves the networking and security challenges of operating microservices and cloud infrastructure in multi-cloud and hybrid cloud environments. This documentation describes Consul concepts, the problems it solves, and contains quick-start tutorials for using Consul. Consul service mesh provides service-to-service connection authorization and encryption using mutual transport layer security (TLS). Consul has many integrations with Kubernetes. You can deploy Consul to Kubernetes using the Helm chart or Consul K8s CLI, sync services between Consul and Kubernetes, run Consul service mesh, and more. Consul-Terraform-Sync (CTS) enables dynamic updates to network infrastructure devices triggered by service changes. Consul integrates with several platforms and products. Learn more about Consul integrations and how to enable them. On this page:" } ]
{ "category": "Orchestration & Management", "file_name": "intro.md", "project_name": "Consul", "subcategory": "Service Mesh" }
[ { "data": "HashiCorp Consul is a service networking solution that enables teams to manage secure network connectivity between services and across on-prem and multi-cloud environments and runtimes. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices. You can use these features individually or together in a single Consul deployment. Hands-on: Complete the Getting Started tutorials to learn how to deploy Consul: Consul provides a control plane that enables you to register, query, and secure services deployed across your network. The control plane is the part of the network infrastructure that maintains a central registry to track services and their respective IP addresses. It is a distributed system that runs on clusters of nodes, such as physical servers, cloud instances, virtual machines, or containers. Consul interacts with the data plane through proxies. The data plane is the part of the network infrastructure that processes data requests. Refer to Consul Architecture for details. The core Consul workflow consists of the following stages: Consul increases application resilience, bolsters uptime, accelerates application deployment, and improves security across service-to-service communications. HashiCorp co-founder and CTO Armon Dadgar explains how Consul solves networking challenges. Adopting a microservices architecture on cloud infrastructure is a critical step toward delivering value at scale, but knowing where healthy services are running on your networks in real time becomes a challenge. Consul automates service discovery by replacing service connections usually handled with load balancers with an identity-based service catalog. The service catalog is a centralized source of truth that you can query through Consuls DNS server or API. The catalog always knows which services are available, which have been removed, and which services are healthy. Modern organizations may deploy services to a combination of on-prem infrastructure environments and public cloud providers across multiple regions. Services may run on bare metal, virtual machines, or as containers across Kubernetes clusters. Consul routes network traffic to any runtime or infrastructure environment your services need to reach. You can also use Consul API Gateway to route traffic into and out of the network. Consul service mesh provides additional capabilities, such as securing communication between services, traffic management, and observability, with no application code changes. Consul also has many integrations with Kubernetes that enable you to leverage Consul features in containerized environments. For example, Consul can automatically inject sidecar proxies into Kubernetes Pods and sync Kubernetes Services and non-Kubernetes services into the Consul service registry without manual changes to the application or changing the Pod definition. You can also schedule Consul workloads with HashiCorp Nomad to provide secure service-to-service communication between Nomad jobs and task groups. Microservice architectures are complex and difficult to secure against accidental disclosure to malicious" }, { "data": "Consul provides several mechanisms that enhance network security without any changes to your application code, including mutual transport layer security (mTLS) encryption on all traffic between services and Consul intentions, which are service-to-service permissions that you can manage through the Consul UI, API, and CLI. 
When you deploy Consul to Kubernetes clusters, you can also integrate with HashiCorp Vault to manage sensitive data. By default, Consul on Kubernetes leverages Kubernetes secrets as the backend system. Kubernetes secrets are base64 encoded, unencrypted, and lack lease or time-to-live properties. By leveraging Vault as a secrets backend for Consul on Kubernetes, you can manage and store Consul related secrets within a centralized Vault cluster to use across one or many Consul on Kubernetes datacenters. Refer to Vault as the Secrets Backend for additional information. You can also secure your Consul deployment, itself, by defining security policies in access control lists (ACL) to control access to data and Consul APIs. Outages are unavoidable, but with distributed systems it is critical that a power failure in one datacenter doesnt disrupt downstream service operations. You can enable automated backups, redundancy zones, read-replicas, and other features that prevent data loss and downtime after a catastrophic event. L7 observability features also deliver service traffic metrics in the Consul UI, which help you understand the state of a service and its connections within the mesh. Change to your network, including day-to-day operational tasks such as updating network device endpoints and firewall or load balancer rules, can lead to problems that disrupt operations at critical moments. You can deploy the Consul-Terraform-Sync (CTS) add-on to dynamically update network infrastructure devices when a service changes. CTS monitors the service information stored in Consul and automatically launches an instance of HashiCorp Terraform to drive relevant changes to the network infrastructure when Consul registers a change, reducing the manual effort of configuring network infrastructure. Rolling out changes can be risky, especially in complex network environments. Updated services may not behave as expected when connected to other services, resulting in upstream or downstream issues. Consul service mesh supports layer 7 (L7) traffic management, which lets you divide L7 traffic into different subsets of service instances. This enables you to divide your pool of services for canary testing, A/B tests, blue/green deployments, and soft multi-tenancy (prod/qa/staging sharing compute resources) deployments. HashiCorp offers core Consul functionality for free in the community edition, which is ideal for smaller businesses and teams that want to pilot Consul within their organizations. As your business grows, you can upgrade to Consul Enterprise, which offers additional capabilities designed to address organizational complexities of collaboration, operations, scale, and governance. HashiCorp Cloud Platform (HCP) Consul is our SaaS that delivers Consul Enterprise capabilities and shifts the burden of managing the control plane to us. Create an HCP organization and leverage our expertise to simplify control plane maintenance and configuration. Learn more at HashiCorp Cloud Platform. We welcome questions, suggestions, and contributions from the community. On this page:" } ]
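The service-discovery workflow described above can be exercised against a local agent as in the sketch below; web is a hypothetical service name, and 8600 is Consul's default DNS port.

```
# Look up a service through Consul's built-in DNS interface
dig @127.0.0.1 -p 8600 web.service.consul

# Or list registered services through the CLI (backed by the HTTP API)
consul catalog services
```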
{ "category": "Orchestration & Management", "file_name": "guides.md", "project_name": "Consul", "subcategory": "Service Mesh" }
[ { "data": "The Consul guides are now Consul tutorials. Guides are step by step command-line walkthroughs that demonstrate how to perform common operations using Consul, and complement the feature-focused Consul documentation. Guide content begins with getting-started tracks to help new users learn the basics of Consul, and continues through production-playbook tracks that cover topics like Day 1 and Day 2 operations, production considerations, and recommendations for securing your Consul cluster. You can work through the guides sequentially using the tracks, or just refer to the material that is most relevant to you. Tracks include:" } ]
{ "category": "Orchestration & Management", "file_name": "getting-started.md", "project_name": "Istio", "subcategory": "Service Mesh" }
[ { "data": "Istio 1.22.1 is now available! Click here to learn more 8 minute read page test This guide lets you quickly evaluate Istio. If you are already familiar with Istio or interested in installing other configuration profiles or advanced deployment models, refer to our which Istio installation method should I use? FAQ page. These steps require you to have a cluster running a supported version of Kubernetes (1.27, 1.28, 1.29, 1.30). You can use any supported platform, for example Minikube or others specified by the platform-specific setup instructions. Follow these steps to get started with Istio: Go to the Istio release page to download the installation file for your OS, or download and extract the latest release automatically (Linux or macOS): ``` $ curl -L https://istio.io/downloadIstio | sh - ``` The command above downloads the latest release (numerically) of Istio. You can pass variables on the command line to download a specific version or to override the processor architecture. For example, to download Istio 1.22.1 for the x86_64 architecture, run: ``` $ curl -L https://istio.io/downloadIstio | ISTIOVERSION=1.22.1 TARGETARCH=x86_64 sh - ``` Move to the Istio package directory. For example, if the package is istio-1.22.1: ``` $ cd istio-1.22.1 ``` The installation directory contains: Add the istioctl client to your path (Linux or macOS): ``` $ export PATH=$PWD/bin:$PATH ``` For this installation, we use the demo configuration profile. Its selected to have a good set of defaults for testing, but there are other profiles for production or performance testing. ``` $ istioctl install --set profile=demo -y Istio core installed Istiod installed Egress gateways installed Ingress gateways installed Installation complete ``` Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later: ``` $ kubectl label namespace default istio-injection=enabled namespace/default labeled ``` Deploy the Bookinfo sample application: ``` $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created ``` The application will start. As each pod becomes ready, the Istio sidecar will be deployed along with it. ``` $ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE details ClusterIP 10.0.0.212 <none> 9080/TCP 29s kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 25m productpage ClusterIP 10.0.0.57 <none> 9080/TCP 28s ratings ClusterIP 10.0.0.33 <none> 9080/TCP 29s reviews ClusterIP 10.0.0.28 <none> 9080/TCP 29s ``` and ``` $ kubectl get pods NAME READY STATUS RESTARTS AGE details-v1-558b8b4b76-2llld 2/2 Running 0 2m41s productpage-v1-6987489c74-lpkgl 2/2 Running 0 2m40s ratings-v1-7dc98c7588-vzftc 2/2 Running 0 2m41s reviews-v1-7f99cc4496-gdxfn 2/2 Running 0 2m41s reviews-v2-7d79d5bd5d-8zzqd 2/2 Running 0 2m41s reviews-v3-7dbcdcbc56-m8dph 2/2 Running 0 2m41s ``` Verify everything is working correctly up to this point. 
Run this command to see if the app is running inside the cluster and serving HTML pages by checking for the page title in the response: ``` $ kubectl exec \"$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')\" -c ratings -- curl -sS productpage:9080/productpage | grep -o \"<title>.*</title>\" <title>Simple Bookstore App</title> ``` The Bookinfo application is deployed but not accessible from the outside. To make it accessible, you need to create an Istio Ingress Gateway, which maps a path to a route at the edge of your" }, { "data": "Associate this application with the Istio gateway: ``` $ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@ gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created ``` Ensure that there are no issues with the configuration: ``` $ istioctl analyze No validation issues found when analyzing namespace: default. ``` Follow these instructions to set the INGRESSHOST and INGRESSPORT variables for accessing the gateway. Use the tabs to choose the instructions for your chosen platform: Run this command in a new terminal window to start a Minikube tunnel that sends traffic to your Istio Ingress Gateway. This will provide an external load balancer, EXTERNAL-IP, for service/istio-ingressgateway. ``` $ minikube tunnel ``` Set the ingress host and ports: ``` $ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') $ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}') $ export SECUREINGRESSPORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}') ``` Ensure an IP address and ports were successfully assigned to each environment variable: ``` $ echo \"$INGRESS_HOST\" 127.0.0.1 ``` ``` $ echo \"$INGRESS_PORT\" 80 ``` ``` $ echo \"$SECUREINGRESSPORT\" 443 ``` Execute the following command to determine if your Kubernetes cluster is running in an environment that supports external load balancers: ``` $ kubectl get svc istio-ingressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-ingressgateway LoadBalancer 172.21.109.129 130.211.10.121 80:31380/TCP,443:31390/TCP,31400:31400/TCP 17h ``` If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the services node port. Choose the instructions corresponding to your environment: Follow these instructions if you have determined that your environment has an external load balancer. Set the ingress IP and ports: ``` $ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') $ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}') $ export SECUREINGRESSPORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}') ``` In certain environments, the load balancer may be exposed using a host name, instead of an IP address. 
In this case, the ingress gateways EXTERNAL-IP value will not be an IP address, but rather a host name, and the above command will have failed to set the INGRESS_HOST environment variable. Use the following command to correct the INGRESS_HOST value: ``` $ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') ``` Follow these instructions if your environment does not have an external load balancer and choose a node port instead. Set the ingress ports: ``` $ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}') $ export SECUREINGRESSPORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}') ``` GKE: ``` $ export INGRESS_HOST=worker-node-address ``` You need to create firewall rules to allow the TCP traffic to the ingressgateway services ports. Run the following commands to allow the traffic for the HTTP port, the secure port (HTTPS) or both: ``` $ gcloud compute firewall-rules create allow-gateway-http --allow \"tcp:$INGRESS_PORT\" $ gcloud compute firewall-rules create allow-gateway-https --allow \"tcp:$SECUREINGRESSPORT\" ``` IBM Cloud Kubernetes Service: ``` $ ibmcloud ks workers --cluster cluster-name-or-id $ export INGRESS_HOST=public-IP-of-one-of-the-worker-nodes ``` Docker For Desktop: ``` $ export INGRESS_HOST=127.0.0.1 ``` Other environments: ``` $ export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o" }, { "data": "``` Set GATEWAY_URL: ``` $ export GATEWAYURL=$INGRESSHOST:$INGRESS_PORT ``` Ensure an IP address and port were successfully assigned to the environment variable: ``` $ echo \"$GATEWAY_URL\" 127.0.0.1:80 ``` Confirm that the Bookinfo application is accessible from outside by viewing the Bookinfo product page using a browser. Run the following command to retrieve the external address of the Bookinfo application. ``` $ echo \"http://$GATEWAY_URL/productpage\" ``` Paste the output from the previous command into your web browser and confirm that the Bookinfo product page is displayed. Istio integrates with several different telemetry applications. These can help you gain an understanding of the structure of your service mesh, display the topology of the mesh, and analyze the health of your mesh. Use the following instructions to deploy the Kiali dashboard, along with Prometheus, Grafana, and Jaeger. Install Kiali and the other addons and wait for them to be deployed. ``` $ kubectl apply -f samples/addons $ kubectl rollout status deployment/kiali -n istio-system Waiting for deployment \"kiali\" rollout to finish: 0 of 1 updated replicas are available... deployment \"kiali\" successfully rolled out ``` Access the Kiali dashboard. ``` $ istioctl dashboard kiali ``` In the left navigation menu, select Graph and in the Namespace drop down, select default. To see trace data, you must send requests to your service. The number of requests depends on Istios sampling rate and can be configured using the Telemetry API. With the default sampling rate of 1%, you need to send at least 100 requests before the first trace is visible. To send a 100 requests to the productpage service, use the following command: ``` $ for i in $(seq 1 100); do curl -s -o /dev/null \"http://$GATEWAY_URL/productpage\"; done ``` The Kiali dashboard shows an overview of your mesh with the relationships between the services in the Bookinfo sample application. 
It also provides filters to visualize the traffic flow. Congratulations on completing the evaluation installation! These tasks are a great place for beginners to further evaluate Istios features using this demo installation: Before you customize Istio for production use, see these resources: We welcome you to ask questions and give us feedback by joining the Istio community. To delete the Bookinfo sample application and its configuration, see Bookinfo cleanup. The Istio uninstall deletes the RBAC permissions and all resources hierarchically under the istio-system namespace. It is safe to ignore errors for non-existent resources because they may have been deleted hierarchically. ``` $ kubectl delete -f @samples/addons@ $ istioctl uninstall -y --purge ``` The istio-system namespace is not removed by default. If no longer needed, use the following command to remove it: ``` $ kubectl delete namespace istio-system ``` The label to instruct Istio to automatically inject Envoy sidecar proxies is not removed by default. If no longer needed, use the following command to remove it: ``` $ kubectl label namespace default istio-injection- ``` Getting Started with Istio and Kubernetes Gateway API Try Istios features quickly and easily. Installing Gateways Install and customize Istio Gateways. Kubernetes Native Sidecars in Istio Demoing the new SidecarContainers feature with Istio. Deploying Istio Control Planes Outside the Mesh A new deployment model for Istio. Safely Upgrade Istio using a Canary Control Plane Deployment Simplifying Istio upgrades by offering safe canary deployments of the control plane. DNS Certificate Management Provision and manage DNS certificates in Istio." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Istio", "subcategory": "Service Mesh" }
[ { "data": "Istio 1.22.1 is now available! Click here to learn more Learn how to deploy, use, and operate Istio. Learn about the different parts of the Istio system and the abstractions it uses. Information for setting up and operating Istio in sidecar mode. Information for setting up and operating Istio with support for ambient mode. How to do single specific targeted activities with the Istio system. A variety of fully working example uses for Istio that you can experiment with. Concepts, tools, and techniques to deploy and manage an Istio mesh. Information relating to Istio releases. Detailed authoritative reference material such as command-line options, configuration options, and API calling parameters. In addition to the above documentation links, please consider the following resources:" } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Merbridge", "subcategory": "Service Mesh" }
[ { "data": "1 minute read Get all information you need to know about Merbridge. Merbridge is eBPF-based and can accelerate the data plane of service meshes with a shorter packet datapath than iptables. This page outlines the features and scenarios of Merbridge, as well as its competitive advantages. This page helps you quickly get started with Merbridge. This page describes some key concepts about Merbridge. This page helps you make contributions to Merbridge. Merbridge" } ]
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Merbridge", "subcategory": "Service Mesh" }
[ { "data": "1 minute read Get all information you need to know about Merbridge. Merbridge is eBPF-based and can accelerate the data plane of service meshes with a shorter packet datapath than iptables. This page outlines the features and scenarios of Merbridge, as well as its competitive advantages. This page helps you quickly get started with Merbridge. This page describes some key concepts about Merbridge. This page helps you make contributions to Merbridge. Merbridge" } ]
{ "category": "Orchestration & Management", "file_name": "docs.openservicemesh.io.md", "project_name": "Open Service Mesh", "subcategory": "Service Mesh" }
[ { "data": "You are viewing docs for the v1.0 release. View the docs for the latest release here Viewing v1.0 docs. Latest changes. A simple, complete, and standalone service mesh. OSM runs on Kubernetes. The OSM control plane implements Envoys xDS and is configured with SMI APIs. OSM injects an Envoy proxy as a sidecar container next to each instance of an application. To learn more about OSM: Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve. Open Service Mesh Authors 2023 | Documentation Distributed under CC-BY-4.0. 2023 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Orchestration & Management", "file_name": "#dingtalk.md", "project_name": "OpenSergo", "subcategory": "Service Mesh" }
[ { "data": "This topic is about how to set up and use OpenSergo Dashboard UI. OpenSergo Dashbaord requires Java 8 && Maven >= 3.6.0. There are two ways to get OpenSergo Dashboard. You could download latest version OpenSergo Dashboard as opensergo-dashboard-${version}.zip. ``` unzip opensergo-dashboard-$version.zipcd opensergo-dashboard-$version./bin/startup.sh``` You could visit http://localhost:8080/ to access OpenSergo Dashboard ``` ./bin/shutdown.sh``` Or kill the Java process." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "OpenSergo", "subcategory": "Service Mesh" }
[ { "data": "This topic is about how to set up and use OpenSergo Dashboard UI. OpenSergo Dashbaord requires Java 8 && Maven >= 3.6.0. There are two ways to get OpenSergo Dashboard. You could download latest version OpenSergo Dashboard as opensergo-dashboard-${version}.zip. ``` unzip opensergo-dashboard-$version.zipcd opensergo-dashboard-$version./bin/startup.sh``` You could visit http://localhost:8080/ to access OpenSergo Dashboard ``` ./bin/shutdown.sh``` Or kill the Java process." } ]
{ "category": "Orchestration & Management", "file_name": "connect.md", "project_name": "Service Mesh Interface (SMI)", "subcategory": "Service Mesh" }
[ { "data": "Consul service mesh provides service-to-service connection authorization and encryption using mutual Transport Layer Security (TLS). Applications can use sidecar proxies in a service mesh configuration to establish TLS connections for inbound and outbound connections without being aware of the service mesh at all. Applications may also natively integrate with Consul service mesh for optimal performance and security. Consul service mesh can help you secure your services and provide data about service-to-service communications. The noun connect is used throughout this documentation to refer to the connect subsystem that provides Consul's service mesh capabilities. Where you encounter the noun connect, it is usually functionality specific to service mesh. Review the video below to learn more about Consul service mesh from HashiCorp's co-founder Armon. Consul service mesh enables secure deployment best-practices with automatic service-to-service encryption, and identity-based authorization. Consul uses the registered service identity, rather than IP addresses, to enforce access control with intentions. This makes it easier to control access and enables services to be rescheduled by orchestrators, including Kubernetes and Nomad. Intention enforcement is network agnostic, so Consul service mesh works with physical networks, cloud networks, software-defined networks, cross-cloud, and more. One of the key benefits of Consul service mesh is the uniform and consistent view it can provide of all the services on your network, irrespective of their different programming languages and frameworks. When you configure Consul service mesh to use sidecar proxies, those proxies see all service-to-service traffic and can collect data about it. Consul service mesh can configure Envoy proxies to collect layer 7 metrics and export them to tools like Prometheus. Correctly instrumented applications can also send open tracing data through Envoy. Complete the following tutorials try Consul service mesh in different environments: The Getting Started with Consul Service Mesh collection walks you through installing Consul as service mesh for Kubernetes using the Helm chart, deploying services in the service mesh, and using intentions to secure service communications. The Getting Started With Consul for Kubernetes tutorials guides you through installing Consul on Kubernetes to set up a service mesh for establishing communication between Kubernetes services. The Secure Service-to-Service Communication tutorial is a simple walk through of connecting two services on your local machine and configuring your first intention. The Kubernetes tutorial walks you through configuring Consul service mesh in Kubernetes using the Helm chart, and using intentions. You can run the guide on Minikube or an existing Kubernetes cluster. The observability tutorial shows how to deploy a basic metrics collection and visualization pipeline on a Minikube or Kubernetes cluster using the official Helm charts for Consul, Prometheus, and Grafana. On this page:" } ]
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "Slime", "subcategory": "Service Mesh" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as" }, { "data": "such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Slime", "subcategory": "Service Mesh" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "operator. For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` Or, to search for JavaScript files within a src directory, you could use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` To search for a filename that contains a special character such as * or ?, use a quoted string: ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
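The quoting and escaping rules described in this entry are easy to get wrong when queries are built programmatically. As a rough illustration only, here is a small Go sketch that applies those rules: backslashes are doubled, embedded quotation marks are escaped with a backslash, and terms, qualifiers, and boolean operators are joined with spaces. The helper names are invented for the example and are not part of any official client library.

```go
package main

import (
	"fmt"
	"strings"
)

// quoteExact wraps a term in double quotes so it matches as an exact string
// (including whitespace), escaping backslashes and embedded quotes with a
// backslash, as the search syntax above requires. Backslashes are escaped
// first so the added quote escapes are not doubled afterwards.
func quoteExact(term string) string {
	escaped := strings.ReplaceAll(term, `\`, `\\`)
	escaped = strings.ReplaceAll(escaped, `"`, `\"`)
	return `"` + escaped + `"`
}

func main() {
	// Exact-match a string that itself contains quotation marks.
	fmt.Println(quoteExact(`name = "tensorflow"`))

	// Terms, qualifiers, and boolean operators are separated by spaces.
	query := strings.Join([]string{
		"repo:github-linguist/linguist",
		"language:go",
		quoteExact("sparse index"),
		"NOT",
		"path:testing",
	}, " ")
	fmt.Println(query)
}
```

Running this prints the escaped exact-match term "name = \"tensorflow\"" and a combined query of the form repo:github-linguist/linguist language:go "sparse index" NOT path:testing, both consistent with the syntax rules above.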
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Avi Networks", "subcategory": "Service Proxy" }
[ { "data": "An overview of the general architecture of Avi Vantage. Applications managed by Avi Vantage are grouped into independently configurable \"clouds.\" The specific technologies for which Avi Vantage is optimized. Best practices for Controller network configuration. The core of the Avi Vantage load-balancing and proxy functionality. Service Engines handle all data plane operations. Service Engines are grouped together for common configuration and high availability. Avi Vantage groups servers into pools to perform health monitoring, load balancing, persistence, and other functions. Learn more about Avi Vantage load balancing and SE autoscaling functionality. Dynamic scaling of back-end server pools in response to service load. Automatic health monitoring of deployed servers. Best practices for high availability when deploying Avi Vantage. Best practices for operating Avi Vantage securely. Avi Vantage rate shaping and throttling capabilities. Overview of Avi Vantage acceleration capabilities. Overview of Avi Vantage analytics capabilities. Overview of Avi Vantage real-time visibility and traffic inspection capabilities. Options for integrating with third-party virtualized and cloud environments. Overview of Avi Vantage administration options. Guides for installing Avi Vantage in specific environments. Avi Vantage configuration guide. Avi Vantage command-line interface guide. Guide to using DataScript, the Avi Vantage scripting language and environment. Reference manual for the Avi Vantage REST API. Important details for DevOps and transition teams. Version: 22.1" } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Caddy", "subcategory": "Service Proxy" }
[ { "data": "The Caddyfile is a convenient Caddy configuration format for humans. It is most people's favorite way to use Caddy because it is easy to write, easy to understand, and expressive enough for most use cases. It looks like this: ``` example.com { root * /var/www/wordpress encode gzip php_fastcgi unix//run/php/php-version-fpm.sock file_server } ``` (That's a real, production-ready Caddyfile that serves WordPress with fully-managed HTTPS.) The basic idea is that you first type the address of your site, then the features or functionality you need your site to have. View more common patterns. The Caddyfile is just a config adapter for Caddy. It is usually preferred when manually crafting configurations by hand, but is not as expressive, flexible, or programmable as Caddy's native JSON structure. If you are automating your Caddy configurations/deployments, you may wish to use JSON with Caddy's API. (You can actually use the Caddyfile with the API too, just to a limited extent.)" } ]
{ "category": "Orchestration & Management", "file_name": "caddyfile.md", "project_name": "Caddy", "subcategory": "Service Proxy" }
[ { "data": "Caddy's native config language is JSON, but writing JSON by hand can be tedious and error-prone. That's why Caddy supports being configured with other languages through config adapters. They are Caddy plugins which make it possible to use config in your preferred format by outputting Caddy JSON for you. For example, a config adapter could turn your NGINX config into Caddy JSON. The following config adapters are currently available (some are third-party projects): You can use a config adapter by specifying it on the command line by using the --adapter flag on most subcommands that take a config: ``` caddy run --config caddy.yaml --adapter yaml``` Or via the API at the /load endpoint: ``` curl localhost:2019/load \\ -H \"Content-Type: application/yaml\" \\ --data-binary @caddy.yaml``` If you only want to get the output JSON without running it, you can use the caddy adapt command: ``` caddy adapt --config caddy.yaml --adapter yaml``` Not all config languages are 100% compatible with Caddy; some features or behaviors simply don't translate well or are not yet programmed into the adapter or Caddy itself. Some adapters do a 1-1 translation, like YAML->JSON or TOML->JSON. Others are designed specifically for Caddy, like the Caddyfile. Generally, these adapters will always work. However, not all adapters work all of the time. Config adapters do their best to translate your input to Caddy JSON with the highest fidelity and correctness. Because this conversion process is not guaranteed to be complete and correct all the time, we don't call them \"converters\" or \"translators\". They are \"adapters\" since they will at least give you a good starting point to finish crafting your final JSON config. Config adapters can output the resulting JSON, warnings, and errors. JSON results if no errors occur. Errors occur when something is wrong with the input (for example, syntax errors). Warnings are emitted when something is wrong with the adaptation but which is not necessarily fatal (for example, feature not supported). Caution is advised if using configs that were adapted with warnings." } ]
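For completeness, here is a minimal Go sketch of the same idea as the curl example above: submitting a YAML config to a running Caddy instance so that a config adapter translates it to Caddy JSON. It is only an illustration under stated assumptions: the default admin address localhost:2019, a local file named caddy.yaml, and a Caddy build that has the yaml adapter plugged in.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Read a YAML config from disk; the filename is an assumption for the
	// example, mirroring the curl command above.
	cfg, err := os.ReadFile("caddy.yaml")
	if err != nil {
		log.Fatal(err)
	}

	// POST it to the admin endpoint's /load route. The value after the
	// slash in the Content-Type selects the config adapter ("yaml" here),
	// which must be plugged in to the Caddy build being targeted.
	resp, err := http.Post("http://localhost:2019/load", "application/yaml", bytes.NewReader(cfg))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("load failed: %s: %s", resp.Status, body)
	}
	fmt.Println("configuration adapted and loaded")
}
```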
{ "category": "Orchestration & Management", "file_name": "api.md", "project_name": "Caddy", "subcategory": "Service Proxy" }
[ { "data": "Caddy is configured through an administration endpoint which can be accessed via HTTP using a REST API. You can configure this endpoint in your Caddy config. Default address: localhost:2019 The default address can be changed by setting the CADDY_ADMIN environment variable. Some installation methods may set this to something different. The address in the Caddy config always takes precedence over the default. The latest configuration will be saved to disk after any changes (unless disabled). You can resume the last working config after a restart with caddy run --resume, which guarantees config durability in the event of a power cycle or similar. To get started with the API, try our API tutorial or, if you only have a minute, our API quick-start guide. POST /load Sets or replaces the active configuration POST /stop Stops the active configuration and exits the process GET /config/[path] Exports the config at the named path POST /config/[path] Sets or replaces object; appends to array PUT /config/[path] Creates new object; inserts into array PATCH /config/[path] Replaces an existing object or array element DELETE /config/[path] Deletes the value at the named path Using @id in JSON Easily traverse into the config structure Concurrent config changes Avoid collisions when making unsynchronized changes to config POST /adapt Adapts a configuration to JSON without running it GET /pki/ca/<id> Returns information about a particular PKI app CA GET /pki/ca/<id>/certificates Returns the certificate chain of a particular PKI app CA GET /reverse_proxy/upstreams Returns the current status of the configured proxy upstreams Sets Caddy's configuration, overriding any previous configuration. It blocks until the reload completes or fails. Configuration changes are lightweight, efficient, and incur zero downtime. If the new config fails for any reason, the old config is rolled back into place without downtime. This endpoint supports different config formats using config adapters. The request's Content-Type header indicates the config format used in the request body. Usually, this should be application/json which represents Caddy's native config format. For another config format, specify the appropriate Content-Type so that the value after the forward slash / is the name of the config adapter to use. For example, when submitting a Caddyfile, use a value like text/caddyfile; or for JSON 5, use a value such as application/json5; etc. If the new config is the same as the current one, no reload will occur. To force a reload, set Cache-Control: must-revalidate in the request headers. Set a new active configuration: ``` curl \"http://localhost:2019/load\" \\ -H \"Content-Type: application/json\" \\ -d @caddy.json``` Note: curl's -d flag removes newlines, so if your config format is sensitive to line breaks (e.g. the Caddyfile), use --data-binary instead: ``` curl \"http://localhost:2019/load\" \\ -H \"Content-Type: text/caddyfile\" \\ --data-binary @Caddyfile``` Gracefully shuts down the server and exits the process. To only stop the running configuration without exiting the process, use DELETE /config/. Stop the process: ``` curl -X POST \"http://localhost:2019/stop\"``` Exports Caddy's current configuration at the named path. Returns a JSON body. 
Export entire config and pretty-print it: ``` curl \"http://localhost:2019/config/\" | jq { \"apps\": { \"http\": { \"servers\": { \"myserver\": { \"listen\": [ \":443\" ], \"routes\": [ { \"match\": [ { \"host\": [" }, { "data": "] } ], \"handle\": [ { \"handler\": \"file_server\" } ] } ] } } } } }``` Export just the listener addresses: ``` curl \"http://localhost:2019/config/apps/http/servers/myserver/listen\" [\":443\"]``` Changes Caddy's configuration at the named path to the JSON body of the request. If the destination value is an array, POST appends; if an object, it creates or replaces. As a special case, many items can be added to an array if: In this case, the elements in the payload's array will be expanded, and each one will be appended to the destination array. In Go terms, this would have the same effect as: ``` baseSlice = append(baseSlice, newElems...) ``` Add a listener address: ``` curl \\ -H \"Content-Type: application/json\" \\ -d '\":8080\"' \\ \"http://localhost:2019/config/apps/http/servers/myserver/listen\"``` Add multiple listener addresses: ``` curl \\ -H \"Content-Type: application/json\" \\ -d '[\":8080\", \":5133\"]' \\ \"http://localhost:2019/config/apps/http/servers/myserver/listen/...\"``` Changes Caddy's configuration at the named path to the JSON body of the request. If the destination value is a position (index) in an array, PUT inserts; if an object, it strictly creates a new value. Add a listener address in the first slot: ``` curl -X PUT \\ -H \"Content-Type: application/json\" \\ -d '\":8080\"' \\ \"http://localhost:2019/config/apps/http/servers/myserver/listen/0\"``` Changes Caddy's configuration at the named path to the JSON body of the request. PATCH strictly replaces an existing value or array element. Replace the listener addresses: ``` curl -X PATCH \\ -H \"Content-Type: application/json\" \\ -d '[\":8081\", \":8082\"]' \\ \"http://localhost:2019/config/apps/http/servers/myserver/listen\"``` Removes Caddy's configuration at the named path. DELETE deletes the target value. To unload the entire current configuration but leave the process running: ``` curl -X DELETE \"http://localhost:2019/config/\"``` To stop only one of your HTTP servers: ``` curl -X DELETE \"http://localhost:2019/config/apps/http/servers/myserver\"``` You can embed IDs in your JSON document for easier direct access to those parts of the JSON. Simply add a field called \"@id\" to an object and give it a unique name. For example, if you had a reverse proxy handler that you wanted to access frequently: ``` { \"@id\": \"my_proxy\", \"handler\": \"reverse_proxy\" } ``` To use it, simply make a request to the /id/ API endpoint in the same way you would to the corresponding /config/ endpoint, but without the whole path. The ID takes the request directly into that scope of the config for you. For example, to access the upstreams of the reverse proxy without an ID, the path would be something like ``` /config/apps/http/servers/myserver/routes/1/handle/0/upstreams ``` but with an ID, the path becomes ``` /id/my_proxy/upstreams ``` which is much easier to remember and write by hand. This section is for all /config/ endpoints. It is experimental and subject to change. Caddy's config API provides ACID guarantees for individual requests, but changes that involve more than a single request are subject to collisions or data loss if not properly synchronized. 
For example, two clients may GET /config/foo at the same time, make an edit within that scope (config path), then call POST|PUT|PATCH|DELETE /config/foo/... at the same time to apply their changes, resulting in a collision: either one will overwrite the other, or the second might leave the config in an unintended state since it was applied to a different version of the config than it was prepared against. This is because the changes are not aware of each other. Caddy's API does not support transactions spanning multiple requests, and HTTP is a stateless" }, { "data": "protocol. However, you can use the Etag trailer and If-Match header to detect and prevent collisions for any and all changes as a kind of optimistic concurrency control. This is useful if there is any chance that you are using Caddy's /config/... endpoints concurrently without synchronization. All responses to GET /config/... requests have an HTTP trailer called Etag that contains the path and a hash of the contents in that scope (e.g. Etag: \"/config/apps/http/servers 65760b8e\"). Simply set the If-Match header on a mutative request to that of an Etag trailer from a previous GET request. The basic algorithm for this is as follows: get the config at the scope you intend to change and note the value of its Etag trailer; make your changes to that config; apply the changes with the If-Match header set to that Etag value; and if the write is rejected because the Etag no longer matches, start over from the first step. This algorithm safely allows multiple, overlapping changes to Caddy's configuration without explicit synchronization. It is designed so that simultaneous changes to different parts of the config don't require a retry: only changes that overlap the same scope of the config can possibly cause a collision and thus require a retry. Adapts a configuration to Caddy JSON without loading or running it. If successful, the resulting JSON document is returned in the response body. The Content-Type header is used to specify the configuration format in the same way that /load works. For example, to adapt a Caddyfile, set Content-Type: text/caddyfile. This endpoint will adapt any configuration format as long as the associated config adapter is plugged in to your Caddy build. Adapt a Caddyfile to JSON: ``` curl \"http://localhost:2019/adapt\" \\ -H \"Content-Type: text/caddyfile\" \\ --data-binary @Caddyfile``` Returns information about a particular PKI app CA by its ID. If the requested CA ID is the default (local), then the CA will be provisioned if it has not already been. Other CA IDs will return an error if they have not been previously provisioned. ``` curl \"http://localhost:2019/pki/ca/local\" | jq { \"id\": \"local\", \"name\": \"Caddy Local Authority\", \"root_common_name\": \"Caddy Local Authority - 2022 ECC Root\", \"intermediate_common_name\": \"Caddy Local Authority - ECC Intermediate\", \"root_certificate\": \"-----BEGIN CERTIFICATE-----\\nMIIB ... gRw==\\n-----END CERTIFICATE-----\\n\", \"intermediate_certificate\": \"-----BEGIN CERTIFICATE-----\\nMIIB ... FzQ==\\n-----END CERTIFICATE-----\\n\" }``` Returns the certificate chain of a particular PKI app CA by its ID. If the requested CA ID is the default (local), then the CA will be provisioned if it has not already been. Other CA IDs will return an error if they have not been previously provisioned. This endpoint is used internally by the caddy trust command to allow installing the CA's root certificate to your system's trust store. ``` curl \"http://localhost:2019/pki/ca/local/certificates\" -----BEGIN CERTIFICATE----- MIIByDCCAW2gAwIBAgIQViS12trTXBS/nyxy7Zg9JDAKBggqhkjOPQQDAjAwMS4w ... By75JkP6C14OfU733oElfDUMa5ctbMY53rWFzQ== -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIBpDCCAUmgAwIBAgIQTS5a+3LUKNxC6qN3ZDR8bDAKBggqhkjOPQQDAjAwMS4w ... 
9M9t0FwCIQCAlUr4ZlFzHE/3K6dARYKusR1ck4A3MtucSSyar6lgRw== -----END CERTIFICATE-----``` Returns the current status of the configured reverse proxy upstreams (backends) as a JSON document. ``` curl \"http://localhost:2019/reverse_proxy/upstreams\" | jq [ {\"address\": \"10.0.1.1:80\", \"num_requests\": 4, \"fails\": 2}, {\"address\": \"10.0.1.2:80\", \"num_requests\": 5, \"fails\": 4}, {\"address\": \"10.0.1.3:80\", \"num_requests\": 3, \"fails\": 3} ]``` Each entry in the JSON array is a configured upstream stored in the global upstream pool. If your goal is to determine a backend's availability, you will need to cross-check relevant properties of the upstream against the handler configuration you are utilizing. For example, if you've enabled passive health checks for your proxies, then you need to also take into consideration the fails and num_requests values to determine if an upstream is considered available: check that the fails amount is less than your configured maximum amount of failures for your proxy (i.e. max_fails), and that num_requests is less than or equal to your configured amount of maximum requests per upstream (i.e. unhealthy_request_count for the whole proxy, or max_requests for individual upstreams)." } ]
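To make the concurrent config changes section above more concrete, here is a rough Go sketch of the Etag / If-Match retry loop it describes. It is an illustration under stated assumptions, not part of Caddy's API surface: the config path and payload reuse the listener example shown earlier, the retry bound is arbitrary, and a failed If-Match precondition is assumed to surface as HTTP 412 Precondition Failed, the conventional status for that header.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

const admin = "http://localhost:2019" // default admin address

// setWithIfMatch PATCHes path with payload, guarding the write with the Etag
// trailer from a fresh GET of the same scope. It reports whether a failure is
// retryable (the config changed underneath us) so the caller can start over.
func setWithIfMatch(client *http.Client, path string, payload []byte) (retryable bool, err error) {
	// 1. GET the scope. The Etag arrives as an HTTP trailer, so the body
	//    must be read to completion before resp.Trailer is populated.
	resp, err := client.Get(admin + path)
	if err != nil {
		return false, err
	}
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
	etag := resp.Trailer.Get("Etag")

	// 2. Send the mutation with If-Match set to that Etag value.
	req, err := http.NewRequest(http.MethodPatch, admin+path, bytes.NewReader(payload))
	if err != nil {
		return false, err
	}
	req.Header.Set("Content-Type", "application/json")
	if etag != "" {
		req.Header.Set("If-Match", etag)
	}
	resp2, err := client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp2.Body.Close()
	io.Copy(io.Discard, resp2.Body)

	switch {
	case resp2.StatusCode < 300:
		return false, nil // success
	case resp2.StatusCode == http.StatusPreconditionFailed:
		// Assumed status for a failed If-Match precondition: re-read and retry.
		return true, fmt.Errorf("config changed concurrently")
	default:
		return false, fmt.Errorf("unexpected status %s", resp2.Status)
	}
}

func main() {
	client := &http.Client{}
	payload := []byte(`[":8081", ":8082"]`) // example listener addresses
	path := "/config/apps/http/servers/myserver/listen"
	for attempt := 0; attempt < 3; attempt++ { // bounded retries (assumption)
		retry, err := setWithIfMatch(client, path, payload)
		if err == nil {
			fmt.Println("updated")
			return
		}
		if !retry {
			log.Fatal(err)
		}
	}
	log.Fatal("gave up after repeated conflicts")
}
```

The important detail is that the Etag is delivered as an HTTP trailer rather than a header, so the GET response body has to be drained before the trailer value can be read; each retry repeats the read-modify-write cycle against the latest version of that config scope.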
9M9t0FwCIQCAlUr4ZlFzHE/3K6dARYKusR1ck4A3MtucSSyar6lgRw== --END CERTIFICATE--``` Returns the current status of the configured reverse proxy upstreams (backends) as a JSON document. ``` curl \"http://localhost:2019/reverse_proxy/upstreams\" | jq [ {\"address\": \"10.0.1.1:80\", \"num_requests\": 4, \"fails\": 2}, {\"address\": \"10.0.1.2:80\", \"num_requests\": 5, \"fails\": 4}, {\"address\": \"10.0.1.3:80\", \"num_requests\": 3, \"fails\": 3} ]``` Each entry in the JSON array is a configured upstream stored in the global upstream pool. If your goal is to determine a backend's availability, you will need to cross-check relevant properties of the upstream against the handler configuration you are utilizing. For example, if you've enabled passive health checks for your proxies, then you need to also take into consideration the fails and num_requests values to determine if an upstream is considered available: check that the fails amount is less than your configured maximum amount of failures for your proxy (i.e. max_fails), and that num_requests is less than or equal to your configured amount of maximum requests per upstream (i.e. unhealthy_request_count for the whole proxy, or max_requests for individual upstreams)." } ]
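A small script can automate that cross-check. The sketch below assumes jq is installed and treats a single recorded failure as the cut-off, which stands in for whatever max_fails value your own passive health check configuration uses:
```
# List upstream addresses whose passive-health "fails" count has reached
# the example threshold (assumed max_fails of 1); requires curl and jq.
curl -s http://localhost:2019/reverse_proxy/upstreams \
  | jq -r '.[] | select(.fails >= 1) | "\(.address) fails=\(.fails) num_requests=\(.num_requests)"'
```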
{ "category": "Orchestration & Management", "file_name": "1.29.md", "project_name": "Contour", "subcategory": "Service Proxy" }
[ { "data": "Contour is an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile. See the full Contour Philosophy page. Contour bridges other solution gaps in several ways. Contour is tested with Kubernetes clusters running version 1.21 and later. Getting started with Contour is as simple as one command. See the Getting Started document. If you encounter issues, review the troubleshooting page, file an issue, or talk to us in the project's community chat. Read our getting started documentation." } ]
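For reference, the one-command install referred to above is typically the quickstart manifest. The URL below is an assumption based on the project's Getting Started guide, so prefer the command shown there if they differ:
```
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
```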
{ "category": "Orchestration & Management", "file_name": "docs.md", "project_name": "Caddy", "subcategory": "Service Proxy" }
[ { "data": "Caddy is a powerful, extensible platform to serve your sites, services, and apps, written in Go. If you're new to Caddy, the way you serve the Web is about to change. Most people use Caddy as a web server or proxy, but at its core, Caddy is a server of servers. With the requisite modules, it can take on the role of any long-running process! Configuration is both dynamic and exportable with Caddy's API. Although no config files required, you can still use them; most people's favorite way of configuring Caddy is using the Caddyfile. The format of the config document takes many forms with config adapters, but Caddy's native config language is JSON. Caddy compiles for all major platforms and has no runtime dependencies. No problem! We suggest that everyone regardless of experience go through our Getting Started guide. It will give you a well-rounded perspective on your new web server that will be invaluable as you continue learning. If you only have a few minutes and need to hit the ground running, try one of our quick starts. For expanded content like specific examples, check out our community wiki - then contribute to it! We recommend sticking to these official resources to install, configure, and run Caddy, rather than running commands or copying config snippets from random blogs and Q&A boards. You will find that our material is generally more accurate and up-to-date. We also encourage you to craft your own configurations to ensure that you understand how your server works so you'll be more able to fix problems if they arise later on. But whatever you do, enjoy using your new web server. Caddy is an experience unlike any other server software you've used! If you need help using Caddy, please ask nicely in our community forum. We would be happy to help you. All we ask is that you fill out the help template as thoroughly as possible, and pay it forward by helping others. We always need more helpers. Only use our issue tracker if you've positively identified a bug in Caddy or have a specific feature request. This website is maintained on GitHub. To submit improvements, open an issue or pull request. Thank you for participating in our community! We hope Caddy will serve you well." } ]
{ "category": "Orchestration & Management", "file_name": "api.md", "project_name": "F5", "subcategory": "Service Proxy" }
[ { "data": "Version notice: The F5 BIG-IP offers many programmable interfaces, from control-plane to data-plane. The documentation in this section focuses on these areas: The BIG-IP API Reference documentation contains community-contributed content. F5 does not monitor or control community code contributions. We make no guarantees or warranties regarding the available code, and it may contain errors, defects, bugs, inaccuracies, or security vulnerabilities. Your access to and use of any code available in the BIG-IP API reference guides is solely at your own risk." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Inlets", "subcategory": "Service Proxy" }
[ { "data": "Inlets documentation. Pre-reqs: You'll need Python and pip installed, then run: ``` pip install -r requirements.txt``` Local testing: ``` mkdocs serve``` Access the site at http://127.0.0.1:8000 See the inlets contribution guide. All commits must be signed-off with the CLI using git commit --sign-off" } ]
{ "category": "Orchestration & Management", "file_name": "docs.inlets.dev#connecting-with-the-inlets-community.md", "project_name": "Inlets", "subcategory": "Service Proxy" }
[ { "data": "Inlets brings secure tunnels to Cloud Native workloads. You can visit the inlets homepage at https://inlets.dev/ With inlets you are in control of your data, unlike with a SaaS tunnel where shared servers mean your data may be at risk. You can use inlets for local development and in your production environment. It works just as well on bare-metal as in VMs, containers and Kubernetes clusters. inlets is not just compatible with tricky networks and Cloud Native architecture, it was purpose-built for them. Common use-cases include: Do you want to connect to hundreds of remote services without exposing them on the Internet? You may be looking for inlets uplink Inlets tunnels connect to each other over a secure websocket with TLS encryption. Over that private connection, you can then tunnel HTTPS or TCP traffic to computers in another network or to the Internet. One of the most common use-cases is to expose a local HTTP endpoint on the Internet via a HTTPS tunnel. You may be working with webhooks, integrating with OAuth, sharing a draft of a blog post or integrating with a partner's API. After deploying an inlets HTTPS server on a public cloud VM, you can then connect the client and access it. There is more that inlets can do for you than exposing local endpoints. inlets also supports local forwarding and can be used to replace more cumbersome services like SSH, complex VPNs or expensive direct connect uplinks. Read more in the: the inlets FAQ. These guides walk you through a specific use-case with inlets. If you have questions or cannot find what you need, there are options for connecting with the community at the end of this page. Inlets can tunnel either HTTP or TCP traffic: inlets is available for Windows, MacOS (including M1) and Linux (including ARM): You can also use the container image from ghcr.io: ghcr.io/inlets/inlets-pro:latest Expose one or more HTTPS domains from your local" }, { "data": "If you don't want to use automation tools to create a server for the inlets-pro server, then you can follow this manual guide to generate and install a systemd service instead. inlets is not limited to HTTP connections, you can also tunnel TCP protocols like RDP, VNC, SSH, TLS and databases. If you want to mix HTTP and TCP tunnels on the same tunnel server, you could either only use TCP ports, or enable both. If you're looking to scale inlets to host many tunnels, then Kubernetes is probably a better option. You may have an on-premises Kubernetes cluster that needs ingress. Perhaps you have a homelab, or Raspberry Pi cluster, that you want to self host services on. Some teams want to have dev work like production, with tools Istio working locally just like in the cloud. Tutorial: Expose an Istio gateway with the inlets-operator Tutorial: Access the Kubernetes API server from anywhere like managed service See also: helm charts The Inlets Uplink distribution is a Kubernetes operator that makes it quick and easy to onboard hundreds or thousands of customers, each with their own dedicated tunnel. It can also be used for remote management and command and control of IT systems and IoT devices. Learn more: Inlets Uplink Inlets offers you multiple options to monitor your tunnels and get insight in their performance. Find out tunnel statistics, uptime and connected clients with the inlets-pro status command or collect the Prometheus metrics from the monitoring endpoint. Learn how to use inletsctl to provision tunnel servers on various public clouds. 
Learn how to set up the inlets-operator for Kubernetes, which provisions public cloud VMs and gives IP addresses to your public LoadBalancers. For news, use-cases and guides check out the blog: Watch a video, or read a blog post from the community: Open Source tools for managing inlets tunnels: Who built inlets? Inlets is a commercial solution developed and supported by OpenFaaS Ltd. You can also contact the team via the contact page. The code for this website is open source and available on GitHub inlets is proud to be featured on the Cloud Native Landscape in the Service Proxy category." } ]
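As a rough sketch of the inletsctl workflow mentioned above, the commands below create a tunnel server on an assumed provider and then connect a client that exposes a local service on port 3000. The flags, region, addresses and file paths are placeholders and can vary by version, so check inletsctl --help and inlets-pro http client --help before relying on them:
```
# Provision a tunnel server VM (provider, region and token file are placeholders).
# inletsctl prints the wss:// URL and token to use for the client when it finishes.
inletsctl create \
  --provider digitalocean \
  --region lon1 \
  --access-token-file ~/do-access-token

# Connect the client and expose a local service through the tunnel
# (URL and token come from the output of the previous command; a valid
# inlets license is also required, see the inlets docs).
inlets-pro http client \
  --url wss://203.0.113.10:8123 \
  --token "$TOKEN" \
  --upstream http://127.0.0.1:3000
```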
{ "category": "Orchestration & Management", "file_name": "monitoring-and-metrics.md", "project_name": "Inlets", "subcategory": "Service Proxy" }
[ { "data": "Inlets Uplink comes with an integrated Prometheus deployment that automatically collects metrics for each tunnel. Note: Prometheus is deployed with Inlets Uplink by default. If you don't need monitoring you can disable it in the values.yaml of the Inlets Uplink Helm chart: ``` prometheus: create: false ``` You can explore the inlets data using Prometheus's built-in expression browser. To access it, port forward the prometheus service and then navigate to http://localhost:9090/graph ``` kubectl port-forward \\ -n inlets \\ svc/prometheus 9090:9090 ``` The control-plane metrics can give you insights into the number of clients that are connected and the number of http requests made to the control-plane endpoint for each tunnel. | Metric | Type | Description | Labels | |:--|:--|:--|:--| | controlplane_connected_gauge | gauge | gauge of inlets clients connected to the control plane | tunnel_name | | controlplane_requests_total | counter | total HTTP requests processed by connecting clients on the control plane | code, tunnel_name | The data-plane metrics can give you insights into the services that are exposed through your tunnel. | Metric | Type | Description | Labels | |:--|:--|:--|:--| | dataplane_connections_gauge | gauge | gauge of connections established over data plane | port, type, tunnel_name | | dataplane_connections_total | counter | total count of connections established over data plane | port, type, tunnel_name | | dataplane_requests_total | counter | total HTTP requests processed | code, host, method, tunnel_name | | dataplane_request_duration_seconds | histogram | seconds spent serving HTTP requests | code, host, method, tunnel_name | The connections metrics show the number of connections that are open at this point in time, and on which ports. The type label indicates whether the connection is for an http or tcp upstream. The request metrics only include HTTP upstreams. These metrics can be used to get Rate, Error, Duration (RED) information for any API or website that is connected through the tunnel. Grafana can be used to visualize the data collected by the inlets uplink Prometheus instance. We provide a sample dashboard that you can use as a starting point. Inlets uplink Grafana dashboard The dashboard can help you get insights into: There are three options we recommend for getting access to Grafana. You can install Grafana in one line with arkade: ``` arkade install grafana ``` Port forward grafana and retrieve the admin password to login: ``` kubectl --namespace grafana port-forward service/grafana 3000:80 kubectl get secret --namespace grafana grafana -o jsonpath=\"{.data.admin-password}\" | base64 --decode ; echo ``` Access Grafana on http://127.0.0.1:3000 and login as admin. Before you import the dashboard, you need to add the inlets-uplink prometheus instance as a data source: Select Data sources. This opens the data sources page, which displays a list of previously configured data sources for the Grafana instance. Select Add data source and pick Prometheus from the list of supported data sources. Configure the inlets Prometheus instance as a data source: if you installed inlets uplink in a different namespace this url should be http://prometheus.<namespace>:9090 Import the inlets uplink dashboard in Grafana: Paste the dashboard JSON into the text area." } ]
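To turn the RED guidance above into concrete numbers, you can run instant queries against the port-forwarded Prometheus from the command line. The metric and label names are the ones listed in the tables above; the 5-minute window and the 5xx filter are arbitrary examples rather than anything the chart configures for you:
```
# Per-tunnel HTTP request rate over the last 5 minutes
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum by (tunnel_name) (rate(dataplane_requests_total[5m]))'

# Share of 5xx responses per tunnel over the same window
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum by (tunnel_name) (rate(dataplane_requests_total{code=~"5.."}[5m])) / sum by (tunnel_name) (rate(dataplane_requests_total[5m]))'
```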
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "MetalLB", "subcategory": "Service Proxy" }
[ { "data": "This page shows how to create an external load balancer. When creating a Service, you have the option of automatically creating a cloud load balancer. This provides an externally-accessible IP address that sends traffic to the correct port on your cluster nodes, provided your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package. You can also use an Ingress in place of Service. For more information, check the Ingress documentation. You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds: Your cluster must be running in a cloud or other environment that already has support for configuring external load balancers. To create an external load balancer, add the following line to your Service manifest: ``` type: LoadBalancer ``` Your manifest might then look like: ``` apiVersion: v1 kind: Service metadata: name: example-service spec: selector: app: example ports: port: 8765 targetPort: 9376 type: LoadBalancer ``` You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag: ``` kubectl expose deployment example --port=8765 --target-port=9376 \\ --name=example-service --type=LoadBalancer ``` This command creates a new Service using the same selectors as the referenced resource (in the case of the example above, a Deployment named example). For more information, including optional flags, refer to the kubectl expose reference. You can find the IP address created for your service by getting the service information through kubectl: ``` kubectl describe services example-service ``` which should produce output similar to: ``` Name: example-service Namespace: default Labels: app=example Annotations: <none> Selector: app=example Type: LoadBalancer IP Families: <none> IP: 10.3.22.96 IPs: 10.3.22.96 LoadBalancer Ingress: 192.0.2.89 Port: <unset> 8765/TCP TargetPort: 9376/TCP NodePort: <unset> 30593/TCP Endpoints: 172.17.0.3:9376 Session Affinity: None External Traffic Policy: Cluster Events: <none> ``` The load balancer's IP address is listed next to LoadBalancer Ingress. If you are running your service on Minikube, you can find the assigned IP address and port with: ``` minikube service example-service --url ``` By default, the source IP seen in the target container is not the original source IP of the" }, { "data": "To enable preservation of the client IP, the following fields can be configured in the .spec of the Service: Setting externalTrafficPolicy to Local in the Service manifest activates this feature. For example: ``` apiVersion: v1 kind: Service metadata: name: example-service spec: selector: app: example ports: port: 8765 targetPort: 9376 externalTrafficPolicy: Local type: LoadBalancer ``` Load balancing services from some cloud providers do not let you configure different weights for each target. With each target weighted equally in terms of sending traffic to Nodes, external traffic is not equally load balanced across different Pods. The external load balancer is unaware of the number of Pods on each node that are used as a target. Where NumServicePods << NumNodes or NumServicePods >> NumNodes, a fairly close-to-equal distribution will be seen, even without weights. 
Internal pod to pod traffic should behave similarly to ClusterIP services, with equal probability across all pods. In the usual case, the corresponding load balancer resources in the cloud provider should be cleaned up soon after a LoadBalancer type Service is deleted. But it is known that there are various corner cases where cloud resources are orphaned after the associated Service is deleted. Finalizer Protection for Service LoadBalancers was introduced to prevent this from happening. By using finalizers, a Service resource will never be deleted until the corresponding load balancer resources are also deleted. Specifically, if a Service has type LoadBalancer, the service controller will attach a finalizer named service.kubernetes.io/load-balancer-cleanup. The finalizer will only be removed after the load balancer resource is cleaned up. This prevents dangling load balancer resources even in corner cases such as the service controller crashing. It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster. When the Service type is set to LoadBalancer, Kubernetes provides functionality equivalent to type equals ClusterIP to pods within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the nodes hosting the relevant Kubernetes pods. The Kubernetes control plane automates the creation of the external load balancer, health checks (if needed), and packet filtering rules (if needed). Once the cloud provider allocates an IP address for the load balancer, the control plane looks up that external IP address and populates it into the Service object." } ]
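To see both of these details on a live object, you can read the populated ingress address and the cleanup finalizer directly from the Service; example-service is the name used in the manifests above:
```
# External IP (or hostname on some providers) filled in by the control plane
kubectl get service example-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Finalizer added by the service controller while the type is LoadBalancer
kubectl get service example-service \
  -o jsonpath='{.metadata.finalizers}'
```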
{ "category": "Orchestration & Management", "file_name": "github-privacy-statement.md", "project_name": "Netflix Zuul", "subcategory": "Service Proxy" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
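Putting a few of the qualifiers above together, a single query can scope a search to one repository and language while excluding a path pattern; the repository name is just the example used earlier:
```
repo:github-linguist/linguist language:ruby path:*.rb NOT path:test
```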
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "Netflix Zuul", "subcategory": "Service Proxy" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "Netflix Zuul", "subcategory": "Service Proxy" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Orchestration & Management", "file_name": "contribution-guideline.html.md", "project_name": "Sentinel", "subcategory": "Service Proxy" }
[ { "data": "Welcome to Sentinel! This document is a guideline about how to contribute to Sentinel. If you find something incorrect or missing, please leave comments / suggestions. Please make sure to read and observe our Code of Conduct. You should have JDK 1.8 or later installed in your system. We are always very happy to have contributions, whether for typo fix, bug fix or big new features. Please do not ever hesitate to ask a question or send a pull request. We strongly value documentation and integration with other projects. We are very glad to accept improvements for these aspects. We use the master branch as the development branch, which indicates that this is a unstable branch. Here are the workflow for contributors: Please follow the pull request template. Please make sure the PR has a corresponding issue. After creating a PR, one or more reviewers will be assigned to the pull request. The reviewers will review the code. Before merging a PR, squash any fix review feedback, typo, merged, and rebased sorts of commits. The final commit message should be clear and concise. We use GitHub Issues and Pull Requests for trackers. If you find a typo in document, find a bug in code, or want new features, or want to give suggestions, you can open an issue on GitHub to report it. Please follow the guideline message in the issue template. If you want to contribute, please follow the contribution workflow and create a new pull request. If your PR contains large changes, e.g. component refactor or new components, please write detailed documents about its design and usage. Note that a single PR should not be too large. If heavy changes are required, it's better to separate the changes to a few individual PRs. All code should be well reviewed by one or more committers. Some principles: If you have any questions or advice, please contact sentinel@linux.alibaba.com. Our Gitter room: https://gitter.im/alibaba/Sentinel. Sentinel is an open-source project under Apache License 2.0." } ]
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Skipper", "subcategory": "Service Proxy" }
[ { "data": "Write your documentation in Markdown and create a professional static site in minutes searchable, customizable, in 60+ languages, for all devices. Focus on the content of your documentation and create a professional static site in minutes. No need to know HTML, CSS or JavaScript let Material for MkDocs do the heavy lifting for you. Serve your documentation with confidence Material for MkDocs automatically adapts to perfectly fit the available screen estate, no matter the type or size of the viewing device. Desktop. Tablet. Mobile. All great. Make it yours change the colors, fonts, language, icons, logo, and more with a few lines of configuration. Material for MkDocs can be easily extended and provides many options to alter appearance and behavior. Don't let your users wait get incredible value with a small footprint by using one of the fastest themes available with excellent performance, yielding optimal search engine rankings and happy users that return. Own your documentation's complete sources and outputs, guaranteeing both integrity and security no need to entrust the backbone of your product knowledge to third-party platforms. Retain full control. You're in good company choose a mature and actively maintained solution built with state-of-the-art Open Source technologies, trusted by more than 20.000 individuals and organizations. Licensed under MIT. Material for MkDocs makes your documentation instantly searchable with zero effort: say goodbye to costly third-party crawler-based solutions that can take hours to update. Ship your documentation with a highly customizable and blazing fast search running entirely in the user's browser at no extra cost. Even better: search inside code blocks, exclude specific sections or entire pages, boost important pages in the results and build searchable documentation that works offline. Learn more Some examples need more explanation than others, which is why Material for MkDocs offers a unique and elegant way to add rich text almost anywhere in a code block. Code annotations can host formatted text, images, diagrams, code blocks, call-outs, content tabs, even interactive elements basically everything that can be expressed in Markdown or HTML. Of course, code annotations work beautifully on mobile and other touch devices and can be printed. Learn more Make an impact on social media and increase engagement when sharing links to your documentation by leveraging the built-in social plugin. Material for MkDocs makes it effortless to generate a beautiful preview image for each page, which will drive more interested users to your Open Source or commercial project. While the social plugin uses what's already there, i.e. your project's name and logo, as well as each page's title and description, it's easy to customize preview images. Supercharge your technical writing by making better use of the processing power of the visual cortex: Material for MkDocs ships more than 10,000 icons and emojis, which can be used in Markdown and HTML with simple shortcodes and an easy-to-remember syntax. Add color to icons and animate them. Make it pop. Use our dedicated icon search to quickly find the perfect icon for almost every use case and add custom icon sets with minimal configuration. Get started By joining the Insiders program, you'll get immediate access to the latest features while also helping support the ongoing development of Material for MkDocs. Thanks to our awesome sponsors, this project is actively maintained and kept in good shape. 
Together, we can build documentation that simply works!" } ]
{ "category": "Orchestration & Management", "file_name": "introduction.html.md", "project_name": "Sentinel", "subcategory": "Service Proxy" }
[ { "data": "As distributed systems are becoming increasingly popular, the reliability between services is becoming more important than ever before. Sentinel is a powerful flow control component that takes \"flow\" as the breakthrough point and covers multiple fields including flow control, concurrency limiting, circuit breaking, and adaptive system protection to guarantee the reliability of microservices. Resource is a key concept in Sentinel. It could be anything, such as a service, a method, or even a code snippet. Once it is wrapped by Sentinel API, it is defined as a resource and can apply for protections provided by Sentinel. The way Sentinel protects resources is defined by rules, such as flow control, concurrency limiting, and circuit breaking rules. Rules can be dynamically changed, and take effect in real-time. Sentinel provides the ability to handle random incoming requests according to the appropriate shape as needed, as illustrated below: Flow control is based on the following statistics: Sentinel allows applications to combine all these statistics in a flexible manner. Circuit breaking is used to detect failures and encapsulates the logic of preventing failure from constantly reoccurring during maintenance, temporary external system failure or unexpected system difficulties. To tackle this problem, Hystrix chose to use threads and thread-pools to achieve isolation. The main benefit of thread pools is that the isolation is thorough, but it could bring extra overheads and leads to problems in scenarios related to ThreadLocal (e.g. Spring transactions). Sentinel uses the following principles to implement circuit breaking: Instead of using thread pools, Sentinel reduces the impact of unstable resources by restricting the number of concurrent threads (i.e. semaphore isolation). When the response time of a resource becomes longer, threads will start to be occupied. When the number of threads accumulates to a certain amount, new incoming requests will be rejected. Vice versa, when the resource restores and becomes stable, the occupied thread will be released as well, and new requests will be accepted. By restricting concurrent threads instead of thread pools, you no longer need to pre-allocate the size of the thread pools and can thus avoid the computational overhead such as the queueing, scheduling, and context switching. Besides restricting the concurrency, downgrade unstable resources according to their response time is also an effective way to guarantee reliability. When the response time of a resource is too large, all access to the resource will be rejected in the specified time window. Sentinel can be used to protect your server in case the system load or CPU usage goes too high. It helps you achieve a good balance between system load and incoming requests and the system can be restored very quickly. Sentinel is an open-source project under Apache License 2.0." } ]
{ "category": "Orchestration & Management", "file_name": "quick-start.html.md", "project_name": "Sentinel", "subcategory": "Service Proxy" }
[ { "data": "Below is a simple demo that guides new users to use Sentinel in just 3 steps. It also shows how to monitor this demo using the dashboard. Note: Sentinel requires Java 8 or later. If your application is build in maven, just add the following code in pom.xml. ``` <dependency> <groupId>com.alibaba.csp</groupId> <artifactId>sentinel-core</artifactId> <version>1.8.6</version> </dependency> ``` If not, you can download JAR in Maven Center Repository. Wrap our code snippet with Sentinel API: SphU.entry(\"resourceName\") and entry.exit(). In below example, it is System.out.println(\"hello world\");: ``` Entry entry = null; try { entry = SphU.entry(\"HelloWorld\"); // BIZ logic being protected System.out.println(\"hello world\"); } catch (BlockException e) { // handle block logic } finally { // make sure that the exit() logic is called if (entry != null) { entry.exit(); } } ``` So far the code modification is done. We also provide annotation support module to define resource easier. To limit the QPS of the resources, we could add flow rules. The following code defines a rule that limits access to the reource to 20 times per second at the maximum. ``` List<FlowRule> rules = new ArrayList<FlowRule>(); FlowRule rule = new FlowRule(); rule.setResource(\"HelloWorld\"); // set limit qps to 20 rule.setCount(20); rule.setGrade(RuleConstant.FLOWGRADEQPS); rules.add(rule); FlowRuleManager.loadRules(rules); ``` For more information, please refer to How To Use. After running the demo for a while, you can see the following records in ~/logs/csp/${appName}-metrics.log. ``` |--timestamp-|date time-|--resource-|p |block|s |e|rt 1529998904000|2018-06-26 15:41:44|hello world|20|0 |20|0|0 1529998905000|2018-06-26 15:41:45|hello world|20|5579 |20|0|728 1529998906000|2018-06-26 15:41:46|hello world|20|15698|20|0|0 1529998907000|2018-06-26 15:41:47|hello world|20|19262|20|0|0 1529998908000|2018-06-26 15:41:48|hello world|20|19502|20|0|0 1529998909000|2018-06-26 15:41:49|hello world|20|18386|20|0|0 p stands for incoming request, block for blocked by rules, success for success handled by Sentinel, e for exception count, rt for average response time (ms) ``` This shows that the demo can print \"hello world\" 20 times per second. More examples and information can be found in the How To Use section. Samples can be found in the sentinel-demo module. Sentinel also provides a simple dashboard application, on which you can monitor the clients and configure the rules in real time. For details please refer to Sentinel dashboard document. Sentinel will generate logs for troubleshooting. All the information can be found in Sentinel logs. Sentinel is an open-source project under Apache License 2.0." } ]
{ "category": "Orchestration & Management", "file_name": "aria-docs.md", "project_name": "Snapt Nova", "subcategory": "Service Proxy" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:|:|:|-:|-:| | Latest commitHistory1 Commit | Latest commitHistory1 Commit | Latest commitHistory1 Commit | nan | nan | | Aria Documentation.zip.001 | Aria Documentation.zip.001 | Aria Documentation.zip.001 | nan | nan | | Aria Documentation.zip.002 | Aria Documentation.zip.002 | Aria Documentation.zip.002 | nan | nan | | Aria Documentation.zip.003 | Aria Documentation.zip.003 | Aria Documentation.zip.003 | nan | nan | | Aria Documentation.zip.004 | Aria Documentation.zip.004 | Aria Documentation.zip.004 | nan | nan | | View all files | View all files | View all files | nan | nan |" } ]
{ "category": "Orchestration & Management", "file_name": "docs.github.com.md", "project_name": "Snapt Nova", "subcategory": "Service Proxy" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "github-privacy-statement.md", "project_name": "Snapt Nova", "subcategory": "Service Proxy" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Orchestration & Management", "file_name": "pulls.md", "project_name": "Snapt Nova", "subcategory": "Service Proxy" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. Pull requests help you collaborate on code with other people. As pull requests are created, theyll appear here in a searchable and filterable list. To get started, you should create a pull request." } ]
{ "category": "Orchestration & Management", "file_name": "github-terms-of-service.md", "project_name": "Snapt Nova", "subcategory": "Service Proxy" }
[ { "data": "You can verify your ownership of domains with GitHub to confirm your organization's identity. Organization owners can verify or approve a domain for an organization. After verifying ownership of your organization's domains, a \"Verified\" badge will display on the organization's profile. To display a \"Verified\" badge, the website and email information shown on an organization's profile must match the verified domain or domains. If the website and email address shown on your organization's profile are hosted on different domains, you must verify both domains. If the website and email address use variants of the same domain, you must verify both variants. For example, if the profile shows the website www.example.com and the email address info@example.com, you would need to verify both www.example.com and example.com. If you confirm your organizations identity by verifying your domain and restricting email notifications to only verified email domains, you can help prevent sensitive information from being exposed. For more information see \"Best practices for preventing data leaks in your organization.\" To verify a domain, you must have access to modify domain records with your domain hosting service. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. Next to \"Verified & approved domains for your enterprise account\", click Add a domain. Under \"What domain would you like to add?\", type the domain you'd like to verify, then click Add domain. Follow the instructions under \"Add a DNS TXT record\" to create a DNS TXT record with your domain hosting service. Wait for your DNS configuration to change, which may take up to 72 hours. You can confirm your DNS configuration has changed by running the dig command on the command line, replacing TXT-RECORD-NAME with the name of the TXT record created in your DNS configuration. You should see your new TXT record listed in the command output. ``` dig TXT-RECORD-NAME +nostats +nocomments +nocmd TXT ``` After confirming your TXT record is added to your DNS, follow steps one through three above to navigate to your organization's approved and verified domains. To the right of the domain that's pending verification, select the dropdown menu, then click Continue verifying. Click Verify. Optionally, once the \"Verified\" badge is visible on your organization's profile page, you can delete the TXT entry from the DNS record at your domain hosting service. Note: The ability to approve a domain not owned by your organization or enterprise is currently in beta and subject to change. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. Next to \"Verified & approved domains for your enterprise account\", click Add a domain. Under \"What domain would you like to add?\", type the domain you'd like to verify, then click Add domain. To the right of \"Can't verify this domain?\", click Approve it instead. Read the information about domain approval, then click Approve DOMAIN. In the upper-right corner of GitHub, select your profile photo, then click Your organizations. Next to the organization, click Settings. In the \"Security\" section of the sidebar, click Verified and approved domains. 
To the right of the domain to remove, select the dropdown menu, then click Delete." } ]
{ "category": "Orchestration & Management", "file_name": "issues.md", "project_name": "Snapt Nova", "subcategory": "Service Proxy" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community. By clicking Sign up for GitHub, you agree to our terms of service and privacy statement. Well occasionally send you account related emails. Already on GitHub? Sign in to your account Issues are used to track todos, bugs, feature requests, and more. As issues are created, theyll appear here in a searchable and filterable list. To get started, you should create an issue." } ]
{ "category": "Orchestration & Management", "file_name": "verifying-or-approving-a-domain-for-your-organization.md", "project_name": "Snapt Nova", "subcategory": "Service Proxy" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
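As a combined illustration of the search syntax described above, the single query below restricts results to one repository, limits them to Rust files, excludes forks, and reuses the /sparse.*index/ regular expression from the examples; the repository name is a hypothetical placeholder rather than one taken from the page above.

```
repo:octocat/example-repo language:rust NOT is:fork path:*.rs /sparse.*index/
```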
{ "category": "Orchestration & Management", "file_name": ".md", "project_name": "Tengine", "subcategory": "Service Proxy" }
[ { "data": "| 0 | 1 | |:|:--| | nginx professional servicesPrioritize. Save time. Stay focused. | nan | | nginx | englishtrke [en]nginx [en] [en]FAQ [en] [en] [en] [en]tracwikitwitternginx.com | | HTTPHTTP nginx [engine x]Igor SysoevHTTP YandexMail.RuVKontakteRamblerNetcraft2012811.48%Nginx Netflix Wordpress.com FastMail.FM NginxBSD HTTP FastCGIuwsgiSCGImemcached gziprangeschunkedXSLTSSISSI SSIFastCGI SSLTLS SNI HTTP IP Keep-alivepipelined 3xx-5xx rewriteURI IPHTTP HTTP referer PUTDELETEMKCOLCOPYMOVE FLVMP4 Perl HTTPIMAP/POP3 HTTPSMTP POP3: USER/PASS, APOP, AUTH LOGIN/PLAIN/CRAM-MD5; IMAP: LOGIN, AUTH LOGIN/PLAIN/CRAM-MD5; SMTP: AUTH LOGIN/PLAIN/CRAM-MD5; SSL STARTTLSSTLS kqueueFreeBSD 4.1+epollLinux 2.6+rt signalsLinux 2.2.19+/dev/pollSolaris 7 11/99+event portsSolaris 10selectpoll kqueueEVCLEAREVDISABLENOTELOWATEVEOF sendfileFreeBSD 3.1+, Linux 2.2+, Mac OS X 10.5+sendfile64Linux 2.4.21+sendfilevSolaris 8 7/01+ AIOFreeBSD 4.3+, Linux 2.6.22+ DIRECTIO (FreeBSD 4.4+, Linux 2.4+, Solaris 2.6+, Mac OS X); Accept-filtersFreeBSD 4.1+, NetBSD 5.0+ TCPDEFERACCEPTLinux 2.4+ 10000HTTP keep-alive2.5M FreeBSD 3 10 / i386; FreeBSD 5 10 / amd64; Linux 2.2 3 / i386; Linux 2.6 3 / amd64; Solaris 9 / i386, sun4u; Solaris 10 / i386, amd64, sun4v; AIX 7.1 / powerpc; HP-UX 11.31 / ia64; MacOS X / ppc, i386; Windows XP, Windows Server 2003. | englishtrke [en]nginx [en] [en]FAQ [en] [en] [en] [en]tracwikitwitternginx.com | | 0 | |:-| | nginx professional services | | Prioritize. Save time. Stay focused. | | 0 | |:| | HTTPHTTP | nginx [engine x]Igor SysoevHTTP YandexMail.RuVKontakteRamblerNetcraft2012811.48%Nginx Netflix Wordpress.com FastMail.FM NginxBSD" } ]
{ "category": "Provisioning", "file_name": "release-notes.html.md", "project_name": "Airship", "subcategory": "Automation & Configuration" }
[ { "data": "Learn About Airship 2 Try Airship 2 Develop Airship 2 Airship Project Documentation Airship 1 Documentation Airship is a robust system for delivering container-based cloud infrastructure (or any other containerized workload) at scale on bare metal, public clouds, and edge clouds. Airship integrates best-of-class CNCF projects, such as Cluster API, Kustomize, Metal3, and Helm Operator, to deliver a resilient and predictable lifecycle experience. Combining easy lifecycle management with zero-downtime real-time upgrade capability, Airship can handle the provisioning and configuration of the operating system, RAID services, and the network. Release 2.1 introduces the following enhancements: Upgrade components to parity with Cluster API v1alpha4, including Bare Metal Operator v1alpha5 Kubernetes upgrade to version 1.21 Docker provider upgrade to v1alpha3 CAPD and CAPZ upgrades to versions 0.4.2 and 0.5.2, respectively Airship in a Pod hardening and improvements such as support for custom site manifest locations and private repositories. (477) Helm Controller upgrade to version 0.11.1 and Source Controller to version 0.15.3 (607) Kustomize upgrade to version 4.2.0 KPT upgrade to to version 1.0.0-beta.7 Support of iLO5 in bare metal node bootstrapping Release 2.0 introduces a variety of significant improvements: No-touch bootstrap for remote sites as well as local sites Declarative image building for both ephemeral ISO and bare metal targeted QCOWs Declarative cluster lifecycle Lifecycle for bare metal, public cloud, and edge cloud infrastructure Single command line airshipctl Lifecycle defined as a sequence of phases Introduction of a plan for the phases Seamless integration with CNCF projects (CAPI, Metal3, Kustomize) Seamless integration with security plugins like SOPS Generic container interface: mechanism to extend airshipctl with ad hoc functionality Introduction of host config operator for day 2 operations Change logs list feature and defect details for a release. For the complete set of releases including links to change logs, see the treasuremap and airshipctl Github release pages. Copyright 2019-2021, The Airship Authors" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "If you prefer to learn through videos rather than written documentation, the following is a list of informative talks and demos on Akri. Bridge Your IoT Leaf Devices to Local Clusters with Ease Using Akri and Dynamic Resource Allocation - Latest Akri introduction at KubeCon EU 2024. Introducing industrial edge - An introduction to Akri and how it fits to SUSE's industrial edge solution. Includes a demo of discovering an USB camera. Azure Arc Jumpstart with Akri - A talk in the Azure Arc Jumpstart channel. Includes a demo of discovering an ONVIF camera with Akri and feeding the stream to an edge AI model. Discovering and Managing IoT Devices from Kubernetes with Akri - A deep dive for Akri. Includes a step-by-step demo of discovering the ONVIF cameras and performing firmware update. To try more demos/examples with step-by-step guidance, check the rest of the pages under Demo section. Last updated 1 month ago Was this helpful?" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Airship", "subcategory": "Automation & Configuration" }
[ { "data": "Learn About Airship 2 Try Airship 2 Develop Airship 2 Airship Project Documentation Airship 1 Documentation Airship is a collection of components that declaratively configure, deploy and maintain a Kubernetes environment defined by YAML documents. Airship is supported by the OpenStack Foundation. Airship documentation serves the entire community with resources for users and developers. Learn About Airship 2 Try Airship 2 Develop Airship 2 Airship Project Documentation Airship 1 Documentation Airship Blog Airship Website Airship Wiki Receive Airship announcements and interact with our community on our mailing lists. Airship is constantly evolving. Contribute to the Airship design process and day-to-day community operations in our weekly calls. Get in touch with Airship developers and operators in our Slack workspace. Copyright 2019-2021, The Airship Authors" } ]
{ "category": "Provisioning", "file_name": "building.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Make sure you have at least one Onvif camera that is reachable so Onvif discovery handler can discovery your Onvif camera. To test accessing Onvif with credentials, make sure your Onvif camera is authentication-enabled. Write down the username and password, they are required in the flow below. Add Akri helm chart repo and set the environment variable AKRIHELMCRICTL_CONFIGURATION to proper value. ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm repo update``` Set up the Kubernetes distribution being used, here we use 'k8s', make sure to replace it with a value that matches the Kubernetes distribution you used. See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION ``` export AKRIHELMCRICTL_CONFIGURATION=\"--set kubernetesDistro=k8s\"``` In real product scenarios, the device uuids are acquired directly from the vendors or already known before installing Akri Configuration. If you already know the device uuids, you can skip this and go to the next step. First use the following helm chart to deploy an Akri Configuration and see if your camera is discovered. ``` helm install akri akri-helm-charts/akri-dev \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set onvif.discovery.enabled=true \\ --set onvif.configuration.name=akri-onvif \\ --set onvif.configuration.enabled=true \\ --set onvif.configuration.capacity=3 \\ --set onvif.configuration.brokerPod.image.repository=\"nginx\" \\ --set onvif.configuration.brokerPod.image.tag=\"stable-alpine\"``` Here is the result of running the installation command above on a cluster with 1 control plane and 2 work nodes. There is one Onvif camera connects to the network, thus 1 pods running on each node. ``` $ kubectl get nodes,akric,akrii,pods NAME STATUS ROLES AGE VERSION node/kube-01 Ready control-plane 22d v1.26.1 node/kube-02 Ready <none> 22d v1.26.1 node/kube-03 Ready <none> 22d v1.26.1 NAME CAPACITY AGE configuration.akri.sh/akri-onvif 3 62s NAME CONFIG SHARED NODES AGE instance.akri.sh/akri-onvif-029957 akri-onvif true [\"kube-03\",\"kube-02\"] 48s NAME READY STATUS RESTARTS AGE pod/akri-agent-daemonset-gnwb5 1/1 Running 0 62s pod/akri-agent-daemonset-zn2gb 1/1 Running 0 62s pod/akri-controller-deployment-56b9796c5-wqdwr 1/1 Running 0 62s pod/akri-onvif-discovery-daemonset-wcp2f 1/1 Running 0 62s pod/akri-onvif-discovery-daemonset-xml6t 1/1 Running 0 62s pod/akri-webhook-configuration-75d9b95fbc-wqhgw 1/1 Running 0 62s pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 48s pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 48s``` Get the device uuid from the Akri Instance. Below is an example, the Onvif discovery handler discovers the camera and expose the device's uuid. Write down the device uuid for later use. Note that in real product scenarios, the device uuids are acquired directly from the vendors or already known before installing Akri Configuration. ``` $ kubectl get akrii akri-onvif-029957 -o yaml | grep ONVIFDEVICEUUID ONVIFDEVICEUUID: 3fa1fe68-b915-4053-a3e1-ac15a21f5f91``` Now we can set up the credential information to Kubernetes Secret. Replace the device uuid and the values of username/password with information of your camera. 
``` cat > /tmp/onvif-auth-secret.yaml<< EOF apiVersion: v1 kind: Secret metadata: name: onvif-auth-secret type: Opaque stringData: devicecredentiallist: |+ [ \"credential_list\" ] credential_list: |+ { \"3fa1fe68-b915-4053-a3e1-ac15a21f5f91\" : { \"username\" : \"camuser\", \"password\" : \"HappyDay\" } } EOF kubectl apply -f /tmp/onvif-auth-secret.yaml ``` Upgrade the Akri Configuration to include the secret information and the sample video broker container. ``` helm upgrade akri akri-helm-charts/akri-dev \\ --install \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set onvif.discovery.enabled=true \\ --set onvif.configuration.enabled=true \\ --set onvif.configuration.capacity=3 \\ --set onvif.configuration.discoveryProperties[0].name=devicecredentiallist \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.name=onvif-auth-secret \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.namesapce=default \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.key=devicecredentiallist \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.optoinal=false \\ --set onvif.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/onvif-video-broker\" \\ --set onvif.configuration.brokerPod.image.tag=\"latest-dev\" \\ --set onvif.configuration.brokerPod.image.pullPolicy=\"Always\" \\ --set onvif.configuration.brokerProperties.CREDENTIALDIRECTORY=\"/etc/credentialdirectory\" \\ --set onvif.configuration.brokerProperties.CREDENTIALCONFIGMAPDIRECTORY=\"/etc/credentialcfgmapdirectory\" \\ --set onvif.configuration.brokerPod.volumeMounts[0].name=\"credentials\" \\ --set onvif.configuration.brokerPod.volumeMounts[0].mountPath=\"/etc/credential_directory\" \\ --set onvif.configuration.brokerPod.volumeMounts[0].readOnly=true \\ --set onvif.configuration.brokerPod.volumes[0].name=\"credentials\" \\ --set" }, { "data": "With the secret information, the Onvif discovery handler is able to discovery the Onvif camera and the video broker is up and running ``` $ kubectl get nodes,akric,akrii,pods NAME STATUS ROLES AGE VERSION node/kube-01 Ready control-plane 22d v1.26.1 node/kube-02 Ready <none> 22d v1.26.1 node/kube-03 Ready <none> 22d v1.26.1 NAME CAPACITY AGE configuration.akri.sh/akri-onvif 3 18m NAME CONFIG SHARED NODES AGE instance.akri.sh/akri-onvif-029957 akri-onvif true [\"kube-03\",\"kube-02\"] 22s NAME READY STATUS RESTARTS AGE pod/akri-agent-daemonset-bq494 1/1 Running 0 18m pod/akri-agent-daemonset-c2rng 1/1 Running 0 18m pod/akri-controller-deployment-56b9796c5-rtm5q 1/1 Running 0 18m pod/akri-onvif-discovery-daemonset-rbgwq 1/1 Running 0 18m pod/akri-onvif-discovery-daemonset-xwjlp 1/1 Running 0 18m pod/akri-webhook-configuration-75d9b95fbc-cr6bc 1/1 Running 0 18m pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 22s pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 22s $ kubectl logs kube-02-akri-onvif-029957-pod [Akri] ONVIF request http://192.168.1.145:2020/onvif/device_service http://www.onvif.org/ver10/device/wsdl/GetService [Akri] ONVIF media url http://192.168.1.145:2020/onvif/service [Akri] ONVIF request http://192.168.1.145:2020/onvif/service http://www.onvif.org/ver10/media/wsdl/GetProfiles [Akri] ONVIF profile list contains: profile_1 [Akri] ONVIF profile list contains: profile_2 [Akri] ONVIF profile list profile_1 [Akri] ONVIF request http://192.168.1.145:2020/onvif/service http://www.onvif.org/ver10/media/wsdl/GetStreamUri [Akri] ONVIF streaming uri list contains: rtsp://192.168.1.145:554/stream1 [Akri] ONVIF 
streaming uri rtsp://192.168.1.145:554/stream1 [VideoProcessor] Processing RTSP stream: rtsp://-:-@192.168.1.145:554/stream1 info: Microsoft.Hosting.Lifetime[0] Now listening on: http://[::]:8083 info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. info: Microsoft.Hosting.Lifetime[0] Hosting environment: Production info: Microsoft.Hosting.Lifetime[0] Content root path: /app Ready True Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 1, frame size: 862986 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 865793 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 868048 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 869655 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 871353``` Deploy the sample video streaming application Instructions described from the step 4 of camera demo Deploy a video streaming web application that points to both the Configuration and Instance level services that were automatically created by Akri. Copy and paste the contents into a file and save it as akri-video-streaming-app.yaml ``` cat > /tmp/akri-video-streaming-app.yaml<< EOF apiVersion: apps/v1 kind: Deployment metadata: name: akri-video-streaming-app spec: replicas: 1 selector: matchLabels: app: akri-video-streaming-app template: metadata: labels: app: akri-video-streaming-app spec: serviceAccountName: akri-video-streaming-app-sa containers: name: akri-video-streaming-app image: ghcr.io/project-akri/akri/video-streaming-app:latest-dev imagePullPolicy: Always securityContext: runAsUser: 1000 allowPrivilegeEscalation: false runAsNonRoot: true readOnlyRootFilesystem: true capabilities: drop: [\"ALL\"] env: name: CONFIGURATION_NAME value: akri-onvif apiVersion: v1 kind: Service metadata: name: akri-video-streaming-app namespace: default labels: app: akri-video-streaming-app spec: selector: app: akri-video-streaming-app ports: name: http port: 80 targetPort: 5000 type: NodePort apiVersion: v1 kind: ServiceAccount metadata: name: akri-video-streaming-app-sa kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: akri-video-streaming-app-role rules: apiGroups: [\"\"] resources: [\"services\"] verbs: [\"list\"] apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: akri-video-streaming-app-binding roleRef: apiGroup: \"\" kind: ClusterRole name: akri-video-streaming-app-role subjects: kind: ServiceAccount name: akri-video-streaming-app-sa namespace: default EOF``` Deploy the video stream app ``` kubectl apply -f /tmp/akri-video-streaming-app.yaml``` Determine which port the service is running on. Save this port number for the next step: ``` kubectl get service/akri-video-streaming-app --output=jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}' && echo``` SSH port forwarding can be used to access the streaming application. Open a new terminal, enter your ssh command to to access your machine followed by the port forwarding request. The following command will use port 50000 on the host. Feel free to change it if it is not available. Be sure to replace <streaming-app-port> with the port number outputted in the previous step. ``` ssh someuser@<machine IP address> -L 50000:localhost:<streaming-app-port>``` Navigate to http://localhost:50000/ using browser. The large feed points to Configuration level service, while the bottom feed points to the service for each Instance or camera. 
Close the page http://localhost:50000/ from the browser Delete the sample streaming application resources ``` kubectl delete -f /tmp/akri-video-streaming-app.yaml``` Delete the Secret information ``` kubectl delete -f /tmp/onvif-auth-secret.yaml``` Delete deployment and Akri installation to clean up the system. ``` helm delete akri kubectl delete crd configurations.akri.sh kubectl delete crd instances.akri.sh```" } ]
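For readability, the long helm upgrade command in the walkthrough above can also be expressed as a Helm values file and applied with helm upgrade --install akri akri-helm-charts/akri-dev -f values.yaml. The sketch below is a best-effort transcription of the --set flags shown earlier, not an authoritative chart reference: the secretKeyRef keys are written as namespace and optional (the command above misspells them), the broker environment variable names and paths are copied verbatim even though their underscores may have been lost during extraction, and the secret volume source at the end is an assumption, since the original command is truncated after its final --set.

```yaml
# Sketch of a values.yaml mirroring the --set flags above (assumptions noted inline).
onvif:
  discovery:
    enabled: true
  configuration:
    enabled: true
    capacity: 3
    discoveryProperties:
      - name: devicecredentiallist
        valueFrom:
          secretKeyRef:
            name: onvif-auth-secret
            namespace: default        # spelled "namesapce" in the command above
            key: devicecredentiallist
            optional: false           # spelled "optoinal" in the command above
    brokerProperties:
      CREDENTIALDIRECTORY: /etc/credentialdirectory              # copied verbatim; underscores may be missing
      CREDENTIALCONFIGMAPDIRECTORY: /etc/credentialcfgmapdirectory
    brokerPod:
      image:
        repository: ghcr.io/project-akri/akri/onvif-video-broker
        tag: latest-dev
        pullPolicy: Always
      volumeMounts:
        - name: credentials
          mountPath: /etc/credential_directory
          readOnly: true
      volumes:
        - name: credentials
          secret:                          # assumption: the truncated final --set most likely
            secretName: onvif-auth-secret  # pointed this volume at the onvif-auth-secret above
```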
{ "category": "Provisioning", "file_name": "agent-in-depth.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "This will demonstrate how to get Akri working on a Raspberry Pi 4 and walk through using Akri to discover mock USB cameras attached to nodes in a Kubernetes cluster. You'll see how Akri automatically deploys workloads to pull frames from the cameras. We will then deploy a streaming application that will point to services automatically created by Akri to access the video frames from the workloads. The following will be covered in this demo: Setting up single node cluster on a Raspberry Pi 4 Setting up mock udev video devices Installing Akri via Helm with settings to create your Akri udev Configuration Inspecting Akri Deploying a streaming application Cleanup Going beyond the demo Using instructions found here, download 64-bit Ubuntu:18.04 Using the instructions found here, apply the Ubuntu image to an SD card. Plug in SD card and start Raspberry Pi 4. Install docker. ``` sudo apt install -y docker.io``` Install Helm. ``` sudo apt install -y curl curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash``` Install Kubernetes. ``` curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add sudo apt-add-repository \"deb http://apt.kubernetes.io/ kubernetes-xenial main\" sudo apt install -y kubectl kubeadm kubelet``` Enable cgroup memory by appending cgroupenable=cpuset and cgroupenable=memory cgroup_memory=1 to this file: /boot/firmware/nobtcmd.txt Start master node ``` sudo kubeadm init``` You will then need to setup kubenetes config and environment variables using the commands below ``` mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config export KUBECONFIG=$HOME/.kube/config``` To enable workloads on our single-node cluster, remove the master taint. ``` kubectl taint nodes --all node-role.kubernetes.io/master-``` Apply a network provider to the cluster. ``` kubectl apply -f \"https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\\n')\"``` Open a new terminal and ssh into your ubuntu server that your cluster is running on. To setup fake usb video devices, install the v4l2loopback kernel module and its prerequisites. Learn more about v4l2 loopback here ``` sudo apt update sudo apt -y install linux-headers-$(uname -r) sudo apt -y install linux-modules-extra-$(uname -r) sudo apt -y install dkms curl http://deb.debian.org/debian/pool/main/v/v4l2loopback/v4l2loopback-dkms0.12.5-1all.deb -o v4l2loopback-dkms0.12.5-1all.deb sudo dpkg -i v4l2loopback-dkms0.12.5-1all.deb``` Note: If not able to install the debian package of v4l2loopback due to using a different Linux kernel, you can clone the repo, build the module, and setup the module dependencies like so: ``` git clone https://github.com/umlaeute/v4l2loopback.git cd v4l2loopback make & sudo make install sudo make install-utils sudo depmod -a ``` \"Plug-in\" two cameras by inserting the kernel module. To create different number video devices modify the video_nr argument. ``` sudo modprobe v4l2loopback exclusivecaps=1 videonr=1,2``` Confirm that two video device nodes (video1 and video2) have been created. ``` ls /dev/video*``` Install the necessary Gstreamer packages. ``` sudo apt-get install -y \\ libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-base \\ gstreamer1.0-plugins-good gstreamer1.0-libav``` Now that our cameras are set up, lets use Gstreamer to pass fake video streams through them. ``` mkdir camera-logs sudo gst-launch-1.0 -v videotestsrc pattern=ball ! 
\"video/x-raw,width=640,height=480,framerate=10/1\" ! avenc_mjpeg ! v4l2sink device=/dev/video1 > camera-logs/ball.log 2>&1 & sudo gst-launch-1.0 -v videotestsrc pattern=smpte horizontal-speed=1 ! \"video/x-raw,width=640,height=480,framerate=10/1\" ! avenc_mjpeg ! v4l2sink device=/dev/video2 > camera-logs/smpte.log 2>&1 &``` Note: If this generates an error, be sure that there are no existing video streams targeting the video device nodes by running the following and then re-running the previous command: ``` if pgrep gst-launch-1.0 > /dev/null; then sudo pkill -9 gst-launch-1.0 fi``` You tell Akri what you want to find with an Akri Configuration, which is one of Akri's Kubernetes custom resources. The Akri Configuration is simply a yaml file that you apply to your" }, { "data": "Within it, you specify three things: a Discovery Handler any additional device filtering an image for a Pod (that we call a \"broker\") that you want to be automatically deployed to utilize each discovered device For this demo, we will specify Akri's udev Discovery Handler, which is used to discover devices in the Linux device file system. Akri's udev Discovery Handler supports filtering by udev rules. We want to find all mock USB cameras in the Linux device file system, which can be specified with a simple udev rule KERNEL==\"video[0-9]*\". It matches name of the mock USB cameras. Note, when real USB cameras are used, the filtering udev rule can be more precise to avoid mistaken device match. For example, a better rule is KERNEL==\"video[0-9]\"\\, ENV{ID_V4L_CAPABILITIES}==\":capture:\" that adds a criteria on device capability. We may go further by adding criteria such as vendor name. An example is KERNEL==\"video[0-9]\"\\, ENV{IDV4LCAPABILITIES}==\":capture:\"\\, ENV{ID_VENDOR}==\"Great Vendor\". In order to write correct rule, check output of \"udevadm\" command for USB cameras. A example is \"udevadm info --query=all --name=video1\". a broker Pod image, we will use a sample container that Akri has provided that pulls frames from the cameras and serves them over gRPC. All of Akri's components can be deployed by specifying values in its Helm chart during an installation. Instead of having to build a Configuration from scratch, Akri has provided Helm templates for Configurations for each supported Discovery Handler. Lets customize the generic udev Configuration Helm template with our three specifications above. We can also set the name for the Configuration to be akri-udev-video. In order for the Agent to know how to discover video devices, the udev Discovery Handler must exist. Akri supports an Agent image that includes all supported Discovery Handlers. This Agent will be used if agent.full=true is set. By default, a slim Agent without any embedded Discovery Handlers is deployed and the required Discovery Handlers can be deployed as DaemonSets. This demo will use that strategy, deploying the udev Discovery Handlers by specifying udev.discovery.enabled=true when installing Akri. Add the Akri Helm chart and run the install command, setting Helm values as described above. 
Note: See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set udev.discovery.enabled=true \\ --set udev.configuration.enabled=true \\ --set udev.configuration.name=akri-udev-video \\ --set udev.configuration.discoveryDetails.udevRules[0]='KERNEL==\"video[0-9]*\"' \\ --set udev.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/udev-video-broker\" ``` After installing Akri, since the /dev/video1 and /dev/video2 devices are running on this node, the Akri Agent will discover them and create an Instance for each camera. List all that Akri has automatically created and deployed, namely Akri Configuration we created when installing Akri, two Instances (which are the Akri custom resource that represents each device), two broker Pods (one for each camera), a service for each broker Pod, a service for all brokers, the Controller Pod, Agent Pod, and the udev Discovery Handler Pod. ``` watch kubectl get pods,akric,akrii,services -o wide``` Look at the Configuration and Instances in more detail. Inspect the Configuration that was created via the Akri udev Helm template and values that were set when installing Akri by running the following. ``` kubectl get akric -o yaml``` Inspect the two Instances. Notice that in the brokerProperties of each instance, you can see the device nodes (/dev/video1 or /dev/video2) that the Instance represents. The brokerProperties of an Instance are set as environment variables in the broker Pods that are utilizing the device the Instance represents. This told the broker which device to connect to. We can also see in the Instance a usage slot and that it was reserved for this node. Each Instance represents a device and its" }, { "data": "``` kubectl get akrii -o yaml``` If this was a shared device (such as an IP camera), you may have wanted to increase the number of nodes that could use the same device by specifying capacity. There is a capacity parameter for each Configuration, which defaults to 1. Its value could have been increased when installing Akri (via --set <discovery handler name>.configuration.capacity=2 to allow 2 nodes to use the same device) and more usage slots (the number of usage slots is equal to capacity) would have been created in the Instance. Deploy a video streaming web application that points to both the Configuration and Instance level services that were automatically created by Akri. ``` kubectl apply -f https://raw.githubusercontent.com/project-akri/akri/main/deployment/samples/akri-video-streaming-app.yaml watch kubectl get pods``` Determine which port the service is running on. Be sure to save this port number for the next step. ``` kubectl get service/akri-video-streaming-app --output=jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}' && echo``` SSH port forwarding can be used to access the streaming application. In a new terminal, enter your ssh command to to access your VM followed by the port forwarding request. The following command will use port 50000 on the host. Feel free to change it if it is not available. Be sure to replace <streaming-app-port> with the port number outputted in the previous step. ``` ssh someuser@<Ubuntu VM IP address> -L 50000:localhost:<streaming-app-port>``` Note we've noticed issues with port forwarding with WSL 2. Please use a different terminal. 
Navigate to http://localhost:50000/. The large feed points to Configuration level service (udev-camera-svc), while the bottom feed points to the service for each Instance or camera (udev-camera-svc-<id>). Bring down the streaming service. ``` kubectl delete service akri-video-streaming-app kubectl delete deployment akri-video-streaming-app watch kubectl get pods``` Delete the configuration, and watch the associated instances, pods, and services be deleted. ``` kubectl delete akric akri-udev-video watch kubectl get pods,services,akric,akrii -o wide``` If you are done using Akri, it can be uninstalled via Helm. ``` helm delete akri``` Delete Akri's CRDs. ``` kubectl delete crd instances.akri.sh kubectl delete crd configurations.akri.sh``` Stop video streaming from the video devices. ``` if pgrep gst-launch-1.0 > /dev/null; then sudo pkill -9 gst-launch-1.0 fi``` \"Unplug\" the fake video devices by removing the kernel module. ``` sudo modprobe -r v4l2loopback``` Plug in real cameras! You can pass environment variables to the frame server broker to specify the format, resolution width/height, and frames per second of your cameras. Apply the ONVIF Configuration and make the streaming app display footage from both the local video devices and onvif cameras. To do this, modify the video streaming yaml as described in the inline comments in order to create a larger service that aggregates the output from both the udev-camera-svc service and onvif-camera-svc service. Add more nodes to the cluster. Modify the udev rule to find a more specific subset of cameras Instead of finding all video4linux device nodes, the udev rule can be modified to exclude certain device nodes, find devices only made by a certain manufacturer, and more. For example, the rule can be narrowed by matching cameras with specific properties. To see the properties of a camera on a node, do udevadm info --query=property --name /dev/video0, passing in the proper devnode name. In this example, ID_VENDOR=Microsoft was one of the outputted properties. To only find cameras made by Microsoft, the rule can be modified like the following: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set udev.discovery.enabled=true \\ --set udev.configuration.enabled=true \\ --set udev.configuration.name=akri-udev-video \\ --set udev.configuration.discoveryDetails.udevRules[0]='KERNEL==\"video[0-9]*\"\\, ENV{IDV4LCAPABILITIES}==\":capture:\"\\, ENV{ID_VENDOR}==\"Microsoft\"' \\ --set udev.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/udev-video-broker\" ``` Discover other udev devices by creating a new udev configuration and broker. Learn more about the udev Discovery Handler" } ]
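The closing suggestion above - discovering other udev devices by creating a new udev Configuration - follows the same Helm pattern used throughout this demo. The sketch below shows the idea as a values file rather than --set flags; the akri-udev-sound name, the SUBSYSTEM=="sound" rule, and the nginx placeholder broker are illustrative assumptions, not values taken from the guide above.

```yaml
# Hypothetical values.yaml for discovering a different udev device class (here, sound devices).
# Install with: helm install akri akri-helm-charts/akri -f values.yaml
udev:
  discovery:
    enabled: true
  configuration:
    enabled: true
    name: akri-udev-sound            # assumed Configuration name
    discoveryDetails:
      udevRules:
        - 'SUBSYSTEM=="sound"'       # assumed rule; refine it using udevadm info output as described above
    brokerPod:
      image:
        repository: nginx            # placeholder broker; swap in a workload that actually uses the device
```

As with the camera example, tightening the rule with additional ENV{} match criteria keeps the Configuration from grabbing unrelated device nodes.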
{ "category": "Provisioning", "file_name": "broker-development.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Akri supports creating a Kubernetes resource (i.e. device plugin) for each individual device. Since each device in Akri is represented as an Instance custom resource, these are called Instance-level resources. Instance-level resources are named in the format <configuration-name>-<instance-id>. Akri also creates a Kubernetes Device Plugin for a Configuration called Configuration-level resource. A Configuration-level resource is a resource that represents all of the devices discovered via a Configuration. With Configuration-level resources, instead of needing to know the specific Instances to request, resources could be requested by the Configuration name and the Agent will do the work of selecting which Instances to reserve. The example below shows a deployment that requests the resource at Configuration level and would deploy a nginx broker to each discovered device respectively. ``` apiVersion: apps/v1 kind: Deployment metadata: name: onvif-camera-broker-deployment labels: app: onvif-camera-broker spec: replicas: 1 selector: matchLabels: app: onvif-camera-broker template: metadata: labels: app: onvif-camera-broker spec: containers: name: onvif-camera-broker image: nginx resources: limits: akri.sh/onvif-camera: \"2\" requests: akri.sh/onvif-camera: \"2\"``` With Configuration-level resources, users could use higher level Kubernetes objects (Deployments, ReplicaSets, DaemonSets, etc.) or develop their own deployment strategies, rather than relying on the Akri Controller to deploy Pods to discovered devices. The in-depth resource sharing doc describes how the Configuration.capacity and Instance.deviceUsage are used to achieve resource sharing between nodes. The same data is used to achieve sharing the same resource between Configuration-level and Instance-level resources. The Instance.deviceUsage in Akri Instances is extended to support Configuration device plugin. The Instance.deviceUsage may look like this: ``` deviceUsage: my-resource-00095f-0: \"\" my-resource-00095f-1: \"\" my-resource-00095f-2: \"\" my-resource-00095f-3: \"node-a\" my-resource-00095f-4: \"\"``` where empty string means the slot is free and non-empty string indicates the slot is used (by the node). To support Configuration device plugin, the Instance.deviceUsage format is extended to hold the additional information, the deviceUsage can be a \"<nodename>\" (for Instance) or a \"C:<virtualdeviceid>:<nodename>\" (for Configuration). For example, the Instance.deviceUsage shows the slot my-resource-00095f-2 is used by virtual device id \"0\" of the Configuration device plugin on node-b. The slot my-resource-00095f-3 is used by Instance device plugin on node-a. The other 3 slots are" }, { "data": "``` deviceUsage: my-resource-00095f-0: \"\" my-resource-00095f-1: \"\" my-resource-00095f-2: \"C:0:node-b\" my-resource-00095f-3: \"node-a\" my-resource-00095f-4: \"\"``` The Akri Agent and Discovery Handlers enable device discovery and Kubernetes resource creation: they discover devices, create Kubernetes resources to represent the devices, and ensure only capacity containers are using a device at once via the device plugin framework. The Akri Controller eases device use. If a broker is specified in a Configuration, the Controller will automatically deploy Kubernetes Pods or Jobs to discovered devices. 
Currently the Controller only supports two deployment strategies: either deploying a non-terminating Pod (that Akri calls a \"broker\") to each Node that can see a device or deploying a single Job to the cluster for each device discovered. There are plenty of scenarios that do not fit these two strategies such as a ReplicaSet like deployment of n number of Pods to the cluster. With Configuration-level resources, users could easily achieve their own scenarios without the Akri Controller, as selecting resources is more declarative. A user specifies in a resource request how many OPC UA servers are needed rather than needing to delineate the exact ones already discovered by Akri, as explained in Akri's current documentation on requesting Akri resources. For example, with Configuration-level resources, the following Deployment could be applied to a cluster: ``` apiVersion: \"apps/v1\" kind: Deployment metadata: name: onvif-broker-deployment spec: replicas: 2 selector: matchLabels: name: onvif-broker template: metadata: labels: name: onvif-broker spec: containers: name: nginx image: \"nginx:latest\" resources: requests: \"akri.sh/akri-onvif\": \"2\" limits: \"akri.sh/akri-onvif\": \"2\"``` Pods will only be successfully scheduled to a Node and run if the resources exist and are available. In the case of the above scenario, if there were two cameras on the network, two Pods would be deployed to the cluster. If there are not enough resources, say there is only one camera on the network, the two Pods will be left in a Pending state until another is discovered. This is the case with any deployment on Kubernetes where there are not enough resources. However, Pending Pods do not use up cluster resources. Last updated 7 months ago Was this helpful?" } ]
{ "category": "Provisioning", "file_name": "controller-in-depth.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "To enable a deeper understanding of the state of an Akri deployment and Node resource usage by Akri containers, Akri exposes metrics with Prometheus. This document will cover: Installing Prometheus Enabling Prometheus with Akri Visualizing metrics with Grafana Akri's currently exposed metrics Exposing metrics from an Akri Broker Pod In order to expose Akri's metrics, Prometheus must be deployed to your cluster. If you already have Prometheus running on your cluster, you can skip this step. Prometheus is comprised of many components. Instead of manually deploying all the components, the entire kube-prometheus stack can be deployed via its Helm chart. It includes the Prometheus operator, node exporter, built in Grafana support, and more. Get the kube-prometheus stack Helm repo. ``` helm repo add prometheus-community https://prometheus-community.github.io/helm-charts helm repo update``` Install the chart, specifying what namespace you want Prometheus to run in. It does not have to be the same namespace in which you are running Akri. For example, it may be in a namespace called monitoring as in the command below. By default, Prometheus only discovers PodMonitors within its namespace. This should be disabled by settingpodMonitorSelectorNilUsesHelmValues to false so that Akri's custom PodMonitors can be discovered. Additionally, the Grafana service can be exposed to the host by making it a NodePort service. It may take a minute or so to deploy all the components. ``` helm install prometheus prometheus-community/kube-prometheus-stack \\ --set grafana.service.type=NodePort \\ --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \\ --namespace monitoring``` The Prometheus dashboard can also be exposed to the host by adding --set prometheus.service.type=NodePort. If intending to expose metrics from a Broker Pod via a ServiceMonitor also set serviceMonitorSelectorNilUsesHelmValues to false. The Akri Controller and Agent publish metrics to port 8080 at a /metrics endpoint. However, these cannot be accessed by Prometheus without creating PodMonitors, which are custom resources that tell Prometheus which Pods to monitor. These components can all be automatically created and deployed via Helm by setting --set prometheus.enabled=true when installing Akri. Install Akri and expose the Controller and Agent's metrics to Prometheus by running: Note: See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set prometheus.enabled=true``` This documentation assumes you are using vanilla Kubernetes. Be sure to reference the user guide to determine whether the distribution you are using requires crictl path configuration. Now that Akri's metrics are being exposed to Prometheus, they can be visualized in Grafana. Determine the port that the Grafana Service is running on, specifying the namespace if necessary, and save it for the next step. ``` kubectl get service/prometheus-grafana --namespace=monitoring --output=jsonpath='{.spec.ports[?(@.name==\"service\")].nodePort}' && echo``` SSH port forwarding can be used to access Grafana. Open a new terminal, and enter your ssh command to access the machine running Akri and Prometheus followed by the port forwarding request. The following command will use port 50000 on the host. Feel free to change it if it is not available. 
Be sure to replace <Grafana Service port> with the port number outputted in the previous step. ``` ssh someuser@<IP address> -L 50000:localhost:<Grafana Service port>``` Navigate to http://localhost:50000/ and enter Grafana's default username admin and password prom-operator. Once logged in, the username and password can be changed in account settings. Now, you can create a Dashboard to display the Akri metrics. Akri uses the Rust Prometheus client library to expose" }, { "data": "metrics. It exposes all the default process metrics, such as Agent or Controller total CPU time usage (process_cpu_seconds_total) and RAM usage (process_resident_memory_bytes), along with the following custom metrics, all of which are prefixed with akri_. | Metric Name | Metric Type | Metric Source | Labels | |:-|:--|:-|:-| | akri_instance_count | IntGaugeVec | Agent | Configuration, shared | | akri_discovery_response_result | IntCounterVec | Agent | Discovery Handler name, response result (Success/Fail) | | akri_discovery_response_time | HistogramVec | Agent | Configuration | | akri_broker_pod_count | IntGaugeVec | Controller | Configuration, Node | Metrics can also be published by Broker Pods and exposed to Prometheus. This workflow is not unique to Akri and is equivalent to exposing metrics from any deployment to Prometheus. Using the appropriate Prometheus client library for your broker, expose some metrics. Then, deploy a Service to expose the metrics, specifying the name of the associated Akri Configuration as a selector (akri.sh/configuration: <Akri Configuration>), since the Configuration name is added as a label to all the Broker Pods by the Akri Controller. Finally, deploy a ServiceMonitor that selects for the previously mentioned service. This tells Prometheus which service(s) to discover. As an example, an akri_frame_count metric has been created in the sample udev-video-broker. Like the Agent and Controller, it publishes both the default process metrics and the custom akri_frame_count metric to port 8080 at a /metrics endpoint. Akri can be installed with the udev Configuration, filtering for only USB video cameras and specifying a Configuration name of akri-udev-video, by running: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set udev.enabled=true \\ --set udev.name=akri-udev-video \\ --set udev.udevRules[0]='KERNEL==\"video[0-9]*\"\\, ENV{IDV4LCAPABILITIES}==\":capture:\"' \\ --set udev.brokerPod.image.repository=\"ghcr.io/project-akri/akri/udev-video-broker\"``` Note: This instruction assumes you are using vanilla Kubernetes. Be sure to reference the user guide to determine whether the distribution you are using requires crictl path configuration. Note: To expose the Agent and Controller's Prometheus metrics, add --set prometheus.enabled=true. Note: If Prometheus is running in a different namespace than Akri and was not enabled to discover ServiceMonitors in other namespaces when installed, upgrade your Prometheus Helm installation to set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues to false. 
``` helm upgrade prometheus prometheus-community/kube-prometheus-stack \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set grafana.service.type=NodePort \\ --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \\ --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false \\ --namespace monitoring``` Then, create a Service for exposing these metrics, targeting all Pods labeled with the Configuration name akri-udev-video. ``` apiVersion: v1 kind: Service metadata: name: akri-udev-video-broker-metrics labels: app: akri-udev-video-broker-metrics spec: selector: akri.sh/configuration: akri-udev-video ports: name: metrics port: 8080 type: ClusterIP``` The metrics also could have been exposed by adding the metrics port to the Configuration level service in the udev Configuration. Apply the Service to your cluster. ``` kubectl apply -f akri-udev-video-broker-metrics-service.yaml``` Create the associated ServiceMonitor. Note how the selector matches the app name of the Service. ``` apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: akri-udev-video-broker-metrics labels: release: prometheus spec: selector: matchLabels: app: akri-udev-video-broker-metrics endpoints: port: metrics``` Apply the ServiceMonitor to your cluster. ``` kubectl apply -f akri-udev-video-broker-metrics-service-monitor.yaml``` The frame count metric reports the number of video frames that have been requested by some application. It will remain at zero unless an application is deployed that utilizes the video Brokers. Deploy the Akri sample streaming application by running the following: ``` kubectl apply -f https://raw.githubusercontent.com/project-akri/akri/main/deployment/samples/akri-video-streaming-app.yaml watch kubectl get pods``` Last updated 8 months ago Was this helpful?" } ]
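Once metrics are flowing, a few example PromQL queries can back a Grafana panel. These are illustrative suggestions rather than queries from the original guide; the metric names are written with their underscores (e.g. akri_frame_count), and the label keys (configuration, node) are assumptions based on the table above:
```
# Devices currently discovered, per Configuration
sum(akri_instance_count) by (configuration)

# Broker Pods deployed by the Controller, per Node
sum(akri_broker_pod_count) by (node)

# Frames served per second by the sample udev video brokers
rate(akri_frame_count[5m])
```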
{ "category": "Provisioning", "file_name": "debugging.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "The Akri Agent executes on all worker Nodes in the cluster. It is primarily tasked with: Handling resource availability changes Enabling resource sharing These two tasks enable Akri to find configured resources (leaf devices), expose them to the Kubernetes cluster for workload scheduling, and allow resources to be shared by multiple Nodes. The first step in handling resource availability is determining what resources (leaf devices) to look for. This is accomplished by finding existing Configurations and watching for changes to them. Once the Akri Agent understands what resources to look for (via Configuration.discovery_handler), it will find any resources that are visible. For each resource that is found: An Instance is created and uploaded to etcd A connection with the kubelet is established according to the Kubernetes Device Plugin framework. This connection is used to convey availability changes to the kubelet. The kubelet will, in turn, expose these availability changes to the Kubernetes scheduler. Each protocol will periodically reassess what resources are visible and update both the Instance and the kubelet with the current availability. This process allows Akri to dynamically represent resources that appear and disappear. To enable resource sharing, the Akri Agent creates and updates the Instance.deviceUsage map and communicates with kubelet. The Instance.deviceUsage map is used to coordinate between Nodes. The kubelet communication allows Akri Agent to communicate any resource availability changes to the Kubernetes scheduler. For more detailed information, see the in-depth resource sharing doc. Akri Agent also exposes all discovered resources at Configuration level. Configuration level resources can be referred by the name of Configuration so Configuration name can be used to requst resources without the need to know the specific Instances id to request. Agent will behind the scenes do the work of selecting which Instances to reserve. For more detailed information about Configuration level resource, see the Configuration-level resources doc. The Agent discovers resources via Discovery Handlers (DHs). A Discovery Handler is anything that implements the DiscoveryHandler service defined in discovery.proto. In order to be utilized, a DH must register with the Agent, which hosts the Registration service defined in discovery.proto. The Agent maintains a list of registered DHs and their connectivity statuses, which is either Waiting, Active, or Offline(Instant). When registered, a DH's status is Waiting. Once a Configuration requesting resources discovered by a DH is applied to the Akri-enabled cluster, the Agent will create a connection with the DH requested in the Configuration and set the status of the DH to Active. If the Agent is unable to connect or loses a connection with a DH, its status is set to Offline(Instant). The Instant marks the time at which the DH became unresponsive. If the DH has been offline for more than 5 minutes, it is removed from the Agent's list of registered Discovery" }, { "data": "If a Configuration is deleted, the Agent drops the connection it made with all DHs for that Configuration and marks the DHs' statuses as Waiting. Note, while probably not commonplace, the Agent allows for multiple DHs to be registered for the same protocol. IE: you could have two udev DHs running on a node on different sockets. The Agent's registration service defaults to running on the socket /var/lib/akri/agent-registration.sock but can be Configured with Helm. 
While Discovery Handlers must register with this service over UDS, the Discovery Handler's service can run over UDS or an IP based endpoint. Supported Rust DHs each have a library and a binary implementation. This allows them to either be run within the Agent binary or in their own Pod. Reference the Discovery Handler development document to learn how to implement a Discovery Handler. In addition to the discoveryDetails in Configuration that sets details for narrowing the Discovery Handlers' search, the discoveryProperties can be used to pass additional information to Discovery Handler. One of scenarios that can leverage discoveryProperties is to pass credential data to Discovery Handlers to perform authenticated resource discovery. It is common for a device to require authentication in order to access its properties. The Discovery Handler then need these credentials to properly discover and filter the device. The credential data can be placed in discoverProperties, if it is specified in Configuration, Agent reads the content and generate a list of string key-value pair properties and pass the list to Discovery Handler along with discoveryDetails. Agent supports plain text, K8s secret and configMap in the schema of discoverProperies. An example below shows how each type of property is specified in discoveryProperties. The name of property is required and needs to be in C_IDENTIFIER format. The value can be specified by value or valueFrom. For value specified by valueFrom, it can be from secret or configMap. The optional attribute is default to false, it means if the data doesn't exist (in the secret or configMap), the Configuration deployment will fail. If optional is true, Agent will ignore the entry if the data doesn't exist, and pass all exist properties to Discovery Handler, the Configuration deployment will success. ``` discoveryProperties: name: propertyfromplain_text value: plain text data name: propertyfromsecret valueFrom: secretKeyRef: name: mysecret namespace: mysecret-namespace key: secret-key optional: false name: propertyfromconfigmap valueFrom: configMapKeyRef: name: myconfigMap namespace: myconfigmap-namespace key: configmap-key optional: true``` For the example above, with the content of secret and configMap. ``` apiVersion: v1 kind: Secret metadata: name: mysecret namespace: mysecret-namespace type: Opaque stringData: secret-key: \"secret1\" apiVersion: v1 kind: ConfigMap metadata: name: myconfigMap namespace: myconfigmap-namespace data: configmap-key: \"configmap1\"``` Agent read all properties and pass the string key-value pair list to Discovery Handle. ``` \"propertyfromplain_text\": plain text data \"propertyfromsecret\": \"secret1\" \"propertyfromconfigmap\": \"configmap1\"``` Last updated 8 months ago Was this helpful?" } ]
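As a convenience (not part of the original doc), the Secret referenced above could equivalently be created imperatively; the ConfigMap can be created the same way with kubectl create configmap:
```
kubectl create namespace mysecret-namespace
kubectl create secret generic mysecret \
  --namespace mysecret-namespace \
  --from-literal=secret-key=secret1
```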
{ "category": "Provisioning", "file_name": "cluster-setup.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Akri is hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox project. Akri is a Kubernetes Resource Interface that lets you easily expose heterogeneous leaf devices (such as IP cameras and USB devices) as resources in a Kubernetes cluster, while also supporting the exposure of embedded hardware resources such as GPUs and FPGAs. Akri continually detects nodes that have access to these devices and schedules workloads based on them. Simply put: you name it, Akri finds it, you use it. At the edge, there are a variety of sensors, controllers, and MCU class devices that are producing data and performing actions. For Kubernetes to be a viable edge computing solution, these heterogeneous leaf devices need to be easily utilized by Kubernetes clusters. However, many of these leaf devices are too small to run Kubernetes themselves. Akri is an open source project that exposes these leaf devices as resources in a Kubernetes cluster. It leverages and extends the Kubernetes device plugin framework, which was created with the cloud in mind and focuses on advertising static resources such as GPUs and other system hardware. Akri took this framework and applied it to the edge, where there is a diverse set of leaf devices with unique communication protocols and intermittent availability. Akri is made for the edge, handling the dynamic appearance and disappearance of leaf devices. Akri provides an abstraction layer similar to CNI, but instead of abstracting the underlying network details, it is removing the work of finding, utilizing, and monitoring the availability of the leaf device. An operator simply has to apply a Akri Configuration to a cluster, specifying the Discovery Handler (say ONVIF) that should be used to discover the devices and the Pod that should be deployed upon discovery (say a video frame server). Then, Akri does the rest. An operator can also allow multiple nodes to utilize a leaf device, thereby providing high availability in the case where a node goes offline. Furthermore, Akri will automatically create a Kubernetes service for each type of leaf device (or Akri Configuration), removing the need for an application to track the state of pods or nodes. Most importantly, Akri was built to be extensible. Akri currently supports ONVIF, udev, and OPC UA Discovery Handlers, but more can be easily added by community members like you. The more protocols Akri can support, the wider an array of leaf devices Akri can discover. We are excited to work with you to build a more connected edge. Akri's documentation is divided into six sections: User Guide: Documentation for Akri users. Discovery Handlers: Documentation on how to configure Akri using Akri's currently supported Discovery Handlers Demos: End-to-End demos that demostrate how Akri can discover and use devices. Contain sample brokers and end applications. Architecture: Documentation that details the design and implementation of Akri's components. Development: Documentation for Akri developers or how to build, test, and extend Akri. Community: Information on what's next for Akri and how to get involved! The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page Last updated 3 months ago Was this helpful?" } ]
{ "category": "Provisioning", "file_name": "development.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "To best understand the benefits of Akri and jump into using it, we recommend you start off by completing the end to end demo. In the demo, you will see Akri discover mock video cameras and a streaming app display the footage from those cameras. It includes instructions on K8s cluster setup. To get started using Akri, you must first decide what you want to discover and whether Akri currently supports a Discovery Handler that can be used to discover resources of that type. Akri discovers devices via Discovery Handlers, which are often protocol implementations that understand filter information passed via an Akri Configuration. To see the list of currently supported Discovery Handlers, see our roadmap. Akri is most easily deployed with Helm charts. Helm charts provide convenient packaging and configuration. Starting in v0.0.36, an akri-dev Helm chart will be published for each build version. Each Akri build is verified with end-to-end tests on Kubernetes, K3s, and MicroK8s. These builds may be less stable than our Releases. You can deploy these versions of Akri with this command (note: akri-dev): ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri-dev \\ $AKRIHELMCRICTL_CONFIGURATION``` Note: See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION Starting in Release v0.0.44, an akri Helm chart will be published for each Release. Releases will generally reflect milestones and will have more rigorous testing. You can deploy Release versions of Akri with this command (note: akri): ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION``` To use the latest containers of the Akri components, add --set useLatestContainers=true when installing Akri like so: ``` helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set useLatestContainers=true``` Before v0.4.0, all of Akri's Discovery Handlers were embedded in the Agent. As more Discovery Handlers are added to Akri, this will become unsustainable and cause the Agent to have a larger footprint than oftentimes necessary (if only one of the many Discovery Handlers is being leveraged). Starting in v0.4.0, Akri is starting the transition to mainly supporting an Agent image without any embedded Discovery Handlers, which will be the image used by Akri's Helm chart by default. The required Discovery Handlers can be deployed as DaemonSets by setting <discovery handler name>.discovery.enabled=true when installing Akri, as explained in the user flow. To instead use the previous strategy of an Agent image with embedded udev, OPC UA, and ONVIF Discovery Handlers, set agent.full=true. To see which version of the akri and akri-dev Helm charts are stored locally, run helm inspect chart akri-helm-charts/akri and helm inspect chart akri-helm-charts/akri-dev, respectively. To grab the latest Akri Helm charts, run helm repo update. Before deploying Akri, you must have a Kubernetes cluster (v1.16 or higher) running with kubectl and Helm installed. Reference our cluster setup documentation to set up a cluster or adapt your currently existing cluster. Akri currently supports Linux Nodes on amd64, arm64v8, or" }, { "data": "Akri is installed using its Helm Chart, which contains settings for deploying the Akri Agents, Controller, Discovery Handlers, and Configurations. 
All these can be installed in one command, in several different Helm installations, or via consecutive helm upgrades. This section will focus on the latter strategy, helping you construct your Akri installation command, assuming you have already decided what you want Akri to discover. Akri's Helm chart deploys the Akri Controller and Agent by default, so you only need to specify which Discovery Handlers and Configurations need to be deployed in your command. Akri discovers devices via Discovery Handlers, which are often protocol implementations. Akri currently supports three Discovery Handlers (udev, OPC UA and ONVIF); however, custom discovery handlers can be created and deployed as explained in Akri's Discovery Handler development document. Akri is told what to discover via Akri Configurations, which specify the name of the Discovery Handler that should be used, any discovery details (such as filters) that need to be passed to the Discovery Handler, and optionally any broker Pods and services that should be created upon discovery. For example, the ONVIF Discovery Handler can receive requests to include or exclude cameras with certain IP addresses. Let's walk through building an Akri installation command: Get Akri's Helm repo ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/``` Install Akri's Controller and Agent, specifying the crictl configuration from the cluster setup steps if not using vanilla Kubernetes: ``` helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION``` Note: To use Akri's latest dev releases, specify akri-helm-charts/akri-dev Upgrade the installation to deploy the Discovery Handler you wish to use. Discovery Handlers are deployed as DaemonSets like the Agent when <discovery handler name>.discovery.enabled is set. ``` helm upgrade akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set <discovery handler name>.discovery.enabled=true``` Note: To install a full Agent with embedded udev, OPC UA, and ONVIF Discovery Handlers, set agent.full=true instead of enabling the Discovery Handlers. Note, this will restart the Agent Pods. ``` helm upgrade akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set agent.full=true``` Upgrade the installation to apply a Configuration, which requests discovery of certain devices by a Discovery Handler. A Configuration is applied by setting <discovery handler name>.configuration.enabled. While some Configurations may not require any discovery details to be set, oftentimes setting details is preferable for narrowing the Discovery Handlers' search. These are set under <discovery handler name>.configuration.discoveryDetails. For example, udev rules are passed to the udev Discovery Handler to specify which devices in the Linux device file system it should search for by setting udev.configuration.discoveryDetails.udevRules. Akri can be instructed to automatically deploy workloads called \"brokers\" to each discovered device by setting a broker Pod image in a Configuration via --set <protocol>.configuration.brokerPod.image.repository=<your broker image>. Learn more about creating brokers in the broker development document. 
``` helm upgrade akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set <discovery handler name>.discovery.enabled=true \\ --set <discovery handler name>.configuration.enabled=true \\ Installation could have been done in one step rather than a series of upgrades: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set <discovery handler name>.discovery.enabled=true \\ --set <discovery handler" }, { "data": "\\ As a real example, Akri's Controller, Agents, udev Discovery Handlers, and a udev Configuration that specifies the discovery of only USB video devices and an Nginx broker Pod image are installed like so: ``` helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set udev.discovery.enabled=true \\ --set udev.configuration.enabled=true \\ --set udev.configuration.discoveryDetails.udevRules[0]='KERNEL==\"video[0-9]*\"\\, ENV{IDV4LCAPABILITIES}==\":capture:\"' \\ --set udev.configuration.brokerPod.image.repository=nginx``` Note: set <discovery handler name>.brokerPod.image.tag to specify an image tag (defaults to latest). A terminating BusyBox Job broker could have been specified instead by setting the image of the brokerJob instead of the brokerPod. ``` helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set udev.discovery.enabled=true \\ --set udev.configuration.enabled=true \\ --set udev.configuration.discoveryDetails.udevRules[0]='KERNEL==\"video[0-9]*\"\\, ENV{IDV4LCAPABILITIES}==\":capture:\"' \\ --set udev.configuration.brokerJob.image.repository=busybox``` This installation can be expanded to install multiple Discovery Handlers and/or Configurations. See the documentation on udev, OPC UA, and ONVIF Configurations to learn more about setting the discovery details passed to their Discovery Handlers and more. See modifying an Akri Installation to learn about how to use Akri's Helm chart to install additional Configurations and Discovery Handlers. Run kubectl get crd, and you should see Akri's two CRDs listed. Run kubectl get pods -o wide, and you should see the Akri Controller, Agent, and (if specified) broker pods. Run kubectl get akric, and you should see the Configuration for the protocol you specified. If devices were discovered, the instances can be seen by running kubectl get akrii and further inspected by running kubectl get akrii <discovery handler name>-<ID> -o yaml. List all that Akri has automatically created and deployed, namely the Akri Controller, Agents, Configurations, Instances (which are the Akri custom resource that represents each device), and if specified, broker Pods, a service for each broker Pod, and a service for all brokers. ``` watch microk8s kubectl get pods,akric,akrii,services -o wide``` For K3s and vanilla Kubernetes ``` watch kubectl get pods,akric,akrii,services -o wide``` Deleting Akri Configurations To tell Akri to stop discovering devices, simply delete the Configuration that initiated the discovery. Watch as all instances that represent the discovered devices are deleted. ``` kubectl delete akric akri-<discovery handler name> kubectl get akrii``` If you are done using Akri, it can be uninstalled via Helm. ``` helm delete akri``` Delete Akri's CRDs. ``` kubectl delete crd instances.akri.sh kubectl delete crd configurations.akri.sh``` By default the Controller can be deployed to any control plane or worker node. This can be changed by adding extra settings when installing Akri below. 
If you don't want the Controller to ever be scheduled to control plane nodes, add --set controller.allowOnControlPlane=false to your install command below. Conversely, if you only want the Controller to run on control plane nodes, add --set controller.onlyOnControlPlane=true. This will guarantee the Controller only runs on nodes with the label (key, value) of (node-role.kubernetes.io/master, \"\"), which is the default label for control plane nodes in Kubernetes. However, control plane nodes on MicroK8s and K3s/RKE2 may not have this exact label by default, so you can add it by running kubectl label node ${HOSTNAME,,} node-role.kubernetes.io/master= --overwrite=true. Alternatively, in K3s/RKE2, you can keep the default label value on the master and set controller.nodeSelectors.\"node-role\\.kubernetes\\.io/master\"=true." } ]
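Putting the placement settings above into a concrete command, an install that keeps the Controller off control plane nodes might look like the following (illustrative; combine it with whichever Discovery Handler and Configuration flags you need):
```
helm install akri akri-helm-charts/akri \
  $AKRIHELMCRICTL_CONFIGURATION \
  --set controller.allowOnControlPlane=false
```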
{ "category": "Provisioning", "file_name": "configuration-level-resource-in-depth.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "In this guide, we will walk through using Akri to discover mock USB cameras attached to nodes in a Kubernetes cluster. You'll see how Akri automatically deploys workloads to pull frames from the cameras. We will then deploy a streaming application that will point to services automatically created by Akri to access the video frames from the workloads. The following will be covered in this demo: Setting up mock udev video devices Setting up a cluster Installing Akri via Helm with settings to create your Akri udev Configuration Inspecting Akri Deploying a streaming application Cleanup Going beyond the demo Acquire an Ubuntu 20.04 LTS, 18.04 LTS or 16.04 LTS environment to run the commands. This demo assumes that the VM being used supports the proper kernel modules, which may not be the case if using a cloud-based VM which sometimes have been slimmed down to remove unnecessary modules such as for USB devices. For example, on an Ubuntu 20.04 VM in Azure, the following prerequisite step is needed to add the necessary kernel modules: ``` sudo apt update sudo apt -y install linux-modules-extra-azure``` Note: There are also guides Akri's HackMD for running the demo on DigitalOcean and Google Compute Engine (and you can skip the rest of the steps in this document). Note, these guides are unmaintained and may not be up to date. To setup fake usb video devices, install the v4l2loopback kernel module and its prerequisites. Learn more about v4l2 loopback here ``` sudo apt update sudo apt -y install linux-headers-$(uname -r) sudo apt -y install linux-modules-extra-$(uname -r) sudo apt -y install dkms curl http://deb.debian.org/debian/pool/main/v/v4l2loopback/v4l2loopback-dkms0.12.5-1all.deb -o v4l2loopback-dkms0.12.5-1all.deb sudo dpkg -i v4l2loopback-dkms0.12.5-1all.deb``` Note When running on Ubuntu 20.04 LTS, 18.04 LTS or 16.04 LTS, do NOT install v4l2loopback through sudo apt install -y v4l2loopback-dkms, you will get an older version (0.12.3). 0.12.5-1 is required for gstreamer to work properly. Note: If not able to install the debian package of v4l2loopback due to using a different Linux kernel, you can clone the repo, build the module, and setup the module dependencies like so: ``` git clone https://github.com/umlaeute/v4l2loopback.git cd v4l2loopback make & sudo make install sudo make install-utils sudo depmod -a``` \"Plug-in\" two cameras by inserting the kernel module. To create different number video devices modify the video_nr argument. ``` sudo modprobe v4l2loopback exclusivecaps=1 videonr=1,2``` Confirm that two video device nodes (video1 and video2) have been created. ``` ls /dev/video*``` Install the necessary Gstreamer packages. ``` sudo apt-get install -y \\ libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-base \\ gstreamer1.0-plugins-good gstreamer1.0-libav``` Now that our cameras are set up, lets use Gstreamer to pass fake video streams through them. ``` mkdir camera-logs sudo gst-launch-1.0 -v videotestsrc pattern=ball ! \"video/x-raw,width=640,height=480,framerate=10/1\" ! avenc_mjpeg ! v4l2sink device=/dev/video1 > camera-logs/ball.log 2>&1 & sudo gst-launch-1.0 -v videotestsrc pattern=smpte horizontal-speed=1 ! \"video/x-raw,width=640,height=480,framerate=10/1\" ! avenc_mjpeg ! 
v4l2sink device=/dev/video2 >" }, { "data": "2>&1 &``` Note: If this generates an error, be sure that there are no existing video streams targeting the video device nodes by running the following and then re-running the previous command: ``` if pgrep gst-launch-1.0 > /dev/null; then sudo pkill -9 gst-launch-1.0 fi``` Reference our cluster setup documentation to set up a cluster for this demo. For ease of setup, only create single-node cluster, so if installing K3s or MicroK8s, you can skip the last step of the installation instructions of adding additional nodes. If you have an existing cluster, feel free to leverage it for the demo. This documentation assumes you are using a single-node cluster; however, you can certainly use a multi-node cluster. You will see additional Akri Agents and Discovery Handlers deployed when inspecting the Akri installation. Note, if using MicroK8s, enable privileged Pods, as the udev video broker pods run privileged to easily grant them access to video devices. More explicit device access could have been configured by setting the appropriate security context in the broker PodSpec in the Configuration. You tell Akri what you want to find with an Akri Configuration, which is one of Akri's Kubernetes custom resources. The Akri Configuration is simply a yaml file that you apply to your cluster. Within it, you specify three things: a Discovery Handler any additional device filtering an image for a Pod (that we call a \"broker\") that you want to be automatically deployed to utilize each discovered device For this demo, we will specify Akri's udev Discovery Handler, which is used to discover devices in the Linux device file system. Akri's udev Discovery Handler supports filtering by udev rules. We want to find all mock USB cameras in the Linux device file system, which can be specified with a simple udev rule KERNEL==\"video[0-9]*\". It matches name of the mock USB cameras. Note, when real USB cameras are used, the filtering udev rule can be more precise to avoid mistaken device match. For example, a better rule is KERNEL==\"video[0-9]\"\\, ENV{ID_V4L_CAPABILITIES}==\":capture:\" that adds a criteria on device capability. We may go further by adding criteria such as vendor name. An example is KERNEL==\"video[0-9]\"\\, ENV{IDV4LCAPABILITIES}==\":capture:\"\\, ENV{ID_VENDOR}==\"Great Vendor\". In order to write correct rule, check output of \"udevadm\" command for USB cameras. A example is \"udevadm info --query=all --name=video1\". a broker Pod image, we will use a sample container that Akri has provided that pulls frames from the cameras and serves them over gRPC. All of Akri's components can be deployed by specifying values in its Helm chart during an installation. Instead of having to build a Configuration from scratch, Akri has provided Helm templates for Configurations for each supported Discovery Handler. Lets customize the generic udev Configuration Helm template with our three specifications above. We can also set the name for the Configuration to be akri-udev-video. Also, if using MicroK8s or K3s, configure the crictl path and socket using the AKRIHELMCRICTL_CONFIGURATION variable created when setting up your cluster. In order for the Agent to know how to discover video devices, the udev Discovery Handler must exist. Akri supports an Agent image that includes all supported Discovery" }, { "data": "This Agent will be used if agent.full=true is set. 
By default, a slim Agent without any embedded Discovery Handlers is deployed and the required Discovery Handlers can be deployed as DaemonSets. This demo will use that strategy, deploying the udev Discovery Handlers by specifying udev.discovery.enabled=true when installing Akri. Add the Akri Helm chart and run the install command, setting Helm values as described above. Note: See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set udev.discovery.enabled=true \\ --set udev.configuration.enabled=true \\ --set udev.configuration.name=akri-udev-video \\ --set udev.configuration.discoveryDetails.udevRules[0]='KERNEL==\"video[0-9]*\"' \\ --set udev.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/udev-video-broker\"``` After installing Akri, since the /dev/video1 and /dev/video2 devices are running on this node, the Akri Agent will discover them and create an Instance for each camera. List all that Akri has automatically created and deployed, namely Akri Configuration we created when installing Akri, two Instances (which are the Akri custom resource that represents each device), two broker Pods (one for each camera), a service for each broker Pod, a service for all brokers, the Controller Pod, Agent Pod, and the udev Discovery Handler Pod. ``` watch microk8s kubectl get pods,akric,akrii,services -o wide``` For K3s and vanilla Kubernetes ``` watch kubectl get pods,akric,akrii,services -o wide``` Look at the Configuration and Instances in more detail. Inspect the Configuration that was created via the Akri udev Helm template and values that were set when installing Akri by running the following. ``` kubectl get akric -o yaml``` Inspect the two Instances. Notice that in the brokerProperties of each instance, you can see the device nodes (/dev/video1 or /dev/video2) that the Instance represents. The brokerProperties of an Instance are set as environment variables in the broker Pods that are utilizing the device the Instance represents. This told the broker which device to connect to. We can also see in the Instance a usage slot and that it was reserved for this node. Each Instance represents a device and its usage. ``` kubectl get akrii -o yaml``` If this was a shared device (such as an IP camera), you may have wanted to increase the number of nodes that could use the same device by specifying capacity. There is a capacity parameter for each Configuration, which defaults to 1. Its value could have been increased when installing Akri (via --set <discovery handler name>.configuration.capacity=2 to allow 2 nodes to use the same device) and more usage slots (the number of usage slots is equal to capacity) would have been created in the Instance. Deploying a streaming application Deploy a video streaming web application that points to both the Configuration and Instance level services that were automatically created by Akri. ``` kubectl apply -f https://raw.githubusercontent.com/project-akri/akri/main/deployment/samples/akri-video-streaming-app.yaml``` For MicroK8s ``` watch microk8s kubectl get pods``` For K3s and vanilla Kubernetes ``` watch kubectl get pods``` Determine which port the service is running on. 
Be sure to save this port number for the next" }, { "data": "``` kubectl get service/akri-video-streaming-app --output=jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}' && echo``` SSH port forwarding can be used to access the streaming application. In a new terminal, enter your ssh command to to access your VM followed by the port forwarding request. The following command will use port 50000 on the host. Feel free to change it if it is not available. Be sure to replace <streaming-app-port> with the port number outputted in the previous step. ``` ssh someuser@<Ubuntu VM IP address> -L 50000:localhost:<streaming-app-port>``` Note we've noticed issues with port forwarding with WSL 2. Please use a different terminal. Navigate to http://localhost:50000/. The large feed points to Configuration level service (udev-camera-svc), while the bottom feed points to the service for each Instance or camera (udev-camera-svc-<id>). Bring down the streaming service. ``` kubectl delete service akri-video-streaming-app kubectl delete deployment akri-video-streaming-app``` For MicroK8s ``` watch microk8s kubectl get pods``` For K3s and vanilla Kubernetes ``` watch kubectl get pods``` Delete the configuration, and watch the associated instances, pods, and services be deleted. ``` kubectl delete akric akri-udev-video``` For MicroK8s ``` watch microk8s kubectl get pods,services,akric,akrii -o wide``` For K3s and vanilla Kubernetes ``` watch kubectl get pods,services,akric,akrii -o wide``` If you are done using Akri, it can be uninstalled via Helm. ``` helm delete akri``` Delete Akri's CRDs. ``` kubectl delete crd instances.akri.sh kubectl delete crd configurations.akri.sh``` Stop video streaming from the video devices. ``` if pgrep gst-launch-1.0 > /dev/null; then sudo pkill -9 gst-launch-1.0 fi``` \"Unplug\" the fake video devices by removing the kernel module. ``` sudo modprobe -r v4l2loopback``` Plug in real cameras! You can pass environment variables to the frame server broker to specify the format, resolution width/height, and frames per second of your cameras. Apply the ONVIF Configuration and make the streaming app display footage from both the local video devices and onvif cameras. To do this, modify the video streaming yaml as described in the inline comments in order to create a larger service that aggregates the output from both the udev-camera-svc service and onvif-camera-svc service. Add more nodes to the cluster. Modify the udev rule to find a more specific subset of cameras Instead of finding all video4linux device nodes, the udev rule can be modified to exclude certain device nodes, find devices only made by a certain manufacturer, and more. For example, the rule can be narrowed by matching cameras with specific properties. To see the properties of a camera on a node, do udevadm info --query=property --name /dev/video0, passing in the proper devnode name. In this example, ID_VENDOR=Microsoft was one of the outputted properties. 
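To zero in on a single property before writing the rule, the udevadm output can be filtered; this is a small convenience on top of the command above (property names shown with their underscores):
```
udevadm info --query=property --name=/dev/video0 | grep -E 'ID_VENDOR|ID_V4L_CAPABILITIES'
```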
To only find cameras made by Microsoft, the rule can be modified like the following: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set udev.discovery.enabled=true \\ --set udev.configuration.enabled=true \\ --set udev.configuration.name=akri-udev-video \\ --set udev.configuration.discoveryDetails.udevRules[0]='KERNEL==\"video[0-9]*\"\\, ENV{IDV4LCAPABILITIES}==\":capture:\"\\, ENV{ID_VENDOR}==\"Microsoft\"' \\ --set udev.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/udev-video-broker\" ``` Discover other udev devices by creating a new udev configuration and broker. Learn more about the udev Discovery Handler Configuration here. Last updated 1 year ago Was this helpful?" } ]
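As a starting point for the "going beyond" ideas above, additional mock cameras can be created by extending the video_nr list used earlier; remove the module first if it is already loaded (an illustrative variation, not part of the original demo):
```
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback exclusive_caps=1 video_nr=1,2,3
ls /dev/video*
```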
{ "category": "Provisioning", "file_name": "development-walkthrough.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "OPC UA (Open Platform Communications Unified Architecture) is a communication protocol for industrial automation. Akri has implemented a Discovery Handler for discovering OPC UA Servers that live at specified endpoints or are registered with specified Local Discovery Servers. Background on the OPC UA Discovery Handler implementation can be found in the proposal. To try out using Akri to discover and utilize OPC UA servers, see the OPC UA end-to-end demo. All of Akri's components can be deployed by specifying values in its Helm chart during an installation. This section will cover the values that should be set to (1) deploy the OPC UA Discovery Handlers and (2) apply a Configuration that tells Akri to discover devices using that Discovery Handler. In order for the Agent to know how to discover OPC UA servers an OPC UA Discovery Handler must exist. Akri supports an Agent image that includes all supported Discovery Handlers. This Agent will be used if agent.full=true. By default, a slim Agent without any embedded Discovery Handlers is deployed and the required Discovery Handlers can be deployed as DaemonSets. This documentation will use that strategy, deploying OPC UA Discovery Handlers by specifying opcua.discovery.enabled=true when installing Akri. Instead of having to assemble your own OPC UA Configuration yaml, we have provided a Helm template. Helm allows us to parametrize the commonly modified fields in our configuration files, and we have provided many for OPC UA (to see them, run helm inspect values akri-helm-charts/akri). More information about the Akri Helm charts can be found in the user guide. To apply the OPC UA Configuration to your cluster, simply set opcua.configuration.enabled=true along with any of the following additional Configuration settings when installing Akri. Discovery Handlers are passed discovery details that are set in a Configuration to determine what to discover, filter out of discovery, and so on. The OPC UA Discovery Handler, requires a set of DiscoveryURLs to direct its search. Every OPC UA server/application has a DiscoveryEndpoint that Clients can access without establishing a session. The address for this endpoint is defined by a DiscoveryURL. A Local Discovery Server (LDS) is a unique type of OPC UA server which maintains a list of OPC UA servers that have registered with it. The generic OPC UA Configuration takes in a list of DiscoveryURLs, whether for LDSes or a specific servers and an optional list of application names to either include or exclude. By default, if no DiscoveryURLs are set, the Discovery Handler will attempt to reach out to the Local Discovery Server on its host at the default address from OPC UA Specification 12 of opc.tcp://localhost:4840/ and get the list of OPC UA servers registered with it. 
| Helm Key | Value | Default | Description | |:-|:|:|:| | opcua.configuration.discoveryDetails.discoveryUrls | array of DiscoveryURLs | [\"opc.tcp://localhost:4840/\"] | DiscoveryURLs for OPC UA Servers or Local Discovery Servers | | opcua.configuration.discoveryDetails.applicationNames.action | Include, Exclude | Exclude | filter action to take on a set of OPC UA Applications | | opcua.configuration.discoveryDetails.applicationNames.items | array of application names | empty | application names that the filter action acts upon | opcua.configuration.discoveryDetails.discoveryUrls array of DiscoveryURLs [\"opc.tcp://localhost:4840/\"] DiscoveryURLs for OPC UA Servers or Local Discovery Servers opcua.configuration.discoveryDetails.applicationNames.action Include, Exclude Exclude filter action to take on a set of OPC UA Applications" }, { "data": "array of application names empty application names that the filter action acts upon If you would like non-terminating workloads (\"broker\" Pods) to be deployed automatically to discovered devices, a broker image should be specified (under brokerPod) in the Configuration. Alternatively, if it meets your scenario, you could use the Akri frame server broker (\"ghcr.io/project-akri/akri/opcua-video-broker\"). If you would rather manually deploy pods to utilize the devices advertized by Akri, don't specify a broker pod and see our documentation on requesting resources advertized by Akri. Note only a brokerJob OR brokerPod should be specified. | Helm Key | Value | Default | Description | |:|:-|:-|:-| | opcua.configuration.brokerPod.image.repository | image string | \"\" | image of broker Pod that should be deployed to discovered devices | | opcua.configuration.brokerPod.image.tag | tag string | \"latest\" | image tag of broker Pod that should be deployed to discovered devices | | opcua.configuration.brokerPod.resources.memoryRequest | string | \"76Mi\" | the minimum amount of RAM that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | | opcua.configuration.brokerPod.resources.cpuRequest | string | \"9m\" | the minimum amount of CPU that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | | opcua.configuration.brokerPod.resources.memoryLimit | string | \"200Mi\" | the maximum amount of RAM this Pod can consume. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | | opcua.configuration.brokerPod.resources.cpuLimit | string | \"30m\" | the maximum amount of CPU this Pod can consume. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | opcua.configuration.brokerPod.image.repository image string \"\" image of broker Pod that should be deployed to discovered devices opcua.configuration.brokerPod.image.tag tag string \"latest\" image tag of broker Pod that should be deployed to discovered devices opcua.configuration.brokerPod.resources.memoryRequest string \"76Mi\" the minimum amount of RAM that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. opcua.configuration.brokerPod.resources.cpuRequest string \"9m\" the minimum amount of CPU that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. 
Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. opcua.configuration.brokerPod.resources.memoryLimit string \"200Mi\" the maximum amount of RAM this Pod can consume. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. opcua.configuration.brokerPod.resources.cpuLimit string \"30m\" the maximum amount of CPU this Pod can consume. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. If you would like terminating Jobs to be deployed automatically to discovered servers, a broker image should be specified (under brokerJob) in the Configuration. A Kubernetes Job deploys a set number of terminating Pods. Note only a brokerJob OR brokerPod should be specified. | Helm Key | Value | Default | Description | |:|:-|:-|:-| | opcua.configuration.brokerJob.image.repository | image string | \"\" | image of broker Job that should be deployed to discovered devices | | opcua.configuration.brokerJob.image.tag | tag string | \"latest\" | image tag of broker Job that should be deployed to discovered devices | |" }, { "data": "| string | \"76Mi\" | the minimum amount of RAM that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | | opcua.configuration.brokerJob.resources.cpuRequest | string | \"9m\" | the minimum amount of CPU that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | | opcua.configuration.brokerJob.resources.memoryLimit | string | \"200Mi\" | the maximum amount of RAM this Pod can consume. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | | opcua.configuration.brokerJob.resources.cpuLimit | string | \"30m\" | the maximum amount of CPU this Pod can consume. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. | | opcua.configuration.brokerJob.command | string array | Empty | command to be executed in the Pod | | opcua.configuration.brokerJob.restartPolicy | string array | OnFailure | RestartPolicy for the Job. Can either be OnFailure or Never for Jobs. | | opcua.configuration.brokerJob.backoffLimit | number | 2 | defines the Kubernetes Job backoff failure policy | | opcua.configuration.brokerJob.parallelism | number | 1 | defines the Kubernetes Job parallelism | | opcua.configuration.brokerJob.completions | number | 1 | defines the Kubernetes Job completions | opcua.configuration.brokerJob.image.repository image string \"\" image of broker Job that should be deployed to discovered devices opcua.configuration.brokerJob.image.tag tag string \"latest\" image tag of broker Job that should be deployed to discovered devices opcua.configuration.brokerJob.resources.memoryRequest string \"76Mi\" the minimum amount of RAM that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. opcua.configuration.brokerJob.resources.cpuRequest string \"9m\" the minimum amount of CPU that must be available to this Pod for it to be scheduled by the Kubernetes Scheduler. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. opcua.configuration.brokerJob.resources.memoryLimit string \"200Mi\" the maximum amount of RAM this Pod can consume. Default based on the Akri OPC UA sample broker. 
Adjust to the size of your broker. opcua.configuration.brokerJob.resources.cpuLimit string \"30m\" the maximum amount of CPU this Pod can consume. Default based on the Akri OPC UA sample broker. Adjust to the size of your broker. opcua.configuration.brokerJob.command string array Empty command to be executed in the Pod opcua.configuration.brokerJob.restartPolicy string array OnFailure RestartPolicy for the Job. Can either be OnFailure or Never for Jobs. opcua.configuration.brokerJob.backoffLimit number 2 defines the Kubernetes Job backoff failure policy opcua.configuration.brokerJob.parallelism number 1 defines the Kubernetes Job parallelism opcua.configuration.brokerJob.completions number 1 defines the Kubernetes Job completions See Mounting OPC UA credentials to enable security for more details on how to use this setting. | Helm Key | Value | Default | Description | |:--|:|:-|:--| | opcua.configuration.mountCertificates | true, false | False | specify whether to mount a secret named opcua-broker-credentials into the OPC UA brokers | opcua.configuration.mountCertificates true, false false specify whether to mount a secret named opcua-broker-credentials into the OPC UA brokers By default, if a broker Pod is specified, the generic OPC UA Configuration will create services for all the brokers of a specific Akri Instance and all the brokers of an Akri Configuration. The creation of these services can be disabled. | Helm Key | Value | Default | Description | |:--|:|:-|:-| |" }, { "data": "| true, false | True | a service should be automatically created for each broker Pod | | opcua.configuration.createConfigurationService | true, false | True | a single service should be created for all brokers of a Configuration | opcua.configuration.createInstanceServices true, false true a service should be automatically created for each broker Pod opcua.configuration.createConfigurationService true, false true a single service should be created for all brokers of a Configuration By default, if a broker Pod is specified, a single broker Pod is deployed to each device. To modify the Configuration so that an OPC UA server is accessed by more or fewer nodes via broker Pods, update the opcua.configuration.capacity setting to reflect the correct number. For example, if your high availability needs are met by having 1 redundant pod, you can update the Configuration like this by setting opcua.configuration.capacity=2. | Helm Key | Value | Default | Description | |:--|:--|-:|:--| | opcua.configuration.capacity | number | 1 | maximum number of brokers that can be deployed to utilize a device (up to 1 per Node) | opcua.configuration.capacity number 1 maximum number of brokers that can be deployed to utilize a device (up to 1 per Node) Leveraging the above settings, Akri can be installed with the OPC UA Discovery Handler and an OPC UA Configuration that specifies discovery via the default LDS DiscoveryURL: Note: See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true``` If you have a workload that you would like to automatically be deployed to each discovered server, specify the workload image when installing Akri. As an example, the installation below will deploy an empty nginx pod for each server. 
Instead, you should point to your image, say ghcr.io/<USERNAME>/opcua-broker. ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.brokerPod.image.repository=nginx``` Note: set opcua.configuration.brokerPod.image.tag to specify an image tag (defaults to latest). The following installation examples show how the OPC UA Configuration can be tailored to your cluster:
- Specifying the DiscoveryURLs for OPC UA Local Discovery Servers
- Specifying the DiscoveryURLs for specific OPC UA servers
- Specifying the DiscoveryURLs for both Local Discovery Servers and servers
- Filtering the servers by application name
- Mounting OPC UA credentials to enable security
If no DiscoveryURLs are passed as Helm values, the default DiscoveryURL for LocalDiscoveryServers is used. Instead of using the default opc.tcp://localhost:4840/ LDS DiscoveryURL, an operator can specify the addresses of one or more Local Discovery Servers, like in the following example: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://10.1.2.3:4840/\" \\ --set opcua.configuration.discoveryDetails.discoveryUrls[1]=\"opc.tcp://10.1.3.4:4840/\"``` If you know the DiscoveryURLs for the OPC UA Servers you want Akri to discover, manually list them when deploying Akri, like in the following: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://10.123.456.7:4855/\"``` OPC UA discovery can also receive a list of both OPC UA LDS DiscoveryURLs and specific Server URLs, as in the following. ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://10.1.2.3:4840/\" \\ --set opcua.configuration.discoveryDetails.discoveryUrls[1]=\"opc.tcp://10.1.3.4:4840/\" \\ --set opcua.configuration.discoveryDetails.discoveryUrls[2]=\"opc.tcp://10.123.456.7:4855/\"``` Note: The Agent's OPC UA discovery method only supports tcp DiscoveryURLs, since the Rust OPC UA library has yet to support other URL schemes. Instead of discovering all servers registered with specified Local Discovery Servers, you can choose to include or exclude a list of application names (the applicationName property of a server's ApplicationDescription as specified by OPC UA Specification). For example, to discover all servers registered with the default LDS except for the server named \"Duke\", do the following.
``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.discoveryDetails.applicationNames.action=Exclude \\ --set opcua.configuration.discoveryDetails.applicationNames.items[0]=\"Duke\"``` Alternatively, to only discover the server named \"Go Tar Heels!\", do the following: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.discoveryDetails.applicationNames.action=Include \\ --set opcua.configuration.discoveryDetails.applicationNames.items[0]=\"Go Tar Heels!\"``` For your broker pod to utilize a discovered OPC UA server, it will need to contain an OPC UA Client. OPC UA Clients and Servers can establish an insecure connection so long as the OPC UA Servers support a Security Policy of None. However, if you would like your broker's OPC UA Client to establish a secure connection with an OPC UA server, the Client and Server must trust each other's x509 v3 certificates. This can be done in one of the three ways explained in the OPC UA proposal. The simplest method is to sign the OPC UA broker's certificate with the same Certificate Authority (CA) as the Server with which it wishes to connect. The certificates are passed to the broker via a Kubernetes Secret mounted as a volume to the directory /etc/opcua-certs/client-pki. It is the operator's responsibility to generate the certificates and securely create a Kubernetes Secret named opcua-broker-credentials, ideally using a KMS. More information about using Kubernetes Secrets securely can be found in the credentials passing proposal. The following is an example kubectl command to create the Kubernetes Secret, projecting each certificate/crl/private key with the expected key name (i.e. client_certificate, client_key, ca_certificate, and ca_crl). ``` kubectl create secret generic opcua-broker-credentials \\ --from-file=client_certificate=/path/to/AkriBroker.der \\ --from-file=client_key=/path/to/AkriBroker.pfx \\ --from-file=ca_certificate=/path/to/SomeCA.der \\ --from-file=ca_crl=/path/to/SomeCA.crl``` Certificates can be created and signed with a CA manually using openssl, by using the OPC Foundation certificate generator tool, or Akri's certificate generator. Be sure that the certificates are in the format expected by your OPC UA Client. Finally, when mounting certificates is enabled with Helm via --set opcua.configuration.mountCertificates='true', the secret named opcua-broker-credentials will be mounted into the OPC UA brokers. It is mounted to the volume credentials at the mountPath /etc/opcua-certs/client-pki, as shown in the OPC UA Helm template. This is the path where the broker expects to find the certificates. The following is an example of how to enable security: ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.mountCertificates='true'``` Note: If the Helm template for the OPC UA Configuration is too specific, you can customize the Configuration yaml to suit your needs.
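Before customizing the template, it can help to confirm that the credentials actually reached a broker. The commands below are a minimal sketch and are not part of the chart documentation; <broker-pod-name> is a placeholder for one of your broker Pod names, and the mount path is the /etc/opcua-certs/client-pki directory described above.
```
# List the broker Pods that Akri deployed for the OPC UA Configuration
kubectl get pods -o wide

# Replace <broker-pod-name> with one of the broker Pod names from the output above,
# then list the mounted credential files at the path the broker expects to find them
kubectl exec <broker-pod-name> -- ls /etc/opcua-certs/client-pki
```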
Akri has provided further documentation on modifying the broker PodSpec, instanceServiceSpec, or configurationServiceSpec More information about how to modify an installed Configuration, add additional Configurations to a cluster, or delete a Configuration can be found in the Customizing an Akri Installation document. The OPC UA implementation can be understood by looking at several things: OpcuaDiscoveryDetails defines the required properties. OpcuaDiscoveryHandler defines OPC UA Server discovery. sample-brokers/opcua-monitoring-broker defines a sample OPC UA protocol broker that monitors an OPC UA Variable with a specific NodeID." } ]
{ "category": "Provisioning", "file_name": "docs.akri.sh#trademark.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet. Instead of customizing the code for Kubernetes itself, vendors can implement a device plugin that you deploy either manually or as a DaemonSet. The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adapters, and other similar computing resources that may require vendor specific initialization and setup. The kubelet exports a Registration gRPC service: ``` service Registration { rpc Register(RegisterRequest) returns (Empty) {} } ``` A device plugin can register itself with the kubelet through this gRPC service. During the registration, the device plugin needs to send: Following a successful registration, the device plugin sends the kubelet the list of devices it manages, and the kubelet is then in charge of advertising those resources to the API server as part of the kubelet node status update. For example, after a device plugin registers hardware-vendor.example/foo with the kubelet and reports two healthy devices on a node, the node status is updated to advertise that the node has 2 \"Foo\" devices installed and available. Then, users can request devices as part of a Pod specification (see container). Requesting extended resources is similar to how you manage requests and limits for other resources, with the following differences: Suppose a Kubernetes cluster is running a device plugin that advertises resource hardware-vendor.example/foo on certain nodes. Here is an example of a pod requesting this resource to run a demo workload: ``` apiVersion: v1 kind: Pod metadata: name: demo-pod spec: containers: name: demo-container-1 image: registry.k8s.io/pause:2.0 resources: limits: hardware-vendor.example/foo: 2 ``` The general workflow of a device plugin includes the following steps: Initialization. During this phase, the device plugin performs vendor-specific initialization and setup to make sure the devices are in a ready state. The plugin starts a gRPC service, with a Unix socket under the host path /var/lib/kubelet/device-plugins/, that implements the following interfaces: ``` service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device Manager. rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plugin can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // GetPreferredAllocation returns a preferred set of devices to allocate // from a list of available ones. The resulting preferred allocation is not // guaranteed to be the allocation ultimately performed by the // devicemanager. It is only designed to help the devicemanager make a more // informed allocation decision when possible. rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {} // PreStartContainer is called, if indicated by Device Plugin during registration phase, // before each container start. Device plugin can run device specific operations // such as resetting the device before making devices available to the container. 
rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} } ``` The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock. After successfully registering itself, the device plugin runs in serving mode, during which it keeps monitoring device health and reports back to the kubelet upon any device state" }, { "data": "It is also responsible for serving Allocate gRPC requests. During Allocate, the device plugin may do device-specific preparation; for example, GPU cleanup or QRNG initialization. If the operations succeed, the device plugin returns an AllocateResponse that contains container runtime configurations for accessing the allocated devices. The kubelet passes this information to the container runtime. An AllocateResponse contains zero or more ContainerAllocateResponse objects. In these, the device plugin defines modifications that must be made to a container's definition to provide access to the device. These modifications include: A device plugin is expected to detect kubelet restarts and re-register itself with the new kubelet instance. A new kubelet instance deletes all the existing Unix sockets under /var/lib/kubelet/device-plugins when it starts. A device plugin can monitor the deletion of its Unix socket and re-register itself upon such an event. You can deploy a device plugin as a DaemonSet, as a package for your node's operating system, or manually. The canonical directory /var/lib/kubelet/device-plugins requires privileged access, so a device plugin must run in a privileged security context. If you're deploying a device plugin as a DaemonSet, /var/lib/kubelet/device-plugins must be mounted as a Volume in the plugin's PodSpec. If you choose the DaemonSet approach you can rely on Kubernetes to: place the device plugin's Pod onto Nodes, to restart the daemon Pod after failure, and to help automate upgrades. Previously, the versioning scheme required the Device Plugin's API version to match exactly the Kubelet's version. Since the graduation of this feature to Beta in v1.12 this is no longer a hard requirement. The API is versioned and has been stable since Beta graduation of this feature. Because of this, kubelet upgrades should be seamless but there still may be changes in the API before stabilization making upgrades not guaranteed to be non-breaking. As a project, Kubernetes recommends that device plugin developers: To run device plugins on nodes that need to be upgraded to a Kubernetes release with a newer device plugin API version, upgrade your device plugins to support both versions before upgrading these nodes. Taking that approach will ensure the continuous functioning of the device allocations during the upgrade. In order to monitor resources provided by device plugins, monitoring agents need to be able to discover the set of devices that are in-use on the node and obtain metadata to describe which container the metric should be associated with. Prometheus metrics exposed by device monitoring agents should follow the Kubernetes Instrumentation Guidelines, identifying containers using pod, namespace, and container prometheus labels. 
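To make the DaemonSet deployment approach described above concrete, here is a minimal sketch (not taken from this page) of deploying a hypothetical device plugin; the name and image are placeholders, and a real plugin manifest would add whatever vendor-specific settings it needs. The key details are the privileged security context and the hostPath mount of /var/lib/kubelet/device-plugins.
```
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-device-plugin      # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: example-device-plugin
  template:
    metadata:
      labels:
        name: example-device-plugin
    spec:
      containers:
      - name: example-device-plugin
        image: registry.example.com/example-device-plugin:latest   # placeholder image
        securityContext:
          privileged: true
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
EOF
```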
The kubelet provides a gRPC service to enable discovery of in-use devices, and to provide metadata for these devices: ``` // PodResourcesLister is a service provided by the kubelet that provides information about the // node resources consumed by pods and containers on the node service PodResourcesLister { rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {} rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {} rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {} } ``` The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node where these devices are allocated. Also, for NUMA-based machines, it contains the information about memory and hugepages reserved for a container. Starting from Kubernetes" }, { "data": "the List endpoint can provide information on resources of running pods allocated in ResourceClaims by the DynamicResourceAllocation API. To enable this feature kubelet must be started with the following flags: ``` --feature-gates=DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true ``` ``` // ListPodResourcesResponse is the response returned by List function message ListPodResourcesResponse { repeated PodResources pod_resources = 1; } // PodResources contains information about the node resources assigned to a pod message PodResources { string name = 1; string namespace = 2; repeated ContainerResources containers = 3; } // ContainerResources contains information about the resources assigned to a container message ContainerResources { string name = 1; repeated ContainerDevices devices = 2; repeated int64 cpu_ids = 3; repeated ContainerMemory memory = 4; repeated DynamicResource dynamic_resources = 5; } // ContainerMemory contains information about memory and hugepages assigned to a container message ContainerMemory { string memory_type = 1; uint64 size = 2; TopologyInfo topology = 3; } // Topology describes hardware topology of the resource message TopologyInfo { repeated NUMANode nodes = 1; } // NUMA representation of NUMA node message NUMANode { int64 ID = 1; } // ContainerDevices contains information about the devices assigned to a container message ContainerDevices { string resource_name = 1; repeated string device_ids = 2; TopologyInfo topology = 3; } // DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation message DynamicResource { string class_name = 1; string claim_name = 2; string claim_namespace = 3; repeated ClaimResource claim_resources = 4; } // ClaimResource contains per-plugin resource information message ClaimResource { repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"]; } // CDIDevice specifies a CDI device information message CDIDevice { // Fully qualified CDI device name // for example: vendor.com/gpu=gpudevice1 // see more details in the CDI specification: // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md string name = 1; } ``` cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. 
If the goal is to evaluate CPUs that belong to the shared pool, the List endpoint needs to be used in conjunction with the GetAllocatableResources endpoint as explained below: GetAllocatableResources provides information on resources initially available on the worker node. It provides more information than kubelet exports to APIServer. GetAllocatableResources should only be used to evaluate allocatable resources on a node. If the goal is to evaluate free/unallocated resources it should be used in conjunction with the List() endpoint. The result obtained by GetAllocatableResources would remain the same unless the underlying resources exposed to kubelet change. This happens rarely but when it does (for example: hotplug/hotunplug, device health changes), client is expected to call GetAlloctableResources endpoint. However, calling GetAllocatableResources endpoint is not sufficient in case of cpu and/or memory update and Kubelet needs to be restarted to reflect the correct resource capacity and allocatable. ``` // AllocatableResourcesResponses contains information about all the devices known by the kubelet message AllocatableResourcesResponse { repeated ContainerDevices devices = 1; repeated int64 cpu_ids = 2; repeated ContainerMemory memory = 3; } ``` ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using a opaque integer ID, which value is consistent to what device plugins report when they register themselves to the kubelet. The gRPC service is served over a unix socket at" }, { "data": "Monitoring agents for device plugin resources can be deployed as a daemon, or as a DaemonSet. The canonical directory /var/lib/kubelet/pod-resources requires privileged access, so monitoring agents must run in a privileged security context. If a device monitoring agent is running as a DaemonSet, /var/lib/kubelet/pod-resources must be mounted as a Volume in the device monitoring agent's PodSpec. When accessing the /var/lib/kubelet/pod-resources/kubelet.sock from DaemonSet or any other app deployed as a container on the host, which is mounting socket as a volume, it is a good practice to mount directory /var/lib/kubelet/pod-resources/ instead of the /var/lib/kubelet/pod-resources/kubelet.sock. This will ensure that after kubelet restart, container will be able to re-connect to this socket. Container mounts are managed by inode referencing the socket or directory, depending on what was mounted. When kubelet restarts, socket is deleted and a new socket is created, while directory stays untouched. So the original inode for the socket become unusable. Inode to directory will continue working. The Get endpoint provides information on resources of a running Pod. It exposes information similar to those described in the List endpoint. The Get endpoint requires PodName and PodNamespace of the running Pod. ``` // GetPodResourcesRequest contains information about the pod message GetPodResourcesRequest { string pod_name = 1; string pod_namespace = 2; } ``` To enable this feature, you must start your kubelet services with the following flag: ``` --feature-gates=KubeletPodResourcesGet=true ``` The Get endpoint can provide Pod information related to dynamic resources allocated by the dynamic resource allocation API. 
To enable this feature, you must ensure your kubelet services are started with the following flags: ``` --feature-gates=KubeletPodResourcesGet=true,DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true ``` The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo struct. ``` message TopologyInfo { repeated NUMANode nodes = 1; } message NUMANode { int64 ID = 1; } ``` Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The device manager will then use this information to consult with the Topology Manager and make resource assignment decisions. TopologyInfo supports setting a nodes field to either nil or a list of NUMA nodes. This allows the Device Plugin to advertise a device that spans multiple NUMA nodes. Setting TopologyInfo to nil or providing an empty list of NUMA nodes for a given device indicates that the Device Plugin does not have a NUMA affinity preference for that device. An example TopologyInfo struct populated for a device by a Device Plugin: ``` pluginapi.Device{ID: \"25102017\", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}} ``` Here are some examples of device plugin implementations: Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details. You should read the content guide before proposing a change that adds an extra third-party link." } ]
{ "category": "Provisioning", "file_name": "docs.akri.sh#documentation.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Akri is hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox project. Akri is a Kubernetes Resource Interface that lets you easily expose heterogeneous leaf devices (such as IP cameras and USB devices) as resources in a Kubernetes cluster, while also supporting the exposure of embedded hardware resources such as GPUs and FPGAs. Akri continually detects nodes that have access to these devices and schedules workloads based on them. Simply put: you name it, Akri finds it, you use it. At the edge, there are a variety of sensors, controllers, and MCU class devices that are producing data and performing actions. For Kubernetes to be a viable edge computing solution, these heterogeneous leaf devices need to be easily utilized by Kubernetes clusters. However, many of these leaf devices are too small to run Kubernetes themselves. Akri is an open source project that exposes these leaf devices as resources in a Kubernetes cluster. It leverages and extends the Kubernetes device plugin framework, which was created with the cloud in mind and focuses on advertising static resources such as GPUs and other system hardware. Akri took this framework and applied it to the edge, where there is a diverse set of leaf devices with unique communication protocols and intermittent availability. Akri is made for the edge, handling the dynamic appearance and disappearance of leaf devices. Akri provides an abstraction layer similar to CNI, but instead of abstracting the underlying network details, it removes the work of finding, utilizing, and monitoring the availability of the leaf device. An operator simply has to apply an Akri Configuration to a cluster, specifying the Discovery Handler (say ONVIF) that should be used to discover the devices and the Pod that should be deployed upon discovery (say a video frame server). Then, Akri does the rest. An operator can also allow multiple nodes to utilize a leaf device, thereby providing high availability in the case where a node goes offline. Furthermore, Akri will automatically create a Kubernetes service for each type of leaf device (or Akri Configuration), removing the need for an application to track the state of pods or nodes. Most importantly, Akri was built to be extensible. Akri currently supports ONVIF, udev, and OPC UA Discovery Handlers, but more can be easily added by community members like you. The more protocols Akri can support, the wider an array of leaf devices Akri can discover. We are excited to work with you to build a more connected edge. Akri's documentation is divided into six sections:
- User Guide: Documentation for Akri users.
- Discovery Handlers: Documentation on how to configure Akri using Akri's currently supported Discovery Handlers.
- Demos: End-to-End demos that demonstrate how Akri can discover and use devices. These contain sample brokers and end applications.
- Architecture: Documentation that details the design and implementation of Akri's components.
- Development: Documentation for Akri developers on how to build, test, and extend Akri.
- Community: Information on what's next for Akri and how to get involved!
The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Provisioning", "file_name": "roadmap.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Akri supports creating a Kubernetes resource (i.e. device plugin) for each individual device. Since each device in Akri is represented as an Instance custom resource, these are called Instance-level resources. Instance-level resources are named in the format <configuration-name>-<instance-id>. Akri also creates a Kubernetes Device Plugin for a Configuration called Configuration-level resource. A Configuration-level resource is a resource that represents all of the devices discovered via a Configuration. With Configuration-level resources, instead of needing to know the specific Instances to request, resources could be requested by the Configuration name and the Agent will do the work of selecting which Instances to reserve. The example below shows a deployment that requests the resource at Configuration level and would deploy a nginx broker to each discovered device respectively. ``` apiVersion: apps/v1 kind: Deployment metadata: name: onvif-camera-broker-deployment labels: app: onvif-camera-broker spec: replicas: 1 selector: matchLabels: app: onvif-camera-broker template: metadata: labels: app: onvif-camera-broker spec: containers: name: onvif-camera-broker image: nginx resources: limits: akri.sh/onvif-camera: \"2\" requests: akri.sh/onvif-camera: \"2\"``` With Configuration-level resources, users could use higher level Kubernetes objects (Deployments, ReplicaSets, DaemonSets, etc.) or develop their own deployment strategies, rather than relying on the Akri Controller to deploy Pods to discovered devices. The in-depth resource sharing doc describes how the Configuration.capacity and Instance.deviceUsage are used to achieve resource sharing between nodes. The same data is used to achieve sharing the same resource between Configuration-level and Instance-level resources. The Instance.deviceUsage in Akri Instances is extended to support Configuration device plugin. The Instance.deviceUsage may look like this: ``` deviceUsage: my-resource-00095f-0: \"\" my-resource-00095f-1: \"\" my-resource-00095f-2: \"\" my-resource-00095f-3: \"node-a\" my-resource-00095f-4: \"\"``` where empty string means the slot is free and non-empty string indicates the slot is used (by the node). To support Configuration device plugin, the Instance.deviceUsage format is extended to hold the additional information, the deviceUsage can be a \"<nodename>\" (for Instance) or a \"C:<virtualdeviceid>:<nodename>\" (for Configuration). For example, the Instance.deviceUsage shows the slot my-resource-00095f-2 is used by virtual device id \"0\" of the Configuration device plugin on node-b. The slot my-resource-00095f-3 is used by Instance device plugin on node-a. The other 3 slots are" }, { "data": "``` deviceUsage: my-resource-00095f-0: \"\" my-resource-00095f-1: \"\" my-resource-00095f-2: \"C:0:node-b\" my-resource-00095f-3: \"node-a\" my-resource-00095f-4: \"\"``` The Akri Agent and Discovery Handlers enable device discovery and Kubernetes resource creation: they discover devices, create Kubernetes resources to represent the devices, and ensure only capacity containers are using a device at once via the device plugin framework. The Akri Controller eases device use. If a broker is specified in a Configuration, the Controller will automatically deploy Kubernetes Pods or Jobs to discovered devices. 
Currently the Controller only supports two deployment strategies: either deploying a non-terminating Pod (that Akri calls a \"broker\") to each Node that can see a device or deploying a single Job to the cluster for each device discovered. There are plenty of scenarios that do not fit these two strategies, such as a ReplicaSet-like deployment of n Pods to the cluster. With Configuration-level resources, users could easily achieve their own scenarios without the Akri Controller, as selecting resources is more declarative. A user specifies in a resource request how many OPC UA servers are needed rather than needing to delineate the exact ones already discovered by Akri, as explained in Akri's current documentation on requesting Akri resources. For example, with Configuration-level resources, the following Deployment could be applied to a cluster: ``` apiVersion: \"apps/v1\" kind: Deployment metadata: name: onvif-broker-deployment spec: replicas: 2 selector: matchLabels: name: onvif-broker template: metadata: labels: name: onvif-broker spec: containers: name: nginx image: \"nginx:latest\" resources: requests: \"akri.sh/akri-onvif\": \"2\" limits: \"akri.sh/akri-onvif\": \"2\"``` Pods will only be successfully scheduled to a Node and run if the resources exist and are available. In the case of the above scenario, if there were two cameras on the network, two Pods would be deployed to the cluster. If there are not enough resources, say there is only one camera on the network, the two Pods will be left in a Pending state until another is discovered. This is the case with any deployment on Kubernetes where there are not enough resources. However, Pending Pods do not use up cluster resources." } ]
{ "category": "Provisioning", "file_name": "handler-development.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Make sure you have at least one Onvif camera that is reachable so the Onvif discovery handler can discover your Onvif camera. To test accessing Onvif with credentials, make sure your Onvif camera is authentication-enabled. Write down the username and password; they are required in the flow below. Add the Akri helm chart repo and set the environment variable AKRIHELMCRICTL_CONFIGURATION to the proper value. ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm repo update``` Set up the Kubernetes distribution being used; here we use 'k8s', so make sure to replace it with a value that matches the Kubernetes distribution you used. See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION ``` export AKRIHELMCRICTL_CONFIGURATION=\"--set kubernetesDistro=k8s\"``` In real product scenarios, the device uuids are acquired directly from the vendors or already known before installing Akri Configuration. If you already know the device uuids, you can skip this and go to the next step. First use the following helm chart to deploy an Akri Configuration and see if your camera is discovered. ``` helm install akri akri-helm-charts/akri-dev \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set onvif.discovery.enabled=true \\ --set onvif.configuration.name=akri-onvif \\ --set onvif.configuration.enabled=true \\ --set onvif.configuration.capacity=3 \\ --set onvif.configuration.brokerPod.image.repository=\"nginx\" \\ --set onvif.configuration.brokerPod.image.tag=\"stable-alpine\"``` Here is the result of running the installation command above on a cluster with 1 control plane and 2 worker nodes. There is one Onvif camera connected to the network, thus 1 pod running on each node. ``` $ kubectl get nodes,akric,akrii,pods NAME STATUS ROLES AGE VERSION node/kube-01 Ready control-plane 22d v1.26.1 node/kube-02 Ready <none> 22d v1.26.1 node/kube-03 Ready <none> 22d v1.26.1 NAME CAPACITY AGE configuration.akri.sh/akri-onvif 3 62s NAME CONFIG SHARED NODES AGE instance.akri.sh/akri-onvif-029957 akri-onvif true [\"kube-03\",\"kube-02\"] 48s NAME READY STATUS RESTARTS AGE pod/akri-agent-daemonset-gnwb5 1/1 Running 0 62s pod/akri-agent-daemonset-zn2gb 1/1 Running 0 62s pod/akri-controller-deployment-56b9796c5-wqdwr 1/1 Running 0 62s pod/akri-onvif-discovery-daemonset-wcp2f 1/1 Running 0 62s pod/akri-onvif-discovery-daemonset-xml6t 1/1 Running 0 62s pod/akri-webhook-configuration-75d9b95fbc-wqhgw 1/1 Running 0 62s pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 48s pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 48s``` Get the device uuid from the Akri Instance. Below is an example: the Onvif discovery handler discovers the camera and exposes the device's uuid. Write down the device uuid for later use. Note that in real product scenarios, the device uuids are acquired directly from the vendors or already known before installing Akri Configuration. ``` $ kubectl get akrii akri-onvif-029957 -o yaml | grep ONVIFDEVICEUUID ONVIFDEVICEUUID: 3fa1fe68-b915-4053-a3e1-ac15a21f5f91``` Now we can set up the credential information in a Kubernetes Secret. Replace the device uuid and the values of username/password with information of your camera.
``` cat > /tmp/onvif-auth-secret.yaml<< EOF apiVersion: v1 kind: Secret metadata: name: onvif-auth-secret type: Opaque stringData: devicecredentiallist: |+ [ \"credential_list\" ] credential_list: |+ { \"3fa1fe68-b915-4053-a3e1-ac15a21f5f91\" : { \"username\" : \"camuser\", \"password\" : \"HappyDay\" } } EOF kubectl apply -f /tmp/onvif-auth-secret.yaml ``` Upgrade the Akri Configuration to include the secret information and the sample video broker container. ``` helm upgrade akri akri-helm-charts/akri-dev \\ --install \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set onvif.discovery.enabled=true \\ --set onvif.configuration.enabled=true \\ --set onvif.configuration.capacity=3 \\ --set onvif.configuration.discoveryProperties[0].name=devicecredentiallist \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.name=onvif-auth-secret \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.namesapce=default \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.key=devicecredentiallist \\ --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.optoinal=false \\ --set onvif.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/onvif-video-broker\" \\ --set onvif.configuration.brokerPod.image.tag=\"latest-dev\" \\ --set onvif.configuration.brokerPod.image.pullPolicy=\"Always\" \\ --set onvif.configuration.brokerProperties.CREDENTIALDIRECTORY=\"/etc/credentialdirectory\" \\ --set onvif.configuration.brokerProperties.CREDENTIALCONFIGMAPDIRECTORY=\"/etc/credentialcfgmapdirectory\" \\ --set onvif.configuration.brokerPod.volumeMounts[0].name=\"credentials\" \\ --set onvif.configuration.brokerPod.volumeMounts[0].mountPath=\"/etc/credential_directory\" \\ --set onvif.configuration.brokerPod.volumeMounts[0].readOnly=true \\ --set onvif.configuration.brokerPod.volumes[0].name=\"credentials\" \\ --set" }, { "data": "With the secret information, the Onvif discovery handler is able to discovery the Onvif camera and the video broker is up and running ``` $ kubectl get nodes,akric,akrii,pods NAME STATUS ROLES AGE VERSION node/kube-01 Ready control-plane 22d v1.26.1 node/kube-02 Ready <none> 22d v1.26.1 node/kube-03 Ready <none> 22d v1.26.1 NAME CAPACITY AGE configuration.akri.sh/akri-onvif 3 18m NAME CONFIG SHARED NODES AGE instance.akri.sh/akri-onvif-029957 akri-onvif true [\"kube-03\",\"kube-02\"] 22s NAME READY STATUS RESTARTS AGE pod/akri-agent-daemonset-bq494 1/1 Running 0 18m pod/akri-agent-daemonset-c2rng 1/1 Running 0 18m pod/akri-controller-deployment-56b9796c5-rtm5q 1/1 Running 0 18m pod/akri-onvif-discovery-daemonset-rbgwq 1/1 Running 0 18m pod/akri-onvif-discovery-daemonset-xwjlp 1/1 Running 0 18m pod/akri-webhook-configuration-75d9b95fbc-cr6bc 1/1 Running 0 18m pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 22s pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 22s $ kubectl logs kube-02-akri-onvif-029957-pod [Akri] ONVIF request http://192.168.1.145:2020/onvif/device_service http://www.onvif.org/ver10/device/wsdl/GetService [Akri] ONVIF media url http://192.168.1.145:2020/onvif/service [Akri] ONVIF request http://192.168.1.145:2020/onvif/service http://www.onvif.org/ver10/media/wsdl/GetProfiles [Akri] ONVIF profile list contains: profile_1 [Akri] ONVIF profile list contains: profile_2 [Akri] ONVIF profile list profile_1 [Akri] ONVIF request http://192.168.1.145:2020/onvif/service http://www.onvif.org/ver10/media/wsdl/GetStreamUri [Akri] ONVIF streaming uri list contains: rtsp://192.168.1.145:554/stream1 [Akri] ONVIF 
streaming uri rtsp://192.168.1.145:554/stream1 [VideoProcessor] Processing RTSP stream: rtsp://-:-@192.168.1.145:554/stream1 info: Microsoft.Hosting.Lifetime[0] Now listening on: http://[::]:8083 info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. info: Microsoft.Hosting.Lifetime[0] Hosting environment: Production info: Microsoft.Hosting.Lifetime[0] Content root path: /app Ready True Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 1, frame size: 862986 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 865793 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 868048 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 869655 Adding frame from rtsp://-:-@192.168.1.145:554/stream1, Q size: 2, frame size: 871353``` Deploy the sample video streaming application Instructions described from the step 4 of camera demo Deploy a video streaming web application that points to both the Configuration and Instance level services that were automatically created by Akri. Copy and paste the contents into a file and save it as akri-video-streaming-app.yaml ``` cat > /tmp/akri-video-streaming-app.yaml<< EOF apiVersion: apps/v1 kind: Deployment metadata: name: akri-video-streaming-app spec: replicas: 1 selector: matchLabels: app: akri-video-streaming-app template: metadata: labels: app: akri-video-streaming-app spec: serviceAccountName: akri-video-streaming-app-sa containers: name: akri-video-streaming-app image: ghcr.io/project-akri/akri/video-streaming-app:latest-dev imagePullPolicy: Always securityContext: runAsUser: 1000 allowPrivilegeEscalation: false runAsNonRoot: true readOnlyRootFilesystem: true capabilities: drop: [\"ALL\"] env: name: CONFIGURATION_NAME value: akri-onvif apiVersion: v1 kind: Service metadata: name: akri-video-streaming-app namespace: default labels: app: akri-video-streaming-app spec: selector: app: akri-video-streaming-app ports: name: http port: 80 targetPort: 5000 type: NodePort apiVersion: v1 kind: ServiceAccount metadata: name: akri-video-streaming-app-sa kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: akri-video-streaming-app-role rules: apiGroups: [\"\"] resources: [\"services\"] verbs: [\"list\"] apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: akri-video-streaming-app-binding roleRef: apiGroup: \"\" kind: ClusterRole name: akri-video-streaming-app-role subjects: kind: ServiceAccount name: akri-video-streaming-app-sa namespace: default EOF``` Deploy the video stream app ``` kubectl apply -f /tmp/akri-video-streaming-app.yaml``` Determine which port the service is running on. Save this port number for the next step: ``` kubectl get service/akri-video-streaming-app --output=jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}' && echo``` SSH port forwarding can be used to access the streaming application. Open a new terminal, enter your ssh command to to access your machine followed by the port forwarding request. The following command will use port 50000 on the host. Feel free to change it if it is not available. Be sure to replace <streaming-app-port> with the port number outputted in the previous step. ``` ssh someuser@<machine IP address> -L 50000:localhost:<streaming-app-port>``` Navigate to http://localhost:50000/ using browser. The large feed points to Configuration level service, while the bottom feed points to the service for each Instance or camera. 
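As an optional check while the demo is running, you can list the services in the cluster to see what the streaming application is selecting between; the exact service names depend on the Configuration and Instance names, so this sketch simply lists everything rather than assuming them.
```
# The Configuration-level service, one service per discovered Instance, and the
# streaming app's NodePort service should all appear in this output.
kubectl get services -o wide
```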
Close the page http://localhost:50000/ from the browser Delete the sample streaming application resources ``` kubectl delete -f /tmp/akri-video-streaming-app.yaml``` Delete the Secret information ``` kubectl delete -f /tmp/onvif-auth-secret.yaml``` Delete deployment and Akri installation to clean up the system. ``` helm delete akri kubectl delete crd configurations.akri.sh kubectl delete crd instances.akri.sh```" } ]
{ "category": "Provisioning", "file_name": "usb-camera-demo-rpi4.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "If you prefer to learn through videos rather than written documentation, the following is a list of informative talks and demos on Akri.
- Bridge Your IoT Leaf Devices to Local Clusters with Ease Using Akri and Dynamic Resource Allocation - Latest Akri introduction at KubeCon EU 2024.
- Introducing industrial edge - An introduction to Akri and how it fits into SUSE's industrial edge solution. Includes a demo of discovering a USB camera.
- Azure Arc Jumpstart with Akri - A talk in the Azure Arc Jumpstart channel. Includes a demo of discovering an ONVIF camera with Akri and feeding the stream to an edge AI model.
- Discovering and Managing IoT Devices from Kubernetes with Akri - A deep dive into Akri. Includes a step-by-step demo of discovering the ONVIF cameras and performing a firmware update.
To try more demos/examples with step-by-step guidance, check the rest of the pages under the Demos section." } ]
{ "category": "Provisioning", "file_name": "docs.akri.sh#why-akri.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "OPC UA is a communication protocol for industrial automation. It is a client/server technology that comes with a security and communication framework. This demo will help you get started using Akri to discover OPC UA PLC Servers and utilize them via a broker that contains an OPC UA Client. Specifically, a Akri Configuration called OPC UA Monitoring was created for this scenario, which will show how Akri can be used to detect anomaly values of a specific OPC UA Variable. To do so, the OPC UA Clients in the brokers will subscribe to that variable and serve its value over gRPC for an anomaly detection web application to consume. This Configuration could be used to monitor a barometer, CO detector, and more; however, for this example, that variable will represent the PLC values for temperature of a thermostat and any value outside the range of 70-80 degrees is an anomaly. The demo consists of the following components: Two OPC UA PLC Servers (Optional) Certificates for the Servers and Akri brokers An OPC UA Monitoring broker that contains an OPC UA Client that subscribes to a specific NodeID (for that PLC variable) Akri installation An anomaly detection web application An operator (meaning you!) applies to a single-node cluster the OPC UA Configuration, which specifies the addresses of the OPC UA Servers, which OPC UA Variable to monitor, and whether to use security. Agent sees the OPC UA Configuration, discovers the servers specified in the Configuration, and creates an Instance for each server. The Akri Controller sees the Instances in etcd and schedules an OPC UA Monitoring broker pod for each server. Once the OPC UA Monitoring broker pod starts up, it will create an OPC UA Client that will create a secure channel with its server. The OPC UA Client will subscribe to the OPC UA Variable with the NodeID with Identifier \"FastUInt1\" and NamespaceIndex 2 as specified in the OPC UA Configuration. The server will publish any time the value of that variable changes. The OPC UA Monitoring broker will serve over gRPC the latest value of the OPC UA Variable and the address of the OPC UA Server that published the value. The anomaly detection web application will test whether that value is an outlier to its pre-configured dataset. It then will display a log of the values on a web application, showing outliers in red and normal values in green. The following steps need to be completed to run the demo: Setting up a single-node cluster (Optional) Creating X.509 v3 Certificates for the servers and Akri broker and storing them in a Kubernetes Secret Creating two OPC UA Servers Running Akri Deploying an anomaly detection web application as an end consumer of the brokers If at any point in the demo, you want to dive deeper into OPC UA or clarify a term, you can reference the online OPC UA specifications. Reference our cluster setup documentation to set up a cluster for this demo. For ease of setup, only create a single-node cluster, so if installing K3s or MicroK8s, you can skip the last step of the installation instructions of adding additional nodes. If you have an existing cluster, feel free to leverage it for the demo. This documentation assumes you are using a single-node cluster; however, you can certainly use a multi-node" }, { "data": "If security is not desired, skip to Creating OPC UA Servers, as each monitoring broker will use an OPC UA Security Policy of None if it cannot find credentials mounted in its pod. 
Akri will deploy an OPC UA Monitoring broker for each OPC UA Server a node in the cluster can see. This broker contains an OPC UA Client that will need the proper credentials in order to communicate with the OPC UA Server in a secure fashion. Specifically, before establishing a session, an OPC UA Client and Server must create a secure channel over the communication layer to ensure message integrity, confidentiality, and application authentication. Proper application credentials in the form of X.509 v3 certificates are needed for application authentication. Every OPC UA Application, whether Client, Server, or DiscoveryServer, has a certificate store, which includes the application's own credentials along with a list of trusted and rejected application instance certificates. According to the OPC UA specification, there are three ways to configure OPC UA Server and Clients' certificate stores so that they trust each other's certificates, which are explained in the OPC UA proposal. This demo will walk through the third method of creating Client and Server certificates that are issued by a common Certificate Authority (CA). Then, that root CA certificate simply needs to be added to the trusted folder of Client and Servers' certificate stores, and they will automatically trust each other on the basis of having a common root certificate. The following image walks through how to configure the Client and Server certificate stores for Akri. Generate an X.509 v3 Certificate for Akri OPC UA Monitoring brokers and sign it with the same CA that has signed the certificates of all the OPC UA Servers that will be discovered. Create a Kubernetes Secret named opcua-broker-credentials that contains four items with the following key names: client_certificate, client_key, ca_certificate, and ca_crl. The credentials will be mounted in the broker at the path /etc/opcua-certs/client-pki. Create three (one for the broker and each server) OPC UA compliant X.509v3 certificates, ensuring that the certificate contains the necessary components such as an application URI. They should all be signed by a common Certificate Authority (CA). There are many tools for generating proper certificates for OPC UA, such as the OPC Foundation's Certificate Generator or openssl (as in this walk through). The OPC UA Client certificate will be passed to the OPC UA Monitoring broker as a Kubernetes Secret mounted as a volume. Read more about the decision to use Kubernetes secrets to pass the Client certificates in the Credentials Passing Proposal. Create a Kubernetes Secret, projecting each certificate/crl/private key with the expected key name (i.e. client_certificate, client_key, ca_certificate, and ca_crl). Specify the file paths such that they point to the credentials made in the previous section. ``` kubectl create secret generic opcua-broker-credentials \\ --from-file=client_certificate=/path/to/AkriBroker/own/certs/AkriBroker\\ \\[<hash>\\].der \\ --from-file=client_key=/path/to/AkriBroker/own/private/AkriBroker\\ \\[<hash>\\].pfx \\ --from-file=ca_certificate=/path/to/ca/certs/SomeCA\\ \\[<hash>\\].der \\ --from-file=ca_crl=/path/to/ca/crl/SomeCA\\ \\[<hash>\\].crl``` When mounting certificates is enabled later in the Running Akri section with Helm via --set opcua.configuration.mountCertificates='true', the secret named opcua-broker-credentials will be mounted into the OPC UA monitoring brokers. It is mounted to the volume credentials at the mountPath /etc/opcua-certs/client-pki, as shown in the OPC UA Configuration Helm template.
This is the path where the brokers expect to find the certificates. Now, we must create some OPC UA PLC Servers to" }, { "data": "Instead of starting from scratch, we deploy OPC PLC server containers. You can read more about the containers and their parameters here. Create an empty YAML file called opc-deployment.yaml. (Optional) If you are using security, place the OpcPlc certificate and the CA certificate as below. ``` plc own certs OpcPlc [hash].der private OpcPlc [hash].pfx trusted certs someCA.der crl someCA.crl``` (A) If you are not using security, copy and paste the contents below into the YAML file. ``` apiVersion: apps/v1 kind: Deployment metadata: name: opcplc labels: app: opcplc spec: selector: matchLabels: app: opcplc template: metadata: labels: app: opcplc name: opc-plc-server spec: hostNetwork: true containers: name: opcplc1 image: mcr.microsoft.com/iotedge/opc-plc:latest ports: containerPort: 50000 args: [\"--portnum=50000\", \"--autoaccept\", \"--fastnodes=1\", \"--fasttype=uint\", \"--fasttypelowerbound=65\", \"--fasttypeupperbound=85\", \"--fasttyperandomization=True\", \"--showpnjsonph\", \"--unsecuretransport\"] name: opcplc2 image: mcr.microsoft.com/iotedge/opc-plc:latest ports: containerPort: 50001 args: [\"--portnum=50001\", \"--autoaccept\", \"--fastnodes=1\", \"--fasttype=uint\", \"--fasttypelowerbound=65\", \"--fasttypeupperbound=85\", \"--fasttyperandomization=True\", \"--showpnjsonph\", \"--unsecuretransport\"]``` (B) If you are using security, copy and paste the contents below into the YAML file, replacing the path in the last line with your path to the folder that contains the certificates. ``` apiVersion: apps/v1 kind: Deployment metadata: name: opcplc labels: app: opcplc spec: selector: matchLabels: app: opcplc template: metadata: labels: app: opcplc name: opc-plc-server spec: hostNetwork: true containers: name: opcplc1 image: mcr.microsoft.com/iotedge/opc-plc:latest ports: containerPort: 50000 args: [\"--portnum=50000\", \"--autoaccept\", \"--fastnodes=1\", \"--fasttype=uint\", \"--fasttypelowerbound=65\", \"--fasttypeupperbound=85\", \"--fasttyperandomization=True\", \"--showpnjsonph\"] volumeMounts: mountPath: /app/pki name: opc-certs name: opcplc2 image: mcr.microsoft.com/iotedge/opc-plc:latest ports: containerPort: 50001 args: [\"--portnum=50001\", \"--autoaccept\", \"--fastnodes=1\", \"--fasttype=uint\", \"--fasttypelowerbound=65\", \"--fasttypeupperbound=85\", \"--fasttyperandomization=True\", \"--showpnjsonph\"] volumeMounts: mountPath: /app/pki name: opc-certs volumes: name: opc-certs hostPath: path: <path/to/plc>``` Save the file, then simply apply your deployment YAML to create two OPC UA servers. ``` kubectl apply -f opc-deployment.yaml``` We have successfully created two OPC UA PLC servers, each with one fast PLC node which generates an unsigned integer with lower bound = 65 and upper bound = 85 at a rate of 1. It should be up and running. Make sure your OPC UA PLC Servers are running. Now it is time to install the Akri using Helm. When installing Akri, we can specify that we want to deploy the OPC UA Discovery Handlers by setting the helm value opcua.discovery.enabled=true. We also specify that we want to create an OPC UA Configuration with --set opcua.configuration.enabled=true. In the Configuration, any values that should be set as environment variables in brokers can be set in opcua.configuration.brokerProperties. In this scenario, we will specify the Identifier and NamespaceIndex of the NodeID we want the brokers to monitor. 
In our case that is our temperature variable we made earlier, which has an Identifier of FastUInt1 and NamespaceIndex of 2. Your OPC PLC discovery URL will look something like \"opc.tcp://<host IP address>:50000/. If using security, uncomment --set opcua.configuration.mountCertificates='true'. Note: See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.name=akri-opcua-monitoring \\ --set opcua.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/opcua-monitoring-broker\" \\ --set opcua.configuration.brokerProperties.IDENTIFIER='FastUInt1' \\ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://<HOST IP>:50000/\" \\ --set opcua.configuration.discoveryDetails.discoveryUrls[1]=\"opc.tcp://<HOST IP>:50001/\" \\ Note: FastUInt1 is the identifier of the fast changing node that is provided by the OPC PLC server. Akri Agent will discover the two Servers and create an Instance for each Server. Watch two broker pods spin up, one for each" }, { "data": "``` kubectl get pods -o wide --watch``` To inspect more of the elements of Akri: Run kubectl get crd, and you should see the CRDs listed. Run kubectl get akric, and you should see akri-opcua-monitoring. If the OPC PLC Servers were discovered and pods spun up, the instances can be seen by running kubectl get akrii and further inspected by running kubectl get akrii akri-opcua-monitoring-<ID> -o yaml A sample anomaly detection web application was created for this end-to-end demo. It has a gRPC stub that calls the brokers' gRPC services, getting the latest temperature value. It then determines whether this value is an outlier to the dataset using the Local Outlier Factor strategy. The dataset is simply a csv with the numbers between 70-80 repeated several times; therefore, any value significantly outside this range will be seen as an outlier. The web application serves as a log, displaying all the temperature values and the address of the OPC UA Server that sent the values. It shows anomaly values in red. The anomalies always have a value of 120 due to how we set up the DoSimulation function in the OPC UA Servers. Deploy the anomaly detection app and watch a pod spin up for the app. ``` kubectl apply -f https://raw.githubusercontent.com/project-akri/akri/main/deployment/samples/akri-anomaly-detection-app.yaml``` ``` kubectl get pods -o wide --watch``` Determine which port the service is running on. Be sure to save this port number for the next step. ``` kubectl get service/akri-anomaly-detection-app --output=jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}' && echo``` SSH port forwarding can be used to access the streaming application. In a new terminal, enter your ssh command to to access your VM followed by the port forwarding request. The following command will use port 50000 on the host. Feel free to change it if it is not available. Be sure to replace <anomaly-app-port> with the port number outputted in the previous step. ``` ssh someuser@<Ubuntu VM IP address> -L 50000:localhost:<anomaly-app-port>``` Note we've noticed issues with port forwarding with WSL 2. Please use a different terminal. Navigate to http://localhost:50000/. 
It takes 3 seconds for the site to load, after which, you should see a log of the temperature values, which updates every few seconds. Note how the values are coming from two different DiscoveryURLs, namely the ones for each of the two OPC UA Servers. Delete the anomaly detection application deployment and service. ``` kubectl delete service akri-anomaly-detection-app kubectl delete deployment akri-anomaly-detection-app``` Delete the OPC UA Monitoring Configuration and watch the instances, pods, and services be deleted. ``` kubectl delete akric akri-opcua-monitoring watch kubectl get pods,services,akric,akrii -o wide``` Bring down the Akri Agent, Controller, and CRDs. ``` helm delete akri kubectl delete crd instances.akri.sh kubectl delete crd configurations.akri.sh``` Delete the OPC UA server deployment. ``` kubectl delete -f opc-deployment.yaml``` Now that you have the end to end demo running let's talk about some ways you can go beyond the demo to better understand the advantages of Akri. This section will cover: Adding a node to the cluster Using a Local Discovery Server to discover the Servers instead of passing the DiscoveryURLs to the OPC UA Monitoring Configuration Modifying the OPC UA Configuration to filter out an OPC UA Server Creating a different broker and end application Creating a new OPC UA Configuration To see how Akri easily scales as nodes are added to the cluster, add another node to your (K3s, MicroK8s, or vanilla Kubernetes) cluster." }, { "data": "If you are using MicroK8s, create another MicroK8s instance, following the same steps as in Setting up a single-node cluster above. Then, in your first VM that is currently running Akri, get the join command by running microk8s add-node. In your new VM, run one of the join commands outputted in the previous step. Confirm that you have successfully added a node to the cluster by running the following in your control plane VM: ``` kubectl get no``` You can see that another Agent pod has been deployed to the new node; however, no new OPC UA Monitoring brokers have been deployed. This is because the default capacity for OPC UA is 1, so by default only one Node is allowed to utilize a device via a broker. ``` kubectl get pods -o wide``` Let's play around with the capacity value and use the helm upgrade command to modify our OPC UA Monitoring Configuration such that the capacity is 2. On the control plane node, run the following, once again uncommenting --set opcua.configuration.mountCertificates='true' if using security. Watch as the broker terminates and then four come online in a Running state. ``` helm upgrade akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.name=akri-opcua-monitoring \\ --set opcua.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/opcua-monitoring-broker\" \\ --set opcua.configuration.brokerProperties.IDENTIFIER='FastUInt1' \\ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://<HOST IP>:50000/\" \\ --set opcua.configuration.discoveryDetails.discoveryUrls[1]=\"opc.tcp://<HOST IP>:50001/\" \\ --set opcua.capacity=2 \\ ``` watch kubectl get pods,akrii -o wide``` Once you are done using Akri, you can remove your worker node from the cluster. 
For MicroK8s this is done by running on the worker node: ``` microk8s leave``` Then, to complete the node removal, on the host run the following, inserting the name of the worker node (you can look it up with microk8s kubectl get no): ``` microk8s remove-node <node name>``` This walk-through only supports setting up an LDS on Windows, since that is the OS the OPC Foundation sample LDS executable was written for. A Local Discovery Server (LDS) is a unique type of OPC UA server which maintains a list of OPC UA servers that have registered with it. The OPC UA Configuration takes in a list of DiscoveryURLs, whether for LDSes or a specific servers. Rather than having to pass in the DiscoveryURL for every OPC UA Server you want Akri to discover and deploy brokers to, you can set up a Local Discovery Server on the machine your servers are running on, make the servers register with the LDS on start up, and pass only the LDS DiscoveryURL into the OPC UA Monitoring Configuration. Agent will ask the LDS for the addresses of all the servers registered with it and the demo continues as it would've without an LDS. The OPC Foundation has provided a Windows based LDS executable which can be downloaded from their website. Download version 1.03.401. It runs as a background service on Windows and can be started or stopped under Windows -> Services. The OPC Foundation has provided documentation on configuring your LDS. Most importantly, it states that you must add the LDS executable to your firewall as an inbound rule. Make sure you have restarted your OPC UA Servers, since they attempt to register with their LDS on start up. Now, we can install Akri with the OPC UA Configuration, passing in the LDS DiscoveryURL instead of both servers'" }, { "data": "Replace \"Windows host IP address\" with the IP address of the Windows machine you installed the LDS on (and is hosting the servers). Be sure to uncomment mounting certificates if you are enabling security: ``` helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.name=akri-opcua-monitoring \\ --set opcua.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/opcua-monitoring-broker\" \\ --set opcua.configuration.brokerProperties.IDENTIFIER='FastUInt1' \\ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://<Windows host IP address>:4840/\" \\ You can watch as an Instance is created for each Server and two broker pods are spun up. ``` watch kubectl get pods,akrii -o wide``` Instead of deploying brokers to all servers registered with specified Local Discovery Servers, an operator can choose to include or exclude a list of application names (the applicationName property of a server's ApplicationDescription as specified by UA Specification 12). For example, to discover all servers registered with the default LDS except for the server named \"SomeServer0\", do the following. 
``` helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.name=akri-opcua-monitoring \\ --set opcua.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/opcua-monitoring-broker\" \\ --set opcua.configuration.brokerProperties.IDENTIFIER='FastUInt1' \\ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://<Windows host IP address>:4840/\" \\ --set opcua.configuration.discoveryDetails.applicationNames.action=Exclude \\ --set opcua.configuration.discoveryDetails.applicationNames.items[0]=\"SomeServer0\" \\ Note: See the cluster setup steps for information on how to set the crictl configuration variable AKRIHELMCRICTL_CONFIGURATION Alternatively, to only discover the server named \"SomeServer0\", do the following: ``` helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.name=akri-opcua-monitoring \\ --set opcua.configuration.brokerPod.image.repository=\"ghcr.io/project-akri/akri/opcua-monitoring-broker\" \\ --set opcua.configuration.brokerProperties.IDENTIFIER='FastUInt1' \\ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://<Windows host IP address>:4840/\" \\ --set opcua.configuration.discoveryDetails.applicationNames.action=Include \\ --set opcua.configuration.discoveryDetails.applicationNames.items[0]=\"SomeServer0\" \\ The OPC UA Monitoring broker and anomaly detection application support a very specific scenario: monitoring an OPC UA Variable for anomalies. The workload or broker you want to deploy to discovered OPC UA Servers may be different. OPC UA Servers' address spaces are widely varied, so the options for broker implementations are endless. Passing the NodeID Identifier and NamespaceIndex as environment variables may still suit your needs; however, if targeting one NodeID is too limiting or irrelevant, instead of passing a specific NodeID to your broker Pods, you could specify any other environment variables via --set opcua.configuration.brokerProperties.KEY='VALUE'. Or, your broker may not need additional information passed to it at all. Decide whether to pass environment variables, what servers to discover, and set the broker pod image to be your container image, say ghcr.io/<USERNAME>/opcua-broker. ``` helm repo add akri-helm-charts https://project-akri.github.io/akri/ helm install akri akri-helm-charts/akri \\ $AKRIHELMCRICTL_CONFIGURATION \\ --set opcua.discovery.enabled=true \\ --set opcua.configuration.enabled=true \\ --set opcua.configuration.discoveryDetails.discoveryUrls[0]=\"opc.tcp://<HOST IP>:50000/\" \\ --set opcua.configuration.discoveryDetails.discoveryUrls[1]=\"opc.tcp://<HOST IP>:50001/\" \\ --set opcua.configuration.brokerPod.image.repository='ghcr.io/<USERNAME>/opcua-broker' Note: set opcua.configuration.brokerPod.image.tag to specify an image tag (defaults to latest). Now, your broker will be deployed to all discovered OPC UA servers. Next, you can create a Kubernetes deployment for your own end application like anomaly-detection-app.yaml and apply it to your Kubernetes cluster. Helm allows us to parametrize the commonly modified fields in our Configuration files, and we have provided many. 
Run helm inspect values akri-helm-charts/akri to see what values of the generic OPC UA Configuration can be customized, such as the Configuration and Instance ServiceSpecs, capacity, and broker PodSpec. We saw in the previous section how broker Pod environment variables can be specified via --set opcua.configuration.brokerProperties.KEY='VALUE'. For more advanced configuration changes that are not aided by the generic OPC UA Configuration Helm chart, such as credentials naming, we suggest downloading the OPC UA Configuration file using Helm and then manually modifying it. See the documentation on customizing an Akri installation for more details." } ]
{ "category": "Provisioning", "file_name": "basic_concepts.html.md", "project_name": "Ansible", "subcategory": "Automation & Configuration" }
[ { "data": "Ansible getting started Installation, Upgrade & Configuration Using Ansible Contributing to Ansible Extending Ansible Common Ansible Scenarios Network Automation Ansible Galaxy Reference & Appendices Roadmaps These concepts are common to all uses of Ansible. You should understand them before using Ansible or reading the documentation. Control node Managed nodes Inventory Playbooks Plays Roles Tasks Handlers Modules Plugins Collections The machine from which you run the Ansible CLI tools (ansible-playbook , ansible, ansible-vault and others). You can use any computer that meets the software requirements as a control node - laptops, shared desktops, and servers can all run Ansible. You can also run Ansible in containers known as Execution Environments. Multiple control nodes are possible, but Ansible itself does not coordinate across them, see AAP for such features. Also referred to as hosts, these are the target devices (servers, network appliances or any computer) you aim to manage with Ansible. Ansible is not normally installed on managed nodes, unless you are using ansible-pull, but this is rare and not the recommended setup. A list of managed nodes provided by one or more inventory sources. Your inventory can specify information specific to each node, like IP address. It is also used for assigning groups, that both allow for node selection in the Play and bulk variable assignment. To learn more about inventory, see the Working with Inventory section. Sometimes an inventory source file is also referred to as a hostfile. They contain Plays (which are the basic unit of Ansible execution). This is both an execution concept and how we describe the files on which ansible-playbook operates. Playbooks are written in YAML and are easy to read, write, share and understand. To learn more about playbooks, see Ansible playbooks. The main context for Ansible execution, this playbook object maps managed nodes (hosts) to" }, { "data": "The Play contains variables, roles and an ordered lists of tasks and can be run repeatedly. It basically consists of an implicit loop over the mapped hosts and tasks and defines how to iterate over them. A limited distribution of reusable Ansible content (tasks, handlers, variables, plugins, templates and files) for use inside of a Play. To use any Role resource, the Role itself must be imported into the Play. The definition of an action to be applied to the managed host. You can execute a single task once with an ad hoc command using ansible or ansible-console (both create a virtual Play). A special form of a Task, that only executes when notified by a previous task which resulted in a changed status. The code or binaries that Ansible copies to and executes on each managed node (when needed) to accomplish the action defined in each Task. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. Ansible modules are grouped in collections. For an idea of how many collections Ansible includes, see the Collection Index. Pieces of code that expand Ansibles core capabilities. Plugins can control how you connect to a managed node (connection plugins), manipulate data (filter plugins) and even control what is displayed in the console (callback plugins). See Working with plugins for details. 
A format in which Ansible content is distributed that can contain playbooks, roles, modules, and plugins. You can install and use collections through Ansible Galaxy. To learn more about collections, see Using Ansible collections. Collection resources can be used independently and discretely from each other." } ]
{ "category": "Provisioning", "file_name": "usb-camera-demo.md", "project_name": "Akri", "subcategory": "Automation & Configuration" }
[ { "data": "Akri is hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox project. Akri is a Kubernetes Resource Interface that lets you easily expose heterogeneous leaf devices (such as IP cameras and USB devices) as resources in a Kubernetes cluster, while also supporting the exposure of embedded hardware resources such as GPUs and FPGAs. Akri continually detects nodes that have access to these devices and schedules workloads based on them. Simply put: you name it, Akri finds it, you use it. At the edge, there are a variety of sensors, controllers, and MCU class devices that are producing data and performing actions. For Kubernetes to be a viable edge computing solution, these heterogeneous leaf devices need to be easily utilized by Kubernetes clusters. However, many of these leaf devices are too small to run Kubernetes themselves. Akri is an open source project that exposes these leaf devices as resources in a Kubernetes cluster. It leverages and extends the Kubernetes device plugin framework, which was created with the cloud in mind and focuses on advertising static resources such as GPUs and other system hardware. Akri took this framework and applied it to the edge, where there is a diverse set of leaf devices with unique communication protocols and intermittent availability. Akri is made for the edge, handling the dynamic appearance and disappearance of leaf devices. Akri provides an abstraction layer similar to CNI, but instead of abstracting the underlying network details, it is removing the work of finding, utilizing, and monitoring the availability of the leaf device. An operator simply has to apply a Akri Configuration to a cluster, specifying the Discovery Handler (say ONVIF) that should be used to discover the devices and the Pod that should be deployed upon discovery (say a video frame server). Then, Akri does the rest. An operator can also allow multiple nodes to utilize a leaf device, thereby providing high availability in the case where a node goes offline. Furthermore, Akri will automatically create a Kubernetes service for each type of leaf device (or Akri Configuration), removing the need for an application to track the state of pods or nodes. Most importantly, Akri was built to be extensible. Akri currently supports ONVIF, udev, and OPC UA Discovery Handlers, but more can be easily added by community members like you. The more protocols Akri can support, the wider an array of leaf devices Akri can discover. We are excited to work with you to build a more connected edge. Akri's documentation is divided into six sections: User Guide: Documentation for Akri users. Discovery Handlers: Documentation on how to configure Akri using Akri's currently supported Discovery Handlers Demos: End-to-End demos that demostrate how Akri can discover and use devices. Contain sample brokers and end applications. Architecture: Documentation that details the design and implementation of Akri's components. Development: Documentation for Akri developers or how to build, test, and extend Akri. Community: Information on what's next for Akri and how to get involved! The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page Last updated 3 months ago Was this helpful?" } ]
{ "category": "Provisioning", "file_name": "get_started_ansible.html.md", "project_name": "Ansible", "subcategory": "Automation & Configuration" }
[ { "data": "Ansible getting started Installation, Upgrade & Configuration Using Ansible Contributing to Ansible Extending Ansible Common Ansible Scenarios Network Automation Ansible Galaxy Reference & Appendices Roadmaps Get started with Ansible by creating an automation project, building an inventory, and creating a Hello World playbook. Install Ansible. ``` pip install ansible ``` Create a project folder on your filesystem. ``` mkdir ansiblequickstart && cd ansiblequickstart ``` Using a single directory structure makes it easier to add to source control as well as to reuse and share automation content. Continue getting started with Ansible by building an inventory. See also Installation guide with instructions for installing Ansible on various operating systems Demonstrations of different Ansible usecases Labs to provide further knowledge on different topics Questions? Help? Ideas? Ask the community Copyright Ansible project contributors. Last updated on Jun 06, 2024." } ]
{ "category": "Provisioning", "file_name": "#welcome-to.md", "project_name": "BOSH", "subcategory": "Automation & Configuration" }
[ { "data": "BOSH is a project that unifies release engineering, deployment, and lifecycle management of small and large-scale cloud software. BOSH can provision and deploy software over hundreds of VMs. It also performs monitoring, failure recovery, and software updates with zero-to-minimal downtime. While BOSH was developed to deploy Cloud Foundry PaaS, it can also be used to deploy almost any other software (Hadoop, for instance). BOSH is particularly well-suited for large distributed systems. In addition, BOSH supports multiple Infrastructure as a Service (IaaS) providers like VMware vSphere, Google Cloud Platform, Amazon Web Services EC2, Microsoft Azure, OpenStack, and Alibaba Cloud. There is a Cloud Provider Interface (CPI) that enables users to extend BOSH to support additional IaaS providers such as Apache CloudStack and VirtualBox. The bosh CLI is the command line tool used for interacting with all things BOSH. Release binaries are available on GitHub. See Installation for more details on how to download and install." } ]
{ "category": "Provisioning", "file_name": "introduction.html.md", "project_name": "Ansible", "subcategory": "Automation & Configuration" }
[ { "data": "Ansible getting started Installation, Upgrade & Configuration Using Ansible Contributing to Ansible Extending Ansible Common Ansible Scenarios Network Automation Ansible Galaxy Reference & Appendices Roadmaps Ansible provides open-source automation that reduces complexity and runs everywhere. Using Ansible lets you automate virtually any task. Here are some common use cases for Ansible: Eliminate repetition and simplify workflows Manage and maintain system configuration Continuously deploy complex software Perform zero-downtime rolling updates Ansible uses simple, human-readable scripts called playbooks to automate your tasks. You declare the desired state of a local or remote system in your playbook. Ansible ensures that the system remains in that state. As automation technology, Ansible is designed around the following principles: Low maintenance overhead by avoiding the installation of additional software across IT infrastructure. Automation playbooks use straightforward YAML syntax for code that reads like documentation. Ansible is also decentralized, using SSH existing OS credentials to access to remote machines. Easily and quickly scale the systems you automate through a modular design that supports a large range of operating systems, cloud platforms, and network devices. When the system is in the state your playbook describes Ansible does not change anything, even if the playbook runs multiple times. Ready to start using Ansible? Get up and running in a few easy steps. Copyright Ansible project contributors. Last updated on Jun 06, 2024." } ]
{ "category": "Provisioning", "file_name": "code_of_conduct.html.md", "project_name": "Ansible", "subcategory": "Automation & Configuration" }
[ { "data": "Ansible getting started Installation, Upgrade & Configuration Using Ansible Contributing to Ansible Extending Ansible Common Ansible Scenarios Network Automation Ansible Galaxy Reference & Appendices Roadmaps Topics Community Code of Conduct Anti-harassment policy Policy violations Every community can be strengthened by a diverse variety of viewpoints, insights, opinions, skillsets, and skill levels. However, with diversity comes the potential for disagreement and miscommunication. The purpose of this Code of Conduct is to ensure that disagreements and differences of opinion are conducted respectfully and on their own merits, without personal attacks or other behavior that might create an unsafe or unwelcoming environment. These policies are not designed to be a comprehensive set of Things You Cannot Do. We ask that you treat your fellow community members with respect and courtesy, and in general, Dont Be A Jerk. This Code of Conduct is meant to be followed in spirit as much as in letter and is not exhaustive. All Ansible events and participants therein are governed by this Code of Conduct and anti-harassment policy. We expect organizers to enforce these guidelines throughout all events, and we expect attendees, speakers, sponsors, and volunteers to help ensure a safe environment for our whole community. Specifically, this Code of Conduct covers participation in all Ansible-related forums and mailing lists, code and documentation contributions, public chat (Matrix, IRC), private correspondence, and public meetings. Ansible community members are Considerate Contributions of every kind have far-ranging consequences. Just as your work depends on the work of others, decisions you make surrounding your contributions to the Ansible community will affect your fellow community members. You are strongly encouraged to take those consequences into account while making decisions. Patient Asynchronous communication can come with its own frustrations, even in the most responsive of communities. Please remember that our community is largely built on volunteered time, and that questions, contributions, and requests for support may take some time to receive a response. Repeated bumps or reminders in rapid succession are not good displays of patience. Additionally, it is considered poor manners to ping a specific person with general questions. Pose your question to the community as a whole, and wait patiently for a response. Respectful Every community inevitably has disagreements, but remember that it is possible to disagree respectfully and courteously. Disagreements are never an excuse for rudeness, hostility, threatening behavior, abuse (verbal or physical), or personal attacks. Kind Everyone should feel welcome in the Ansible community, regardless of their background. Please be courteous, respectful and polite to fellow community members. Do not make or post offensive comments related to skill level, gender, gender identity or expression, sexual orientation, disability, physical appearance, body size, race, or religion. Sexualized images or imagery, real or implied violence, intimidation, oppression, stalking, sustained disruption of activities, publishing the personal information of others without explicit permission to do so, unwanted physical contact, and unwelcome sexual attention are all strictly prohibited. Additionally, you are encouraged not to make assumptions about the background or identity of your fellow community members. 
Inquisitive The only stupid question is the one that does not get asked. We encourage our users to ask early and ask often. Rather than asking whether you can ask a question (the answer is always yes!), instead, simply ask your question. You are encouraged to provide as many specifics as" }, { "data": "Code snippets in the form of Gists or other paste site links are almost always needed in order to get the most helpful answers. Refrain from pasting multiple lines of code directly into the chat channels - instead use gist.github.com or another paste site to provide code snippets. Helpful The Ansible community is committed to being a welcoming environment for all users, regardless of skill level. We were all beginners once upon a time, and our community cannot grow without an environment where new users feel safe and comfortable asking questions. It can become frustrating to answer the same questions repeatedly; however, community members are expected to remain courteous and helpful to all users equally, regardless of skill or knowledge level. Avoid providing responses that prioritize snideness and snark over useful information. At the same time, everyone is expected to read the provided documentation thoroughly. We are happy to answer questions, provide strategic guidance, and suggest effective workflows, but we are not here to do your job for you. Harassment includes (but is not limited to) all of the following behaviors: Offensive comments related to gender (including gender expression and identity), age, sexual orientation, disability, physical appearance, body size, race, and religion Derogatory terminology including words commonly known to be slurs Posting sexualized images or imagery in public spaces Deliberate intimidation Stalking Posting others personal information without explicit permission Sustained disruption of talks or other events Inappropriate physical contact Unwelcome sexual attention Participants asked to stop any harassing behavior are expected to comply immediately. Sponsors are also subject to the anti-harassment policy. In particular, sponsors should not use sexualized images, activities, or other material. Meetup organizing staff and other volunteer organizers should not use sexualized attire or otherwise create a sexualized environment at community events. In addition to the behaviors outlined above, continuing to behave a certain way after you have been asked to stop also constitutes harassment, even if that behavior is not specifically outlined in this policy. It is considerate and respectful to stop doing something after you have been asked to stop, and all community members are expected to comply with such requests immediately. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting codeofconduct@ansible.com, to anyone with administrative power in community chat (Admins or Moderators on Matrix, ops on IRC), or to the local organizers of an event. Meetup organizers are encouraged to prominently display points of contact for reporting unacceptable behavior at local events. If a participant engages in harassing behavior, the meetup organizers may take any action they deem appropriate. These actions may include but are not limited to warning the offender, expelling the offender from the event, and barring the offender from future community events. 
Organizers will be happy to help participants contact security or local law enforcement, provide escorts to an alternate location, or otherwise assist those experiencing harassment to feel safe for the duration of the meetup. We value the safety and well-being of our community members and want everyone to feel welcome at our events, both online and offline. We expect all participants, organizers, speakers, and attendees to follow these policies at all of our event venues and event-related social events. The Ansible Community Code of Conduct is licensed under the Creative Commons Attribution-Share Alike 3.0 license. Our Code of Conduct was adapted from Codes of Conduct of other open source projects, including: Contributor Covenant Elastic The Fedora Project OpenStack Puppet Labs Ubuntu" } ]
{ "category": "Provisioning", "file_name": "mkdocs-material.md", "project_name": "BOSH", "subcategory": "Automation & Configuration" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. Documentation that simply works | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:--|:--|:--|-:|-:| | Latest commitHistory6,272 Commits | Latest commitHistory6,272 Commits | Latest commitHistory6,272 Commits | nan | nan | | .devcontainer | .devcontainer | .devcontainer | nan | nan | | .github | .github | .github | nan | nan | | .vscode | .vscode | .vscode | nan | nan | | docs | docs | docs | nan | nan | | includes | includes | includes | nan | nan | | material | material | material | nan | nan | | src | src | src | nan | nan | | tools/build | tools/build | tools/build | nan | nan | | typings | typings | typings | nan | nan | | .browserslistrc | .browserslistrc | .browserslistrc | nan | nan | | .dockerignore | .dockerignore | .dockerignore | nan | nan | | .editorconfig | .editorconfig | .editorconfig | nan | nan | | .eslintignore | .eslintignore | .eslintignore | nan | nan | | .eslintrc | .eslintrc | .eslintrc | nan | nan | | .gitattributes | .gitattributes | .gitattributes | nan | nan | | .gitignore | .gitignore | .gitignore | nan | nan | | .stylelintignore | .stylelintignore | .stylelintignore | nan | nan | | .stylelintrc | .stylelintrc | .stylelintrc | nan | nan | | CHANGELOG | CHANGELOG | CHANGELOG | nan | nan | | CODEOFCONDUCT.md | CODEOFCONDUCT.md | CODEOFCONDUCT.md | nan | nan | | CONTRIBUTING.md | CONTRIBUTING.md | CONTRIBUTING.md | nan | nan | | Dockerfile | Dockerfile | Dockerfile | nan | nan | | LICENSE | LICENSE | LICENSE | nan | nan | | README.md | README.md | README.md | nan | nan | | giscus.json | giscus.json | giscus.json | nan | nan | | mkdocs.yml | mkdocs.yml | mkdocs.yml | nan | nan | | package-lock.json | package-lock.json | package-lock.json | nan | nan | | package.json | package.json | package.json | nan | nan | | pyproject.toml | pyproject.toml | pyproject.toml | nan | nan | | requirements.txt | requirements.txt | requirements.txt | nan | nan | | tsconfig.json | tsconfig.json | tsconfig.json | nan | nan | | View all files | View all files | View all files | nan | nan | A powerful documentation framework on top of MkDocs Write your documentation in Markdown and create a professional static site for your Open Source or commercial project in minutes searchable, customizable, more than 60 languages, for all" }, { "data": "Check out the demo squidfunk.github.io/mkdocs-material. Silver sponsors Bronze sponsors Focus on the content of your documentation and create a professional static site in minutes. No need to know HTML, CSS or JavaScript let Material for MkDocs do the heavy lifting for you. Serve your documentation with confidence Material for MkDocs automatically adapts to perfectly fit the available screen estate, no matter the type or size of the viewing device. Desktop. Tablet. Mobile. All great. Make it yours change the colors, fonts, language, icons, logo, and more with a few lines of configuration. Material for MkDocs can be easily extended and provides many options to alter appearance and behavior. Don't let your users wait get incredible value with a small footprint by using one of the fastest themes available with excellent performance, yielding optimal search engine rankings and happy users that return. Make accessibility a priority users can navigate your documentation with touch devices, keyboards, and screen readers. Semantic markup ensures that your documentation works for everyone. 
Trust 20,000+ users choose a mature and actively maintained solution built with state-of-the-art Open Source technologies. Keep ownership of your content without fear of vendor lock-in. Licensed under MIT. Material for MkDocs can be installed with pip: ``` pip install mkdocs-material``` Add the following lines to mkdocs.yml: ``` theme: name: material``` For detailed installation instructions, configuration options, and a demo, visit squidfunk.github.io/mkdocs-material ArXiv, Atlassian, AWS, Bloomberg, CERN, CloudFlare, Datadog, Google, Hewlett Packard, ING, Intel, JetBrains, LinkedIn, Microsoft, Mozilla, Netflix, Red Hat, Salesforce, SIEMENS, Slack, Square, Zalando Arduino, Auto-GPT, AutoKeras, BFE, CentOS, Crystal, Electron, FastAPI, GoReleaser, Knative, Kubernetes, kSQL, Nokogiri, OpenFaaS, Percona, Pi-Hole, Pydantic, PyPI, Renovate, Traefik, Trivy, Vapor, ZeroNet, WebKit, WTF MIT License Copyright (c) 2016-2024 Martin Donath Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Documentation that simply works" } ]
{ "category": "Provisioning", "file_name": "mkdocs.md", "project_name": "BOSH", "subcategory": "Automation & Configuration" }
[ { "data": "Note Do not follow this procedure if vSphere HA is enabled and bosh-vsphere-cpi is v30+; vSphere HA will automatically move all VMs from the failed host to other good hosts. This topic describes how to recreate VMs in the event of an ESXi host failure. The BOSH Resurrector is unable to recreate a VM on a failed ESXi host without manual intervention. It can not recreate a VM until the VM has been successfully deleted, and it can not delete the VM because the ESXi host is unavailable. The following steps will allow the Resurrector to recreate these VMs on a healthy host. Re-upload all stemcells currently in use by the director ``` ++++--+ | Name | OS | Version | CID | ++++--+ | bosh-vsphere-esxi-hvm-centos-7-go_agent | centos-7 | 3184.1 | sc-bc3d762c-71a1-4e76-ae6d-7d2d4366821b | | bosh-vsphere-esxi-ubuntu-xenial-go_agent | ubuntu-xenial | 456.3 | sc-46509b02-a164-4306-89de-99abdaffe8a8 | | bosh-vsphere-esxi-ubuntu-xenial-go_agent | ubuntu-xenial | 456.112 | sc-86d76a55-5bcb-4c12-9fa7-460edd8f94cf | | bosh-vsphere-esxi-ubuntu-xenial-go_agent | ubuntu-xenial | 621.74* | sc-97e9ba2d-6ae0-41d1-beea-082b6635e7cb | ++++--+ ``` ``` bosh upload stemcell https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-xenial-go_agent?v=621.74 --fix ```" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "BOSH", "subcategory": "Automation & Configuration" }
[ { "data": "Note This feature is available with bosh-vsphere-cpi v9+. This topic describes how to migrate VMs and persistent disks from one datastore to another with controlled downtime (limited to max-in-flight instances concurrently updated). Attach new datastore(s) to the hosts where the VMs are running while keeping the old datastore(s) attached to the same hosts. Change deployment manifest for the Director to configure vSphere CPI to reference new datastore(s). ``` properties: vsphere: host: 172.16.68.3 user: root password: vmware datacenters: name: BOSH_DC vm_folder: prod-vms template_folder: prod-templates disk_path: prod-disks datastore_pattern: '\\Anew-prod-ds\\z' # < persistentdatastorepattern: '\\Anew-prod-ds\\z' # < clusters: [BOSH_CL] ``` Redeploy the Director Verify that the Director VM's root, ephemeral and persistent disks are now on the new datastore(s). For each one of the deployments managed by the Director (visible via bosh deployments), run bosh deploy --recreate so that VMs are recreated and persistent disks are reattached. Verify that the persistent disks and VMs were moved to new datastore(s) and there are no remaining disks in the old datastore(s). Alternatively, you may modify the cloud-config at the global level, az level, or disk-type level to specify the target datastore (without a director redeploy)." } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Cadence Workflow", "subcategory": "Automation & Configuration" }
[ { "data": "The Cadence CLI is a command-line tool you can use to perform various tasks on a Cadence server. It can perform domain operations such as register, update, and describe as well as workflow operations like start workflow, show workflow history, and signal workflow. ``` brew install cadence-workflow ``` After the installation is done, you can use CLI: ``` cadence --help ``` This will always install the latest version. Follow this instructions (opens new window) if you need to install older versions of Cadence CLI. The Cadence CLI can be used directly from the Docker Hub image ubercadence/cli or by building the CLI tool locally. Example of using the docker image to describe a domain ``` docker run -it --rm ubercadence/cli:master --address <frontendAddress> --domain samples-domain domain describe ``` master will be the latest CLI binary from the project. But you can specify a version to best match your server version: ``` docker run -it --rm ubercadence/cli:<version> --address <frontendAddress> --domain samples-domain domain describe ``` For example docker run --rm ubercadence/cli:0.21.3 --domain samples-domain domain describe will be the CLI that is released as part of the v0.21.3 release (opens new window). See docker hub page (opens new window) for all the CLI image tags. Note that CLI versions of 0.20.0 works for all server versions of 0.12 to 0.19 as well. That's because the CLI version doesn't change in those versions (opens new window). NOTE: On Docker versions 18.03 and later, you may get a \"connection refused\" error when connecting to local server. You can work around this by setting the host to \"host.docker.internal\" (see here (opens new window) for more info). ``` docker run -it --rm ubercadence/cli:master --address host.docker.internal:7933 --domain samples-domain domain describe ``` NOTE: Be sure to update your image when you want to try new features: docker pull ubercadence/cli:master NOTE: If you are running docker-compose Cadence server, you can also logon to the container to execute CLI: ``` docker exec -it dockercadence1 /bin/bash ``` To build the CLI tool locally, clone the Cadence server repo (opens new window), check out the version tag (e.g. git checkout v0.21.3) and run make tools. This produces an executable called cadence. With a local build, the same command to describe a domain would look like this: ``` cadence --domain samples-domain domain describe ``` Alternatively, you can build the CLI image, see instructions CLI are documented by --help or -h in ANY tab of all levels: ``` $cadence --help NAME: cadence - A command-line tool for cadence users USAGE: cadence [global options] command [command options] [arguments...] 
VERSION: 0.18.4 COMMANDS: domain, d Operate cadence domain workflow, wf Operate cadence workflow tasklist, tl Operate cadence tasklist admin, adm Run admin operation cluster, cl Operate cadence cluster help, h Shows a list of commands or help for one command GLOBAL OPTIONS: --address value, --ad value host:port for cadence frontend service [$CADENCECLIADDRESS] --domain value, --do value cadence workflow domain [$CADENCECLIDOMAIN] --contexttimeout value, --ct value optional timeout for context of RPC call in seconds (default: 5) [$CADENCECONTEXT_TIMEOUT] --help, -h show help --version, -v print the version ``` And ``` $cadence workflow -h NAME: cadence workflow - Operate cadence workflow USAGE: cadence workflow command [command options]" }, { "data": "COMMANDS: activity, act operate activities of workflow show show workflow history showid show workflow history with given workflowid and runid (a shortcut of `show -w <wid> -r <rid>`). run_id is only required for archived history start start a new workflow execution run start a new workflow execution and get workflow progress cancel, c cancel a workflow execution signal, s signal a workflow execution signalwithstart signal the current open workflow if exists, or attempt to start a new run based on IDResuePolicy and signals it terminate, term terminate a new workflow execution list, l list open or closed workflow executions listall, la list all open or closed workflow executions listarchived list archived workflow executions scan, sc, scanall scan workflow executions (need to enable Cadence server on ElasticSearch). It will be faster than listall, but result are not sorted. count, cnt count number of workflow executions (need to enable Cadence server on ElasticSearch) query query workflow execution stack query workflow execution with stack_trace as query type describe, desc show information of workflow execution describeid, descid show information of workflow execution with given workflowid and optional runid (a shortcut of `describe -w <wid> -r <rid>`) observe, ob show the progress of workflow history observeid, obid show the progress of workflow history with given workflowid and optional runid (a shortcut of `observe -w <wid> -r <rid>`) reset, rs reset the workflow, by either eventID or resetType. reset-batch reset workflow in batch by resetType: LastDecisionCompleted,LastContinuedAsNew,BadBinary,DecisionCompletedTime,FirstDecisionScheduled,LastDecisionScheduled,FirstDecisionCompletedTo get base workflowIDs/runIDs to reset, source is from input file or visibility query. batch batch operation on a list of workflows from query. OPTIONS: --help, -h show help ``` ``` $cadence wf signal -h NAME: cadence workflow signal - signal a workflow execution USAGE: cadence workflow signal [command options] [arguments...] OPTIONS: --workflow_id value, --wid value, -w value WorkflowID --run_id value, --rid value, -r value RunID --name value, -n value SignalName --input value, -i value Input for the signal, in JSON format. --input_file value, --if value Input for the signal from JSON file. ``` And etc. The example commands below will use cadence for brevity. Setting environment variables for repeated parameters can shorten the CLI commands. 
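For example, a small sketch of that (assuming a local frontend listening on port 7933 and the samples-domain used throughout this guide): the address and domain global options above can be supplied through the CADENCE_CLI_ADDRESS and CADENCE_CLI_DOMAIN environment variables instead of being repeated as flags.

```
export CADENCE_CLI_ADDRESS=127.0.0.1:7933   # host:port of the Cadence frontend (assumed local here)
export CADENCE_CLI_DOMAIN=samples-domain    # domain used by the examples in this guide

# the --address and --domain flags can now be omitted
cadence domain describe
cadence workflow list
```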
Run cadence for help on top level commands and global options Run cadence domain for help on domain operations Run cadence workflow for help on workflow operations Run cadence tasklist for help on tasklist operations (cadence help, cadence help [domain|workflow] will also print help messages) Note: make sure you have a Cadence server running before using CLI ``` cadence --domain samples-domain domain register cadence --do samples-domain d re ``` If your Cadence cluster has enabled global domains (XDC replication), then you have to specify the replication settings when registering a domain: ``` cadence --domain samples-domain domain register --active_cluster clusterNameA --clusters clusterNameA clusterNameB ``` ``` cadence --domain samples-domain domain describe ``` The following examples assume the CADENCE_CLI_DOMAIN environment variable is set. Start a workflow and see its progress. This command doesn't finish until the workflow completes. ``` cadence workflow run --tl helloWorldGroup --wt main.Workflow --et 60 -i '\"cadence\"' cadence workflow run -h ``` Brief explanation: To run a workflow, the user must specify the tasklist (--tl), the workflow type (--wt), and the execution start to close timeout in seconds (--et). This example uses the cadence-samples workflow and takes a string as input with the -i '\"cadence\"' parameter. Single quotes ('') are used to wrap input as JSON. Note: You need to start the worker so that the workflow can make progress. (Run make && ./bin/helloworld -m worker in cadence-samples to start the worker) ``` cadence tasklist desc --tl helloWorldGroup ``` ``` cadence workflow start --tl helloWorldGroup --wt main.Workflow --et 60 -i '\"cadence\"' cadence workflow start -h cadence workflow start --tl helloWorldGroup --wt main.WorkflowWith3Args --et 60 -i '\"yourinputstring\" 123 {\"Name\":\"my-string\", \"Age\":12345}' ``` The workflow start command is similar to the run command, but immediately returns the workflow_id and run_id after starting the workflow. Use the show command to view the workflow's history/progress. Use option --workflowidreusepolicy or --wrp to configure the workflow ID reuse policy. Option 0 AllowDuplicateFailedOnly: Allow starting a workflow execution using the same workflow ID when a workflow with the same workflow ID is not already running and the last execution close state is one of [terminated, cancelled, timedout, failed]. Option 1 AllowDuplicate: Allow starting a workflow execution using the same workflow ID when a workflow with the same workflow ID is not already running. Option 2 RejectDuplicate: Do not allow starting a workflow execution using the same workflow ID as a previous workflow. ``` cadence workflow start --tl helloWorldGroup --wt main.Workflow --et 60 -i '\"cadence\"' --wid \"<duplicated workflow id>\" --wrp 0 cadence workflow run --tl helloWorldGroup --wt main.Workflow --et 60 -i '\"cadence\"' --wid \"<duplicated workflow id>\" --wrp 1 ``` Memos are immutable key/value pairs that can be attached to a workflow run when starting the workflow. These are visible when listing workflows. More information on memos can be found here. 
``` cadence wf start -tl helloWorldGroup -wt main.Workflow -et 60 -i '\"cadence\"' -memo_key Service Env Instance -memo serverName1 test 5 ``` ``` cadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0 cadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0 cadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717 cadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717 ``` ``` cadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0 cadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0 cadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717 cadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717 ``` ``` cadence workflow list cadence workflow list -m ``` Use --query to list workflows with SQL like query ``` cadence workflow list --query \"WorkflowType='main.SampleParentWorkflow' AND CloseTime = missing \" ``` This will return all open workflows with workflowType as \"main.SampleParentWorkflow\". ``` cadence workflow query -w <wid> -r <rid> --qt <query-type> cadence workflow query -w <wid> -r <rid> --qt stack_trace cadence workflow stack -w <wid> -r <rid> ``` ``` cadence workflow signal -w <wid> -r <rid> -n <signal-name> -i '\"signal-value\"' cadence workflow cancel -w <wid> -r <rid> cadence workflow terminate -w <wid> -r <rid> --reason ``` Terminating a running workflow execution will record a WorkflowExecutionTerminated event as the closing event in the history. No more decision tasks will be scheduled for a terminated workflow execution. Canceling a running workflow execution will record a WorkflowExecutionCancelRequested event in the history, and a new decision task will be scheduled. The workflow has a chance to do some clean up work after cancellation. Batch job is based on List Workflow Query(--query). It supports signal, cancel and terminate as batch job type. For terminating workflows as batch job, it will terminte the children" }, { "data": "Start a batch job(using signal as batch type): ``` cadence --do samples-domain wf batch start --query \"WorkflowType='main.SampleParentWorkflow' AND CloseTime=missing\" --reason \"test\" --bt signal --sig testname This batch job will be operating on 5 workflows. Please confirm[Yes/No]:yes { \"jobID\": \"<batch-job-id>\", \"msg\": \"batch job is started\" } ``` You need to remember the JobID or use List command to get all your batch jobs: ``` cadence --do samples-domain wf batch list ``` Describe the progress of a batch job: ``` cadence --do samples-domain wf batch desc -jid <batch-job-id> ``` Terminate a batch job: ``` cadence --do samples-domain wf batch terminate -jid <batch-job-id> ``` Note that the operation performed by a batch will not be rolled back by terminating the batch. However, you can use reset to rollback your workflows. The Reset command allows resetting a workflow to a particular point and continue running from there. 
There are a lot of use cases: You can reset to some predefined event types: ``` cadence workflow reset -w <wid> -r <rid> --resettype <resettype> --reason \"some_reason\" ``` If you are familiar with the Cadence history event, You can also reset to any decision finish event by using: ``` cadence workflow reset -w <wid> -r <rid> --eventid <decisionfinisheventid> --reason \"some_reason\" ``` Some things to note: To reset multiple workflows, you can use batch reset command: ``` cadence workflow reset-batch --inputfile <fileofworkflowstoreset> --resettype <resettype> --reason \"somereason\" ``` If a bad deployment lets a workflow run into a wrong state, you might want to reset the workflow to the point that the bad deployment started to run. But usually it is not easy to find out all the workflows impacted, and every reset point for each workflow. In this case, auto-reset will automatically reset all the workflows given a bad deployment identifier. Let's get familiar with some concepts. Each deployment will have an identifier, we call it \"Binary Checksum\" as it is usually generated by the md5sum of a binary file. For a workflow, each binary checksum will be associated with an auto-reset point, which contains a runID, an eventID, and the created_time that binary/deployment made the first decision for the workflow. To find out which binary checksum of the bad deployment to reset, you should be aware of at least one workflow running into a bad state. Use the describe command with --resetpointsonly option to show all the reset points: ``` cadence wf desc -w <WorkflowID> --resetpointsonly +-+--+--++ | BINARY CHECKSUM | CREATE TIME | RUNID | EVENTID | +-+--+--++ | c84c5afa552613a83294793f4e664a7f | 2019-05-24 10:01:00.398455019 | 2dd29ab7-2dd8-4668-83e0-89cae261cfb1 | 4 | | aae748fdc557a3f873adbe1dd066713f | 2019-05-24 11:01:00.067691445 | d42d21b8-2adb-4313-b069-3837d44d6ce6 | 4 | ... ... ``` Then use this command to tell Cadence to auto-reset all workflows impacted by the bad deployment. The command will store the bad binary checksum into domain info and trigger a process to reset all your workflows. ``` cadence --do <YourDomainName> domain update --addbadbinary aae748fdc557a3f873adbe1dd066713f --reason \"rollback bad deployment\" ``` As you add the bad binary checksum to your domain, Cadence will not dispatch any decision tasks to the bad binary. So make sure that you have rolled back to a good deployment(or roll out new bits with bug fixes). Otherwise your workflow can't make any progress after auto-reset. Workflow Replay and Shadowing Overview" } ]
{ "category": "Provisioning", "file_name": "cli.md", "project_name": "CDK for Kubernetes (CDK8s)", "subcategory": "Automation & Configuration" }
[ { "data": "CDK8s streamlines the automation of deployment, versioning, and configuration for your Kubernetes resources, so you can build and upgrade your clusters more easily. Whether youre creating new applications, managing microservices, or working with data processing pipelines, cdk8s has you covered. To kick start your journey, choose one of our language guides tailored to your preferred programming language." } ]
{ "category": "Provisioning", "file_name": "cadence-docs.md", "project_name": "Cadence Workflow", "subcategory": "Automation & Configuration" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:|:|:|-:|-:| | Latest commitHistory201 Commits | Latest commitHistory201 Commits | Latest commitHistory201 Commits | nan | nan | | .github/workflows | .github/workflows | .github/workflows | nan | nan | | cypress/integration/slack | cypress/integration/slack | cypress/integration/slack | nan | nan | | src | src | src | nan | nan | | .envrc | .envrc | .envrc | nan | nan | | .gitignore | .gitignore | .gitignore | nan | nan | | .node-version | .node-version | .node-version | nan | nan | | .nvmrc | .nvmrc | .nvmrc | nan | nan | | LICENSE | LICENSE | LICENSE | nan | nan | | README.md | README.md | README.md | nan | nan | | cypress.json | cypress.json | cypress.json | nan | nan | | package-lock.json | package-lock.json | package-lock.json | nan | nan | | package.json | package.json | package.json | nan | nan | | yarn.lock | yarn.lock | yarn.lock | nan | nan | | View all files | View all files | View all files | nan | nan | This will start a local server and can be accessed at http://localhost:8080/ This will start a local server and can be accessed at http://localhost:8080/blog MIT License, please see LICENSE for details." } ]
{ "category": "Provisioning", "file_name": "typescript.md", "project_name": "CDK for Kubernetes (CDK8s)", "subcategory": "Automation & Configuration" }
[ { "data": "``` import { AbstractPod } from 'cdk8s-plus-28' new AbstractPod(scope: Construct, id: string, props?: AbstractPodProps) ``` ``` public addContainer(cont: ContainerProps) ``` ``` public addHostAlias(hostAlias: HostAlias) ``` ``` public addInitContainer(cont: ContainerProps) ``` ``` public addVolume(vol: Volume) ``` ``` public attachContainer(cont: Container) ``` ``` public toNetworkPolicyPeerConfig() ``` ``` public toPodSelector() ``` ``` public toPodSelectorConfig() ``` ``` public toSubjectConfiguration() ``` ``` public readonly automountServiceAccountToken: boolean; ``` ``` public readonly containers: Container[]; ``` ``` public readonly dns: PodDns; ``` ``` public readonly hostAliases: HostAlias[]; ``` ``` public readonly initContainers: Container[]; ``` ``` public readonly podMetadata: ApiObjectMetadataDefinition; ``` ``` public readonly securityContext: PodSecurityContext; ``` ``` public readonly volumes: Volume[]; ``` ``` public readonly dockerRegistryAuth: ISecret; ``` ``` public readonly hostNetwork: boolean; ``` ``` public readonly restartPolicy: RestartPolicy; ``` ``` public readonly serviceAccount: IServiceAccount; ``` ``` public readonly terminationGracePeriod: Duration; ``` Represents an AWS Disk resource that is attached to a kubelets host machine and then exposed to the pod. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore ``` import { AwsElasticBlockStorePersistentVolume } from 'cdk8s-plus-28' new AwsElasticBlockStorePersistentVolume(scope: Construct, id: string, props: AwsElasticBlockStorePersistentVolumeProps) ``` ``` public readonly fsType: string; ``` File system type of this volume. ``` public readonly readOnly: boolean; ``` Whether or not it is mounted as a read-only volume. ``` public readonly volumeId: string; ``` Volume id of this volume. ``` public readonly partition: number; ``` Partition of this volume. AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. ``` import { AzureDiskPersistentVolume } from 'cdk8s-plus-28' new AzureDiskPersistentVolume(scope: Construct, id: string, props: AzureDiskPersistentVolumeProps) ``` ``` public readonly azureKind: AzureDiskPersistentVolumeKind; ``` Azure kind of this volume. ``` public readonly cachingMode: AzureDiskPersistentVolumeCachingMode; ``` Caching mode of this volume. ``` public readonly diskName: string; ``` Disk name of this volume. ``` public readonly diskUri: string; ``` Disk URI of this volume. ``` public readonly fsType: string; ``` File system type of this volume. ``` public readonly readOnly: boolean; ``` Whether or not it is mounted as a read-only volume. Create a secret for basic authentication. https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret ``` import { BasicAuthSecret } from 'cdk8s-plus-28' new BasicAuthSecret(scope: Construct, id: string, props: BasicAuthSecretProps) ``` ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. ``` import { ClusterRole } from 'cdk8s-plus-28' new ClusterRole(scope: Construct, id: string, props?: ClusterRoleProps) ``` ``` public aggregate(key: string, value: string) ``` ``` public allow(verbs: string[], endpoints: IApiEndpoint) ``` The endpoints(s) to apply to. ``` public allowCreate(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowDelete(endpoints: IApiEndpoint) ``` The resource(s) to apply to. 
``` public allowDeleteCollection(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowGet(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowList(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowPatch(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowRead(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowReadWrite(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowUpdate(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public allowWatch(endpoints: IApiEndpoint) ``` The resource(s) to apply to. ``` public bind(subjects: ISubject) ``` a list of subjects to bind to. ``` public bindInNamespace(namespace: string, subjects: ISubject) ``` the namespace to limit permissions to. a list of subjects to bind to. ``` public combine(rol: ClusterRole) ``` ``` import { ClusterRole } from 'cdk8s-plus-28' ClusterRole.fromClusterRoleName(scope: Construct, id: string, name: string) ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly rules: ClusterRolePolicyRule[]; ``` Rules associaated with this Role. Returns a copy, use allow to add rules. A ClusterRoleBinding grants permissions cluster-wide to a user or set of" }, { "data": "``` import { ClusterRoleBinding } from 'cdk8s-plus-28' new ClusterRoleBinding(scope: Construct, id: string, props: ClusterRoleBindingProps) ``` ``` public addSubjects(subjects: ISubject) ``` The subjects to add. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly role: IClusterRole; ``` ``` public readonly subjects: ISubject[]; ``` ConfigMap holds configuration data for pods to consume. ``` import { ConfigMap } from 'cdk8s-plus-28' new ConfigMap(scope: Construct, id: string, props?: ConfigMapProps) ``` ``` public addBinaryData(key: string, value: string) ``` The key. The value. ``` public addData(key: string, value: string) ``` The key. The value. ``` public addDirectory(localDir: string, options?: AddDirectoryOptions) ``` A path to a local directory. Options. ``` public addFile(localFile: string, key?: string) ``` The path to the local file. The ConfigMap key (default to the file name). ``` import { ConfigMap } from 'cdk8s-plus-28' ConfigMap.fromConfigMapName(scope: Construct, id: string, name: string) ``` ``` public readonly binaryData: {[ key: string ]: string}; ``` The binary data associated with this config map. Returns a copy. To add data records, use addBinaryData() or addData(). ``` public readonly data: {[ key: string ]: string}; ``` The data associated with this config map. Returns an copy. To add data records, use addData() or addBinaryData(). ``` public readonly immutable: boolean; ``` Whether or not this config map is immutable. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. A CronJob is responsible for creating a Job and scheduling it based on provided cron schedule. This helps running Jobs in a recurring manner. ``` import { CronJob } from 'cdk8s-plus-28' new CronJob(scope: Construct, id: string, props: CronJobProps) ``` ``` public readonly concurrencyPolicy: string; ``` The policy used by this cron job to determine the concurrency mode in which to schedule jobs. ``` public readonly failedJobsRetained: number; ``` The number of failed jobs retained by this cron job. 
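To make the ConfigMap API documented above concrete, here is a minimal usage sketch; the file path, keys and names are illustrative and not part of the reference:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'config');

// Build a ConfigMap from inline data plus a local file (path is illustrative).
const config = new kplus.ConfigMap(chart, 'AppConfig');
config.addData('LOG_LEVEL', 'info');
config.addFile('./config/app.properties', 'app.properties');

// Reference an existing ConfigMap managed outside this chart.
const shared = kplus.ConfigMap.fromConfigMapName(chart, 'Shared', 'shared-settings');

app.synth();
```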
``` public readonly resourceType: string; ``` Represents the resource type. ``` public readonly schedule: Cron; ``` The schedule this cron job is scheduled to run in. ``` public readonly startingDeadline: Duration; ``` The time by which the running cron job needs to schedule the next job execution. The job is considered as failed if it misses this deadline. ``` public readonly successfulJobsRetained: number; ``` The number of successful jobs retained by this cron job. ``` public readonly suspend: boolean; ``` Whether or not the cron job is currently suspended or not. ``` public readonly timeZone: string; ``` The timezone which this cron job would follow to schedule jobs. A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created. Some typical uses of a DaemonSet are: In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and cpu requests for different hardware types. ``` import { DaemonSet } from 'cdk8s-plus-28' new DaemonSet(scope: Construct, id: string, props?: DaemonSetProps) ``` ``` public readonly minReadySeconds: number; ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. A Deployment provides declarative updates for Pods and" }, { "data": "You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. Note: Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below. Use Case The following are typical use cases for Deployments: ``` import { Deployment } from 'cdk8s-plus-28' new Deployment(scope: Construct, id: string, props?: DeploymentProps) ``` ``` public exposeViaIngress(path: string, options?: ExposeDeploymentViaIngressOptions) ``` The ingress path to register under. Additional options. ``` public exposeViaService(options?: DeploymentExposeViaServiceOptions) ``` Options to determine details of the service and port exposed. ``` public markHasAutoscaler() ``` ``` public toScalingTarget() ``` ``` public readonly minReady: Duration; ``` Minimum duration for which a newly created pod should be ready without any of its container crashing, for it to be considered available. ``` public readonly progressDeadline: Duration; ``` The maximum duration for a deployment to make progress before it is considered to be failed. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly strategy: DeploymentStrategy; ``` ``` public readonly replicas: number; ``` Number of desired pods. ``` public readonly hasAutoscaler: boolean; ``` If this is a target of an autoscaler. Create a secret for storing credentials for accessing a container image registry. 
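A short sketch of the Deployment API documented above (image and names are illustrative); exposeViaService() creates and returns a Service that selects the deployment's pods:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'web');

// A Deployment with three replicas of a single container.
const deployment = new kplus.Deployment(chart, 'Web', {
  replicas: 3,
  containers: [{ image: 'nginx:1.25', portNumber: 80 }],
});

// Expose the deployment inside the cluster via a Service.
const service = deployment.exposeViaService();

app.synth();
```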
https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets ``` import { DockerConfigSecret } from 'cdk8s-plus-28' new DockerConfigSecret(scope: Construct, id: string, props: DockerConfigSecretProps) ``` GCEPersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk ``` import { GCEPersistentDiskPersistentVolume } from 'cdk8s-plus-28' new GCEPersistentDiskPersistentVolume(scope: Construct, id: string, props: GCEPersistentDiskPersistentVolumeProps) ``` ``` public readonly fsType: string; ``` File system type of this volume. ``` public readonly pdName: string; ``` PD resource in GCE of this volume. ``` public readonly readOnly: boolean; ``` Whether or not it is mounted as a read-only volume. ``` public readonly partition: number; ``` Partition of this volume. Represents a group. ``` public toSubjectConfiguration() ``` ``` import { Group } from 'cdk8s-plus-28' Group.fromName(scope: Construct, id: string, name: string) ``` ``` public readonly kind: string; ``` ``` public readonly name: string; ``` ``` public readonly apiGroup: string; ``` A HorizontalPodAutoscaler scales a workload up or down in response to a metric change. This allows your services to scale up when demand is high and scale down when they are no longer needed. Typical use cases for HorizontalPodAutoscaler: The autoscaler uses the following algorithm to determine the number of replicas to scale: desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )] HorizontalPodAutoscalers can be used with any scalable workload, such as a Deployment or StatefulSet. For targets that already have a replica count defined: remove any replica counts from the target resource before associating it with a HorizontalPodAutoscaler. If this isn't done, then any time a change to that object is applied, Kubernetes will scale the current number of Pods to the value of the target.replicas key. This may not be desired and could lead to unexpected behavior. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#implicit-maintenance-mode-deactivation ``` import { HorizontalPodAutoscaler } from 'cdk8s-plus-28' new HorizontalPodAutoscaler(scope: Construct, id: string, props: HorizontalPodAutoscalerProps) ``` ``` public readonly maxReplicas: number; ``` The maximum number of replicas that can be scaled up to. ``` public readonly minReplicas: number; ``` The minimum number of replicas that can be scaled down to. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly scaleDown: ScalingRules; ``` The scaling behavior when scaling down. ``` public readonly scaleUp: ScalingRules; ``` The scaling behavior when scaling up. ``` public readonly target: IScalable; ``` The workload to scale up or down. ``` public readonly metrics: Metric[]; ``` The metric conditions that trigger a scale up or scale down. A usage sketch appears below. Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, etc. 
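Returning to the HorizontalPodAutoscaler API above, a minimal sketch; the Metric and MetricTarget helpers are assumed from the library and all values are illustrative:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'autoscaled');

// Note: no `replicas` on the target, as recommended above.
const backend = new kplus.Deployment(chart, 'Backend', {
  containers: [{ image: 'my-org/backend:latest', portNumber: 8080 }],
});

new kplus.HorizontalPodAutoscaler(chart, 'BackendHpa', {
  target: backend, // any IScalable workload (Deployment, StatefulSet)
  minReplicas: 2,
  maxReplicas: 10,
  // Scale on average CPU utilization (helper names assumed from the library's Metric API).
  metrics: [kplus.Metric.resourceCpu(kplus.MetricTarget.averageUtilization(70))],
});

app.synth();
```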
``` import { Ingress } from 'cdk8s-plus-28' new Ingress(scope: Construct, id: string, props?: IngressProps) ``` ``` public addDefaultBackend(backend: IngressBackend) ``` The backend to use for requests that do not match any rule. ``` public addHostDefaultBackend(host: string, backend: IngressBackend) ``` The host name to match. The backend to route to. ``` public addHostRule(host: string, path: string, backend: IngressBackend, pathType?: HttpIngressPathType) ``` The host name. The HTTP path. The backend to route requests to. How the path is matched against request paths. ``` public addRule(path: string, backend: IngressBackend, pathType?: HttpIngressPathType) ``` The HTTP path. The backend to route requests to. How the path is matched against request paths. ``` public addRules(rules: IngressRule) ``` The rules to add. ``` public addTls(tls: IngressTls[]) ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created. A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel. ``` import { Job } from 'cdk8s-plus-28' new Job(scope: Construct, id: string, props?: JobProps) ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly activeDeadline: Duration; ``` Duration before job is terminated. If undefined, there is no deadline. ``` public readonly backoffLimit: number; ``` Number of retries before marking failed. ``` public readonly ttlAfterFinished: Duration; ``` TTL before the job is deleted after it is finished. In Kubernetes, namespaces provides a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc). ``` import { Namespace } from 'cdk8s-plus-28' new Namespace(scope: Construct, id: string, props?: NamespaceProps) ``` ``` public toNamespaceSelectorConfig() ``` ``` public toNetworkPolicyPeerConfig() ``` ``` public toPodSelector() ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#automatic-labelling Represents a group of namespaces. 
``` import { Namespaces } from 'cdk8s-plus-28' new Namespaces(scope: Construct, id: string, expressions?: LabelExpression[], names?: string[], labels?: {[ key: string ]: string}) ``` ``` public toNamespaceSelectorConfig() ``` ``` public toNetworkPolicyPeerConfig() ``` ``` public toPodSelector() ``` ``` import { Namespaces } from 'cdk8s-plus-28' Namespaces.all(scope: Construct, id: string) ``` ``` import { Namespaces } from 'cdk8s-plus-28'" }, { "data": "Construct, id: string, options: NamespacesSelectOptions) ``` Control traffic flow at the IP address or port level (OSI layer 3 or 4), network policies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network peers. Outgoing traffic is allowed if there are no network policies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the network policies that select the pod. Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow. Thus, order of evaluation does not affect the policy result. For a connection from a source pod to a destination pod to be allowed, both the egress policy on the source pod and the ingress policy on the destination pod need to allow the connection. If either side does not allow the connection, it will not happen. https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource ``` import { NetworkPolicy } from 'cdk8s-plus-28' new NetworkPolicy(scope: Construct, id: string, props?: NetworkPolicyProps) ``` ``` public addEgressRule(peer: INetworkPolicyPeer, ports?: NetworkPolicyPort[]) ``` ``` public addIngressRule(peer: INetworkPolicyPeer, ports?: NetworkPolicyPort[]) ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. Describes a particular CIDR (Ex. 192.168.1.1/24,2001:db9::/64) that is allowed to the pods matched by a network policy selector. The except entry describes CIDRs that should not be included within this rule. ``` public toNetworkPolicyPeerConfig() ``` ``` public toPodSelector() ``` ``` import { NetworkPolicyIpBlock } from 'cdk8s-plus-28' NetworkPolicyIpBlock.anyIpv4(scope: Construct, id: string) ``` ``` import { NetworkPolicyIpBlock } from 'cdk8s-plus-28' NetworkPolicyIpBlock.anyIpv6(scope: Construct, id: string) ``` ``` import { NetworkPolicyIpBlock } from 'cdk8s-plus-28' NetworkPolicyIpBlock.ipv4(scope: Construct, id: string, cidrIp: string, except?: string[]) ``` ``` import { NetworkPolicyIpBlock } from 'cdk8s-plus-28' NetworkPolicyIpBlock.ipv6(scope: Construct, id: string, cidrIp: string, except?: string[]) ``` ``` public readonly cidr: string; ``` A string representing the IP Block Valid examples are 192.168.1.1/24 or 2001:db9::/64. ``` public readonly except: string[]; ``` A slice of CIDRs that should not be included within an IP Block Valid examples are 192.168.1.1/24 or 2001:db9::/64. Except values will be rejected if they are outside the CIDR range. A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. 
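A sketch of the NetworkPolicy API described above; the selector prop and the NetworkPolicyPort helper are assumptions about the library, and the CIDR is illustrative:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'netpol');

const web = new kplus.Deployment(chart, 'Web', {
  containers: [{ image: 'nginx:1.25', portNumber: 80 }],
});

// Select the deployment's pods and only allow ingress from a given CIDR
// (the `selector` prop and NetworkPolicyPort helper are assumed here).
const policy = new kplus.NetworkPolicy(chart, 'WebPolicy', { selector: web });
policy.addIngressRule(
  kplus.NetworkPolicyIpBlock.ipv4(chart, 'Office', '192.168.0.0/16'),
  [kplus.NetworkPolicyPort.tcp(80)],
);

app.synth();
```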
This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. ``` import { PersistentVolume } from 'cdk8s-plus-28' new PersistentVolume(scope: Construct, id: string, props?: PersistentVolumeProps) ``` ``` public asVolume() ``` ``` public bind(claim: IPersistentVolumeClaim) ``` The PVC to bind to. ``` public reserve() ``` ``` import { PersistentVolume } from 'cdk8s-plus-28' PersistentVolume.fromPersistentVolumeName(scope: Construct, id: string, volumeName: string) ``` ``` public readonly mode: PersistentVolumeMode; ``` Volume mode of this volume. ``` public readonly reclaimPolicy: PersistentVolumeReclaimPolicy; ``` Reclaim policy of this volume. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly accessModes: PersistentVolumeAccessMode[]; ``` Access modes requirement of this claim. ``` public readonly claim: IPersistentVolumeClaim; ``` PVC this volume is bound to. Undefined means this volume is not yet claimed by any PVC. ``` public readonly mountOptions: string[]; ``` Mount options of this volume. ``` public readonly storage: Size; ``` Storage size of this volume. ``` public readonly storageClassName: string; ``` Storage class this volume belongs to. A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a" }, { "data": "Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes ``` import { PersistentVolumeClaim } from 'cdk8s-plus-28' new PersistentVolumeClaim(scope: Construct, id: string, props?: PersistentVolumeClaimProps) ``` ``` public bind(vol: IPersistentVolume) ``` The PV to bind to. ``` import { PersistentVolumeClaim } from 'cdk8s-plus-28' PersistentVolumeClaim.fromClaimName(scope: Construct, id: string, claimName: string) ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly volumeMode: PersistentVolumeMode; ``` Volume mode requirement of this claim. ``` public readonly accessModes: PersistentVolumeAccessMode[]; ``` Access modes requirement of this claim. ``` public readonly storage: Size; ``` Storage requirement of this claim. ``` public readonly storageClassName: string; ``` Storage class requirment of this claim. ``` public readonly volume: IPersistentVolume; ``` PV this claim is bound to. Undefined means the claim is not bound to any specific volume. Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. ``` import { Pod } from 'cdk8s-plus-28' new Pod(scope: Construct, id: string, props?: PodProps) ``` ``` public readonly connections: PodConnections; ``` ``` public readonly podMetadata: ApiObjectMetadataDefinition; ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly scheduling: PodScheduling; ``` This label is autoamtically added by cdk8s to any pod. It provides a unique and stable identifier for the pod. Represents a group of pods. 
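To illustrate the PersistentVolumeClaim API above, a sketch that requests storage and mounts it into a workload; the access-mode value, storage class and paths are illustrative:

```
import { App, Chart, Size } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'storage');

// Request 10Gi of storage.
const claim = new kplus.PersistentVolumeClaim(chart, 'DataClaim', {
  storage: Size.gibibytes(10),
  accessModes: [kplus.PersistentVolumeAccessMode.READ_WRITE_ONCE],
  storageClassName: 'standard',
});

// Back a pod volume with the claim and mount it into a container.
const data = kplus.Volume.fromPersistentVolumeClaim(chart, 'Data', claim);
new kplus.Deployment(chart, 'Db', {
  containers: [{
    image: 'postgres:16',
    volumeMounts: [{ volume: data, path: '/var/lib/postgresql/data' }],
  }],
});

app.synth();
```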
``` import { Pods } from 'cdk8s-plus-28' new Pods(scope: Construct, id: string, expressions?: LabelExpression[], labels?: {[ key: string ]: string}, namespaces?: INamespaceSelector) ``` ``` public toNetworkPolicyPeerConfig() ``` ``` public toPodSelector() ``` ``` public toPodSelectorConfig() ``` ``` import { Pods } from 'cdk8s-plus-28' Pods.all(scope: Construct, id: string, options?: PodsAllOptions) ``` ``` import { Pods } from 'cdk8s-plus-28' Pods.select(scope: Construct, id: string, options: PodsSelectOptions) ``` Base class for all Kubernetes objects in stdk8s. Represents a single resource. ``` import { Resource } from 'cdk8s-plus-28' new Resource(scope: Construct, id: string) ``` ``` public asApiResource() ``` ``` public asNonApiResource() ``` ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly metadata: ApiObjectMetadataDefinition; ``` ``` public readonly name: string; ``` The name of this API object. ``` public readonly permissions: ResourcePermissions; ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. ``` import { Role } from 'cdk8s-plus-28' new Role(scope: Construct, id: string, props?: RoleProps) ``` ``` public allow(verbs: string[], resources: IApiResource) ``` The resource(s) to apply to. ``` public allowCreate(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowDelete(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowDeleteCollection(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowGet(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowList(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowPatch(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowRead(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowReadWrite(resources: IApiResource) ``` The resource(s) to apply to. ``` public allowUpdate(resources: IApiResource) ``` The resource(s) to apply" }, { "data": "``` public allowWatch(resources: IApiResource) ``` The resource(s) to apply to. ``` public bind(subjects: ISubject) ``` a list of subjects to bind to. ``` import { Role } from 'cdk8s-plus-28' Role.fromRoleName(scope: Construct, id: string, name: string) ``` ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly rules: RolePolicyRule[]; ``` Rules associaated with this Role. Returns a copy, use allow to add rules. A RoleBinding grants permissions within a specific namespace to a user or set of users. ``` import { RoleBinding } from 'cdk8s-plus-28' new RoleBinding(scope: Construct, id: string, props: RoleBindingProps) ``` ``` public addSubjects(subjects: ISubject) ``` The subjects to add. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. 
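A short RBAC sketch using Role.allowRead() and Role.bind() from above; the ApiResource.* constants are assumed implementations of IApiResource provided by the library:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'rbac');

const reader = new kplus.ServiceAccount(chart, 'Reader');

// Grant read-only access to pods and config maps within the namespace.
const role = new kplus.Role(chart, 'ReadOnly');
role.allowRead(kplus.ApiResource.PODS, kplus.ApiResource.CONFIG_MAPS);
role.bind(reader); // creates a RoleBinding with the service account as subject

app.synth();
```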
``` public readonly role: IRole; ``` ``` public readonly subjects: ISubject[]; ``` Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image. https://kubernetes.io/docs/concepts/configuration/secret ``` import { Secret } from 'cdk8s-plus-28' new Secret(scope: Construct, id: string, props?: SecretProps) ``` ``` public addStringData(key: string, value: string) ``` Key. Value. ``` public envValue(key: string, options?: EnvValueFromSecretOptions) ``` ``` public getStringData(key: string) ``` Key. ``` import { Secret } from 'cdk8s-plus-28' Secret.fromSecretName(scope: Construct, id: string, name: string) ``` ``` public readonly immutable: boolean; ``` Whether or not the secret is immutable. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes you dont need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungiblefrontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves. The Service abstraction enables this decoupling. If youre able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, that get updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers ways to place a network port or load balancer in between your application and the backend Pods. ``` import { Service } from 'cdk8s-plus-28' new Service(scope: Construct, id: string, props?: ServiceProps) ``` ``` public bind(port: number, options?: ServiceBindOptions) ``` The port definition. ``` public exposeViaIngress(path: string, options?: ExposeServiceViaIngressOptions) ``` The path to expose the service under. Additional options. ``` public select(selector: IPodSelector) ``` ``` public selectLabel(key: string, value: string) ``` ``` public readonly port: number; ``` Return the first port of the service. ``` public readonly ports: ServicePort[]; ``` Ports for this service. Use bind() to bind additional service ports. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly type: ServiceType; ``` Determines how the Service is exposed. ``` public readonly clusterIP: string; ``` The IP address of the service and is usually assigned randomly by the master. ``` public readonly externalName: string; ``` The externalName to be used for EXTERNAL_NAME" }, { "data": "A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. 
When they do, they are authenticated as a particular Service Account (for example, default). https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account ``` import { ServiceAccount } from 'cdk8s-plus-28' new ServiceAccount(scope: Construct, id: string, props?: ServiceAccountProps) ``` ``` public addSecret(secr: ISecret) ``` The secret. ``` public toSubjectConfiguration() ``` ``` import { ServiceAccount } from 'cdk8s-plus-28' ServiceAccount.fromServiceAccountName(scope: Construct, id: string, name: string, options?: FromServiceAccountNameOptions) ``` The name of the service account resource. Additional options. ``` public readonly automountToken: boolean; ``` Whether or not a token is automatically mounted for this service account. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly secrets: ISecret[]; ``` List of secrets allowed to be used by pods running with this service account. Returns a copy. To add a secret, use addSecret(). Create a secret for a service account token. https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets ``` import { ServiceAccountTokenSecret } from 'cdk8s-plus-28' new ServiceAccountTokenSecret(scope: Construct, id: string, props: ServiceAccountTokenSecretProps) ``` Create a secret for ssh authentication. https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets ``` import { SshAuthSecret } from 'cdk8s-plus-28' new SshAuthSecret(scope: Construct, id: string, props: SshAuthSecretProps) ``` StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling. If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed. StatefulSets are valuable for applications that require one or more of the following: stable, unique network identifiers; stable, persistent storage; ordered, graceful deployment and scaling; and ordered, automated rolling updates. ``` import { StatefulSet } from 'cdk8s-plus-28' new StatefulSet(scope: Construct, id: string, props: StatefulSetProps) ``` ``` public markHasAutoscaler() ``` ``` public toScalingTarget() ``` ``` public readonly minReady: Duration; ``` Minimum duration for which a newly created pod should be ready without any of its containers crashing, for it to be considered available. ``` public readonly podManagementPolicy: PodManagementPolicy; ``` Management policy to use for the set. ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. ``` public readonly service: Service; ``` ``` public readonly strategy: StatefulSetUpdateStrategy; ``` The update strategy of this stateful set. ``` public readonly replicas: number; ``` Number of desired pods. ``` public readonly hasAutoscaler: boolean; ``` If this is a target of an autoscaler. A usage sketch appears below. Create a secret for storing a TLS certificate and its associated key. 
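Returning to the StatefulSet API above, a minimal sketch, assuming the construct provisions its governing headless Service when one is not supplied explicitly:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'stateful');

// Three uniquely-identified replicas; the governing headless Service is assumed
// to be created by the construct when not supplied.
new kplus.StatefulSet(chart, 'Cache', {
  replicas: 3,
  containers: [{ image: 'redis:7', portNumber: 6379 }],
});

app.synth();
```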
https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets ``` import { TlsSecret } from 'cdk8s-plus-28' new TlsSecret(scope: Construct, id: string, props: TlsSecretProps) ``` Represents a user. ``` public toSubjectConfiguration() ``` ``` import { User } from 'cdk8s-plus-28' User.fromName(scope: Construct, id: string, name: string) ``` ``` public readonly kind: string; ``` ``` public readonly name: string; ``` ``` public readonly apiGroup: string; ``` Volume represents a named volume in a pod that may be accessed by any container in the" }, { "data": "Docker also has a concept of volumes, though it is somewhat looser and less managed. In Docker, a volume is simply a directory on disk or in another Container. Lifetimes are not managed and until very recently there were only local-disk-backed volumes. Docker now provides volume drivers, but the functionality is very limited for now (e.g. as of Docker 1.7 only one volume driver is allowed per Container and there is no way to pass parameters to volumes). A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it. Consequently, a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly than this, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously. At its core, a volume is just a directory, possibly with some data in it, which is accessible to the Containers in a Pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used. To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers[*].volumeMounts field). A process in a container sees a filesystem view composed from their Docker image and volumes. The Docker image is at the root of the filesystem hierarchy, and any volumes are mounted at the specified paths within the image. Volumes can not mount onto other volumes ``` public asVolume() ``` ``` import { Volume } from 'cdk8s-plus-28' Volume.fromAwsElasticBlockStore(scope: Construct, id: string, volumeId: string, options?: AwsElasticBlockStoreVolumeOptions) ``` ``` import { Volume } from 'cdk8s-plus-28' Volume.fromAzureDisk(scope: Construct, id: string, diskName: string, diskUri: string, options?: AzureDiskVolumeOptions) ``` ``` import { Volume } from 'cdk8s-plus-28' Volume.fromConfigMap(scope: Construct, id: string, configMap: IConfigMap, options?: ConfigMapVolumeOptions) ``` The config map to use to populate the volume. Options. ``` import { Volume } from 'cdk8s-plus-28' Volume.fromCsi(scope: Construct, id: string, driver: string, options?: CsiVolumeOptions) ``` The name of the CSI driver to use to populate the volume. Options for the CSI volume, including driver-specific ones. ``` import { Volume } from 'cdk8s-plus-28' Volume.fromEmptyDir(scope: Construct, id: string, name: string, options?: EmptyDirVolumeOptions) ``` Additional options. 
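To tie the Volume factory methods above to a pod, a sketch that projects a ConfigMap and an emptyDir into a container; names, image and paths are illustrative:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'volumes');

const settings = new kplus.ConfigMap(chart, 'Settings');
settings.addData('app.conf', 'verbose = true');

// Project the ConfigMap into a volume and add scratch space backed by emptyDir.
const settingsVolume = kplus.Volume.fromConfigMap(chart, 'SettingsVolume', settings);
const scratch = kplus.Volume.fromEmptyDir(chart, 'Scratch', 'scratch');

new kplus.Pod(chart, 'App', {
  containers: [{
    image: 'my-org/app:latest',
    volumeMounts: [
      { volume: settingsVolume, path: '/etc/app' },
      { volume: scratch, path: '/tmp/scratch' },
    ],
  }],
});

app.synth();
```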
``` import { Volume } from 'cdk8s-plus-28' Volume.fromGcePersistentDisk(scope: Construct, id: string, pdName: string, options?: GCEPersistentDiskVolumeOptions) ``` ``` import { Volume } from 'cdk8s-plus-28' Volume.fromHostPath(scope: Construct, id: string, name: string, options: HostPathVolumeOptions) ``` ``` import { Volume } from 'cdk8s-plus-28' Volume.fromNfs(scope: Construct, id: string, name: string, options: NfsVolumeOptions) ``` ``` import { Volume } from 'cdk8s-plus-28' Volume.fromPersistentVolumeClaim(scope: Construct, id: string, claim: IPersistentVolumeClaim, options?: PersistentVolumeClaimVolumeOptions) ``` ``` import { Volume } from 'cdk8s-plus-28' Volume.fromSecret(scope: Construct, id: string, secr: ISecret, options?: SecretVolumeOptions) ``` The secret to use to populate the volume. Options. ``` public readonly name: string; ``` A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. In Kubernetes, a Pod represents a set of running containers on your cluster. ``` import { Workload } from 'cdk8s-plus-28' new Workload(scope: Construct, id: string, props: WorkloadProps) ``` ``` public select(selectors: LabelSelector) ``` ``` public readonly connections: PodConnections; ``` ``` public readonly matchExpressions: LabelSelectorRequirement[]; ``` The expression matchers this workload will use in order to select pods. Returns a a copy. Use select() to add expression matchers. ``` public readonly matchLabels: {[ key: string ]: string}; ``` The label matchers this workload will use in order to select" }, { "data": "Returns a a copy. Use select() to add label matchers. ``` public readonly podMetadata: ApiObjectMetadataDefinition; ``` The metadata of pods in this workload. ``` public readonly scheduling: WorkloadScheduling; ``` Properties for AbstractPod. ``` import { AbstractPodProps } from 'cdk8s-plus-28' const abstractPodProps: AbstractPodProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. 
The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the pod. This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume() https://kubernetes.io/docs/concepts/storage/volumes Options to add a deployment to a service. ``` import { AddDeploymentOptions } from 'cdk8s-plus-28' const addDeploymentOptions: AddDeploymentOptions = { ... } ``` ``` public readonly name: string; ``` The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. This maps to the Name field in EndpointPort objects. Optional if only one ServicePort is defined on this service. ``` public readonly nodePort: number; ``` The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the" }, { "data": "If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport ``` public readonly protocol: Protocol; ``` The IP protocol for this port. Supports TCP, UDP, and SCTP. Default is TCP. ``` public readonly targetPort: number; ``` The port number the service will redirect to. ``` public readonly port: number; ``` The port number the service will bind to. Options for configmap.addDirectory(). ``` import { AddDirectoryOptions } from 'cdk8s-plus-28' const addDirectoryOptions: AddDirectoryOptions = { ... } ``` ``` public readonly exclude: string[]; ``` Glob patterns to exclude when adding files. 
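A sketch combining several of the pod-level properties described above; the RestartPolicy enum value is assumed and the image is illustrative:

```
import { App, Chart, Duration } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'pod-options');

const sa = new kplus.ServiceAccount(chart, 'AppSa');

// Pod-level options: identity, restart behavior, termination grace period
// and network isolation.
new kplus.Pod(chart, 'Worker', {
  serviceAccount: sa,
  restartPolicy: kplus.RestartPolicy.ON_FAILURE,
  terminationGracePeriod: Duration.seconds(60),
  isolate: true, // deny all ingress/egress until connections are explicitly allowed
  containers: [{ image: 'my-org/worker:latest' }],
});

app.synth();
```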
``` public readonly keyPrefix: string; ``` A prefix to add to all keys in the config map. Options for ApiResource. ``` import { ApiResourceOptions } from 'cdk8s-plus-28' const apiResourceOptions: ApiResourceOptions = { ... } ``` ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of the resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources Properties for AwsElasticBlockStorePersistentVolume. ``` import { AwsElasticBlockStorePersistentVolumeProps } from 'cdk8s-plus-28' const awsElasticBlockStorePersistentVolumeProps: AwsElasticBlockStorePersistentVolumeProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly accessModes: PersistentVolumeAccessMode[]; ``` Contains all ways the volume can be mounted. https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes ``` public readonly claim: IPersistentVolumeClaim; ``` Part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding ``` public readonly mountOptions: string[]; ``` A list of mount options, e.g. [ro, soft]. Not validated - mount will simply fail if one is invalid. https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options ``` public readonly reclaimPolicy: PersistentVolumeReclaimPolicy; ``` When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy tells the cluster what to do with the volume after it has been released of its claim. https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming ``` public readonly storage: Size; ``` What is the storage capacity of this volume. https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources ``` public readonly storageClassName: string; ``` Name of StorageClass to which this persistent volume belongs. ``` public readonly volumeMode: PersistentVolumeMode; ``` Defines what type of volume is required by the claim. ``` public readonly volumeId: string; ``` Unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore ``` public readonly fsType: string; ``` Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore ``` public readonly partition: number; ``` The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as 1. Similarly, the volume partition for /dev/sda is 0 (or you can leave the property empty). ``` public readonly readOnly: boolean; ``` Specify true to force and set the ReadOnly property in VolumeMounts to true. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Options of Volume.fromAwsElasticBlockStore. ``` import { AwsElasticBlockStoreVolumeOptions } from 'cdk8s-plus-28' const awsElasticBlockStoreVolumeOptions: AwsElasticBlockStoreVolumeOptions = { ... 
} ``` ``` public readonly fsType: string; ``` Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore ``` public readonly name: string; ``` The volume name. ``` public readonly partition: number; ``` The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as" }, { "data": "Similarly, the volume partition for /dev/sda is 0 (or you can leave the property empty). ``` public readonly readOnly: boolean; ``` Specify true to force and set the ReadOnly property in VolumeMounts to true. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Properties for AzureDiskPersistentVolume. ``` import { AzureDiskPersistentVolumeProps } from 'cdk8s-plus-28' const azureDiskPersistentVolumeProps: AzureDiskPersistentVolumeProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly accessModes: PersistentVolumeAccessMode[]; ``` Contains all ways the volume can be mounted. https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes ``` public readonly claim: IPersistentVolumeClaim; ``` Part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding ``` public readonly mountOptions: string[]; ``` A list of mount options, e.g. [ro, soft]. Not validated - mount will simply fail if one is invalid. https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options ``` public readonly reclaimPolicy: PersistentVolumeReclaimPolicy; ``` When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy tells the cluster what to do with the volume after it has been released of its claim. https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming ``` public readonly storage: Size; ``` What is the storage capacity of this volume. https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources ``` public readonly storageClassName: string; ``` Name of StorageClass to which this persistent volume belongs. ``` public readonly volumeMode: PersistentVolumeMode; ``` Defines what type of volume is required by the claim. ``` public readonly diskName: string; ``` The Name of the data disk in the blob storage. ``` public readonly diskUri: string; ``` The URI the data disk in the blob storage. ``` public readonly cachingMode: AzureDiskPersistentVolumeCachingMode; ``` Host Caching mode. ``` public readonly fsType: string; ``` Filesystem type to mount. Must be a filesystem type supported by the host operating system. ``` public readonly kind: AzureDiskPersistentVolumeKind; ``` Kind of disk. ``` public readonly readOnly: boolean; ``` Force the ReadOnly setting in VolumeMounts. Options of Volume.fromAzureDisk. ``` import { AzureDiskVolumeOptions } from 'cdk8s-plus-28' const azureDiskVolumeOptions: AzureDiskVolumeOptions = { ... } ``` ``` public readonly cachingMode: AzureDiskPersistentVolumeCachingMode; ``` Host Caching mode. ``` public readonly fsType: string; ``` Filesystem type to mount. Must be a filesystem type supported by the host operating system. 
``` public readonly kind: AzureDiskPersistentVolumeKind; ``` Kind of disk. ``` public readonly name: string; ``` The volume name. ``` public readonly readOnly: boolean; ``` Force the ReadOnly setting in VolumeMounts. Options for BasicAuthSecret. ``` import { BasicAuthSecretProps } from 'cdk8s-plus-28' const basicAuthSecretProps: BasicAuthSecretProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. ``` public readonly password: string; ``` The password or token for authentication. ``` public readonly username: string; ``` The user name for authentication. Properties for ClusterRoleBinding. ``` import { ClusterRoleBindingProps } from 'cdk8s-plus-28' const clusterRoleBindingProps: ClusterRoleBindingProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly role: IClusterRole; ``` The role to bind to. Policy rule of a `ClusterRole. ``` import { ClusterRolePolicyRule } from 'cdk8s-plus-28' const clusterRolePolicyRule: ClusterRolePolicyRule = { ... } ``` ``` public readonly endpoints: IApiEndpoint[]; ``` Endpoints this rule applies to. Can be either api resources or non api resources. ``` public readonly verbs: string[]; ``` Verbs to allow. (e.g [get, watch]) Properties for" }, { "data": "``` import { ClusterRoleProps } from 'cdk8s-plus-28' const clusterRoleProps: ClusterRoleProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly aggregationLabels: {[ key: string ]: string}; ``` Specify labels that should be used to locate ClusterRoles, whose rules will be automatically filled into this ClusterRoles rules. ``` public readonly rules: ClusterRolePolicyRule[]; ``` A list of rules the role should allow. Options for Probe.fromCommand(). ``` import { CommandProbeOptions } from 'cdk8s-plus-28' const commandProbeOptions: CommandProbeOptions = { ... } ``` ``` public readonly failureThreshold: number; ``` Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. ``` public readonly initialDelaySeconds: Duration; ``` Number of seconds after the container has started before liveness probes are initiated. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ``` public readonly periodSeconds: Duration; ``` How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. ``` public readonly successThreshold: number; ``` Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. ``` public readonly timeoutSeconds: Duration; ``` Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Common properties for Secret. ``` import { CommonSecretProps } from 'cdk8s-plus-28' const commonSecretProps: CommonSecretProps = { ... 
} ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Properties for initialization of ConfigMap. ``` import { ConfigMapProps } from 'cdk8s-plus-28' const configMapProps: ConfigMapProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly binaryData: {[ key: string ]: string}; ``` BinaryData contains the binary data. Each key must consist of alphanumeric characters, -, _ or .. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. You can also add binary data using configMap.addBinaryData(). ``` public readonly data: {[ key: string ]: string}; ``` Data contains the configuration data. Each key must consist of alphanumeric characters, -, _ or .. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process. You can also add data using configMap.addData(). ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the ConfigMap cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Options for the ConfigMap-based volume. ``` import { ConfigMapVolumeOptions } from 'cdk8s-plus-28' const configMapVolumeOptions: ConfigMapVolumeOptions = { ... } ``` ``` public readonly defaultMode: number; ``` Mode bits to use on created files by default. Must be a value between 0 and Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits" }, { "data": "``` public readonly items: {[ key: string ]: PathMapping}; ``` If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the .. path or start with ... ``` public readonly name: string; ``` The volume name. ``` public readonly optional: boolean; ``` Specify whether the ConfigMap or its keys must be defined. Container lifecycle properties. ``` import { ContainerLifecycle } from 'cdk8s-plus-28' const containerLifecycle: ContainerLifecycle = { ... } ``` ``` public readonly postStart: Handler; ``` This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. ``` public readonly preStop: Handler; ``` This hook is called immediately before a container is terminated due to an API request or management event such as a liveness/startup probe failure, preemption, resource contention and others. 
A call to the PreStop hook fails if the container is already in a terminated or completed state and the hook must complete before the TERM signal to stop the container can be sent. The Pods termination grace period countdown begins before the PreStop hook is executed, so regardless of the outcome of the handler, the container will eventually terminate within the Pods termination grace period. No parameters are passed to the handler. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination Optional properties of a container. ``` import { ContainerOpts } from 'cdk8s-plus-28' const containerOpts: ContainerOpts = { ... } ``` ``` public readonly args: string[]; ``` Arguments to the entrypoint. The docker images CMD is used if command is not provided. Variable references $(VAR_NAME) are expanded using the containers environment. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell ``` public readonly command: string[]; ``` Entrypoint array. Not executed within a shell. The docker images ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the containers environment. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VARNAME) syntax can be escaped with a double $$, ie: $$(VARNAME). Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell ``` public readonly envFrom: EnvFrom[]; ``` List of sources to populate environment variables in the container. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by the envVariables property with a duplicate key will take precedence. ``` public readonly envVariables: {[ key: string ]: EnvValue}; ``` Environment variables to set in the container. ``` public readonly imagePullPolicy: ImagePullPolicy; ``` Image pull policy for this container. ``` public readonly lifecycle: ContainerLifecycle; ``` Describes actions that the management system should take in response to container lifecycle events. ``` public readonly liveness: Probe; ``` Periodic probe of container liveness. Container will be restarted if the probe fails. ``` public readonly name: string; ``` Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be" }, { "data": "``` public readonly port: number; ``` ``` public readonly portNumber: number; ``` Number of port to expose on the pods IP address. This must be a valid port number, 0 < x < 65536. This is a convinience property if all you need a single TCP numbered port. In case more advanced configuartion is required, use the ports property. This port is added to the list of ports mentioned in the ports property. ``` public readonly ports: ContainerPort[]; ``` List of ports to expose from this container. ``` public readonly readiness: Probe; ``` Determines when the container is ready to serve traffic. 
``` public readonly resources: ContainerResources; ``` Compute resources (CPU and memory requests and limits) required by the container. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ``` public readonly restartPolicy: ContainerRestartPolicy; ``` Kubelet will start init containers with restartPolicy=Always in the order with other init containers, but instead of waiting for its completion, it will wait for the container startup completion Currently, only accepted value is Always. https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/ ``` public readonly securityContext: ContainerSecurityContextProps; ``` SecurityContext defines the security options the container should be run with. If set, the fields override equivalent fields of the pods security context. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ ``` public readonly startup: Probe; ``` StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully ``` public readonly volumeMounts: VolumeMount[]; ``` Pod volumes to mount into the containers filesystem. Cannot be updated. ``` public readonly workingDir: string; ``` Containers working directory. If not specified, the container runtimes default will be used, which might be configured in the container image. Cannot be updated. Represents a network port in a single container. ``` import { ContainerPort } from 'cdk8s-plus-28' const containerPort: ContainerPort = { ... } ``` ``` public readonly number: number; ``` Number of port to expose on the pods IP address. This must be a valid port number, 0 < x < 65536. ``` public readonly hostIp: string; ``` What host IP to bind the external port to. ``` public readonly hostPort: number; ``` Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. Most containers do not need this. ``` public readonly name: string; ``` If specified, this must be an IANASVCNAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. ``` public readonly protocol: Protocol; ``` Protocol for port. Must be UDP, TCP, or SCTP. Defaults to TCP. Properties for creating a container. ``` import { ContainerProps } from 'cdk8s-plus-28' const containerProps: ContainerProps = { ... } ``` ``` public readonly args: string[]; ``` Arguments to the entrypoint. The docker images CMD is used if command is not provided. Variable references $(VAR_NAME) are expanded using the containers environment. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell ``` public readonly command: string[]; ``` Entrypoint array. Not executed within a shell. The docker images ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the containers environment. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VARNAME) syntax can be escaped with a double $$, ie: $$(VARNAME). Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
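To make the port and resource fields concrete, here is a hedged sketch of a container that declares several named ports through ports rather than the single portNumber shorthand; the image, port numbers, and CPU figures are illustrative assumptions.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'ports-demo'); // illustrative chart name

new kplus.Pod(chart, 'pod', {
  containers: [{
    image: 'my-app:latest', // assumed image
    name: 'app',
    // several named ports instead of the single portNumber convenience property
    ports: [
      { number: 8080, name: 'http', protocol: kplus.Protocol.TCP },
      { number: 8443, name: 'https', protocol: kplus.Protocol.TCP },
      { number: 9090, name: 'metrics' }, // protocol defaults to TCP
    ],
    resources: {
      cpu: { request: kplus.Cpu.millis(250), limit: kplus.Cpu.millis(500) },
    },
  }],
});

app.synth();
```

Named ports can later be referenced by services, and the resources block maps to the ContainerResources type documented further down in this reference.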
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell ``` public readonly envFrom: EnvFrom[]; ``` List of sources to populate environment variables in the" }, { "data": "When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by the envVariables property with a duplicate key will take precedence. ``` public readonly envVariables: {[ key: string ]: EnvValue}; ``` Environment variables to set in the container. ``` public readonly imagePullPolicy: ImagePullPolicy; ``` Image pull policy for this container. ``` public readonly lifecycle: ContainerLifecycle; ``` Describes actions that the management system should take in response to container lifecycle events. ``` public readonly liveness: Probe; ``` Periodic probe of container liveness. Container will be restarted if the probe fails. ``` public readonly name: string; ``` Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ``` public readonly port: number; ``` ``` public readonly portNumber: number; ``` Number of port to expose on the pods IP address. This must be a valid port number, 0 < x < 65536. This is a convinience property if all you need a single TCP numbered port. In case more advanced configuartion is required, use the ports property. This port is added to the list of ports mentioned in the ports property. ``` public readonly ports: ContainerPort[]; ``` List of ports to expose from this container. ``` public readonly readiness: Probe; ``` Determines when the container is ready to serve traffic. ``` public readonly resources: ContainerResources; ``` Compute resources (CPU and memory requests and limits) required by the container. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ``` public readonly restartPolicy: ContainerRestartPolicy; ``` Kubelet will start init containers with restartPolicy=Always in the order with other init containers, but instead of waiting for its completion, it will wait for the container startup completion Currently, only accepted value is Always. https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/ ``` public readonly securityContext: ContainerSecurityContextProps; ``` SecurityContext defines the security options the container should be run with. If set, the fields override equivalent fields of the pods security context. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ ``` public readonly startup: Probe; ``` StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully ``` public readonly volumeMounts: VolumeMount[]; ``` Pod volumes to mount into the containers filesystem. Cannot be updated. ``` public readonly workingDir: string; ``` Containers working directory. If not specified, the container runtimes default will be used, which might be configured in the container image. Cannot be updated. ``` public readonly image: string; ``` Docker image name. CPU and memory compute resources. ``` import { ContainerResources } from 'cdk8s-plus-28' const containerResources: ContainerResources = { ... } ``` ``` public readonly cpu: CpuResources; ``` ``` public readonly ephemeralStorage: EphemeralStorageResources; ``` ``` public readonly memory: MemoryResources; ``` Properties for ContainerSecurityContext. 
``` import { ContainerSecurityContextProps } from 'cdk8s-plus-28' const containerSecurityContextProps: ContainerSecurityContextProps = { ... } ``` ``` public readonly allowPrivilegeEscalation: boolean; ``` Whether a process can gain more privileges than its parent process. ``` public readonly capabilities: ContainerSecutiryContextCapabilities; ``` POSIX capabilities for running containers. ``` public readonly ensureNonRoot: boolean; ``` Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. ``` public readonly group: number; ``` The GID to run the entrypoint of the container process. ``` public readonly privileged: boolean; ``` Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. ``` public readonly readOnlyRootFilesystem: boolean; ``` Whether this container has a read-only root filesystem. ``` public readonly user: number; ``` The UID to run the entrypoint of the container" }, { "data": "``` import { ContainerSecutiryContextCapabilities } from 'cdk8s-plus-28' const containerSecutiryContextCapabilities: ContainerSecutiryContextCapabilities = { ... } ``` ``` public readonly add: Capability[]; ``` Added capabilities. ``` public readonly drop: Capability[]; ``` Removed capabilities. CPU request and limit. ``` import { CpuResources } from 'cdk8s-plus-28' const cpuResources: CpuResources = { ... } ``` ``` public readonly limit: Cpu; ``` ``` public readonly request: Cpu; ``` Properties for CronJob. ``` import { CronJobProps } from 'cdk8s-plus-28' const cronJobProps: CronJobProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. 
The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the pod. This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume() https://kubernetes.io/docs/concepts/storage/volumes ``` public readonly podMetadata: ApiObjectMetadata; ``` The pod metadata of this workload. ``` public readonly select: boolean; ``` Automatically allocates a pod label selector for this workload and add it to the pod metadata. This ensures this workload manages pods created by its pod template. ``` public readonly spread: boolean; ``` Automatically spread pods across hostname and" }, { "data": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints ``` public readonly activeDeadline: Duration; ``` Specifies the duration the job may be active before the system tries to terminate it. ``` public readonly backoffLimit: number; ``` Specifies the number of retries before marking this job failed. ``` public readonly ttlAfterFinished: Duration; ``` Limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes. This field is alpha-level and is only honored by servers that enable the TTLAfterFinished feature. ``` public readonly schedule: Cron; ``` Specifies the time in which the job would run again. This is defined as a cron expression in the CronJob resource. ``` public readonly concurrencyPolicy: ConcurrencyPolicy; ``` Specifies the concurrency policy for the job. 
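Putting the scheduling-related properties together, a minimal CronJob sketch could look as follows; it assumes Cron.daily() from the core cdk8s library and an illustrative image, command, and retry budget.

```
import { App, Chart, Cron, Duration } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'cron-demo'); // illustrative chart name

new kplus.CronJob(chart, 'nightly-report', {
  schedule: Cron.daily(),                            // run once a day
  concurrencyPolicy: kplus.ConcurrencyPolicy.FORBID, // skip a run if the previous one is still active
  backoffLimit: 3,                                   // retries before the job is marked failed
  ttlAfterFinished: Duration.hours(1),               // clean up finished jobs after an hour
  containers: [{
    image: 'report-generator:latest',                // assumed image
    command: ['/bin/sh', '-c', 'generate-report'],   // assumed command
  }],
});

app.synth();
```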
``` public readonly failedJobsRetained: number; ``` Specifies the number of failed jobs history retained. This would retain the Job and the associated Pod resource and can be useful for debugging. ``` public readonly startingDeadline: Duration; ``` Kubernetes attempts to start cron jobs at its schedule time, but this is not guaranteed. This deadline specifies how much time can pass after a schedule point, for which kubernetes can still start the job. For example, if this is set to 100 seconds, kubernetes is allowed to start the job at a maximum 100 seconds after the scheduled time. Note that the Kubernetes CronJobController checks for things every 10 seconds, for this reason, a deadline below 10 seconds is not allowed, as it may cause your job to never be scheduled. In addition, kubernetes will stop scheduling jobs if more than 100 schedules were missed (for any reason). This property also controls what time interval should kubernetes consider when counting for missed schedules. For example, suppose a CronJob is set to schedule a new Job every one minute beginning at 08:30:00, and its startingDeadline field is not set. If the CronJob controller happens to be down from 08:29:00 to 10:21:00, the job will not start as the number of missed jobs which missed their schedule is greater than 100. However, if startingDeadline is set to 200 seconds, kubernetes will only count 3 missed schedules, and thus start a new execution at 10:22:00. ``` public readonly successfulJobsRetained: number; ``` Specifies the number of successful jobs history retained. This would retain the Job and the associated Pod resource and can be useful for debugging. ``` public readonly suspend: boolean; ``` Specifies if the cron job should be suspended. Only applies to future executions, current ones are remained untouched. ``` public readonly timeZone: string; ``` Specifies the timezone for the job. This helps aligining the schedule to follow the specified timezone. {@link https://en.wikipedia.org/wiki/Listoftzdatabasetime_zones} for list of valid timezone values. Options for the CSI driver based volume. ``` import { CsiVolumeOptions } from 'cdk8s-plus-28' const csiVolumeOptions: CsiVolumeOptions = { ... } ``` ``` public readonly attributes: {[ key: string ]: string}; ``` Any driver-specific attributes to pass to the CSI volume builder. ``` public readonly fsType: string; ``` The filesystem type to mount. Ex. ext4, xfs, ntfs. If not provided, the empty value is passed to the associated CSI driver, which will determine the default filesystem to apply. ``` public readonly name: string; ``` The volume name. ``` public readonly readOnly: boolean; ``` Whether the mounted volume should be read-only or not. Properties for DaemonSet. ``` import { DaemonSetProps } from 'cdk8s-plus-28' const daemonSetProps: DaemonSetProps = {" }, { "data": "} ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. 
You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the pod. This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume() https://kubernetes.io/docs/concepts/storage/volumes ``` public readonly podMetadata: ApiObjectMetadata; ``` The pod metadata of this workload. ``` public readonly select: boolean; ``` Automatically allocates a pod label selector for this workload and add it to the pod metadata. This ensures this workload manages pods created by its pod template. ``` public readonly spread: boolean; ``` Automatically spread pods across hostname and zones. 
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints ``` public readonly minReadySeconds: number; ``` Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Options for Deployment.exposeViaService. ``` import { DeploymentExposeViaServiceOptions } from 'cdk8s-plus-28' const deploymentExposeViaServiceOptions: DeploymentExposeViaServiceOptions = { ... } ``` ``` public readonly name: string; ``` The name of the service to expose. If youd like to expose the deployment multiple times, you must explicitly set a name starting from the second expose" }, { "data": "``` public readonly ports: ServicePort[]; ``` The ports that the service should bind to. ``` public readonly serviceType: ServiceType; ``` The type of the exposed service. Properties for Deployment. ``` import { DeploymentProps } from 'cdk8s-plus-28' const deploymentProps: DeploymentProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the pod. This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. 
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume() https://kubernetes.io/docs/concepts/storage/volumes ``` public readonly podMetadata: ApiObjectMetadata; ``` The pod metadata of this workload. ``` public readonly select: boolean; ``` Automatically allocates a pod label selector for this workload and add it to the pod metadata. This ensures this workload manages pods created by its pod template. ``` public readonly spread: boolean; ``` Automatically spread pods across hostname and zones. https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints ``` public readonly minReady: Duration; ``` Minimum duration for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Zero means the pod will be considered available as soon as it is" }, { "data": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#min-ready-seconds ``` public readonly progressDeadline: Duration; ``` The maximum duration for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds ``` public readonly replicas: number; ``` Number of desired pods. ``` public readonly strategy: DeploymentStrategy; ``` Specifies the strategy used to replace old Pods by new ones. Options for DeploymentStrategy.rollingUpdate. ``` import { DeploymentStrategyRollingUpdateOptions } from 'cdk8s-plus-28' const deploymentStrategyRollingUpdateOptions: DeploymentStrategyRollingUpdateOptions = { ... } ``` ``` public readonly maxSurge: PercentOrAbsolute; ``` The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding up. This can not be 0 if maxUnavailable is 0. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. 
Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. ``` public readonly maxUnavailable: PercentOrAbsolute; ``` The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if maxSurge is 0. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. Custom DNS option. ``` import { DnsOption } from 'cdk8s-plus-28' const dnsOption: DnsOption = { ... } ``` ``` public readonly name: string; ``` Option name. ``` public readonly value: string; ``` Option value. Options for DockerConfigSecret. ``` import { DockerConfigSecretProps } from 'cdk8s-plus-28' const dockerConfigSecretProps: DockerConfigSecretProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. ``` public readonly data: {[ key: string ]: any}; ``` JSON content to provide for the ~/.docker/config.json file. This will be stringified and inserted as stringData. https://docs.docker.com/engine/reference/commandline/cli/#sample-configuration-file Options for volumes populated with an empty directory. ``` import { EmptyDirVolumeOptions } from 'cdk8s-plus-28' const emptyDirVolumeOptions: EmptyDirVolumeOptions = { ... } ``` ``` public readonly medium: EmptyDirMedium; ``` By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to EmptyDirMedium.MEMORY to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write will count against your Containers memory limit. ``` public readonly sizeLimit: Size; ``` Total amount of local storage required for this EmptyDir" }, { "data": "The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. Options to specify an envionment variable value from a ConfigMap key. ``` import { EnvValueFromConfigMapOptions } from 'cdk8s-plus-28' const envValueFromConfigMapOptions: EnvValueFromConfigMapOptions = { ... } ``` ``` public readonly optional: boolean; ``` Specify whether the ConfigMap or its key must be defined. Options to specify an environment variable value from a field reference. ``` import { EnvValueFromFieldRefOptions } from 'cdk8s-plus-28' const envValueFromFieldRefOptions: EnvValueFromFieldRefOptions = { ... } ``` ``` public readonly apiVersion: string; ``` Version of the schema the FieldPath is written in terms of. 
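The EnvValueFrom* option structs in this area parameterize the EnvValue factory methods. The hedged sketch below combines a few of them; the ConfigMap, keys, and variable names are made up for illustration.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'env-demo'); // illustrative chart name

const config = new kplus.ConfigMap(chart, 'app-config', {
  data: { LOG_LEVEL: 'info' }, // assumed key/value
});

new kplus.Pod(chart, 'pod', {
  containers: [{
    image: 'my-app:latest', // assumed image
    envVariables: {
      // literal value
      MODE: kplus.EnvValue.fromValue('production'),
      // pulled from a ConfigMap key; optional: true tolerates a missing key
      LOG_LEVEL: kplus.EnvValue.fromConfigMap(config, 'LOG_LEVEL', { optional: true }),
      // copied from the synthesis-time process environment; required: false avoids an error if unset
      BUILD_ID: kplus.EnvValue.fromProcess('BUILD_ID', { required: false }),
    },
  }],
});

app.synth();
```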
``` public readonly key: string; ``` The key to select the pod label or annotation. Options to specify an environment variable value from the process environment. ``` import { EnvValueFromProcessOptions } from 'cdk8s-plus-28' const envValueFromProcessOptions: EnvValueFromProcessOptions = { ... } ``` ``` public readonly required: boolean; ``` Specify whether the key must exist in the environment. If this is set to true, and the key does not exist, an error will thrown. Options to specify an environment variable value from a resource. ``` import { EnvValueFromResourceOptions } from 'cdk8s-plus-28' const envValueFromResourceOptions: EnvValueFromResourceOptions = { ... } ``` ``` public readonly container: Container; ``` The container to select the value from. ``` public readonly divisor: string; ``` The output format of the exposed resource. Options to specify an environment variable value from a Secret. ``` import { EnvValueFromSecretOptions } from 'cdk8s-plus-28' const envValueFromSecretOptions: EnvValueFromSecretOptions = { ... } ``` ``` public readonly optional: boolean; ``` Specify whether the Secret or its key must be defined. Emphemeral storage request and limit. ``` import { EphemeralStorageResources } from 'cdk8s-plus-28' const ephemeralStorageResources: EphemeralStorageResources = { ... } ``` ``` public readonly limit: Size; ``` ``` public readonly request: Size; ``` Options for exposing a deployment via an ingress. ``` import { ExposeDeploymentViaIngressOptions } from 'cdk8s-plus-28' const exposeDeploymentViaIngressOptions: ExposeDeploymentViaIngressOptions = { ... } ``` ``` public readonly name: string; ``` The name of the service to expose. If youd like to expose the deployment multiple times, you must explicitly set a name starting from the second expose call. ``` public readonly ports: ServicePort[]; ``` The ports that the service should bind to. ``` public readonly serviceType: ServiceType; ``` The type of the exposed service. ``` public readonly ingress: Ingress; ``` The ingress to add rules to. ``` public readonly pathType: HttpIngressPathType; ``` The type of the path. Options for exposing a service using an ingress. ``` import { ExposeServiceViaIngressOptions } from 'cdk8s-plus-28' const exposeServiceViaIngressOptions: ExposeServiceViaIngressOptions = { ... } ``` ``` public readonly ingress: Ingress; ``` The ingress to add rules to. ``` public readonly pathType: HttpIngressPathType; ``` The type of the path. ``` import { FromServiceAccountNameOptions } from 'cdk8s-plus-28' const fromServiceAccountNameOptions: FromServiceAccountNameOptions = { ... } ``` ``` public readonly namespaceName: string; ``` The name of the namespace the service account belongs to. Properties for GCEPersistentDiskPersistentVolume. ``` import { GCEPersistentDiskPersistentVolumeProps } from 'cdk8s-plus-28' const gCEPersistentDiskPersistentVolumeProps: GCEPersistentDiskPersistentVolumeProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly accessModes: PersistentVolumeAccessMode[]; ``` Contains all ways the volume can be mounted. https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes ``` public readonly claim: IPersistentVolumeClaim; ``` Part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. 
https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding ``` public readonly mountOptions: string[]; ``` A list of mount options, e.g. [ro," }, { "data": "Not validated - mount will simply fail if one is invalid. https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options ``` public readonly reclaimPolicy: PersistentVolumeReclaimPolicy; ``` When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy tells the cluster what to do with the volume after it has been released of its claim. https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming ``` public readonly storage: Size; ``` What is the storage capacity of this volume. https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources ``` public readonly storageClassName: string; ``` Name of StorageClass to which this persistent volume belongs. ``` public readonly volumeMode: PersistentVolumeMode; ``` Defines what type of volume is required by the claim. ``` public readonly pdName: string; ``` Unique name of the PD resource in GCE. Used to identify the disk in GCE. https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk ``` public readonly fsType: string; ``` Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore ``` public readonly partition: number; ``` The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as 1. Similarly, the volume partition for /dev/sda is 0 (or you can leave the property empty). ``` public readonly readOnly: boolean; ``` Specify true to force and set the ReadOnly property in VolumeMounts to true. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Options of Volume.fromGcePersistentDisk. ``` import { GCEPersistentDiskVolumeOptions } from 'cdk8s-plus-28' const gCEPersistentDiskVolumeOptions: GCEPersistentDiskVolumeOptions = { ... } ``` ``` public readonly fsType: string; ``` Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore ``` public readonly name: string; ``` The volume name. ``` public readonly partition: number; ``` The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as 1. Similarly, the volume partition for /dev/sda is 0 (or you can leave the property empty). ``` public readonly readOnly: boolean; ``` Specify true to force and set the ReadOnly property in VolumeMounts to true. https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Options for Handler.fromHttpGet. ``` import { HandlerFromHttpGetOptions } from 'cdk8s-plus-28' const handlerFromHttpGetOptions: HandlerFromHttpGetOptions = { ... } ``` ``` public readonly port: number; ``` The TCP port to use when sending the GET request. Options for Handler.fromTcpSocket. ``` import { HandlerFromTcpSocketOptions } from 'cdk8s-plus-28' const handlerFromTcpSocketOptions: HandlerFromTcpSocketOptions = { ... } ``` ``` public readonly host: string; ``` The host name to connect to on the container. 
``` public readonly port: number; ``` The TCP port to connect to on the container. Properties for HorizontalPodAutoscaler. ``` import { HorizontalPodAutoscalerProps } from 'cdk8s-plus-28' const horizontalPodAutoscalerProps: HorizontalPodAutoscalerProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly maxReplicas: number; ``` The maximum number of replicas that can be scaled up to. ``` public readonly target: IScalable; ``` The workload to scale up or down. Scalable workload types: Deployment StatefulSet ``` public readonly metrics: Metric[]; ``` The metric conditions that trigger a scale up or scale down. ``` public readonly minReplicas: number; ``` The minimum number of replicas that can be scaled down to. Can be set to 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. ``` public readonly scaleDown: ScalingRules; ``` The scaling behavior when scaling down. ``` public readonly scaleUp: ScalingRules; ``` The scaling behavior when scaling" }, { "data": "HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods /etc/hosts file. ``` import { HostAlias } from 'cdk8s-plus-28' const hostAlias: HostAlias = { ... } ``` ``` public readonly hostnames: string[]; ``` Hostnames for the chosen IP address. ``` public readonly ip: string; ``` IP address of the host file entry. Options for a HostPathVolume-based volume. ``` import { HostPathVolumeOptions } from 'cdk8s-plus-28' const hostPathVolumeOptions: HostPathVolumeOptions = { ... } ``` ``` public readonly path: string; ``` The path of the directory on the host. ``` public readonly type: HostPathVolumeType; ``` The expected type of the path found on the host. Options for Probe.fromHttpGet(). ``` import { HttpGetProbeOptions } from 'cdk8s-plus-28' const httpGetProbeOptions: HttpGetProbeOptions = { ... } ``` ``` public readonly failureThreshold: number; ``` Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. ``` public readonly initialDelaySeconds: Duration; ``` Number of seconds after the container has started before liveness probes are initiated. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ``` public readonly periodSeconds: Duration; ``` How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. ``` public readonly successThreshold: number; ``` Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. ``` public readonly timeoutSeconds: Duration; ``` Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ``` public readonly host: string; ``` The host name to connect to on the container. ``` public readonly port: number; ``` The TCP port to use when sending the GET request. ``` public readonly scheme: ConnectionScheme; ``` Scheme to use for connecting to the host (HTTP or HTTPS). Properties for Ingress. ``` import { IngressProps } from 'cdk8s-plus-28' const ingressProps: IngressProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. 
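The HorizontalPodAutoscalerProps described above combine roughly as follows. This is a sketch under assumptions: the deployment, image, replica bounds, and the 70% CPU utilization target are illustrative, and the target deliberately leaves replicas unset so the autoscaler owns the replica count.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'hpa-demo'); // illustrative chart name

// a scalable workload to target; replicas is intentionally not set here
const deployment = new kplus.Deployment(chart, 'web', {
  containers: [{
    image: 'nginx', // assumed image
    portNumber: 80,
    resources: {
      cpu: { request: kplus.Cpu.millis(250), limit: kplus.Cpu.millis(500) },
    },
  }],
});

new kplus.HorizontalPodAutoscaler(chart, 'hpa', {
  target: deployment,
  minReplicas: 2,
  maxReplicas: 10,
  // scale on average CPU utilization across the deployment's pods
  metrics: [kplus.Metric.resourceCpu(kplus.MetricTarget.averageUtilization(70))],
});

app.synth();
```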
``` public readonly className: string; ``` Class Name for this ingress. This field is a reference to an IngressClass resource that contains additional Ingress configuration, including the name of the Ingress controller. ``` public readonly defaultBackend: IngressBackend; ``` The default backend services requests that do not match any rule. Using this option or the addDefaultBackend() method is equivalent to adding a rule with both path and host undefined. ``` public readonly rules: IngressRule[]; ``` Routing rules for this ingress. Each rule must define an IngressBackend that will receive the requests that match this rule. If both host and path are not specifiec, this backend will be used as the default backend of the ingress. You can also add rules later using addRule(), addHostRule(), addDefaultBackend() and addHostDefaultBackend(). ``` public readonly tls: IngressTls[]; ``` TLS settings for this ingress. Using this option tells the ingress controller to expose a TLS endpoint. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI. Represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching path. ``` import { IngressRule } from 'cdk8s-plus-28' const ingressRule: IngressRule = { ... } ``` ``` public readonly backend: IngressBackend; ``` Backend defines the referenced service endpoint to which the traffic will be forwarded" }, { "data": "``` public readonly host: string; ``` Host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the host part of the URI as defined in the RFC: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to the IP in the Spec of the parent Ingress. 2. The : delimiter is not respected because ports are not allowed. Currently the port of an Ingress is implicitly :80 for http and :443 for https. Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. ``` public readonly path: string; ``` Path is an extended POSIX regex as defined by IEEE Std 1003.1, (i.e this follows the egrep/unix syntax, not the perl syntax) matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional path part of a URL as defined by RFC 3986. Paths must begin with a /. ``` public readonly pathType: HttpIngressPathType; ``` Specify how the path is matched against request paths. By default, path types will be matched by prefix. https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types Represents the TLS configuration mapping that is passed to the ingress controller for SSL termination. ``` import { IngressTls } from 'cdk8s-plus-28' const ingressTls: IngressTls = { ... } ``` ``` public readonly hosts: string[]; ``` Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the TLS Secret. ``` public readonly secret: ISecret; ``` Secret is the secret that contains the certificate and key used to terminate SSL traffic on 443. 
If the SNI host in a listener conflicts with the Host header field used by an IngressRule, the SNI host is used for termination and value of the Host header is used for routing. Properties for Job. ``` import { JobProps } from 'cdk8s-plus-28' const jobProps: JobProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the" }, { "data": "This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume() https://kubernetes.io/docs/concepts/storage/volumes ``` public readonly podMetadata: ApiObjectMetadata; ``` The pod metadata of this workload. ``` public readonly select: boolean; ``` Automatically allocates a pod label selector for this workload and add it to the pod metadata. This ensures this workload manages pods created by its pod template. ``` public readonly spread: boolean; ``` Automatically spread pods across hostname and zones. https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints ``` public readonly activeDeadline: Duration; ``` Specifies the duration the job may be active before the system tries to terminate it. ``` public readonly backoffLimit: number; ``` Specifies the number of retries before marking this job failed. ``` public readonly ttlAfterFinished: Duration; ``` Limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes. This field is alpha-level and is only honored by servers that enable the TTLAfterFinished feature. Options for LabelSelector.of. ``` import { LabelSelectorOptions } from 'cdk8s-plus-28' const labelSelectorOptions: LabelSelectorOptions = { ... } ``` ``` public readonly expressions: LabelExpression[]; ``` Expression based label matchers. ``` public readonly labels: {[ key: string ]: string}; ``` Strict label matchers. A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. ``` import { LabelSelectorRequirement } from 'cdk8s-plus-28' const labelSelectorRequirement: LabelSelectorRequirement = { ... } ``` ``` public readonly key: string; ``` The label key that the selector applies to. ``` public readonly operator: string; ``` Represents a keys relationship to a set of values. ``` public readonly values: string[]; ``` An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. Memory request and limit. ``` import { MemoryResources } from 'cdk8s-plus-28' const memoryResources: MemoryResources = { ... } ``` ``` public readonly limit: Size; ``` ``` public readonly request: Size; ``` Options for Metric.containerResource(). ``` import { MetricContainerResourceOptions } from 'cdk8s-plus-28' const metricContainerResourceOptions: MetricContainerResourceOptions = { ... } ``` ``` public readonly container: Container; ``` Container where the metric can be found. ``` public readonly target: MetricTarget; ``` Target metric value that will trigger scaling. Options for Metric.object(). ``` import { MetricObjectOptions } from 'cdk8s-plus-28' const metricObjectOptions: MetricObjectOptions = {" }, { "data": "} ``` ``` public readonly name: string; ``` The name of the metric to scale on. 
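Tying the Job-level properties together (activeDeadline, backoffLimit, ttlAfterFinished), a minimal sketch might look like this; the image, command, and durations are illustrative assumptions.

```
import { App, Chart, Duration } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'job-demo'); // illustrative chart name

new kplus.Job(chart, 'db-migration', {
  activeDeadline: Duration.minutes(30), // give up if the job runs longer than 30 minutes
  backoffLimit: 4,                      // retries before the job is marked failed
  ttlAfterFinished: Duration.hours(1),  // garbage-collect the finished job after an hour
  containers: [{
    image: 'migrator:latest',                     // assumed image
    command: ['/bin/sh', '-c', 'run-migrations'], // assumed command
  }],
});

app.synth();
```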
``` public readonly target: MetricTarget; ``` The target metric value that will trigger scaling. ``` public readonly labelSelector: LabelSelector; ``` A selector to find a metric by label. When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. ``` public readonly object: IResource; ``` Resource where the metric can be found. Base options for a Metric. ``` import { MetricOptions } from 'cdk8s-plus-28' const metricOptions: MetricOptions = { ... } ``` ``` public readonly name: string; ``` The name of the metric to scale on. ``` public readonly target: MetricTarget; ``` The target metric value that will trigger scaling. ``` public readonly labelSelector: LabelSelector; ``` A selector to find a metric by label. When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. Options for mounts. ``` import { MountOptions } from 'cdk8s-plus-28' const mountOptions: MountOptions = { ... } ``` ``` public readonly propagation: MountPropagation; ``` Determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node. ``` public readonly readOnly: boolean; ``` Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. ``` public readonly subPath: string; ``` Path within the volume from which the containers volume should be mounted.). ``` public readonly subPathExpr: string; ``` Expanded path within the volume from which the containers volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the containers environment. Defaults to (volumes root). subPathExpr and subPath are mutually exclusive. Properties for Namespace. ``` import { NamespaceProps } from 'cdk8s-plus-28' const namespaceProps: NamespaceProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. Configuration for selecting namespaces. ``` import { NamespaceSelectorConfig } from 'cdk8s-plus-28' const namespaceSelectorConfig: NamespaceSelectorConfig = { ... } ``` ``` public readonly labelSelector: LabelSelector; ``` A selector to select namespaces by labels. ``` public readonly names: string[]; ``` A list of names to select namespaces by names. Options for Namespaces.select. ``` import { NamespacesSelectOptions } from 'cdk8s-plus-28' const namespacesSelectOptions: NamespacesSelectOptions = { ... } ``` ``` public readonly expressions: LabelExpression[]; ``` Namespaces must satisfy these selectors. The selectors query labels, just like the labels property, but they provide a more advanced matching mechanism. ``` public readonly labels: {[ key: string ]: string}; ``` Labels the namespaces must have. This is equivalent to using an Is selector. ``` public readonly names: string[]; ``` Namespaces names must be one of these. Options for NetworkPolicy.addEgressRule. ``` import { NetworkPolicyAddEgressRuleOptions } from 'cdk8s-plus-28' const networkPolicyAddEgressRuleOptions: NetworkPolicyAddEgressRuleOptions = { ... } ``` ``` public readonly ports: NetworkPolicyPort[]; ``` Ports the rule should allow outgoing traffic to. Configuration for network peers. A peer can either by an ip block, or a selection of pods, not both. 
``` import { NetworkPolicyPeerConfig } from 'cdk8s-plus-28' const networkPolicyPeerConfig: NetworkPolicyPeerConfig = { ... } ``` ``` public readonly ipBlock: NetworkPolicyIpBlock; ``` The ip block this peer represents. ``` public readonly podSelector: PodSelectorConfig; ``` The pod selector this peer represents. Properties for NetworkPolicyPort. ``` import { NetworkPolicyPortProps } from 'cdk8s-plus-28' const networkPolicyPortProps: NetworkPolicyPortProps = { ... } ``` ``` public readonly endPort: number; ``` End port (relative to port). Only applies if port is" }, { "data": "Use this to specify a port range, rather that a specific one. ``` public readonly port: number; ``` Specific port number. ``` public readonly protocol: NetworkProtocol; ``` Protocol. Properties for NetworkPolicy. ``` import { NetworkPolicyProps } from 'cdk8s-plus-28' const networkPolicyProps: NetworkPolicyProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly egress: NetworkPolicyTraffic; ``` Egress traffic configuration. ``` public readonly ingress: NetworkPolicyTraffic; ``` Ingress traffic configuration. ``` public readonly selector: IPodSelector; ``` Which pods does this policy object applies to. This can either be a single pod / workload, or a grouping of pods selected via the Pods.select function. Rules is applied to any pods selected by this property. Multiple network policies can select the same set of pods. In this case, the rules for each are combined additively. Note that Describes a rule allowing traffic from / to pods matched by a network policy selector. ``` import { NetworkPolicyRule } from 'cdk8s-plus-28' const networkPolicyRule: NetworkPolicyRule = { ... } ``` ``` public readonly peer: INetworkPolicyPeer; ``` Peer this rule interacts with. ``` public readonly ports: NetworkPolicyPort[]; ``` The ports of the rule. Describes how the network policy should configure egress / ingress traffic. ``` import { NetworkPolicyTraffic } from 'cdk8s-plus-28' const networkPolicyTraffic: NetworkPolicyTraffic = { ... } ``` ``` public readonly default: NetworkPolicyTrafficDefault; ``` Specifies the default behavior of the policy when no rules are defined. ``` public readonly rules: NetworkPolicyRule[]; ``` List of rules to be applied to the selected pods. If empty, the behavior of the policy is dictated by the default property. Options for the NFS based volume. ``` import { NfsVolumeOptions } from 'cdk8s-plus-28' const nfsVolumeOptions: NfsVolumeOptions = { ... } ``` ``` public readonly path: string; ``` Path that is exported by the NFS server. ``` public readonly server: string; ``` Server is the hostname or IP address of the NFS server. ``` public readonly readOnly: boolean; ``` If set to true, will force the NFS export to be mounted with read-only permissions. Options for NodeTaintQuery. ``` import { NodeTaintQueryOptions } from 'cdk8s-plus-28' const nodeTaintQueryOptions: NodeTaintQueryOptions = { ... } ``` ``` public readonly effect: TaintEffect; ``` The taint effect to match. ``` public readonly evictAfter: Duration; ``` How much time should a pod that tolerates the NO_EXECUTE effect be bound to the node. Only applies for the NO_EXECUTE effect. Maps a string key to a path within a volume. ``` import { PathMapping } from 'cdk8s-plus-28' const pathMapping: PathMapping = { ... } ``` ``` public readonly path: string; ``` The relative path of the file to map the key to. 
May not be an absolute path. May not contain the path element ... May not start with the string ... ``` public readonly mode: number; ``` Optional: mode bits to use on this file, must be a value between 0 and 0777. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. Properties for PersistentVolumeClaim. ``` import { PersistentVolumeClaimProps } from 'cdk8s-plus-28' const persistentVolumeClaimProps: PersistentVolumeClaimProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly accessModes: PersistentVolumeAccessMode[]; ``` Contains the access modes the volume should support. https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 ``` public readonly storage: Size; ``` Minimum storage size the volume should have. https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources ``` public readonly storageClassName: string; ``` Name of the StorageClass required by the" }, { "data": "When this property is not set, the behavior is as follows:. If the admission plugin is turned on, the storage class marked as default will be used. If the admission plugin is turned off, the pvc can only be bound to volumes without a storage class. https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 ``` public readonly volume: IPersistentVolume; ``` The PersistentVolume backing this claim. The control plane still checks that storage class, access modes, and requested storage size on the volume are valid. Note that in order to guarantee a proper binding, the volume should also define a claimRef referring to this claim. Otherwise, the volume may be claimed be other pvcs before it gets a chance to bind to this one. If the volume is managed (i.e not imported), you can use pv.claim() to easily create a bi-directional bounded claim. https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding. ``` public readonly volumeMode: PersistentVolumeMode; ``` Defines what type of volume is required by the claim. Options for a PersistentVolumeClaim-based volume. ``` import { PersistentVolumeClaimVolumeOptions } from 'cdk8s-plus-28' const persistentVolumeClaimVolumeOptions: PersistentVolumeClaimVolumeOptions = { ... } ``` ``` public readonly name: string; ``` The volume name. ``` public readonly readOnly: boolean; ``` Will force the ReadOnly setting in VolumeMounts. Properties for PersistentVolume. ``` import { PersistentVolumeProps } from 'cdk8s-plus-28' const persistentVolumeProps: PersistentVolumeProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly accessModes: PersistentVolumeAccessMode[]; ``` Contains all ways the volume can be mounted. https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes ``` public readonly claim: IPersistentVolumeClaim; ``` Part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding ``` public readonly mountOptions: string[]; ``` A list of mount options, e.g. [ro, soft]. Not validated - mount will simply fail if one is invalid. 
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options ``` public readonly reclaimPolicy: PersistentVolumeReclaimPolicy; ``` When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy tells the cluster what to do with the volume after it has been released of its claim. https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming ``` public readonly storage: Size; ``` What is the storage capacity of this volume. https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources ``` public readonly storageClassName: string; ``` Name of StorageClass to which this persistent volume belongs. ``` public readonly volumeMode: PersistentVolumeMode; ``` Defines what type of volume is required by the claim. Options for PodConnections.allowFrom. ``` import { PodConnectionsAllowFromOptions } from 'cdk8s-plus-28' const podConnectionsAllowFromOptions: PodConnectionsAllowFromOptions = { ... } ``` ``` public readonly isolation: PodConnectionsIsolation; ``` Which isolation should be applied to establish the connection. ``` public readonly ports: NetworkPolicyPort[]; ``` Ports to allow incoming traffic to. Options for PodConnections.allowTo. ``` import { PodConnectionsAllowToOptions } from 'cdk8s-plus-28' const podConnectionsAllowToOptions: PodConnectionsAllowToOptions = { ... } ``` ``` public readonly isolation: PodConnectionsIsolation; ``` Which isolation should be applied to establish the connection. ``` public readonly ports: NetworkPolicyPort[]; ``` Ports to allow outgoing traffic to. Properties for PodDns. ``` import { PodDnsProps } from 'cdk8s-plus-28' const podDnsProps: PodDnsProps = { ... } ``` ``` public readonly hostname: string; ``` Specifies the hostname of the Pod. ``` public readonly hostnameAsFQDN: boolean; ``` If true the pods hostname will be configured as the pods FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEYLOCALMACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have FQDN, this has no" }, { "data": "``` public readonly nameservers: string[]; ``` A list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. When the policy is set to NONE, the list must contain at least one IP address, otherwise this property is optional. The servers listed will be combined to the base nameservers generated from the specified DNS policy with duplicate addresses removed. ``` public readonly options: DnsOption[]; ``` List of objects where each object may have a name property (required) and a value property (optional). The contents in this property will be merged to the options generated from the specified DNS policy. Duplicate entries are removed. ``` public readonly policy: DnsPolicy; ``` Set DNS policy for the pod. If policy is set to None, other configuration must be supplied. ``` public readonly searches: string[]; ``` A list of DNS search domains for hostname lookup in the Pod. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains. 
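As an illustration, the DNS properties above can be passed to a Pod through its dns property (documented below). The following is a minimal sketch; the image, nameserver, search domain, and option values are placeholders, and the subdomain setting described next is applied the same way.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'dns-demo');

// With DnsPolicy.NONE the nameservers list must contain at least one address.
new kplus.Pod(chart, 'resolver', {
  containers: [{ image: 'busybox' }], // placeholder image
  dns: {
    policy: kplus.DnsPolicy.NONE,
    nameservers: ['10.0.0.10'],            // at most 3 entries are allowed
    searches: ['internal.example.com'],    // merged, duplicates removed
    options: [{ name: 'ndots', value: '2' }],
  },
});

app.synth();
```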
``` public readonly subdomain: string; ``` If specified, the fully qualified Pod hostname will be ...svc.. Properties for Pod. ``` import { PodProps } from 'cdk8s-plus-28' const podProps: PodProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the pod. This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your" }, { "data": "Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume() https://kubernetes.io/docs/concepts/storage/volumes Options for Pods.all. ``` import { PodsAllOptions } from 'cdk8s-plus-28' const podsAllOptions: PodsAllOptions = { ... } ``` ``` public readonly namespaces: Namespaces; ``` Namespaces the pods are allowed to be in. Use Namespaces.all() to allow all namespaces. Options for PodScheduling.attract. ``` import { PodSchedulingAttractOptions } from 'cdk8s-plus-28' const podSchedulingAttractOptions: PodSchedulingAttractOptions = { ... } ``` ``` public readonly weight: number; ``` Indicates the attraction is optional (soft), with this weight score. Options for PodScheduling.colocate. ``` import { PodSchedulingColocateOptions } from 'cdk8s-plus-28' const podSchedulingColocateOptions: PodSchedulingColocateOptions = { ... } ``` ``` public readonly topology: Topology; ``` Which topology to coloate on. ``` public readonly weight: number; ``` Indicates the co-location is optional (soft), with this weight score. Options for PodScheduling.separate. ``` import { PodSchedulingSeparateOptions } from 'cdk8s-plus-28' const podSchedulingSeparateOptions: PodSchedulingSeparateOptions = { ... } ``` ``` public readonly topology: Topology; ``` Which topology to separate on. ``` public readonly weight: number; ``` Indicates the separation is optional (soft), with this weight score. Properties for PodSecurityContext. ``` import { PodSecurityContextProps } from 'cdk8s-plus-28' const podSecurityContextProps: PodSecurityContextProps = { ... } ``` ``` public readonly ensureNonRoot: boolean; ``` Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. ``` public readonly fsGroup: number; ``` Modify the ownership and permissions of pod volumes to this GID. ``` public readonly fsGroupChangePolicy: FsGroupChangePolicy; ``` Defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. ``` public readonly group: number; ``` The GID to run the entrypoint of the container process. ``` public readonly sysctls: Sysctl[]; ``` Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. ``` public readonly user: number; ``` The UID to run the entrypoint of the container process. Configuration for selecting pods, optionally in particular namespaces. ``` import { PodSelectorConfig } from 'cdk8s-plus-28' const podSelectorConfig: PodSelectorConfig = { ... } ``` ``` public readonly labelSelector: LabelSelector; ``` A selector to select pods by labels. ``` public readonly namespaces: NamespaceSelectorConfig; ``` Configuration for selecting which namepsaces are the pods allowed to be in. Options for Pods.select. ``` import { PodsSelectOptions } from 'cdk8s-plus-28' const podsSelectOptions: PodsSelectOptions = { ... 
} ``` ``` public readonly expressions: LabelExpression[]; ``` Expressions the pods must satisfy. ``` public readonly labels: {[ key: string ]: string}; ``` Labels the pods must have. ``` public readonly namespaces: Namespaces; ``` Namespaces the pods are allowed to be in. Use Namespaces.all() to allow all namespaces. Probe options. ``` import { ProbeOptions } from 'cdk8s-plus-28' const probeOptions: ProbeOptions = { ... } ``` ``` public readonly failureThreshold: number; ``` Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. ``` public readonly initialDelaySeconds: Duration; ``` Number of seconds after the container has started before liveness probes are initiated. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ``` public readonly periodSeconds: Duration; ``` How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. ``` public readonly successThreshold: number; ``` Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. ``` public readonly timeoutSeconds: Duration; ``` Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Initialization properties for resources. ``` import { ResourceProps } from 'cdk8s-plus-28' const resourceProps: ResourceProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. Properties for RoleBinding. ``` import { RoleBindingProps } from 'cdk8s-plus-28' const roleBindingProps: RoleBindingProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly role: IRole; ``` The role to bind to. A RoleBinding can reference a Role or a ClusterRole. Policy rule of a Role. ``` import { RolePolicyRule } from 'cdk8s-plus-28' const rolePolicyRule: RolePolicyRule = { ... } ``` ``` public readonly resources: IApiResource[]; ``` Resources this rule applies to. ``` public readonly verbs: string[]; ``` Verbs to allow. (e.g. [get, watch]) Properties for Role. ``` import { RoleProps } from 'cdk8s-plus-28' const roleProps: RoleProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly rules: RolePolicyRule[]; ``` A list of rules the role should allow. ``` import { ScalingPolicy } from 'cdk8s-plus-28' const scalingPolicy: ScalingPolicy = { ... } ``` ``` public readonly replicas: Replicas; ``` The type and quantity of replicas to change. ``` public readonly duration: Duration; ``` The amount of time the scaling policy has to continue scaling before the target metric must be revalidated. Must be greater than 0 seconds and no longer than 30 minutes. Defines the scaling behavior for one direction. ``` import { ScalingRules } from 'cdk8s-plus-28' const scalingRules: ScalingRules = { ... } ``` ``` public readonly policies: ScalingPolicy[]; ``` The scaling policies. ``` public readonly stabilizationWindow: Duration; ``` Defines the window of past metrics that the autoscaler should consider when calculating whether or not autoscaling should occur.
Minimum duration is 1 second, max is 1 hour. ``` public readonly strategy: ScalingStrategy; ``` The strategy to use when scaling. Properties used to configure the target of an Autoscaler. ``` import { ScalingTarget } from 'cdk8s-plus-28' const scalingTarget: ScalingTarget = { ... } ``` ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly containers: Container[]; ``` Container definitions associated with the target. ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. ``` public readonly replicas: number; ``` The fixed number of replicas defined on the target. This is used for validation purposes as Scalable targets should not have a fixed number of replicas. Options for Secret. ``` import { SecretProps } from 'cdk8s-plus-28' const secretProps: SecretProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any" }, { "data": "``` public readonly stringData: {[ key: string ]: string}; ``` stringData allows specifying non-binary secret data in string form. It is provided as a write-only convenience method. All keys and values are merged into the data field on write, overwriting any existing values. It is never output when reading from the API. ``` public readonly type: string; ``` Optional type associated with the secret. Used to facilitate programmatic handling of secret data by various controllers. Represents a specific value in JSON secret. ``` import { SecretValue } from 'cdk8s-plus-28' const secretValue: SecretValue = { ... } ``` ``` public readonly key: string; ``` The JSON key. ``` public readonly secret: ISecret; ``` The secret. Options for the Secret-based volume. ``` import { SecretVolumeOptions } from 'cdk8s-plus-28' const secretVolumeOptions: SecretVolumeOptions = { ... } ``` ``` public readonly defaultMode: number; ``` Mode bits to use on created files by default. Must be a value between 0 and Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. ``` public readonly items: {[ key: string ]: PathMapping}; ``` If unspecified, each key-value pair in the Data field of the referenced secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the .. path or start with ... ``` public readonly name: string; ``` The volume name. ``` public readonly optional: boolean; ``` Specify whether the secret or its keys must be defined. Properties for initialization of ServiceAccount. ``` import { ServiceAccountProps } from 'cdk8s-plus-28' const serviceAccountProps: ServiceAccountProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. 
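Before the remaining ServiceAccount properties, here is a hedged sketch that ties together the Secret options above: a Secret created from stringData, a SecretValue pointing at one of its keys, and that value injected into a container environment variable. The names, values, and image are placeholders.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'secret-demo');

// Non-binary secret data supplied via stringData (values are placeholders).
const credentials = new kplus.Secret(chart, 'api-credentials', {
  stringData: { token: 'changeme' },
  immutable: true,
});

// A SecretValue points at a single key inside the secret.
const token: kplus.SecretValue = { secret: credentials, key: 'token' };

// The value can then be injected into a container as an environment variable.
new kplus.Pod(chart, 'consumer', {
  containers: [{
    image: 'app', // placeholder image
    envVariables: { API_TOKEN: kplus.EnvValue.fromSecretValue(token) },
  }],
});

app.synth();
```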
``` public readonly automountToken: boolean; ``` Indicates whether pods running as this service account should have an API token automatically mounted. Can be overridden at the pod level. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly secrets: ISecret[]; ``` List of secrets allowed to be used by pods running using this ServiceAccount. https://kubernetes.io/docs/concepts/configuration/secret Options for ServiceAccountTokenSecret. ``` import { ServiceAccountTokenSecretProps } from 'cdk8s-plus-28' const serviceAccountTokenSecretProps: ServiceAccountTokenSecretProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. ``` public readonly serviceAccount: IServiceAccount; ``` The service account to store a secret for. Options for Service.bind. ``` import { ServiceBindOptions } from 'cdk8s-plus-28' const serviceBindOptions: ServiceBindOptions = { ... } ``` ``` public readonly name: string; ``` The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. This maps to the Name field in EndpointPort objects. Optional if only one ServicePort is defined on this service. ``` public readonly nodePort: number; ``` The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will" }, { "data": "Default is to auto-allocate a port if the ServiceType of this Service requires one. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport ``` public readonly protocol: Protocol; ``` The IP protocol for this port. Supports TCP, UDP, and SCTP. Default is TCP. ``` public readonly targetPort: number; ``` The port number the service will redirect to. Options for setting up backends for ingress rules. ``` import { ServiceIngressBackendOptions } from 'cdk8s-plus-28' const serviceIngressBackendOptions: ServiceIngressBackendOptions = { ... } ``` ``` public readonly port: number; ``` The port to use to access the service. This option will fail if the service does not expose any ports. If the service exposes multiple ports, this option must be specified. If the service exposes a single port, this option is optional and if specified, it must be the same port exposed by the service. Definition of a service port. ``` import { ServicePort } from 'cdk8s-plus-28' const servicePort: ServicePort = { ... } ``` ``` public readonly name: string; ``` The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. This maps to the Name field in EndpointPort objects. Optional if only one ServicePort is defined on this service. ``` public readonly nodePort: number; ``` The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. 
https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport ``` public readonly protocol: Protocol; ``` The IP protocol for this port. Supports TCP, UDP, and SCTP. Default is TCP. ``` public readonly targetPort: number; ``` The port number the service will redirect to. ``` public readonly port: number; ``` The port number the service will bind to. Properties for Service. ``` import { ServiceProps } from 'cdk8s-plus-28' const serviceProps: ServiceProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly clusterIP: string; ``` The IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are None, empty string (), or a valid IP address. None can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies ``` public readonly externalIPs: string[]; ``` A list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. ``` public readonly externalName: string; ``` The externalName to be used when ServiceType.EXTERNAL_NAME is set. ``` public readonly loadBalancerSourceRanges: string[]; ``` A list of CIDR IP addresses, if specified and supported by the platform, will restrict traffic through the cloud-provider load-balancer to the specified client IPs. More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/ ``` public readonly ports: ServicePort[]; ``` The ports this service binds to. If the selector of the service is a managed pod / workload, its ports will are automatically extracted and used as the default value. Otherwise, no ports are" }, { "data": "``` public readonly selector: IPodSelector; ``` Which pods should the service select and route to. You can pass one of the following: ``` public readonly type: ServiceType; ``` Determines how the Service is exposed. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types Options for SshAuthSecret. ``` import { SshAuthSecretProps } from 'cdk8s-plus-28' const sshAuthSecretProps: SshAuthSecretProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. ``` public readonly sshPrivateKey: string; ``` The SSH private key to use. Properties for initialization of StatefulSet. ``` import { StatefulSetProps } from 'cdk8s-plus-28' const statefulSetProps: StatefulSetProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. 
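As a brief aside before the remaining StatefulSet properties, the ServicePort and ServiceProps structs above translate into code roughly as follows. Treat this as a sketch only; the service type, port numbers, and image are illustrative assumptions.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'service-demo');

// Pods to route to (image is a placeholder).
const web = new kplus.Pod(chart, 'web', {
  containers: [{ image: 'nginx', portNumber: 8080 }],
});

// A ClusterIP service forwarding port 80 to container port 8080.
new kplus.Service(chart, 'web-service', {
  type: kplus.ServiceType.CLUSTER_IP,
  selector: web,
  ports: [{ port: 80, targetPort: 8080 }],
});

app.synth();
```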
``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the pod. This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume()" }, { "data": "``` public readonly podMetadata: ApiObjectMetadata; ``` The pod metadata of this workload. 
``` public readonly select: boolean; ``` Automatically allocates a pod label selector for this workload and add it to the pod metadata. This ensures this workload manages pods created by its pod template. ``` public readonly spread: boolean; ``` Automatically spread pods across hostname and zones. https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints ``` public readonly minReady: Duration; ``` Minimum duration for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Zero means the pod will be considered available as soon as it is ready. This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#min-ready-seconds ``` public readonly podManagementPolicy: PodManagementPolicy; ``` Pod management policy to use for this statefulset. ``` public readonly replicas: number; ``` Number of desired pods. ``` public readonly service: Service; ``` Service to associate with the statefulset. ``` public readonly strategy: StatefulSetUpdateStrategy; ``` Indicates the StatefulSetUpdateStrategy that will be employed to update Pods in the StatefulSet when a revision is made to Template. Options for StatefulSetUpdateStrategy.rollingUpdate. ``` import { StatefulSetUpdateStrategyRollingUpdateOptions } from 'cdk8s-plus-28' const statefulSetUpdateStrategyRollingUpdateOptions: StatefulSetUpdateStrategyRollingUpdateOptions = { ... } ``` ``` public readonly partition: number; ``` If specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSets .spec.template is updated. All Pods with an ordinal that is less than the partition will not be updated, and, even if they are deleted, they will be recreated at the previous version. If the partition is greater than replicas, updates to the pod template will not be propagated to Pods. In most cases you will not need to use a partition, but they are useful if you want to stage an update, roll out a canary, or perform a phased roll out. https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. ``` import { SubjectConfiguration } from 'cdk8s-plus-28' const subjectConfiguration: SubjectConfiguration = { ... } ``` ``` public readonly kind: string; ``` Kind of object being referenced. Values defined by this API group are User, Group, and ServiceAccount. If the Authorizer does not recognized the kind value, the Authorizer should report an error. ``` public readonly name: string; ``` Name of the object being referenced. ``` public readonly apiGroup: string; ``` APIGroup holds the API group of the referenced subject. Defaults to for ServiceAccount subjects. Defaults to rbac.authorization.k8s.io for User and Group subjects. ``` public readonly namespace: string; ``` Namespace of the referenced object. If the object kind is non-namespace, such as User or Group, and this value is not empty the Authorizer should report an error. Sysctl defines a kernel parameter to be set. ``` import { Sysctl } from 'cdk8s-plus-28' const sysctl: Sysctl = { ... } ``` ``` public readonly name: string; ``` Name of a property to set. ``` public readonly value: string; ``` Value of a property to set. 
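A staged (canary-style) rollout using the partition option might look like the following sketch. It assumes the construct creates its default headless service when none is supplied; the image, replica count, and partition value are placeholders.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'canary-demo');

// Roll the new revision out only to pods with ordinal >= 3 (a staged canary).
new kplus.StatefulSet(chart, 'web', {
  containers: [{ image: 'registry.example.com/web:2.0' }], // placeholder image
  replicas: 5,
  podManagementPolicy: kplus.PodManagementPolicy.ORDERED_READY,
  strategy: kplus.StatefulSetUpdateStrategy.rollingUpdate({ partition: 3 }),
});

app.synth();
```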
Options for Probe.fromTcpSocket(). ``` import { TcpSocketProbeOptions } from 'cdk8s-plus-28' const tcpSocketProbeOptions: TcpSocketProbeOptions = { ... } ``` ``` public readonly failureThreshold: number; ``` Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. ``` public readonly initialDelaySeconds: Duration; ``` Number of seconds after the container has started before liveness probes are initiated. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ``` public readonly periodSeconds: Duration; ``` How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is" }, { "data": "``` public readonly successThreshold: number; ``` Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. ``` public readonly timeoutSeconds: Duration; ``` Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ``` public readonly host: string; ``` The host name to connect to on the container. ``` public readonly port: number; ``` The TCP port to connect to on the container. Options for TlsSecret. ``` import { TlsSecretProps } from 'cdk8s-plus-28' const tlsSecretProps: TlsSecretProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. ``` public readonly immutable: boolean; ``` If set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. ``` public readonly tlsCert: string; ``` The TLS cert. ``` public readonly tlsKey: string; ``` The TLS key. Mount a volume from the pod to the container. ``` import { VolumeMount } from 'cdk8s-plus-28' const volumeMount: VolumeMount = { ... } ``` ``` public readonly propagation: MountPropagation; ``` Determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node. ``` public readonly readOnly: boolean; ``` Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. ``` public readonly subPath: string; ``` Path within the volume from which the containers volume should be mounted.). ``` public readonly subPathExpr: string; ``` Expanded path within the volume from which the containers volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the containers environment. Defaults to (volumes root). subPathExpr and subPath are mutually exclusive. ``` public readonly path: string; ``` Path within the container at which the volume should be mounted. Must not contain :. ``` public readonly volume: Volume; ``` The volume to mount. Properties for Workload. ``` import { WorkloadProps } from 'cdk8s-plus-28' const workloadProps: WorkloadProps = { ... } ``` ``` public readonly metadata: ApiObjectMetadata; ``` Metadata that all persisted resources must have, which includes all objects users must create. 
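Before the remaining workload properties, here is a hedged sketch of the TCP socket probe options described above attached to a container's liveness check; the image, port, and threshold values are placeholders.

```
import { App, Chart, Duration } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'probe-demo');

new kplus.Pod(chart, 'probed', {
  containers: [{
    image: 'nginx', // placeholder image
    portNumber: 80,
    // Consider the container dead after 3 failed TCP connects to port 80.
    liveness: kplus.Probe.fromTcpSocket({
      port: 80,
      initialDelaySeconds: Duration.seconds(5),
      periodSeconds: Duration.seconds(10),
      failureThreshold: 3,
    }),
  }],
});

app.synth();
```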
``` public readonly automountServiceAccountToken: boolean; ``` Indicates whether a service account token should be automatically mounted. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server ``` public readonly containers: ContainerProps[]; ``` List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. You can add additionnal containers using podSpec.addContainer() ``` public readonly dns: PodDnsProps; ``` DNS settings for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ ``` public readonly dockerRegistryAuth: ISecret; ``` A secret containing docker credentials for authenticating to a registry. ``` public readonly hostAliases: HostAlias[]; ``` HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pods hosts file. ``` public readonly hostNetwork: boolean; ``` Host network for the pod. ``` public readonly initContainers: ContainerProps[]; ``` List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup" }, { "data": "The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added ,removed or updated. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ ``` public readonly isolate: boolean; ``` Isolates the pod. This will prevent any ingress or egress connections to / from this pod. You can however allow explicit connections post instantiation by using the .connections property. ``` public readonly restartPolicy: RestartPolicy; ``` Restart policy for all containers within the pod. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy ``` public readonly securityContext: PodSecurityContextProps; ``` SecurityContext holds pod-level security attributes and common container settings. ``` public readonly serviceAccount: IServiceAccount; ``` A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ``` public readonly terminationGracePeriod: Duration; ``` Grace period until the pod is terminated. ``` public readonly volumes: Volume[]; ``` List of volumes that can be mounted by containers belonging to the pod. You can also add volumes later using podSpec.addVolume() https://kubernetes.io/docs/concepts/storage/volumes ``` public readonly podMetadata: ApiObjectMetadata; ``` The pod metadata of this workload. 
``` public readonly select: boolean; ``` Automatically allocates a pod label selector for this workload and add it to the pod metadata. This ensures this workload manages pods created by its pod template. ``` public readonly spread: boolean; ``` Automatically spread pods across hostname and zones. https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints Options for WorkloadScheduling.spread. ``` import { WorkloadSchedulingSpreadOptions } from 'cdk8s-plus-28' const workloadSchedulingSpreadOptions: WorkloadSchedulingSpreadOptions = { ... } ``` ``` public readonly topology: Topology; ``` Which topology to spread on. ``` public readonly weight: number; ``` Indicates the spread is optional, with this weight score. Represents information about an API resource type. ``` public asApiResource() ``` ``` public asNonApiResource() ``` ``` import { ApiResource } from 'cdk8s-plus-28' ApiResource.custom(options: ApiResourceOptions) ``` ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of the resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources API resource information for APIService. API resource information for Binding. API resource information for CertificateSigningRequest. API resource information for ClusterRoleBinding. API resource information for ClusterRole. API resource information for ComponentStatus. API resource information for ConfigMap. API resource information for ControllerRevision. API resource information for CronJob. API resource information for CSIDriver. API resource information for CSINode. API resource information for CSIStorageCapacity. API resource information for CustomResourceDefinition. API resource information for DaemonSet. API resource information for Deployment. API resource information for EndpointSlice. API resource information for Endpoints. API resource information for Event. API resource information for FlowSchema. API resource information for HorizontalPodAutoscaler. API resource information for IngressClass. API resource information for Ingress. API resource information for Job. API resource information for Lease. API resource information for LimitRange. API resource information for LocalSubjectAccessReview. API resource information for MutatingWebhookConfiguration. API resource information for Namespace. API resource information for NetworkPolicy. API resource information for Node. API resource information for PersistentVolumeClaim. API resource information for PersistentVolume. API resource information for PodDisruptionBudget. API resource information for PodTemplate. API resource information for Pod. API resource information for PriorityClass. API resource information for PriorityLevelConfiguration. API resource information for ReplicaSet. API resource information for ReplicationController. API resource information for" }, { "data": "API resource information for RoleBinding. API resource information for Role. API resource information for RuntimeClass. API resource information for Secret. API resource information for SelfSubjectAccessReview. API resource information for SelfSubjectRulesReview. API resource information for ServiceAccount. API resource information for Service. API resource information for StatefulSet. API resource information for StorageClass. API resource information for SubjectAccessReview. 
API resource information for TokenReview. API resource information for ValidatingWebhookConfiguration. API resource information for VolumeAttachment. A single application container that you want to run within a pod. ``` import { Container } from 'cdk8s-plus-28' new Container(props: ContainerProps) ``` ``` public addPort(port: ContainerPort) ``` ``` public mount(path: string, storage: IStorage, options?: MountOptions) ``` The desired path in the container. The storage to mount. ``` public readonly env: Env; ``` The environment of the container. ``` public readonly image: string; ``` The container image. ``` public readonly imagePullPolicy: ImagePullPolicy; ``` Image pull policy for this container. ``` public readonly mounts: VolumeMount[]; ``` Volume mounts configured for this container. ``` public readonly name: string; ``` The name of the container. ``` public readonly ports: ContainerPort[]; ``` Ports exposed by this containers. Returns a copy, use addPort to modify. ``` public readonly securityContext: ContainerSecurityContext; ``` The security context of the container. ``` public readonly args: string[]; ``` Arguments to the entrypoint. ``` public readonly command: string[]; ``` Entrypoint array (the command to execute when the container starts). ``` public readonly port: number; ``` ``` public readonly portNumber: number; ``` The port number that was configured for this container. If undefined, either the container doesnt expose a port, or its port configuration is stored in the ports field. ``` public readonly resources: ContainerResources; ``` Compute resources (CPU and memory requests and limits) required by the container. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ``` public readonly restartPolicy: ContainerRestartPolicy; ``` The restart policy of the container. ``` public readonly workingDir: string; ``` The working directory inside the container. Container security attributes and settings. ``` import { ContainerSecurityContext } from 'cdk8s-plus-28' new ContainerSecurityContext(props?: ContainerSecurityContextProps) ``` ``` public readonly ensureNonRoot: boolean; ``` ``` public readonly privileged: boolean; ``` ``` public readonly readOnlyRootFilesystem: boolean; ``` ``` public readonly allowPrivilegeEscalation: boolean; ``` ``` public readonly capabilities: ContainerSecutiryContextCapabilities; ``` ``` public readonly group: number; ``` ``` public readonly user: number; ``` Represents the amount of CPU. The amount can be passed as millis or units. ``` import { Cpu } from 'cdk8s-plus-28' Cpu.millis(amount: number) ``` ``` import { Cpu } from 'cdk8s-plus-28' Cpu.units(amount: number) ``` ``` public readonly amount: string; ``` Deployment strategies. ``` import { DeploymentStrategy } from 'cdk8s-plus-28' DeploymentStrategy.recreate() ``` ``` import { DeploymentStrategy } from 'cdk8s-plus-28' DeploymentStrategy.rollingUpdate(options?: DeploymentStrategyRollingUpdateOptions) ``` Container environment variables. ``` import { Env } from 'cdk8s-plus-28' new Env(sources: EnvFrom[], variables: {[ key: string ]: EnvValue}) ``` ``` public addVariable(name: string, value: EnvValue) ``` ``` public copyFrom(from: EnvFrom) ``` ``` import { Env } from 'cdk8s-plus-28' Env.fromConfigMap(configMap: IConfigMap, prefix?: string) ``` ``` import { Env } from 'cdk8s-plus-28' Env.fromSecret(secr: ISecret) ``` ``` public readonly sources: EnvFrom[]; ``` The list of sources used to populate the container environment, in addition to the variables. Returns a copy. 
To add a source use container.env.copyFrom(). ``` public readonly variables: {[ key: string ]: EnvValue}; ``` The environment variables for this container. Returns a copy. To add environment variables use container.env.addVariable(). A collection of env variables defined in other resources. ``` import { EnvFrom } from 'cdk8s-plus-28' new EnvFrom(configMap?: IConfigMap, prefix?: string, sec?: ISecret) ``` Utility class for creating reading env values from various sources. ``` import { EnvValue } from 'cdk8s-plus-28' EnvValue.fromConfigMap(configMap: IConfigMap, key: string, options?: EnvValueFromConfigMapOptions) ``` The config map. The key to extract the value from. Additional" }, { "data": "``` import { EnvValue } from 'cdk8s-plus-28' EnvValue.fromFieldRef(fieldPath: EnvFieldPaths, options?: EnvValueFromFieldRefOptions) ``` : The field reference. : Additional options. ``` import { EnvValue } from 'cdk8s-plus-28' EnvValue.fromProcess(key: string, options?: EnvValueFromProcessOptions) ``` The key to read. Additional options. ``` import { EnvValue } from 'cdk8s-plus-28' EnvValue.fromResource(resource: ResourceFieldPaths, options?: EnvValueFromResourceOptions) ``` : Resource to select the value from. : Additional options. ``` import { EnvValue } from 'cdk8s-plus-28' EnvValue.fromSecretValue(secretValue: SecretValue, options?: EnvValueFromSecretOptions) ``` The secret value (secrent + key). Additional options. ``` import { EnvValue } from 'cdk8s-plus-28' EnvValue.fromValue(value: string) ``` The value. ``` public readonly value: any; ``` ``` public readonly valueFrom: any; ``` Defines a specific action that should be taken. ``` import { Handler } from 'cdk8s-plus-28' Handler.fromCommand(command: string[]) ``` The command to execute. ``` import { Handler } from 'cdk8s-plus-28' Handler.fromHttpGet(path: string, options?: HandlerFromHttpGetOptions) ``` The URL path to hit. Options. ``` import { Handler } from 'cdk8s-plus-28' Handler.fromTcpSocket(options?: HandlerFromTcpSocketOptions) ``` Options. The backend for an ingress path. ``` import { IngressBackend } from 'cdk8s-plus-28' IngressBackend.fromResource(resource: IResource) ``` ``` import { IngressBackend } from 'cdk8s-plus-28' IngressBackend.fromService(serv: Service, options?: ServiceIngressBackendOptions) ``` The service object. A node that is matched by label selectors. ``` import { LabeledNode } from 'cdk8s-plus-28' new LabeledNode(labelSelector: NodeLabelQuery[]) ``` ``` public readonly labelSelector: NodeLabelQuery[]; ``` Represents a query that can be performed against resources with labels. ``` import { LabelExpression } from 'cdk8s-plus-28' LabelExpression.doesNotExist(key: string) ``` ``` import { LabelExpression } from 'cdk8s-plus-28' LabelExpression.exists(key: string) ``` ``` import { LabelExpression } from 'cdk8s-plus-28' LabelExpression.in(key: string, values: string[]) ``` ``` import { LabelExpression } from 'cdk8s-plus-28' LabelExpression.notIn(key: string, values: string[]) ``` ``` public readonly key: string; ``` ``` public readonly operator: string; ``` ``` public readonly values: string[]; ``` Match a resource by labels. ``` public isEmpty() ``` ``` import { LabelSelector } from 'cdk8s-plus-28' LabelSelector.of(options?: LabelSelectorOptions) ``` A metric condition that HorizontalPodAutoscalers scale on. 
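For instance, a resource-based metric condition can drive a HorizontalPodAutoscaler. The sketch below is illustrative, not definitive: the image and scaling bounds are placeholders, and the target deployment deliberately omits a fixed replica count, in line with the ScalingTarget note above.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'hpa-demo');

// The scaled target should not declare a fixed replica count.
const deployment = new kplus.Deployment(chart, 'web', {
  containers: [{ image: 'nginx' }], // placeholder image
});

// Scale between 2 and 10 replicas to keep average CPU utilization near 70%.
new kplus.HorizontalPodAutoscaler(chart, 'web-hpa', {
  target: deployment,
  minReplicas: 2,
  maxReplicas: 10,
  metrics: [kplus.Metric.resourceCpu(kplus.MetricTarget.averageUtilization(70))],
});

app.synth();
```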
``` import { Metric } from 'cdk8s-plus-28' Metric.containerCpu(options: MetricContainerResourceOptions) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.containerEphemeralStorage(options: MetricContainerResourceOptions) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.containerMemory(options: MetricContainerResourceOptions) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.containerStorage(options: MetricContainerResourceOptions) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.external(options: MetricOptions) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.object(options: MetricObjectOptions) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.pods(options: MetricOptions) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.resourceCpu(target: MetricTarget) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.resourceEphemeralStorage(target: MetricTarget) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.resourceMemory(target: MetricTarget) ``` ``` import { Metric } from 'cdk8s-plus-28' Metric.resourceStorage(target: MetricTarget) ``` ``` public readonly type: string; ``` A metric condition that will trigger scaling behavior when satisfied. ``` import { MetricTarget } from 'cdk8s-plus-28' MetricTarget.averageUtilization(averageUtilization: number) ``` The percentage of the utilization metric. e.g. 50 for 50%. ``` import { MetricTarget } from 'cdk8s-plus-28' MetricTarget.averageValue(averageValue: number) ``` The average metric value. ``` import { MetricTarget } from 'cdk8s-plus-28' MetricTarget.value(value: number) ``` The target value. A node that is matched by its name. ``` import { NamedNode } from 'cdk8s-plus-28' new NamedNode(name: string) ``` ``` public readonly name: string; ``` Describes a port to allow traffic on. ``` import { NetworkPolicyPort } from 'cdk8s-plus-28' NetworkPolicyPort.allTcp() ``` ``` import { NetworkPolicyPort } from 'cdk8s-plus-28' NetworkPolicyPort.allUdp() ``` ``` import { NetworkPolicyPort } from 'cdk8s-plus-28' NetworkPolicyPort.of(props: NetworkPolicyPortProps) ``` ``` import { NetworkPolicyPort } from 'cdk8s-plus-28' NetworkPolicyPort.tcp(port: number) ``` ``` import { NetworkPolicyPort } from 'cdk8s-plus-28' NetworkPolicyPort.tcpRange(startPort: number, endPort: number) ``` ``` import { NetworkPolicyPort } from 'cdk8s-plus-28' NetworkPolicyPort.udp(port: number) ``` ``` import { NetworkPolicyPort } from 'cdk8s-plus-28' NetworkPolicyPort.udpRange(startPort: number, endPort: number) ``` Represents a node in the" }, { "data": "``` import { Node } from 'cdk8s-plus-28' new Node() ``` ``` import { Node } from 'cdk8s-plus-28' Node.labeled(labelSelector: NodeLabelQuery) ``` ``` import { Node } from 'cdk8s-plus-28' Node.named(nodeName: string) ``` ``` import { Node } from 'cdk8s-plus-28' Node.tainted(taintSelector: NodeTaintQuery) ``` Represents a query that can be performed against nodes with labels. 
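Such queries are typically combined with the pod scheduling API (shown later in this reference) to attract pods to matching nodes. A minimal sketch, with a placeholder image, label key, and weight:

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'scheduling-demo');

const pod = new kplus.Pod(chart, 'trainer', {
  containers: [{ image: 'ml-job' }], // placeholder image
});

// Prefer (soft, weighted) nodes labeled with accelerator=gpu.
const gpuNodes = kplus.Node.labeled(kplus.NodeLabelQuery.is('accelerator', 'gpu'));
pod.scheduling.attract(gpuNodes, { weight: 10 });

app.synth();
```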
``` import { NodeLabelQuery } from 'cdk8s-plus-28' NodeLabelQuery.doesNotExist(key: string) ``` ``` import { NodeLabelQuery } from 'cdk8s-plus-28' NodeLabelQuery.exists(key: string) ``` ``` import { NodeLabelQuery } from 'cdk8s-plus-28' NodeLabelQuery.gt(key: string, values: string[]) ``` ``` import { NodeLabelQuery } from 'cdk8s-plus-28' NodeLabelQuery.in(key: string, values: string[]) ``` ``` import { NodeLabelQuery } from 'cdk8s-plus-28' NodeLabelQuery.is(key: string, value: string) ``` ``` import { NodeLabelQuery } from 'cdk8s-plus-28' NodeLabelQuery.lt(key: string, values: string[]) ``` ``` import { NodeLabelQuery } from 'cdk8s-plus-28' NodeLabelQuery.notIn(key: string, values: string[]) ``` Taint queries that can be perfomed against nodes. ``` import { NodeTaintQuery } from 'cdk8s-plus-28' NodeTaintQuery.any() ``` ``` import { NodeTaintQuery } from 'cdk8s-plus-28' NodeTaintQuery.exists(key: string, options?: NodeTaintQueryOptions) ``` ``` import { NodeTaintQuery } from 'cdk8s-plus-28' NodeTaintQuery.is(key: string, value: string, options?: NodeTaintQueryOptions) ``` Factory for creating non api resources. ``` public asApiResource() ``` ``` public asNonApiResource() ``` ``` import { NonApiResource } from 'cdk8s-plus-28' NonApiResource.of(url: string) ``` Union like class repsenting either a ration in percents or an absolute number. ``` public isZero() ``` ``` import { PercentOrAbsolute } from 'cdk8s-plus-28' PercentOrAbsolute.absolute(num: number) ``` ``` import { PercentOrAbsolute } from 'cdk8s-plus-28' PercentOrAbsolute.percent(percent: number) ``` ``` public readonly value: any; ``` Controls network isolation rules for inter-pod communication. ``` import { PodConnections } from 'cdk8s-plus-28' new PodConnections(instance: AbstractPod) ``` ``` public allowFrom(peer: INetworkPolicyPeer, options?: PodConnectionsAllowFromOptions) ``` ``` public allowTo(peer: INetworkPolicyPeer, options?: PodConnectionsAllowToOptions) ``` ``` public isolate() ``` Holds dns settings of the pod. ``` import { PodDns } from 'cdk8s-plus-28' new PodDns(props?: PodDnsProps) ``` ``` public addNameserver(nameservers: string) ``` ``` public addOption(options: DnsOption) ``` ``` public addSearch(searches: string) ``` ``` public readonly hostnameAsFQDN: boolean; ``` Whether or not the pods hostname is set to its FQDN. ``` public readonly nameservers: string[]; ``` Nameservers defined for this pod. ``` public readonly options: DnsOption[]; ``` Custom dns options defined for this pod. ``` public readonly policy: DnsPolicy; ``` The DNS policy of this pod. ``` public readonly searches: string[]; ``` Search domains defined for this pod. ``` public readonly hostname: string; ``` The configured hostname of the pod. Undefined means its set to a system-defined value. ``` public readonly subdomain: string; ``` The configured subdomain of the pod. Controls the pod scheduling strategy. ``` import { PodScheduling } from 'cdk8s-plus-28' new PodScheduling(instance: AbstractPod) ``` ``` public assign(node: NamedNode) ``` ``` public attract(node: LabeledNode, options?: PodSchedulingAttractOptions) ``` ``` public colocate(selector: IPodSelector, options?: PodSchedulingColocateOptions) ``` ``` public separate(selector: IPodSelector, options?: PodSchedulingSeparateOptions) ``` ``` public tolerate(node: TaintedNode) ``` Holds pod-level security attributes and common container settings. 
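A hedged sketch of applying these pod-level security settings through the pod's securityContext property; the image and the numeric UID/GID values are placeholders.

```
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

const app = new App();
const chart = new Chart(app, 'security-demo');

// Pod-level security settings; individual containers can still override some of them.
new kplus.Pod(chart, 'locked-down', {
  containers: [{ image: 'app' }], // placeholder image
  securityContext: {
    ensureNonRoot: true,
    user: 1000,
    group: 3000,
    fsGroup: 2000,
    fsGroupChangePolicy: kplus.FsGroupChangePolicy.ON_ROOT_MISMATCH,
  },
});

app.synth();
```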
``` import { PodSecurityContext } from 'cdk8s-plus-28' new PodSecurityContext(props?: PodSecurityContextProps) ``` ``` public readonly ensureNonRoot: boolean; ``` ``` public readonly fsGroupChangePolicy: FsGroupChangePolicy; ``` ``` public readonly sysctls: Sysctl[]; ``` ``` public readonly fsGroup: number; ``` ``` public readonly group: number; ``` ``` public readonly user: number; ``` Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. ``` import { Probe } from 'cdk8s-plus-28' Probe.fromCommand(command: string[], options?: CommandProbeOptions) ``` The command to execute. Options. ``` import { Probe } from 'cdk8s-plus-28' Probe.fromHttpGet(path: string, options?: HttpGetProbeOptions) ``` The URL path to hit. Options. ``` import { Probe } from 'cdk8s-plus-28' Probe.fromTcpSocket(options?: TcpSocketProbeOptions) ``` Options. The amount of replicas that will change. ``` import { Replicas } from 'cdk8s-plus-28' Replicas.absolute(value: number) ``` The amount of change to apply. Must be greater than" }, { "data": "``` import { Replicas } from 'cdk8s-plus-28' Replicas.percent(value: number) ``` The percentage of change to apply. Must be greater than 0. Controls permissions for operations on resources. ``` import { ResourcePermissions } from 'cdk8s-plus-28' new ResourcePermissions(instance: Resource) ``` ``` public grantRead(subjects: ISubject) ``` ``` public grantReadWrite(subjects: ISubject) ``` StatefulSet update strategies. ``` import { StatefulSetUpdateStrategy } from 'cdk8s-plus-28' StatefulSetUpdateStrategy.onDelete() ``` ``` import { StatefulSetUpdateStrategy } from 'cdk8s-plus-28' StatefulSetUpdateStrategy.rollingUpdate(options?: StatefulSetUpdateStrategyRollingUpdateOptions) ``` A node that is matched by taint selectors. ``` import { TaintedNode } from 'cdk8s-plus-28' new TaintedNode(taintSelector: NodeTaintQuery[]) ``` ``` public readonly taintSelector: NodeTaintQuery[]; ``` Available topology domains. ``` import { Topology } from 'cdk8s-plus-28' Topology.custom(key: string) ``` ``` public readonly key: string; ``` A hostname represents a single node in the cluster. https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetesiohostname A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not. https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesioregion A zone represents a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations, common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones. For example, nodes within a zone might share a network switch, but nodes in different zones should not. https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone Controls the pod scheduling strategy of this workload. 
It offers some additional APIs on top of the core pod scheduling. ``` import { WorkloadScheduling } from 'cdk8s-plus-28' new WorkloadScheduling(instance: AbstractPod) ``` ``` public spread(options?: WorkloadSchedulingSpreadOptions) ``` An API Endpoint can either be a resource descriptor (e.g /pods) or a non resource url (e.g /healthz). It must be one or the other, and not both. ``` public asApiResource() ``` ``` public asNonApiResource() ``` Represents a resource or collection of resources. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. Extends: cdk8s-plus-28.IResource Implemented By: cdk8s-plus-28.ClusterRole, cdk8s-plus-28.IClusterRole Represents a cluster-level role. ``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Extends: cdk8s-plus-28.IResource Implemented By: cdk8s-plus-28.ConfigMap, cdk8s-plus-28.IConfigMap Represents a config map. ``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g." }, { "data": "``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Extends: constructs.IConstruct Implemented By: cdk8s-plus-28.Namespace, cdk8s-plus-28.Namespaces, cdk8s-plus-28.INamespaceSelector Represents an object that can select namespaces. ``` public toNamespaceSelectorConfig() ``` ``` public readonly node: Node; ``` The tree node. 
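To see how the scheduling pieces above fit together, here is a minimal sketch that spreads a Deployment's replicas across availability zones. It assumes the workload exposes its WorkloadScheduling through a `scheduling` property and that `Topology.ZONE` is provided as a constant alongside `Topology.custom`; treat it as an illustration to verify against the generated API docs rather than a definitive example for this exact release.

```typescript
import { Construct } from 'constructs';
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

class SpreadChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // A small web workload with three replicas.
    const deployment = new kplus.Deployment(this, 'web', {
      replicas: 3,
      containers: [{ image: 'nginx' }],
    });

    // Ask the scheduler to spread the replicas across zones using the
    // WorkloadScheduling.spread() API described above. `Topology.ZONE`
    // is assumed to exist next to Topology.custom(key).
    deployment.scheduling.spread({ topology: kplus.Topology.ZONE });
  }
}

const app = new App();
new SpreadChart(app, 'spread-example');
app.synth();
```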
Extends: constructs.IConstruct Implemented By: cdk8s-plus-28.AbstractPod, cdk8s-plus-28.CronJob, cdk8s-plus-28.DaemonSet, cdk8s-plus-28.Deployment, cdk8s-plus-28.Job, cdk8s-plus-28.Namespace, cdk8s-plus-28.Namespaces, cdk8s-plus-28.NetworkPolicyIpBlock, cdk8s-plus-28.Pod, cdk8s-plus-28.StatefulSet, cdk8s-plus-28.Workload, cdk8s-plus-28.INetworkPolicyPeer Describes a peer to allow traffic to/from. ``` public toNetworkPolicyPeerConfig() ``` ``` public toPodSelector() ``` ``` public readonly node: Node; ``` The tree node. Extends: cdk8s-plus-28.IResource Implemented By: cdk8s-plus-28.AwsElasticBlockStorePersistentVolume, cdk8s-plus-28.AzureDiskPersistentVolume, cdk8s-plus-28.GCEPersistentDiskPersistentVolume, cdk8s-plus-28.PersistentVolume, cdk8s-plus-28.IPersistentVolume Contract of a PersistentVolumeClaim. ``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Extends: cdk8s-plus-28.IResource Implemented By: cdk8s-plus-28.PersistentVolumeClaim, cdk8s-plus-28.IPersistentVolumeClaim Contract of a PersistentVolumeClaim. ``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Extends: constructs.IConstruct Implemented By: cdk8s-plus-28.AbstractPod, cdk8s-plus-28.CronJob, cdk8s-plus-28.DaemonSet, cdk8s-plus-28.Deployment, cdk8s-plus-28.Job, cdk8s-plus-28.Pod, cdk8s-plus-28.Pods, cdk8s-plus-28.StatefulSet, cdk8s-plus-28.Workload, cdk8s-plus-28.IPodSelector Represents an object that can select pods. ``` public toPodSelectorConfig() ``` ``` public readonly node: Node; ``` The tree node. 
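Because Deployment and Pod both appear in the IPodSelector and INetworkPolicyPeer implementer lists above, they can be passed directly as peers to PodConnections. The sketch below allows traffic from a client pod to a web workload on TCP/80; the `connections` property name and the shape of the options object are assumptions drawn from the classes documented earlier, so check them against the published API.

```typescript
import { Construct } from 'constructs';
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

class ConnectionsChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const web = new kplus.Deployment(this, 'web', {
      containers: [{ image: 'nginx' }],
    });

    const client = new kplus.Pod(this, 'client', {
      containers: [{ image: 'curlimages/curl' }],
    });

    // Allow the client pod to reach the web workload on TCP/80.
    // PodConnections.allowFrom() accepts any INetworkPolicyPeer.
    web.connections.allowFrom(client, {
      ports: [kplus.NetworkPolicyPort.tcp(80)],
    });
  }
}

const app = new App();
new ConnectionsChart(app, 'connections-example');
app.synth();
```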
Extends: constructs.IConstruct, cdk8s-plus-28.IApiResource Implemented By: cdk8s-plus-28.AbstractPod, cdk8s-plus-28.AwsElasticBlockStorePersistentVolume, cdk8s-plus-28.AzureDiskPersistentVolume, cdk8s-plus-28.BasicAuthSecret, cdk8s-plus-28.ClusterRole, cdk8s-plus-28.ClusterRoleBinding, cdk8s-plus-28.ConfigMap, cdk8s-plus-28.CronJob, cdk8s-plus-28.DaemonSet, cdk8s-plus-28.Deployment, cdk8s-plus-28.DockerConfigSecret, cdk8s-plus-28.GCEPersistentDiskPersistentVolume, cdk8s-plus-28.HorizontalPodAutoscaler, cdk8s-plus-28.Ingress, cdk8s-plus-28.Job, cdk8s-plus-28.Namespace, cdk8s-plus-28.NetworkPolicy, cdk8s-plus-28.PersistentVolume, cdk8s-plus-28.PersistentVolumeClaim, cdk8s-plus-28.Pod, cdk8s-plus-28.Resource, cdk8s-plus-28.Role, cdk8s-plus-28.RoleBinding, cdk8s-plus-28.Secret, cdk8s-plus-28.Service, cdk8s-plus-28.ServiceAccount, cdk8s-plus-28.ServiceAccountTokenSecret, cdk8s-plus-28.SshAuthSecret, cdk8s-plus-28.StatefulSet, cdk8s-plus-28.TlsSecret, cdk8s-plus-28.Workload, cdk8s-plus-28.IClusterRole, cdk8s-plus-28.IConfigMap, cdk8s-plus-28.IPersistentVolume, cdk8s-plus-28.IPersistentVolumeClaim, cdk8s-plus-28.IResource, cdk8s-plus-28.IRole, cdk8s-plus-28.ISecret, cdk8s-plus-28.IServiceAccount Represents a resource. ``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Extends: cdk8s-plus-28.IResource Implemented By: cdk8s-plus-28.ClusterRole, cdk8s-plus-28.Role, cdk8s-plus-28.IRole A reference to any Role or" }, { "data": "``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Represents a scalable workload. ``` public markHasAutoscaler() ``` ``` public toScalingTarget() ``` ``` public readonly hasAutoscaler: boolean; ``` If this is a target of an autoscaler. Extends: cdk8s-plus-28.IResource Implemented By: cdk8s-plus-28.BasicAuthSecret, cdk8s-plus-28.DockerConfigSecret, cdk8s-plus-28.Secret, cdk8s-plus-28.ServiceAccountTokenSecret, cdk8s-plus-28.SshAuthSecret, cdk8s-plus-28.TlsSecret, cdk8s-plus-28.ISecret ``` public envValue(key: string, options?: EnvValueFromSecretOptions) ``` Secrets key. 
Additional EnvValue options. ``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Extends: cdk8s-plus-28.IResource, cdk8s-plus-28.ISubject Implemented By: cdk8s-plus-28.ServiceAccount, cdk8s-plus-28.IServiceAccount ``` public readonly node: Node; ``` The tree node. ``` public readonly apiGroup: string; ``` The group portion of the API version (e.g. authorization.k8s.io). ``` public readonly resourceType: string; ``` The name of a resource type as it appears in the relevant API endpoint. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources ``` public readonly resourceName: string; ``` The unique, namespace-global, name of an object inside the Kubernetes cluster. If this is omitted, the ApiResource should represent all objects of the given type. ``` public readonly apiVersion: string; ``` The objects API version (e.g. authorization.k8s.io/v1). ``` public readonly kind: string; ``` The object kind (e.g. Deployment). ``` public readonly name: string; ``` The Kubernetes name of this resource. Extends: constructs.IConstruct Implemented By: cdk8s-plus-28.AwsElasticBlockStorePersistentVolume, cdk8s-plus-28.AzureDiskPersistentVolume, cdk8s-plus-28.GCEPersistentDiskPersistentVolume, cdk8s-plus-28.PersistentVolume, cdk8s-plus-28.Volume, cdk8s-plus-28.IStorage Represents a piece of storage in the cluster. ``` public asVolume() ``` ``` public readonly node: Node; ``` The tree node. Extends: constructs.IConstruct Implemented By: cdk8s-plus-28.AbstractPod, cdk8s-plus-28.CronJob, cdk8s-plus-28.DaemonSet, cdk8s-plus-28.Deployment, cdk8s-plus-28.Group, cdk8s-plus-28.Job, cdk8s-plus-28.Pod, cdk8s-plus-28.ServiceAccount, cdk8s-plus-28.StatefulSet, cdk8s-plus-28.User, cdk8s-plus-28.Workload, cdk8s-plus-28.IServiceAccount, cdk8s-plus-28.ISubject Represents an object that can be used as a role binding subject. ``` public toSubjectConfiguration() ``` ``` public readonly node: Node; ``` The tree node. Azure disk caching modes. None. ReadOnly. ReadWrite. Azure Disk kinds. Multiple blob disks per storage account. Single blob disk per storage account. Azure managed data disk. Capability - complete list of POSIX capabilities. ALL. CAPAUDITCONTROL. CAPAUDITREAD. CAPAUDITWRITE. CAPBLOCKSUSPEND. CAP_BPF. CAPCHECKPOINTRESTORE. CAP_CHOWN. CAPDACOVERRIDE. CAPDACREAD_SEARCH. CAP_FOWNER. CAP_FSETID. CAPIPCLOCK. CAPIPCOWNER. CAP_KILL. CAP_LEASE. CAPLINUXIMMUTABLE. CAPMACADMIN. CAPMACOVERRIDE. CAP_MKNOD. CAPNETADMIN. CAPNETBIND_SERVICE. CAPNETBROADCAST. CAPNETRAW. CAP_PERFMON. CAP_SETGID. CAP_SETFCAP. CAP_SETPCAP. CAP_SETUID. CAPSYSADMIN. CAPSYSBOOT. CAPSYSCHROOT. CAPSYSMODULE. CAPSYSNICE. CAPSYSPACCT. CAPSYSPTRACE. CAPSYSRAWIO. CAPSYSRESOURCE. CAPSYSTIME. CAPSYSTTY_CONFIG. CAP_SYSLOG. CAPWAKEALARM. Concurrency policy for CronJobs. 
This policy allows jobs to run concurrently. This policy does not allow jobs to run concurrently: a new job will not be scheduled if the previous one has not finished yet. This policy replaces the currently running job if a new job is being scheduled. Use HTTP request for connecting to host. Use HTTPS request for connecting to host. RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is Always. For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as Always for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy Always will be shut down. This lifecycle differs from normal init containers and is often referred to as a sidecar container. https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/ If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. For regular containers, this is ignored by Kubernetes. Pod DNS policies. Any DNS query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured. For Pods running with hostNetwork, you should explicitly set their DNS policy to ClusterFirstWithHostNet. The Pod inherits the name resolution configuration from the node that the pods run on. It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the dnsConfig field in the Pod Spec. The medium on which to store the volume. The default volume of the backing node. Mount a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write will count against your Container's memory limit. The name of the pod. The namespace of the pod. The uid of the pod. The labels of the pod. The annotations of the pod. The ipAddress of the pod. The service account name of the pod. The name of the node. The ipAddress of the node. The ipAddresses of the pod. Only change permissions and ownership if permission and ownership of root directory does not match with expected permissions of the volume. This could help shorten the time it takes to change ownership and permission of a volume. Always change permission and ownership of the volume when volume is mounted. Host path types. Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the hostPath volume. If nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with Kubelet. A directory must exist at the given path. If nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, having the same group and ownership with Kubelet. A file must exist at the given path. A UNIX socket must exist at the given path. A character device must exist at the given path. A block device must exist at the given path.
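As a concrete illustration of the EmptyDirMedium options above, the following sketch mounts a RAM-backed scratch volume into a pod. `Volume.fromEmptyDir` and `Container.mount` are the usual cdk8s-plus entry points for this, but their exact signatures are not listed in this reference, so treat them as assumptions to confirm against the published API.

```typescript
import { Construct } from 'constructs';
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

class ScratchChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const pod = new kplus.Pod(this, 'worker', {
      containers: [{ image: 'busybox', command: ['sleep', '3600'] }],
    });

    // A tmpfs-backed scratch space: fast, but cleared on node reboot and
    // counted against the container's memory limit (see EmptyDirMedium above).
    const scratch = kplus.Volume.fromEmptyDir(this, 'scratch', 'scratch', {
      medium: kplus.EmptyDirMedium.MEMORY,
    });

    pod.containers[0].mount('/tmp/scratch', scratch);
  }
}

const app = new App();
new ScratchChart(app, 'emptydir-example');
app.synth();
```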
Specify how the path is matched against request paths. https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types Matches the URL path exactly. Matches based on a URL path prefix split by /. Matching is specified by the underlying IngressClass. Every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image" }, { "data": "If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container. Default is Always if ImagePullPolicy is omitted and either the image tag is :latest or the image tag is omitted. The image is pulled only if it is not already present locally. Default is IfNotPresent if ImagePullPolicy is omitted and the image tag is present but not :latest The image is assumed to exist locally. No attempt is made to pull the image. This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the Container will be visible on the host. This is the default mode. This mode is equal to private mount propagation as described in the Linux kernel documentation This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories. In other words, if the host mounts anything inside the volume mount, the Container will see it mounted there. Similarly, if any Pod with Bidirectional mount propagation to the same volume mounts anything there, the Container with HostToContainer mount propagation will see it. This mode is equal to rslave mount propagation as described in the Linux kernel documentation This volume mount behaves the same the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume A typical use case for this mode is a Pod with a FlexVolume or CSI driver or a Pod that needs to mount something on the host using a hostPath volume. This mode is equal to rshared mount propagation as described in the Linux kernel documentation Caution: Bidirectional mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged Containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by Containers in Pods must be destroyed (unmounted) by the Containers on termination. Default behaviors of network traffic in policies. The policy denies all traffic. Since rules are additive, additional rules or policies can allow specific traffic. The policy allows all traffic (either ingress or egress). Since rules are additive, no additional rule or policies can subsequently deny the traffic. Network protocols. TCP. UDP. SCTP. Access Modes. The volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node. The volume can be mounted as read-only by many nodes. The volume can be mounted as read-write by many nodes. The volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+. Volume Modes. 
Volume is mounted into Pods into a directory. If the volume is backed by a block device and the device is empty, Kubernetes creates a filesystem on the device before mounting it for the first time. Use a volume as a raw block device. Such a volume is presented to a Pod as a block device, without any filesystem on it. This mode is useful to provide a Pod the fastest possible way to access a volume, without any filesystem layer between the Pod and the volume. On the other hand, the application running in the Pod must know how to handle a raw block device. Reclaim policies. The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered released. But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume by deleting the PersistentVolume, cleaning up the data on the associated storage asset, and deleting the asset itself; if you want to reuse the same storage asset, create a new PersistentVolume with the same storage asset definition. For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created. Isolation determines which policies are created when allowing connections from a pod / workload to peers. Only creates network policies that select the pod. Only creates network policies that select the peer. Controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady, where pods are created in increasing order (pod-0, then pod-1, etc.) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once. Network protocols. TCP. UDP. SCTP. CPU limit of the container. Memory limit of the container. CPU request of the container. Memory request of the container. Ephemeral storage limit of the container. Ephemeral storage request of the container. Restart policy for all containers within the pod. Always restart the pod after it exits. Only restart if the pod exits with a non-zero exit code. Never restart the pod. Use the policy that provisions the most changes. Use the policy that provisions the least amount of changes. Disables scaling in this direction. For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster. Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP. Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType. Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created. Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type. Taint effects. This means that no pod will be able to schedule onto the node unless it has a matching toleration. This is a preference or soft version of NO_SCHEDULE: the system will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required. This affects pods that are already running on the node as follows: pods that do not tolerate the taint are evicted immediately.
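Tying the taint effects above back to the NodeTaintQuery, TaintedNode and PodScheduling APIs documented earlier, the sketch below lets a pod tolerate a dedicated=gpu taint so that NO_SCHEDULE and NO_EXECUTE no longer keep it off (or evict it from) those nodes. The constructor and method signatures come from the reference above; the `scheduling` property name and the image are placeholders assumed for illustration.

```typescript
import { Construct } from 'constructs';
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-28';

class TolerateChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const pod = new kplus.Pod(this, 'gpu-job', {
      // Placeholder image; substitute your actual workload.
      containers: [{ image: 'busybox', command: ['sleep', '3600'] }],
    });

    // Select nodes tainted with dedicated=gpu (any effect) and tolerate them,
    // using NodeTaintQuery.is() and TaintedNode from the reference above.
    const gpuNodes = new kplus.TaintedNode([
      kplus.NodeTaintQuery.is('dedicated', 'gpu'),
    ]);
    pod.scheduling.tolerate(gpuNodes);
  }
}

const app = new App();
new TolerateChart(app, 'tolerate-example');
app.synth();
```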
{ "category": "Provisioning", "file_name": ".md", "project_name": "Cloud Custodian", "subcategory": "Automation & Configuration" }
[ { "data": "Introduction AWS Azure GCP Oracle Cloud Infrastructure (OCI) Tencent Cloud Kubernetes Tools Contributing Custodian provides for time based filters, that allow for taking periodic action on a resource, with resource schedule customization based on tag values. A common use is offhours scheduling for asgs and instances. Flexible offhours scheduling with opt-in, opt-out selection, and timezone support. Resume during offhours support. Can be combined with other filters to get a particular set ( resources with tag, vpc, etc). Can be combined with arbitrary actions Can omit a set of dates such as public holidays. We provide an onhour and offhour time filter, each should be used in a different policy, they support the same configuration options: weekends: default true, whether to leave resources off for the weekend weekends-only: default false, whether to turn the resource off only on the weekend default_tz: which timezone to utilize when evaluating time (REQUIRED) fallback-schedule: If a resource doesnt support tagging or doesnt provide a tag you can supply a default schedule that will be used. When the tag is provided this will be ignored. See ScheduleParser Time Specifications. tag: which resource tag name to use for per-resource configuration (schedule and timezone overrides and opt-in/opt-out); default is maid_offhours. opt-out: Determines the behavior for resources which do not have a tag matching the one specified for tag. Values can be either false (the default) where the policy operates on an opt-in basis and resources must have the tag in order to be acted on by the policy, or true where the policy operates on an opt-out basis, and resources without the tag are acted on by the policy. onhour: the default time to start/run resources, specified as 0-23 offhour: the default time to stop/suspend resources, specified as 0-23 skip-days: a list of dates to skip. Dates must use format YYYY-MM-DD skip-days-from: a list of dates to skip stored at a url. expr, format, and url must be passed as parameters. Same syntax as value_from. Can not specify both skip-days-from and skip-days. This example policy overrides most of the defaults for an offhour policy: ``` policies: name: offhours-stop resource: ec2 filters: type: offhour weekends: false default_tz: pt tag: downtime opt-out: true onhour: 8 offhour: 20 ``` Resources can use a special tag to override the default configuration on a per-resource basis. Note that the name of the tag is configurable via the tag option in the policy; the examples below use the default tag name, maid_offhours. The value of the tag must be one of the following: (empty) or on - An empty tag value or a value of on implies night and weekend offhours using the default time zone configured in the policy (tz=est if unspecified) and the default onhour and offhour values configured in the policy. off - If offhours is configured to run in opt-out mode, this tag can be specified to disable offhours on a given instance. If offhours is configured to run in opt-in mode, this tag will have no effect (the resource will still be opted out). a semicolon-separated string composed of one or more of the following components, which override the defaults specified in the policy: tz=<timezone> to evaluate with a resource-specific timezone, where <timezone> is either one of the supported timezone aliases defined in c7n.filters.offhours.Time.TZ_ALIASES (such as pt) or the name of a geographic timezone identifier in , such as Americas/Los_Angeles. 
(Note all timezone aliases are referenced to a locality to ensure taking into account local daylight savings time, if applicable). off=(time spec) and/or on=(time spec) matching time specifications supported by c7n.filters.offhours.ScheduleParser as described in the next section. Each time specification follows the format (days,hours). Multiple time specifications can be combined in square-bracketed lists, i.e. [(days,hours),(days,hours),(days,hours)]. Examples: ``` off=(M-F,19);on=(M-F,7) off=[(M-F,21),(U,18)];on=[(M-F,6),(U,10)];tz=pt ``` Possible values: | field | values | |:--|:--| | days | M, T, W, H, F, S, U | | hours | 0, 1, 2, ..., 22, 23 | Days can be specified in a range (ex. M-F). Turn ec2 instances on and off ``` policies: name: offhours-stop resource: ec2 filters: type: offhour actions: stop name: offhours-start resource: ec2 filters: type: onhour actions: start ``` Here's doing the same with auto scale groups ``` policies: name: asg-offhours-stop resource: asg filters: offhour actions: suspend name: asg-onhours-start resource: asg filters: onhour actions: resume ``` Additional policy examples and resource-type-specific information can be seen in the EC2 Offhours and ASG Offhours use cases. These policies are evaluated hourly; during each run (once an hour), cloud-custodian will act on only the resources tagged for that exact hour. In other words, if a resource has an offhours policy of stopping/suspending at 23:00 Eastern daily and starting/resuming at 06:00 Eastern daily, and you run cloud-custodian once an hour via Lambda, that resource will only be stopped once a day sometime between 23:00 and 23:59, and will only be started once a day sometime between 06:00 and 06:59. If the current hour does not exactly match the hour specified in the policy, nothing will be done at all. As a result of this, if custodian stops an instance or suspends an ASG and you need to start/resume it, you can safely do so manually and custodian won't touch it again until the next day. A number of AWS services have restrictions on the characters that can be used in tag values, such as ElasticBeanstalk and EFS. In particular, these services do not allow parentheses, square brackets, commas, or semicolons, or empty tag values. This proves to be problematic with the tag-based schedule configuration described above. The best current workaround is to define a separate policy with a unique tag name for each unique schedule that you want to use, and then tag resources with that tag name and a value of on. Note that this can only be used in opt-in mode, not opt-out. Another option is to escape the tag value with the following mapping, generated with the char's unicode number u + hex(ord(the_char))[2:]. This works for GCP resources as well. ( and ) as u28 and u29 [ and ] as u5b and u5d , as u2c ; as u3b = as u3d / as u2f - as u2d Examples: ``` offu3du28M-Fu2c18u29u3btzu3dAustraliau2fSydney off=u5bu28M-Fu2c18u29u2cu28Su2c13u29u5d ``` In order to properly implement support for public holidays, make sure to include either skip-days or skip-days-from with your policy. This list should contain all of the public holidays you wish to address and must use YYYY-MM-DD syntax for its dates. If the date the policy is being run on matches any one of those dates, the policy will not return any resources. These dates include year as many holidays vary from year to year so year is required to prevent errors.
A sample policy that would not start stopped instances on a public holiday might look like: ``` policies: name: onhour-morning-start-skip-holidays resource: ec2 filters: type: onhour tag: custodian_downtime default_tz: et onhour: 6 skip-days: ['2017-12-25'] actions: start ``` " } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "Couler", "subcategory": "Automation & Configuration" }
[ { "data": "Write your documentation in Markdown and create a professional static site in minutes searchable, customizable, in 60+ languages, for all devices. Focus on the content of your documentation and create a professional static site in minutes. No need to know HTML, CSS or JavaScript let Material for MkDocs do the heavy lifting for you. Serve your documentation with confidence Material for MkDocs automatically adapts to perfectly fit the available screen estate, no matter the type or size of the viewing device. Desktop. Tablet. Mobile. All great. Make it yours change the colors, fonts, language, icons, logo, and more with a few lines of configuration. Material for MkDocs can be easily extended and provides many options to alter appearance and behavior. Don't let your users wait get incredible value with a small footprint by using one of the fastest themes available with excellent performance, yielding optimal search engine rankings and happy users that return. Own your documentation's complete sources and outputs, guaranteeing both integrity and security no need to entrust the backbone of your product knowledge to third-party platforms. Retain full control. You're in good company choose a mature and actively maintained solution built with state-of-the-art Open Source technologies, trusted by more than 20.000 individuals and organizations. Licensed under MIT. Material for MkDocs makes your documentation instantly searchable with zero effort: say goodbye to costly third-party crawler-based solutions that can take hours to update. Ship your documentation with a highly customizable and blazing fast search running entirely in the user's browser at no extra cost. Even better: search inside code blocks, exclude specific sections or entire pages, boost important pages in the results and build searchable documentation that works offline. Learn more Some examples need more explanation than others, which is why Material for MkDocs offers a unique and elegant way to add rich text almost anywhere in a code block. Code annotations can host formatted text, images, diagrams, code blocks, call-outs, content tabs, even interactive elements basically everything that can be expressed in Markdown or HTML. Of course, code annotations work beautifully on mobile and other touch devices and can be printed. Learn more Make an impact on social media and increase engagement when sharing links to your documentation by leveraging the built-in social plugin. Material for MkDocs makes it effortless to generate a beautiful preview image for each page, which will drive more interested users to your Open Source or commercial project. While the social plugin uses what's already there, i.e. your project's name and logo, as well as each page's title and description, it's easy to customize preview images. Supercharge your technical writing by making better use of the processing power of the visual cortex: Material for MkDocs ships more than 10,000 icons and emojis, which can be used in Markdown and HTML with simple shortcodes and an easy-to-remember syntax. Add color to icons and animate them. Make it pop. Use our dedicated icon search to quickly find the perfect icon for almost every use case and add custom icon sets with minimal configuration. Get started By joining the Insiders program, you'll get immediate access to the latest features while also helping support the ongoing development of Material for MkDocs. Thanks to our awesome sponsors, this project is actively maintained and kept in good shape. 
Together, we can build documentation that simply works! Learn more" } ]
{ "category": "Provisioning", "file_name": ".md", "project_name": "DevStream", "subcategory": "Automation & Configuration" }
[ { "data": "In this quickstart, you will do the following automatically with DevStream: In your working directory, run: ``` sh -c \"$(curl -fsSL https://download.devstream.io/download.sh)\" ``` Note The command above does the following: Optional You can then move dtm to a place which is in your PATH. For example: mv dtm /usr/local/bin/. For more ways to install dtm, see install dtm. Run the following command to generate the template configuration file config.yaml for quickstart. ``` ./dtm show config -t quickstart > config.yaml ``` Then set the following environment variables by running (replace values within the double quotes): ``` export GITHUBUSER=\"<YOURGITHUBUSERNAME_HERE>\" export DOCKERHUBUSERNAME=\"<YOURDOCKERHUBUSERNAMEHERE>\" export IMAGEREPOPASSWORD=\"<YOURDOCKERHUBUSERNAME_HERE>\" export GITHUBTOKEN=\"<YOURGITHUBPERSONALACCESSTOKENHERE>\" ``` Tip Go to Personal Access Token to generate a new GITHUB_TOKEN for dtm. For \"Quick Start\", you only need repo,workflow,delete_repo permissions. Then you should run the following commands to update our config file with those env vars: ``` sed -i.bak \"s@YOURGITHUBUSERNAMECASESENSITIVE@${GITHUB_USER}@g\" config.yaml sed -i.bak \"s@YOURDOCKERUSERNAME@${DOCKERHUB_USERNAME}@g\" config.yaml ``` ``` sed -i \"s@YOURGITHUBUSERNAMECASESENSITIVE@${GITHUB_USER}@g\" config.yaml sed -i \"s@YOURDOCKERUSERNAME@${DOCKERHUB_USERNAME}@g\" config.yaml ``` Run: ``` ./dtm init -f config.yaml ``` Run: ``` ./dtm apply -f config.yaml -y ``` Go to your GitHub repositories list and you can see the new repo go-webapp-devstream-demo has been created. There is scaffolding code for a Golang web app in it, with GitHub Actions CI workflow set up properly. The commits (made by DevStream when scaffolding the repo and creating workflows) have triggered the CI, and the workflow has finished successfully, as shown in the screenshot below: Run: ``` ./dtm delete -f config.yaml ``` Input y then press enter to continue, and you should see similar output: Output ``` 2022-12-12 12:29:00 [INFO] Delete started. 2022-12-12 12:29:00 [INFO] Using local backend. State file: devstream.state. 2022-12-12 12:29:00 [INFO] Tool (github-actions/default) will be deleted. 2022-12-12 12:29:00 [INFO] Tool (repo-scaffolding/golang-github) will be deleted. Continue? [y/n] Enter a value (Default is n): y 2022-12-12 12:29:00 [INFO] Start executing the plan. 2022-12-12 12:29:00 [INFO] Changes count: 2. 2022-12-12 12:29:00 [INFO] -- [ Processing progress: 1/2. ] -- 2022-12-12 12:29:00 [INFO] Processing: (github-actions/default) -> Delete ... 2022-12-12 12:29:02 [INFO] Prepare to delete 'github-actions_default' from States. 2022-12-12 12:29:02 [SUCCESS] Tool (github-actions/default) delete done. 2022-12-12 12:29:02 [INFO] -- [ Processing progress: 2/2. ] -- 2022-12-12 12:29:02 [INFO] Processing: (repo-scaffolding/golang-github) -> Delete ... 2022-12-12 12:29:03 [SUCCESS] GitHub repo go-webapp-devstream-demo removed. 2022-12-12 12:29:03 [INFO] Prepare to delete 'repo-scaffolding_golang-github' from States. 2022-12-12 12:29:03 [SUCCESS] Tool (repo-scaffolding/golang-github) delete done. 2022-12-12 12:29:03 [INFO] -- [ Processing done. ] -- 2022-12-12 12:29:03 [SUCCESS] All plugins deleted successfully. 2022-12-12 12:29:03 [SUCCESS] Delete finished. ``` Now if you check your GitHub repo list again, everything has been nuked by DevStream. Hooray! You can also remove the DevStream state file (which should be empty now) by running: rm devstream.state." } ]
{ "category": "Provisioning", "file_name": "v2.html.md", "project_name": "Foreman", "subcategory": "Automation & Configuration" }
[ { "data": "Foreman API v2 is currently the default API" }, { "data": "" }, { "data": "" }, { "data": "| Resource | Description | |:|:--| | GET /api/architectures | List all architectures | | GET /api/operatingsystems/:operatingsystem_id/architectures | List all architectures for operating system | | GET /api/architectures/:id | Show an architecture | | POST /api/architectures | Create an architecture | | PUT /api/architectures/:id | Update an architecture | | DELETE /api/architectures/:id | Delete an architecture | | Resource | Description | |:-|:| | GET /api/audits | List all audits | | GET /api/hosts/:host_id/audits | List all audits for a given host | | GET /api/audits/:id | Show an audit | | Resource | Description | |:--|:| | GET /api/authsourceexternals | List external authentication sources | | GET /api/locations/:locationid/authsource_externals | List external authentication sources per location | | GET /api/organizations/:organizationid/authsource_externals | List external authentication sources per organization | | GET /api/authsourceexternals/:id | Show an external authentication source | | PUT /api/authsourceexternals/:id | Update an external authentication source | | Resource | Description | |:--|:| | GET /api/authsourceinternals | List internal authentication sources | | GET /api/authsourceinternals/:id | Show an internal authentication source | | Resource | Description | |:-|:--| | GET /api/authsourceldaps | List all LDAP authentication sources | | GET /api/locations/:locationid/authsource_ldaps | List LDAP authentication sources per location | | GET /api/organizations/:organizationid/authsource_ldaps | List LDAP authentication sources per organization | | GET /api/authsourceldaps/:id | Show an LDAP authentication source | | POST /api/authsourceldaps | Create an LDAP authentication source | | PUT /api/authsourceldaps/:id | Update an LDAP authentication source | | PUT /api/authsourceldaps/:id/test | Test LDAP connection | | DELETE /api/authsourceldaps/:id | Delete an LDAP authentication source | | Resource | Description | |:--|:-| | GET /api/auth_sources | List all authentication sources | | GET /api/locations/:locationid/authsources | List all authentication sources per location | | GET /api/organizations/:organizationid/authsources | List all authentication sources per organization | | Resource | Description | |:-|:--| | GET /api/smartproxies/:smartproxy_id/autosign | List all autosign entries | | POST /api/smartproxies/:smartproxy_id/autosign | Create autosign entry | | DELETE /api/smartproxies/:smartproxy_id/autosign/:id | Delete autosign entry | | Resource | Description | |:--|:-| | GET /api/bookmarks | List all bookmarks | | GET /api/bookmarks/:id | Show a bookmark | | POST /api/bookmarks | Create a bookmark | | PUT /api/bookmarks/:id | Update a bookmark | | DELETE /api/bookmarks/:id | Delete a bookmark | | Resource | Description | |:-|:| | GET /api/common_parameters | List all global parameters | | GET /api/common_parameters/:id | Show a global parameter | | POST /api/common_parameters | Create a global parameter | | PUT /api/common_parameters/:id | Update a global parameter | | DELETE /api/common_parameters/:id | Delete a global parameter | | Resource | Description | |:|:--| | GET /api/computeresources/:computeresourceid/computeprofiles/:computeprofileid/compute_attributes | List of compute attributes for provided compute profile and compute resource | | GET /api/computeprofiles/:computeprofileid/computeresources/:computeresourceid/compute_attributes | List of 
compute attributes for provided compute profile and compute resource | | GET /api/computeresources/:computeresourceid/computeattributes | List of compute attributes for compute resource | | GET /api/computeprofiles/:computeprofileid/computeattributes | List of compute attributes for compute profile | | GET /api/compute_attributes/:id | List of compute attributes | | GET /api/computeresources/:computeresourceid/computeprofiles/:computeprofileid/compute_attributes/:id | Show a compute attributes set | | GET /api/computeprofiles/:computeprofileid/computeresources/:computeresourceid/compute_attributes/:id | Show a compute attributes set | | GET /api/computeresources/:computeresourceid/computeattributes/:id | Show a compute attributes set | | GET /api/computeprofiles/:computeprofileid/computeattributes/:id | Show a compute attributes set | | GET /api/compute_attributes/:id | Show a compute attributes set | | POST /api/computeresources/:computeresourceid/computeprofiles/:computeprofileid/compute_attributes | Create a compute attributes set | | POST /api/computeprofiles/:computeprofileid/computeresources/:computeresourceid/compute_attributes | Create a compute attributes set | | POST /api/computeresources/:computeresourceid/computeattributes | Create a compute attributes set | | POST /api/computeprofiles/:computeprofileid/computeattributes | Create a compute attributes set | | POST /api/compute_attributes | Create a compute attributes set | | PUT /api/computeresources/:computeresourceid/computeprofiles/:computeprofileid/compute_attributes/:id | Update a compute attributes set | | PUT /api/computeprofiles/:computeprofileid/computeresources/:computeresourceid/compute_attributes/:id | Update a compute attributes set | | PUT /api/computeresources/:computeresourceid/computeattributes/:id | Update a compute attributes set | | PUT /api/computeprofiles/:computeprofileid/computeattributes/:id | Update a compute attributes set | | PUT /api/compute_attributes/:id | Update a compute attributes set | | Resource | Description | |:|:-| | GET /api/compute_profiles | List of compute profiles | | GET /api/compute_profiles/:id | Show a compute profile | | POST /api/compute_profiles | Create a compute profile | | PUT /api/compute_profiles/:id | Update a compute profile | | DELETE /api/compute_profiles/:id | Delete a compute profile | | Resource | Description | |:-|:--| | GET /api/compute_resources | List all compute resources | | GET /api/compute_resources/:id | Show a compute resource | | POST /api/compute_resources | Create a compute resource | | PUT /api/compute_resources/:id | Update a compute resource | | DELETE /api/compute_resources/:id | Delete a compute resource | | GET /api/computeresources/:id/availableimages | List available images for a compute resource | | GET /api/computeresources/:id/availableclusters | List available clusters for a compute resource | | GET /api/computeresources/:id/availableflavors | List available flavors for a compute resource | | GET /api/computeresources/:id/availablefolders | List available folders for a compute resource | | GET /api/computeresources/:id/availablezones | List available zone for a compute resource | | GET /api/computeresources/:id/availablenetworks | List available networks for a compute resource | | GET /api/computeresources/:id/availableclusters/:clusterid/availablenetworks | List available networks for a compute resource cluster | | GET /api/computeresources/:id/availablevnic_profiles | List available vnic profiles for a compute resource, for oVirt only | | 
GET /api/computeresources/:id/availableclusters/:clusterid/availableresource_pools | List resource pools for a compute resource cluster | | GET /api/computeresources/:id/storagedomains/:storagedomainid | List attributes for a given storage domain | | GET /api/computeresources/:id/availablestorage_domains | List storage domains for a compute resource | | GET /api/computeresources/:id/availablestoragedomains/:storagedomain | List attributes for a given storage domain | | GET /api/computeresources/:id/availableclusters/:clusterid/availablestorage_domains | List storage domains for a compute resource | | GET /api/computeresources/:id/storagepods/:storagepodid | List attributes for a given storage pod | | GET /api/computeresources/:id/availablestorage_pods | List storage pods for a compute resource | | GET /api/computeresources/:id/availablestoragepods/:storagepod | List attributes for a given storage pod | | GET /api/computeresources/:id/availableclusters/:clusterid/availablestorage_pods | List storage pods for a compute resource | | GET /api/computeresources/:id/availablesecurity_groups | List available security groups for a compute resource | | PUT /api/computeresources/:id/associate/:vmid | Associate VMs to Hosts | | PUT /api/computeresources/:id/refreshcache | Refresh Compute Resource Cache | | GET /api/computeresources/:id/availablevirtual_machines | List available virtual machines for a compute resource | | GET /api/computeresources/:id/availablevirtualmachines/:vmid | Show a virtual machine | | PUT /api/computeresources/:id/availablevirtualmachines/:vmid/power | Power a Virtual Machine | | DELETE /api/computeresources/:id/availablevirtualmachines/:vmid | Delete a Virtual Machine | | Resource | Description | |:--|:--| | GET /api/config_reports | List all reports | | GET /api/config_reports/:id | Show a report | | POST /api/config_reports | Create a report | | DELETE /api/config_reports/:id | Delete a report | | GET /api/hosts/:hostid/configreports/last | Show the last report for a host | | Resource | Description | |:-|:-| | GET /api/dashboard | Get dashboard details | | Resource | Description | |:|:| | GET /api/domains | List of domains | | GET /api/subnets/:subnet_id/domains | List of domains per subnet | | GET /api/locations/:location_id/domains | List of domains per location | | GET /api/organizations/:organization_id/domains | List of domains per organization | | GET /api/domains/:id | Show a domain | | POST /api/domains | Create a domain | | PUT /api/domains/:id | Update a domain | | DELETE /api/domains/:id | Delete a domain | | Resource | Description | |:|:-| | GET /api/usergroups/:usergroupid/externalusergroups | List all external user groups for user group | | GET /api/authsourceldaps/:authsourceldapid/externalusergroups | List all external user groups for LDAP authentication source | | GET /api/usergroups/:usergroupid/externalusergroups/:id | Show an external user group for user group | | GET /api/authsourceldaps/:authsourceldapid/externalusergroups/:id | Show an external user group for LDAP authentication source | | POST /api/usergroups/:usergroupid/externalusergroups | Create an external user group linked to a user group | | PUT /api/usergroups/:usergroupid/externalusergroups/:id | Update external user group | | PUT /api/usergroups/:usergroupid/externalusergroups/:id/refresh | Refresh external user group | | DELETE /api/usergroups/:usergroupid/externalusergroups/:id | Delete an external user group | | Resource | Description | |:|:-| | GET /api/fact_values | List all fact 
values | | GET /api/hosts/:host_id/facts | List all fact values of a given host | | Resource | Description | |:|:--| | GET /api/filters | List all filters | | GET /api/filters/:id | Show a filter | | POST /api/filters | Create a filter | | PUT /api/filters/:id | Update a filter | | DELETE /api/filters/:id | Delete a filter | | Resource | Description | |:-|:-| | GET /api | Show available API links | | GET /api/status | Show status | | Resource | Description | |:--|:-| | GET /api/host_statuses | List of host statuses | | Resource | Description | |:|:--| | GET /api/hostgroups | List all host groups | | GET /api/locations/:location_id/hostgroups | List all host groups per location | | GET /api/organizations/:organization_id/hostgroups | List all host groups per organization | | GET /api/hostgroups/:id | Show a host group | | POST /api/hostgroups | Create a host group | | PUT /api/hostgroups/:id | Update a host group | | DELETE /api/hostgroups/:id | Delete a host group | | POST /api/hostgroups/:id/clone | Clone a host group | | PUT /api/hostgroups/:id/rebuild_config | Rebuild orchestration config | | Resource | Description | |:-|:--| | GET /api/hosts | List all hosts | | GET /api/hostgroups/:hostgroup_id/hosts | List all hosts for a host group | | GET /api/locations/:location_id/hosts | List hosts per location | | GET /api/organizations/:organization_id/hosts | List hosts per organization | | GET /api/hosts/:id | Show a host | | POST /api/hosts | Create a host | | PUT /api/hosts/:id | Update a host | | DELETE /api/hosts/:id | Delete a host | | GET /api/hosts/:id/enc | Get ENC values of host | | GET /api/hosts/:id/status/:type | Get status of host | | DELETE /api/hosts/:id/status/:type | Clear sub-status of host | | GET /api/hosts/:id/vmcomputeattributes | Get vm attributes of host | | PUT /api/hosts/:id/disassociate | Disassociate the host from a VM | | PUT /api/hosts/:id/power | Run a power operation on host | | GET /api/hosts/:id/power | Fetch the status of whether the host is powered on or not. 
Supported hosts are VMs and physical hosts with" }, { "data": "" }, { "data": "" }, { "data": "" }, { "data": "| | PUT /api/hosts/:id/boot | Boot host from specified device | | POST /api/hosts/facts | Upload facts for a host, creating the host if required | | PUT /api/hosts/:id/rebuild_config | Rebuild orchestration config | | GET /api/hosts/:id/template/:kind | Preview rendered provisioning template content | | GET /api/hosts/:id/templates | Get provisioning templates for the host | | Resource | Description | |:--|:| | GET /api/http_proxies | List of HTTP Proxies | | GET /api/http_proxies/:id | Show an HTTP Proxy | | POST /api/http_proxies | Create an HTTP Proxy | | PUT /api/http_proxies/:id | Update an HTTP Proxy | | DELETE /api/http_proxies/:id | Delete an HTTP Proxy | | Resource | Description | |:--|:| | GET /api/computeresources/:computeresource_id/images | List all images for a compute resource | | GET /api/operatingsystems/:operatingsystem_id/images | List all images for operating system | | GET /api/architectures/:architecture_id/images | List all images for architecture | | GET /api/computeresources/:computeresource_id/images/:id | Show an image | | GET /api/operatingsystems/:operatingsystem_id/images/:id | Show an image | | GET /api/architectures/:architecture_id/images/:id | Show an image | | POST /api/computeresources/:computeresource_id/images | Create an image | | PUT /api/computeresources/:computeresource_id/images/:id | Update an image | | DELETE /api/computeresources/:computeresource_id/images/:id | Delete an image | | Resource | Description | |:-|:| | PUT /api/instancehosts/:hostid | Assign a host to the Foreman instance | | GET /api/instance_hosts | List hosts forming the Foreman instance | | DESTROY /api/instancehosts/:hostid | Unassign a given host from the Foreman instance | | Resource | Description | |:|:-| | GET /api/hosts/:host_id/interfaces | List all interfaces for host | | GET /api/domains/:domain_id/interfaces | List all interfaces for domain | | GET /api/subnets/:subnet_id/interfaces | List all interfaces for subnet | | GET /api/hosts/:host_id/interfaces/:id | Show an interface for host | | POST /api/hosts/:host_id/interfaces | Create an interface on a host | | PUT /api/hosts/:host_id/interfaces/:id | Update a host's interface | | DELETE /api/hosts/:host_id/interfaces/:id | Delete a host's interface | | Resource | Description | |:--|:-| | GET /api/locations | List all locations | | GET /api/locations/:id | Show a location | | POST /api/locations | Create a location | | PUT /api/locations/:id | Update a location | | DELETE /api/locations/:id | Delete a location | | Resource | Description | |:--|:-| | GET /api/mail_notifications | List of email notifications | | GET /api/mail_notifications/:id | Show an email notification | | POST /api/users/:userid/mailnotifications | Add an email notification for a user | | PUT /api/users/:userid/mailnotifications/:mailnotificationid | Update an email notification for a user | | DELETE /api/users/:userid/mailnotifications/:mailnotificationid | Remove an email notification for a user | | GET /api/users/:userid/mailnotifications | List all email notifications for a user | | Resource | Description | |:-|:| | GET /api/media | List all installation media | | GET /api/operatingsystems/:operatingsystem_id/media | List all media for an operating system | | GET /api/locations/:location_id/media | List all media per location | | GET /api/organizations/:organization_id/media | List all media per organization | | GET /api/media/:id | Show 
a medium | | POST /api/media | Create a medium | | PUT /api/media/:id | Update a medium | | DELETE /api/media/:id | Delete a medium | | Resource | Description | |:--|:-| | GET /api/models | List all hardware models | | GET /api/models/:id | Show a hardware model | | POST /api/models | Create a hardware model | | PUT /api/models/:id | Update a hardware model | | DELETE /api/models/:id | Delete a hardware model | | Resource | Description | |:|:| | GET /api/operatingsystems | List all operating systems | | GET /api/architectures/:architecture_id/operatingsystems | List all operating systems for nested architecture | | GET /api/media/:medium_id/operatingsystems | List all operating systems for nested medium | | GET /api/ptables/:ptable_id/operatingsystems" } ]
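A minimal sketch of calling two of the endpoints listed above with curl. The server URL and credentials are placeholders, and the exact parameter nesting for the create call should be checked against the full API reference:
```bash
# List all installation media (GET /api/media)
curl -s -u admin:changeme -H "Accept: application/json" \
  https://foreman.example.com/api/media

# Create a hardware model (POST /api/models); the JSON wrapping shown here
# follows the usual pattern for this API but is illustrative only.
curl -s -u admin:changeme -H "Content-Type: application/json" \
  -X POST -d '{"model": {"name": "Example-Model-1"}}' \
  https://foreman.example.com/api/models
```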
{ "category": "Provisioning", "file_name": "index.html.md", "project_name": "Idem Project", "subcategory": "Automation & Configuration" }
[ { "data": "Getting Started Guide Links Idem is an open source, Apache licensed project from VMware that represents a new way to manage complex cloud environments. Idem works from plain data that describes the infrastructure that you want, which saves you the trouble of scripting and maintaining infrastructure as code. This guide gets you up and running with Idem. For more details, see the reference documentation included in the Idem open source codebase repositories." } ]
{ "category": "Provisioning", "file_name": "amazon-ec2.md", "project_name": "Juju", "subcategory": "Automation & Configuration" }
[ { "data": "This is the documentation for Juju 3.5. To find out whats new, see Roadmap & Releases. To upgrade, see How to upgrade your deployment. Welcome to Juju, your entrypoint into the Juju universe! Juju is an open source orchestration engine for software operators that enables the deployment, integration and lifecycle management of applications at any scale, on any infrastructure, using special software operators called charms. Juju provides a model-driven way to install, provision, maintain, update, upgrade, and integrate applications on and across Kubernetes containers, Linux containers, virtual machines, and bare metal machines, on public or private cloud. As such, Juju makes it simple, intuitive, and efficient to manage the full lifecycle of complex applications in hybrid cloud. For system operators and DevOps who manage applications in the cloud, Juju simplifies code; for CIOs, it helps align code with business decisions. For a collection of existing charms, see Charmhub. To build your own charm, see the Charm SDK docs. | Unnamed: 0 | Unnamed: 1 | |:|:| | Tutorial Get started - a hands-on introduction to Juju for new users | How-to guides Step-by-step guides covering key operations and common tasks | | Explanation Discussion and clarification of key topics | Reference Technical information - specifications, APIs, architecture | Juju is an open source project that warmly welcomes community projects, contributions, suggestions, fixes and constructive feedback. Last updated 10 days ago. Help improve this document in the forum. 2024 Canonical Ltd. Manage your tracker settings Legal Information Ubuntu and Canonical are registered trademarks. All other trademarks are the property of their respective owners." } ]
{ "category": "Provisioning", "file_name": "#docs-link-menu.md", "project_name": "Juju", "subcategory": "Automation & Configuration" }
[ { "data": "Juju as a service (JAAS) provides a single location to interact, manage and audit your charmed applications using a dashboard or Juju CLI commands. Get started with JAAS Jaas simplifies the management of large scale Juju deployments and it is the ideal tool to use if you: to maintain, across multiple clouds or machine types and you would like to address them all from a central location to satisfy security or regulatory requirements, as Jaas is able to centrally enforce permission and auditing policies to control your entire deployment, so that admins can execute routine management actions at the simple press of a button You can deploy JAAS on your preferred infrastructure, leaving you complete privacy and control over what is done where. JAAS has two main components: the Juju Infinite Model Manager is a single point of contact for multiple Juju controllers a graphical user interface to simplify common administrative operations JAAS is your centralised enterprise control plane for Juju deployments. With JAAS you can: Drill down to view the details of everything that is deployed inside a model, such as applications, integrations, units, and more. Execute Juju action from the UI and view the resulting logs to confirm their status. Perform common administrative operations and apply machine configurations. Onboard controllers and add, remove or manage user access to models and controllers. Access the logs from your deployment in a single, centralised location. Perform complex searches/filters through your entire deployment and share the result through a unique URL. Jaas helps you to manage your distributed applications across any infrastructure. Thanks to Juju you can automate your systems lifecycle management across public clouds, Kubernetes, virtual (VM) and bare metal machines. You can deploy JAAS on your infrastructure with an Ubuntu Pro subscription. Ubuntu Pro is a comprehensive subscription from Canonical which includes: Get JAAS with an Ubuntu Pro subscription Juju documentation Charm SDK documentation What is a software operator? Software operators explained Charmhub chat Discourse forum Talk to a Canonical expert> In submitting this form, I confirm that I have read and agree to Canonical's Privacy Notice and Privacy Policy. 2024 Canonical Ltd. Manage your tracker settings Legal Information Ubuntu and Canonical are registered trademarks. All other trademarks are the property of their respective owners." } ]
{ "category": "Provisioning", "file_name": "reference.md", "project_name": "Juju", "subcategory": "Automation & Configuration" }
[ { "data": "Welcome to Juju Reference docs our cast of characters (tools, concepts, entities, and processes) for the Juju story! When you install a Juju client, for example the juju CLI client, and give Juju access to your cloud (Kubernetes or otherwise), your Juju client bootstraps a controller into the cloud. From that point onward you are officially a Juju user with a superuser access level and therefore able to use Juju and charms or bundles from our large collection on Charmhub to manage applications on that cloud. In fact, you can also go ahead and add another cloud definition to your controller, for any cloud in our long list of supported clouds. On any of the clouds, you can use the controller to set up a model, and then use Juju for all your application management needs from application deployment to configuration to constraints to scaling to high-availability to integration (within and between models and their clouds!) to actions to secrets to upgrading to teardown. You dont have to worry about the infrastructure the Juju controller agent takes care of all of that automatically for you. But, if you care, Juju also lets you manually control availability zones, machines, subnets, spaces, secret backends, storage. Last updated a month ago. Help improve this document in the forum. 2024 Canonical Ltd. Manage your tracker settings Legal Information Ubuntu and Canonical are registered trademarks. All other trademarks are the property of their respective owners." } ]
{ "category": "Provisioning", "file_name": "google-gke.md", "project_name": "Juju", "subcategory": "Automation & Configuration" }
[ { "data": "This documentation is aimed at Juju developers or Juju users who would like to see whats under the hood. It is not intended to stand on its own but merely to supplement the Juju documentation and the Charm SDK documentation. Note also that many of our Juju developer docs are still just on GitHub. | Unnamed: 0 | Unnamed: 1 | |-:|:| | nan | How-to guides Step-by-step guides covering key operations and common tasks | | nan | Reference Technical information - specifications, APIs, architecture | Juju is an open source project that warmly welcomes community projects, contributions, suggestions, fixes and constructive feedback. Last updated 2 months ago. Help improve this document in the forum. 2024 Canonical Ltd. Manage your tracker settings Legal Information Ubuntu and Canonical are registered trademarks. All other trademarks are the property of their respective owners." } ]
{ "category": "Provisioning", "file_name": "olm.md", "project_name": "Juju", "subcategory": "Automation & Configuration" }
[ { "data": "List of supported clouds > Microsoft Azure This document describes details specific to using your existing Microsoft Azure cloud with Juju. See more: Microsoft Azure When using the Microsoft Azure cloud with Juju, it is important to keep in mind that it is a (1) machine cloud and (2) not some other cloud. See more: Cloud differences in Juju As the differences related to (1) are already documented generically in our Tutorial, How-to guides, and Reference docs, here we record just those that follow from (2). | Juju points of variation | Notes for the Microsoft Azure cloud | |:|:--| | setup (chronological order): | nan | | CLOUD | nan | | supported versions: | nan | | requirements: | If youre in a locked-down environment: Permissions: - Microsoft.Compute/skus (read) - Microsoft.Resources/subscriptions/resourceGroups (read, write, delete) - Microsoft.Resources/deployments/ (write/read/delete/cancel/validate) - Microsoft.Network/networkSecurityGroups (write, read, delete, other - join) - Microsoft.Network/virtualNetworks/ (write, read, delete) - Microsoft.Compute/virtualMachineScaleSets/ (write, read, delete, other - start action, other - deallocate action, other - restart action, other powerOff action) - Microsoft.Network/virtualNetworks/subnets/ (read, write, delete, other - join) - Microsoft.Compute/availabilitySets (write, read, delete) - Microsoft.Network/publicIPAddresses (write, read, delete, other - join - optional for public services) - Microsoft.Network/networkInterfaces (write, read, delete, other - join) - Microsoft.Compute/virtualMachines (write, read, delete, other - start, power off, restart, deallocate) - Microsoft.Compute/disks (write, read, delete) | | definition: | Juju automatically defines a cloud of this type. | | - name: | azure or user-defined | | - type: | azure | | - authentication types: | [interactive, service-principal-secret] | | - regions: | [TO BE ADDED] | | - cloud-specific model configuration keys: | load-balancer-sku-name (string) Mirrors the LoadBalancerSkuName type in the Azure SDK. network (string) If set, uses the specified virtual network for all model machines instead of creating one. resource-group-name (string) If set, uses the specified resource group for all model artefacts instead of creating one based on the model UUID. | | CREDENTIAL | nan | | definition: | auth-type: interactive (recommended), service-principal-secret. Depending on which one you choose, you will have to provide one or more of the following: your subscription id, application name, application id, tenant id, application password. If your credential stops working: Credentials for the azure cloud have been reported to occasionally stop working over time. If this happens, try juju update-credential (passing as an argument the same credential) or juju add-credential (passing as an argument a new credential) + juju" }, { "data": "| | CONTROLLER | nan | | notes on bootstrap: | | | nan | nan | | nan | nan | | other (alphabetical order:) | nan | | CONSTRAINT | nan | | conflicting: | [instance-type] vs [arch, cores, mem] | | supported? | nan | | - allocate-public-ip | nan | | - arch | Valid values: amd64. | | - container | nan | | - cores | nan | | - cpu-power | nan | | - image-id | nan | | - instance-role | nan | | - instance-type | Valid values: See cloud provider. | | - mem | nan | | - root-disk | nan | | - root-disk-source | Represents the juju storage pool for the root disk. By specifying a storage pool, the root disk can be configured to use encryption. 
| | - spaces | nan | | - tags | nan | | - virt-type | nan | | - zones | nan | | PLACEMENT DIRECTIVE | nan | | <machine> | TBA | | subnet=... | nan | | system-id=... | nan | | zone=... | TBA | | MACHINE | nan | | RESOURCE (cloud) | Consistent naming, tagging, and the ability to add user-controlled tags to created instances. | If your credential stops working: Credentials for the azure cloud have been reported to occasionally stop working over time. If this happens, try juju update-credential (passing as an argument the same credential) or juju add-credential (passing as an argument a new credential) + juju default-credential. Contributors: @kylerhornor" } ]
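Putting a few of the rows above together, a minimal sketch of getting started on Azure; the controller, model, resource group, network, and instance type names are placeholders, and the exact flags should be checked against the Juju reference:
```bash
juju add-credential azure        # interactive authentication is recommended
juju bootstrap azure azure-ctl \
  --bootstrap-constraints "instance-type=Standard_D2s_v3"
# Cloud-specific model configuration keys from the table above:
juju add-model workloads \
  --config resource-group-name=my-existing-rg \
  --config network=my-existing-vnet
```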
{ "category": "Provisioning", "file_name": "get-started-with-juju.md", "project_name": "Juju", "subcategory": "Automation & Configuration" }
[ { "data": "Imagine your business needs a chat service such as Mattermost backed up by a database such as PostgreSQL. In a traditional setup, this can be quite a challenge, but with Juju youll find yourself deploying, configuring, scaling, integrating, etc., applications in no time. Lets get started! The tutorial will take about 1h to complete. If youd like a quicker start: At any point, to ask for help or give feedback or contribute: Get in touch: Project and community. What youll need: What youll do: Tempted to skip this step? We strongly recommend that you do not! As you will see in a minute, the VM you set up in this step does not just provide you with an isolated test environment but also with almost everything else youll need in the rest of this tutorial (and the non-VM alternative may not yield exactly the same results). On your machine, install Multipass and use it to set up an Ubuntu virtual machine (VM) called my-juju-vm from the charm-dev blueprint. See more: Set up your test environment automatically > steps 1-2 Note: This document also contains a manual path, using which you can set things up without the Multipass VM or the charm-dev blueprint. However, please note that the manual path may yield slightly different results that may impact your experience of this tutorial. For best results we strongly recommend the automatic path, or else suggest that you follow the manual path in a way that stays very close to the definition of the charm-dev blueprint. Follow the instructions for the juju client. In addition to that, on your local workstation, create a directory called terraform-juju, then use Multipass to mount it to your Multipass VM. For example, on Linux: ``` user@ubuntu:~$ mkdir terraform-juju user@ubuntu:~$ cd terraform-juju/ user@ubuntu:~$ multipass mount ~/terraform-juju my-juju-vm:~/terraform-juju ``` This setup will enable you to create and edit Terraform files in your local editor while running them inside your VM. In this tutorial your goal is to set up a chat service on a cloud. First, decide which cloud (i.e., anything that provides storage, compute, and networking) you want to use. Juju supports a long list of clouds; in this tutorial we will use a low-ops, minimal production Kubernetes called MicroK8s. In a terminal, open a shell into your VM and verify that you already have MicroK8s installed (microk8s version). See more: Cloud, List of supported clouds, The MicroK8s cloud and Juju, Set up your test environment automatically > steps 3-4 Next, decide which charms (i.e., software operators) you want to use. Charmhub provides a large collection. For this tutorial we will use mattermost-k8s for the chat service, postgresql-k8s for its backing database, and self-signed-certificates to TLS-encrypt traffic from PostgreSQL. See more: Charm, Charmhub, Charmhub | mattermost-k8s, postgresql-k8s, self-signed-certificates Learn more about your MicroK8s cloud. 1a. Find out more about its snap: snap info microk8s. 1b. Find out the installed version: microk8s version. 1c. Check its enabled addons: microk8s status. 1d. Inspect its .kube/config file: cat ~/.kube/config. 1e. Try microk8s kubectl; you wont need it once you have Juju, but its there anyway. You will need to install a Juju client; on the client, add your cloud and cloud credentials; on the cloud, bootstrap a controller" }, { "data": "control plane); on the controller, add a model (i.e., canvas to deploy things on; namespace); on the model, deploy, configure, and integrate the charms that make up your chat service. 
The blueprint used to launch your VM has ensured that most of these things are already in place for you verify that you have a Juju client, that it knows about your MicroK8s cloud and cloud credentials, that the MicroK8s cloud already has a controller bootstrapped on it, and that the Microk8s controller already has a model on it. Just for practice, bootstrap a new controller and model with more informative names a controller called 31microk8s (reflecting the version of Juju that came with your VM and the cloud that the controller lives on) and a model called chat (reflecting the fact that we intend to use it for applications related to a chat service). Finally, go ahead and deploy, configure, and integrate your charms. Sample session (yours should look very similar): Split your terminal window into three. In all, access your Multipass VM shell (multipass shell my-juju-vm) and then: Shell 1: Keep using it as youve already been doing so far, namely to type the commands in this tutorial. Shell 2: Run juju status --relations --watch 1s to watch your deployment status evolve. (Things are all right if your App Status and your Unit - Workload reach active and your Unit - Agent reaches idle. See more: Status.) Shell 3: Run juju debug-log to watch all the details behind your deployment status. (Especially useful when things dont evolve as expected. In that case, please get in touch.) ``` ubuntu@my-juju-vm:~$ juju version 3.1.8-genericlinux-amd64 ubuntu@my-juju-vm:~$ juju clouds Only clouds with registered credentials are shown. There are more clouds, use --all to see them. Clouds available on the controller: Cloud Regions Default Type microk8s 1 localhost k8s Clouds available on the client: Cloud Regions Default Type Credentials Source Description localhost 1 localhost lxd 1 built-in LXD Container Hypervisor microk8s 1 localhost k8s 1 built-in A Kubernetes Cluster ubuntu@my-juju-vm:~$ juju credentials Controller Credentials: Cloud Credentials microk8s microk8s Client Credentials: Cloud Credentials localhost localhost* microk8s microk8s* ubuntu@my-juju-vm:~$ juju controllers Use --refresh option with this command to see the latest information. Controller Model User Access Cloud/Region Models Nodes HA Version lxd welcome-lxd admin superuser localhost/localhost 2 1 none 3.1.8 microk8s* welcome-k8s admin superuser microk8s/localhost 2 1 - 3.1.8 ubuntu@my-juju-vm:~$ ubuntu@my-juju-vm:~$ juju bootstrap microk8s 31microk8s Creating Juju controller \"31microk8s\" on microk8s/localhost Bootstrap to Kubernetes cluster identified as microk8s/localhost Creating k8s resources for controller \"controller-31microk8s\" Starting controller pod Bootstrap agent now started Contacting Juju controller at 10.152.183.71 to verify accessibility... Bootstrap complete, controller \"31microk8s\" is now available in namespace \"controller-31microk8s\" Now you can run juju add-model <model-name> to create a new model to deploy k8s workloads. 
ubuntu@my-juju-vm:~$ juju add-model chat Added 'chat' model on microk8s/localhost with credential 'microk8s' for user 'admin' ubuntu@tutorial-vm:~$ juju deploy mattermost-k8s Located charm \"mattermost-k8s\" in charm-hub, revision 27 Deploying \"mattermost-k8s\" from charm-hub charm \"mattermost-k8s\", revision 27 in channel stable on ubuntu@20.04/stable ubuntu@tutorial-vm:~$ juju deploy postgresql-k8s --channel 14/stable --trust --config profile=testing Located charm \"postgresql-k8s\" in charm-hub, revision 193 Deploying \"postgresql-k8s\" from charm-hub charm \"postgresql-k8s\", revision 193 in channel 14/stable on ubuntu@22.04/stable ubuntu@my-juju-vm:~$ juju deploy self-signed-certificates Located charm \"self-signed-certificates\" in charm-hub, revision 72 Deploying \"self-signed-certificates\" from charm-hub charm \"self-signed-certificates\", revision 72 in channel stable on" }, { "data": "ubuntu@tutorial-vm:~$ juju integrate self-signed-certificates postgresql-k8s ubuntu@tutorial-vm:~$ juju integrate postgresql-k8s:db mattermost-k8s ubuntu@my-juju-vm:~$ juju status --relations Model Controller Cloud/Region Version SLA Timestamp chat 31microk8s microk8s/localhost 3.1.8 unsupported 13:48:04+02:00 App Version Status Scale Charm Channel Rev Address Exposed Message mattermost-k8s .../mattermost:v8.1.3-20.04... active 1 mattermost-k8s stable 27 10.152.183.131 no postgresql-k8s 14.10 active 1 postgresql-k8s 14/stable 193 10.152.183.56 no self-signed-certificates active 1 self-signed-certificates stable 72 10.152.183.119 no Unit Workload Agent Address Ports Message mattermost-k8s/0* active idle 10.1.32.155 8065/TCP postgresql-k8s/0* active idle 10.1.32.152 self-signed-certificates/0* active idle 10.1.32.154 Integration provider Requirer Interface Type Message postgresql-k8s:database-peers postgresql-k8s:database-peers postgresql_peers peer postgresql-k8s:db mattermost-k8s:db pgsql regular postgresql-k8s:restart postgresql-k8s:restart rolling_op peer postgresql-k8s:upgrade postgresql-k8s:upgrade upgrade peer self-signed-certificates:certificates postgresql-k8s:certificates tls-certificates regular ``` You will need to install a Juju client; on the client, add your cloud and cloud credentials; on the cloud, bootstrap a controller (i.e., control plan); on the controller, add a model (i.e., canvas to deploy things on; namespace); on the model, deploy, configure, and integrate the charms that make up your chat service. The terraform juju client is not self-sufficient follow the instructions for the juju client all the way up to and including the step where you create the 31microk8s controller. Also get the details of that controller: juju show-controller --show-password 31microk8s. 
Then, on your VM, install the terraform CLI: ``` ubuntu@my-juju-vm:~$ sudo snap install terraform --classic terraform 1.7.5 from Snapcrafters installed ``` Next, in your local terraform-juju directory, create three files as follows: (a) a terraform.tffile , where youll configure terraform to use the juju provider: ``` terraform { required_providers { juju = { version = \"~> 0.11.0\" source = \"juju/juju\" } } } ``` (b) a ca-cert.pem file, where youll copy-paste the ca_certificate from the details of your juju-client-bootstrapped controller; and (c) a main.tf file, where youll configure the juju provider to point to the juju-client-bootstrapped controller and the ca-cert.pem file where youve saved its certificate, then create resources to add a model and deploy, configure, and integrate applications: ``` provider \"juju\" { controller_addresses = \"10.152.183.27:17070\" username = \"admin\" password = \"40ec19f8bebe353e122f7f020cdb6949\" ca_certificate = file(\"~/terraform-juju/ca-cert.pem\") } resource \"juju_model\" \"chat\" { name = \"chat\" } resource \"juju_application\" \"mattermost-k8s\" { model = juju_model.chat.name charm { name = \"mattermost-k8s\" } } resource \"juju_application\" \"postgresql-k8s\" { model = juju_model.welcome-k8s.name charm { name = \"postgresql-k8s\" channel = \"14/stable\" } trust = true config = { profile = \"testing\" } } resource \"juju_application\" \"self-signed-certificates\" { model = juju_model.chat.name charm { name = \"self-signed-certificates\" } } resource \"juju_integration\" \"postgresql-mattermost\" { model = juju_model.chat.name application { name = juju_application.postgresql-k8s.name endpoint = \"db\" } application { name = juju_application.mattermost-k8s.name } lifecycle { replacetriggeredby = [ juju_application.postgresql-k8s.name, juju_application.postgresql-k8s.model, juju_application.postgresql-k8s.constraints, juju_application.postgresql-k8s.placement, juju_application.postgresql-k8s.charm.name, juju_application.mattermost-k8s.name, juju_application.mattermost-k8s.model, juju_application.mattermost-k8s.constraints, juju_application.mattermost-k8s.placement, juju_application.mattermost-k8s.charm.name, ] } } resource \"juju_integration\" \"postgresql-tls\" { model = juju_model.chat.name application { name = juju_application.postgresql-k8s.name } application { name = juju_application.self-signed-certificates.name } lifecycle { replacetriggeredby = [ juju_application.postgresql-k8s.name, juju_application.postgresql-k8s.model, juju_application.postgresql-k8s.constraints, juju_application.postgresql-k8s.placement, juju_application.postgresql-k8s.charm.name, juju_application.self-signed-certificates.name, juju_application.self-signed-certificates.model, juju_application.self-signed-certificates.constraints, juju_application.self-signed-certificates.placement, juju_application.self-signed-certificates.charm.name, ] } } ``` Next, in your Multipass VM, initialise your providers configuration (terraform init), preview your plan (terraform plan), and apply your plan to your infrastructure (terraform apply): You can always repeat all three, though technically you only need to run terraform init if your terraform.tf or the provider bit of your" }, { "data": "has changed, and you only need to run terraform plan if you want to preview the changes before applying them. 
``` ubuntu@my-juju-vm:~/terraform-juju$ terraform init && terraform plan && terraform apply ``` Finally, use the juju client to inspect the results: ``` ubuntu@my-juju-vm:~/terraform-juju$ juju status --relations ``` Done! [TBA] From the output of juju status> Unit > mattermost-k8s/0, retrieve the IP address and the port and feed them to curl on the template below: ``` curl <IP address>:<port>/api/v4/system/ping ``` Sample session: ``` ubuntu@my-juju-vm:~$ curl 10.1.170.150:8065/api/v4/system/ping {\"ActiveSearchBackend\":\"database\",\"AndroidLatestVersion\":\"\",\"AndroidMinVersion\":\"\",\"IosLatestVersion\":\"\",\"IosMinVersion\":\"\",\"status\":\"OK\"} ``` Congratulations, your chat service is up and running! Your computer with your Multipass VM, your MicroK8s cloud, and a live Juju controller (the charm in the Controller Unit is the juju-controller charm) + a sample deployed application on it (the charm in the Regular Unit stands for any charm that you might deploy). If in the Regular Application you replace the charm with mattermost-k8s and image a few more Regular Applications where you replace the charm with postgresql-k8s and, respectively, self-signed-certificates, and if you trace the path from postgresql-k8ss Unit Agent through the Controller Agent to self-signed-certificatess and, respectively, mattermost-k8s Unit Agent, you get a full representation of your deployment. (Note: After integration, the workloads may also know how to contact each other directly; still, all communication between their respective charms goes through the Juju controller and the result of that communication is stored in the database in the form of maps known as relation data bags.) See more: Set up your test environment automatically > steps 3-4, Install and manage the client, Manage clouds, Manage credentials, Manage controllers, Manage models, Manage applications Learn more about juju. 1a. Find out more about its snap: snap info juju. 1b. Find out the installed version: juju version. 1c. Quickly preview all the commands: juju help commands. 1d. Filter by keyword: Use juju help commands | grep <keyword> to get a quick sense of the commands related to a particular keyword (e.g., secret). Try juju help commands | grep -v Alias to exclude any aliases. 1e. Find out more about a specific command: juju help <command>. 1f. Inspect the files on your workstation associated with the client: ls ~/.local/share/juju. 1g. Learn about other Juju clients: Client. Learn more about your cloud definition and credentials in Juju. 2a. Find out more about the Juju notion of a cloud: Cloud. 2b. Find out all the clouds whose definitions your client has already: juju clouds, juju clouds --all. 2c. Take a look at how Juju has defined your MicroK8s cloud: juju show-cloud microk8s, juju credentials, juju show-credential microk8s microk8s --show-secrets. In Juju, the term credential is always about access to a cloud. 2d. Revisit the output for juju clouds or juju credentials. Notice the classification into client vs. controller. All this classification does is keep track of who is aware of a given cloud definition / credential the client, the controller, or both. However, this simple distinction has important implications can you guess which? You can use the same controllers to run multiple clouds and you can decide which cloud account to use. Learn more about Juju controllers. 3a. Find out all the controllers that your client is aware of already: juju controllers. 
Switch to the LXD cloud controller, then back: juju switch lxd, juju switch microk8s. Get more detail on each controller: juju show-controller <controller" }, { "data": "Take a sneak peek at their current configuration: cat ~/.local/share/juju/bootstrap-config.yaml. 3b. Revisit the output for juju controllers. Note the User and Access columns. In Juju, a user is any person able to at least log in to a Juju controller. Run juju whoami, then juju show-user admin as you can see, your user is called admin and has superuser access to the controller. Learn more about Juju models, applications, units. 4a. Find out all the models on your microk8s controller: juju models. 4b. Find out more about your chat model: juju show-model, juju status -m microk8s:chat. What do you think a model is? A model is a logical abstraction. It denotes a workspace, a canvas where you deploy, integrate, and manage applications. On a Kubernetes cloud, a Juju model corresponds to a Kubernetes namespace. Run microk8s kubectl get namespaces to verify the output should show a namespace called chat, for your chat model, and also a namespace called controller-microk8s, for your controller model. 4c. Try to guess: What is the controller model about? Switch to it and check: juju switch microk8s:controller, then juju status. When you bootstrap a controller into a cloud, this by default creates the controller model and deploys to it the juju-controller charm, whose units (=running instances of a charm) form the controller application. Find out more about the controller charm: juju info juju-controller or Charmhub | juju-controller. Find out more about the controller application: juju show-application controller. SSH into a controller application unit: juju ssh controller/0, then poke around using ls, cd, and cat (type exit to exit the unit). On a Kubernetes cloud, a Juju unit corresponds to a pod: microk8s kubectl -n controller-microk8s get pods should show a controller-0 pod, which is the Kubernetes pod corresponding to the controller/0 unit. 4d. Switch back to the chat model. Tip: When youre on the same controller, you can skip the controller prefix when you specify the model to switch to. A database failure can be very costly. Lets scale it! Sample session: ``` ubuntu@my-juju-vm:~$ juju scale-application postgresql-k8s 3 postgresql-k8s scaled to 3 units ubuntu@my-juju-vm:~$ juju status Model Controller Cloud/Region Version SLA Timestamp chat 31microk8s microk8s/localhost 3.1.8 unsupported 15:41:34+02:00 App Version Status Scale Charm Channel Rev Address Exposed Message mattermost-k8s .../mattermost:v8.1.3-20.04... active 1 mattermost-k8s stable 27 10.152.183.131 no postgresql-k8s 14.10 active 3 postgresql-k8s 14/stable 193 10.152.183.56 no self-signed-certificates active 1 self-signed-certificates stable 72 10.152.183.119 no Unit Workload Agent Address Ports Message mattermost-k8s/0* active idle 10.1.32.155 8065/TCP postgresql-k8s/0* active idle 10.1.32.152 Primary postgresql-k8s/1 active idle 10.1.32.158 postgresql-k8s/2 active executing 10.1.32.159 self-signed-certificates/0* active idle 10.1.32.154 ``` As you might have guessed, the result of scaling an application is that you have multiple running instances of your application that is, multiple units. Youll want to make sure that they are also properly distributed over multiple nodes. Our localhost MicroK8s doesnt allow us to do this (because we only have 1 node) but, if you clusterise MicroK8s, you can use it to explore this too! 
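As a rough sketch of the clustering mentioned above (addresses and tokens below are placeholders; see the MicroK8s link that follows for the authoritative steps):
```bash
# On the existing MicroK8s machine: generate a join token.
microk8s add-node
# It prints a join command along the lines of:
#   microk8s join 10.0.0.5:25000/<token>
# Run that join command on each additional machine with MicroK8s installed,
# then check that all nodes are visible so scaled units can spread across them:
microk8s kubectl get nodes
```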
See more: MicroK8s | Create a multi-node cluster See more: Manage applications > Scale On your local machine, in you main.tf file, in the definition of the resource for postgresql-k8s, add a units block and set it to 3: ``` provider \"juju\" { controller_addresses = \"10.152.183.27:17070\" username = \"admin\" password = \"40ec19f8bebe353e122f7f020cdb6949\" ca_certificate =" }, { "data": "} resource \"juju_model\" \"chat\" { name = \"chat\" } resource \"juju_application\" \"mattermost-k8s\" { model = juju_model.chat.name charm { name = \"mattermost-k8s\" } } resource \"juju_application\" \"postgresql-k8s\" { model = juju_model.chat.name charm { name = \"postgresql-k8s\" channel = \"14/stable\" } trust = true config = { profile = \"testing\" } units = 3 } resource \"juju_application\" \"self-signed-certificates\" { model = juju_model.chat.name charm { name = \"self-signed-certificates\" } } resource \"juju_integration\" \"postgresql-mattermost\" { model = juju_model.chat.name application { name = juju_application.postgresql-k8s.name endpoint = \"db\" } application { name = juju_application.mattermost-k8s.name } lifecycle { replacetriggeredby = [ juju_application.postgresql-k8s.name, juju_application.postgresql-k8s.model, juju_application.postgresql-k8s.constraints, juju_application.postgresql-k8s.placement, juju_application.postgresql-k8s.charm.name, juju_application.mattermost-k8s.name, juju_application.mattermost-k8s.model, juju_application.mattermost-k8s.constraints, juju_application.mattermost-k8s.placement, juju_application.mattermost-k8s.charm.name, ] } } resource \"juju_integration\" \"postgresql-tls\" { model = juju_model.chat.name application { name = juju_application.postgresql-k8s.name } application { name = juju_application.self-signed-certificates.name } lifecycle { replacetriggeredby = [ juju_application.postgresql-k8s.name, juju_application.postgresql-k8s.model, juju_application.postgresql-k8s.constraints, juju_application.postgresql-k8s.placement, juju_application.postgresql-k8s.charm.name, juju_application.self-signed-certificates.name, juju_application.self-signed-certificates.model, juju_application.self-signed-certificates.constraints, juju_application.self-signed-certificates.placement, juju_application.self-signed-certificates.charm.name, ] } } ``` Then, in your VM, use terraform to apply the changes and juju to inspect the results: ``` ubuntu@my-juju-vm:~/terraform-juju$ terraform init && terraform plan && terraform apply ubuntu@my-juju-vm:~/terraform-juju$ juju status --relations ``` [TBA] In Juju, performing most major operations looks the same for every charm. However, charmers sometimes also define additional operations specific to a given charm. These operations are called actions and often have to do with accessing an application deployed by a charm, creating a backup, etc. Below, use the postgresql-k8s charms set-password action to generate a password for the default, operator username, then use the username and password to access the PostgreSQL application. First, get: the host IP address of the PostgreSQL unit: retrieve it from juju status or juju show-unit (in the sample outputs above, 10.1.170.142); a PostgreSQL username and password: we can use the internal, default user called operator and set a password for it using the set-password action. 
Sample session: ``` juju run postgresql-k8s/leader set-password username=operator password=mysecretpass ``` Now, use this information to access the PostgreSQL application: First, ssh into the PostgreSQL unit (= Kubernetes container). Sample session: ``` ubuntu@my-juju-vm:~$ juju ssh --container postgresql postgresql-k8s/leader bash root@postgresql-k8s-0:/# ``` Verify that psql is already installed. Sample session: ``` root@postgresql-k8s-0:/# psql --version psql (PostgreSQL) 14.10 (Ubuntu 14.10-0ubuntu0.22.04.1) ``` Use psql to view a list of the existing databases. Sample session (make sure to use your own host and password): ``` root@postgresql-k8s-0:/# psql --host=10.1.170.142 --username=operator --password --list Password: List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges --+-+-+++-- postgres | operator | UTF8 | C | C.UTF-8 | operator=CTc/operator + | | | | | backup=CTc/operator + | | | | | replication=CTc/operator+ | | | | | rewind=CTc/operator + | | | | | monitoring=CTc/operator + | | | | | admin=c/operator template0 | operator | UTF8 | C | C.UTF-8 | =c/operator + | | | | | operator=CTc/operator template1 | operator | UTF8 | C | C.UTF-8 | =c/operator + | | | | | operator=CTc/operator (3 rows) ``` Finally, use psql to access the postgres database and submit a query. Sample session: ``` root@postgresql-k8s-0:/# psql --host=10.1.170.142 --username=operator --password postgres Password: psql (14.10 (Ubuntu 14.10-0ubuntu0.22.04.1)) Type \"help\" for help. postgres=# SELECT version(); version version PostgreSQL 14.10 (Ubuntu 14.10-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04)" }, { "data": "64-bit (1 row) ``` Type exit to get back to your unit shell and then again to return to your Multipass VM shell. [TBA] [TBA] See more: Juju releases keep coming and going but our deployment is still stuck on Juju 3.1.8 (i.e., major version 3, minor version 1, patch version 8). Time to upgrade! To upgrade a deployment, we must upgrade the client, the version of the Juju agents in the controller model, the version of the Juju agents in non-controller models, (if on a machine cloud, the base of the machine), and the charms. These upgrades dont all have to happen at once, but it mostly makes sense to do them in this order. And, if upgrading an agents minor or major version of a controller / model, you must know that the only option is to upgrade the controller/model to the latest patch version of the old minor or major, then bootstrap a new controller and migrate the model to it, then upgrade the model to the current patch version of the new controller. Lets give it a try! The running processes in your Shells 2 and 3 will block your client upgrade. To prevent that, in each one, hit the C-c key combination to stop them for the duration of the client upgrade, the restart them by running again juju status --relations --watch 1s and, respectively, juju debug-log. Sample session: ``` ubuntu@my-juju-vm:~$ juju version 3.1.8-genericlinux-amd64 ubuntu@my-juju-vm:~$ sudo snap refresh juju --channel 3/stable juju (3/stable) 3.4.2 from Canonical refreshed ubuntu@my-juju-vm:~$ juju show-controller 31microk8s: details: ... agent-version: 3.1.8 agent-git-commit: 810900f47952a1f3835576f57dce2f9d1aef23d0 controller-model-version: 3.1.8 ... 
ubuntu@my-juju-vm:~$ juju upgrade-controller --agent-version 3.1.8 no upgrades available ubuntu@my-juju-vm:~$ juju bootstrap microk8s 34microk8s Creating Juju controller \"34microk8s\" on microk8s/localhost Bootstrap to Kubernetes cluster identified as microk8s/localhost Creating k8s resources for controller \"controller-34microk8s\" Downloading images Starting controller pod Bootstrap agent now started Contacting Juju controller at 10.152.183.187 to verify accessibility... Bootstrap complete, controller \"34microk8s\" is now available in namespace \"controller-34microk8s\" Now you can run juju add-model <model-name> to create a new model to deploy k8s workloads. ubuntu@my-juju-vm:~$ juju show-controller 34microk8s 34microk8s: details: ... agent-version: 3.4.2 agent-git-commit: a80becbb4da5985fa53c63824a4bd809e9d03954 controller-model-version: 3.4.2 ... ubuntu@my-juju-vm:~$ juju switch 31microk8s:admin/chat 34microk8s (controller) -> 31microk8s:admin/chat ubuntu@my-juju-vm:~$ juju show-model chat: name: admin/chat ... agent-version: 3.1.8 ubuntu@my-juju-vm:~$ juju upgrade-model --agent-version 3.1.8 no upgrades available ubuntu@my-juju-vm:~$ juju migrate chat 34microk8s Migration started with ID \"43c29d63-77f3-4665-82bc-e21b55ab4d6a:0\" ubuntu@my-juju-vm:~$ juju switch 34microk8s:admin/chat 34microk8s (controller) -> 34microk8s:admin/chat ubuntu@my-juju-vm:~$ juju models Controller: 34microk8s Model Cloud/Region Type Status Units Access Last connection chat* microk8s/localhost kubernetes available 5 admin 8 minutes ago controller microk8s/localhost kubernetes available 1 admin just now ubuntu@my-juju-vm:~$ juju upgrade-model --agent-version 3.4.2 best version: 3.4.2 started upgrade to 3.4.2 ubuntu@my-juju-vm:~$ juju refresh mattermost-k8s charm \"mattermost-k8s\": already up-to-date ubuntu@my-juju-vm:~$ juju refresh postgresql-k8s charm \"postgresql-k8s\": already up-to-date ubuntu@my-juju-vm:~$ juju refresh self-signed-certificates charm \"self-signed-certificates\": already up-to-date ``` [TBA] [TBA] See more: Juju roadmap & releases, Juju version compatibility matrix, Upgrade your deployment Our deployment hasnt really been up very long, but wed still like to take a closer look at our controller, to see whats" }, { "data": "Time for observability! ``` ubuntu@my-juju-vm:~$ juju add-model observability Added 'observability' model on microk8s/localhost with credential 'microk8s' for user 'admin' ubuntu@my-juju-vm:~$ juju models Controller: 34microk8s Model Cloud/Region Type Status Units Access Last connection chat microk8s/localhost kubernetes available 5 admin 9 minutes ago controller microk8s/localhost kubernetes available 1 admin just now observability* microk8s/localhost kubernetes available 6 admin 1 minute ago ubuntu@my-juju-vm:~$ juju deploy cos-lite --trust Located bundle \"cos-lite\" in charm-hub, revision 11 Located charm \"alertmanager-k8s\" in charm-hub, channel latest/stable Located charm \"catalogue-k8s\" in charm-hub, channel latest/stable Located charm \"grafana-k8s\" in charm-hub, channel latest/stable Located charm \"loki-k8s\" in charm-hub, channel latest/stable Located charm \"prometheus-k8s\" in charm-hub, channel latest/stable Located charm \"traefik-k8s\" in charm-hub, channel latest/stable ... Deploy of bundle completed. 
ubuntu@my-juju-vm:~$ juju offer prometheus:metrics-endpoint Application \"prometheus\" endpoints [metrics-endpoint] available at \"admin/observability.prometheus\" ubuntu@my-juju-vm:~$ juju switch controller 34microk8s:admin/observability -> 34microk8s:admin/controller ubuntu@my-juju-vm:~$ juju integrate controller admin/observability.prometheus ubuntu@my-juju-vm:~$ juju status --relations Model Controller Cloud/Region Version SLA Timestamp controller 34microk8s microk8s/localhost 3.4.2 unsupported 17:08:10+02:00 SAAS Status Store URL prometheus active 34microk8s admin/observability.prometheus App Version Status Scale Charm Channel Rev Address Exposed Message controller active 1 juju-controller 3.4/stable 79 no Unit Workload Agent Address Ports Message controller/0* active idle 10.1.32.161 37017/TCP Integration provider Requirer Interface Type Message controller:metrics-endpoint prometheus:metrics-endpoint prometheus_scrape regular ubuntu@my-juju-vm:~$ juju switch observability 34microk8s:admin/controller -> 34microk8s:admin/observability ubuntu@my-juju-vm:~$ juju run grafana/0 get-admin-password Running operation 1 with 1 task task 2 on unit-grafana-0 Waiting for task 2... admin-password: 0OpLUlxJXQaU url: http://10.238.98.110/observability-grafana ``` On your local machine, open a browser window and copy-paste the Grafana URL. In the username field, enter admin. In the password field, enter the admin-password. If everything has gone well, you should now be logged in. On the new screen, in the top-right, click on the Menu icon, then Dashboards. Then, on the new screen, in the top-left, click on New, Upload dashboard JSON file, and upload the JSON Grafana-dashboard-definition file below, then, in the IL3-2 field, from the drop-down, select the suggested juju_observability... option. Juju Controllers-1713888589960.json (200.9 KB) On the new screen, at the very top, expand the Juju Metrics section and inspect the results. How many connections to the API server does your controller show? Make a change to your controller (e.g., run juju add-model test to add another model and trigger some more API server connections) and refresh the page to view the updated results! Congratulations, you now have a functional observability setup! But your controller is not the only thing that you can monitor go ahead and try to monitor something else, for example, your PostgreSQL! [TBA] [TBA] See more: Manage controllers > Collect metrics about a controller To tear things down, remove your entire Multipass Ubuntu VM, then uninstall Multipass: See more: How to tear down your test environment automatically Follow the instructions for the juju client. In addition to that, on your host machine, delete your terraform-juju directory. [TBA] This tutorial has introduced you to the basic things you can do with Juju. But there is a lot more to explore: | If you are wondering | visit | |:-|:-| | How do I? | Juju How-to docs | | What is? | Juju Reference docs | | Why?, So what? | Juju Explanation docs | | How do I build a charm? | SDK docs | | How do I contribute to Juju? | Dev docs | Contributors: @degville , @fernape, @hmlanigan, @houz42, @hpidcock, @kayrag2 , @keirthana , @manadart, @michaeldmitry, @mrbarco, @nsakkos, @ppasotti, @selcem, @shrishtikarkera, @thp, @tmihoc Last updated 9 days ago. Help improve this document in the forum. 2024 Canonical Ltd. Manage your tracker settings Legal Information Ubuntu and Canonical are registered trademarks. All other trademarks are the" } ]
{ "category": "Provisioning", "file_name": "sdk.md", "project_name": "Juju", "subcategory": "Automation & Configuration" }
[ { "data": "List of supported clouds > OpenStack This document describes details specific to using your existing OpenStack cloud with Juju. See more: OpenStack When using the OpenStack cloud with Juju, it is important to keep in mind that it is a (1) machine cloud and (2) not some other cloud. See more: Cloud differences in Juju As the differences related to (1) are already documented generically in our Tutorial, How-to guides, and Reference docs, here we record just those that follow from (2). | Juju points of variation | Notes for the OpenStack cloud | |:|:-| | setup (chronological order): | nan | | CLOUD | nan | | supported versions: | Any version that supports: - compute v2 (Nova) - network v2 (Neutron) (optional) - volume2 (Cinder) (optional) - identity v2 or v3 (Keystone) | | requirements: | TBA | | definition: | If you want to use the novarc file (recommended): Source the OpenStack RC file (source <path to file>). This will allow Juju to detect values from preset OpenStack environment variables. Run add-cloud in interactive mode and accept the suggested defaults. | | - name: | user-defined | | - type: | openstack | | - authentication types: | [access-key, userpass] | | - regions: | [TO BE ADDED] | | - cloud-specific model configuration keys: | external-network (string) The network label or UUID to create floating IP addresses on when multiple external networks exist. network (string) The network label or UUID to bring machines up on when multiple networks exist. policy-target-group (string) The UUID of Policy Target Group to use for Policy Targets created. use-default-secgroup (bool) Whether new machine instances should have the default Openstack security group assigned in addition to juju defined security groups. use-openstack-gbp (bool) Whether to use Neutrons Group-Based Policy. | | CREDENTIAL | nan | | definition: | If you want to use environment variables (recommended): Source the OpenStack RC file (see above). Run add-credential and accept the suggested defaults. | | CONTROLLER | nan | | notes on bootstrap: | You will need to create an OpenStack machine metadata. If the metadata is available locally, you can pass it to Juju via juju bootstrap ... --metadata-source <path to metadata" }, { "data": "> See more: How to configure machine image metadata If your cloud has multiple private networks: You will need to specify the one that you want the instances to boot from via juju bootstrap ... --model-default network=<network uuid or name>. If your clouds topology requires that its instances are accessed via floating IP addresses: Pass the allocate-public-ip=true (see constraints below) as a bootstrap constraint. | | nan | nan | | nan | nan | | other (alphabetical order:) | nan | | CONSTRAINT | nan | | conflicting: | [instance-type] vs. [mem, root-disk, cores] | | supported? | nan | | - allocate-public-ip | nan | | - arch | nan | | - container | nan | | - cores | nan | | - cpu-power | nan | | - image-id | (Starting with Juju 3.3) Type: String. Valid values: An OpenStack image ID. | | - instance-role | nan | | - instance-type | Valid values: Any (cloud admin) user defined OpenStack flavor. | | - mem | nan | | - root-disk | nan | | - root-disk-source | root-disk-source is either local or volume. | | - spaces | nan | | - tags | nan | | - virt-type | Valid values: [kvm, lxd]. | | - zones | nan | | PLACEMENT DIRECTIVE | nan | | <machine> | TBA | | subnet=... | nan | | system-id=... | nan | | zone=... 
| nan | | MACHINE | | | RESOURCE (cloud) | Consistent naming, tagging, and the ability to add user-controlled tags to created instances. | Contributors: @hallback" } ]
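Pulling the notes above together, a minimal sketch of bootstrapping on OpenStack; file paths, cloud, network, and controller names are placeholders:
```bash
source ~/my-project-openrc.sh     # the OpenStack RC (novarc) file
juju add-cloud                    # interactive; accept the values detected from the environment
juju add-credential my-openstack  # interactive; accept the suggested defaults
juju bootstrap my-openstack os-ctl \
  --metadata-source ~/simplestreams \
  --model-default network=my-private-net \
  --bootstrap-constraints allocate-public-ip=true
```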
{ "category": "Provisioning", "file_name": "#user-impersonation.md", "project_name": "kiosk", "subcategory": "Automation & Configuration" }
[ { "data": "Help improve this page Want to contribute to this user guide? Scroll to the bottom of this page and select Edit this page on GitHub. Your contributions will help make our user guide better for everyone. AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Amazon EKS resources. IAM is an AWS service that you can use with no additional charge. How you use AWS Identity and Access Management (IAM) differs, depending on the work that you do in Amazon EKS. Service user If you use the Amazon EKS service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more Amazon EKS features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. If you cannot access a feature in Amazon EKS, see Troubleshooting IAM. Service administrator If you're in charge of Amazon EKS resources at your company, you probably have full access to Amazon EKS. It's your job to determine which Amazon EKS features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. To learn more about how your company can use IAM with Amazon EKS, see How Amazon EKS works with IAM. IAM administrator If you're an IAM administrator, you might want to learn details about how you can write policies to manage access to Amazon EKS. To view example Amazon EKS identity-based policies that you can use in IAM, see Amazon EKS identity-based policy examples. Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role. Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide. If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests" }, { "data": "For more information about using the recommended method to sign requests yourself, see Signing AWS API requests in the IAM User Guide. Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and Using multi-factor authentication (MFA) in AWS in the IAM User Guide. 
When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide. An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require long-term credentials in the IAM User Guide. An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources. Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see When to create an IAM user (instead of a role) in the IAM User Guide. An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. You can temporarily assume an IAM role in the AWS Management Console by switching roles. You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Using IAM roles in the IAM User Guide. IAM roles with temporary credentials are useful in the following situations: Federated user access To assign permissions to a federated identity, you create a role and define permissions for the" }, { "data": "When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Creating a role for a third-party Identity Provider in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide. Temporary IAM user permissions An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task. Cross-account access You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross-account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). 
To learn the difference between roles and resource-based policies for cross-account access, see How IAM roles differ from resource-based policies in the IAM User Guide. Cross-service access Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. Forward access sessions (FAS) When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. Service role A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Creating a role to delegate permissions to an AWS service in the IAM User Guide. Service-linked role A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. Applications running on Amazon EC2 You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2" }, { "data": "To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Using an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide. To learn whether to use IAM roles or IAM users, see When to create an IAM role (instead of a user) in the IAM User Guide. You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when a principal (user, root user, or role session) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. For more information about the structure and contents of JSON policy documents, see Overview of JSON policies in the IAM User Guide. Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions. By default, users and roles have no permissions. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. 
The administrator can then add the IAM policies to roles, and users can assume the roles. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API. Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see Creating IAM policies in the IAM User Guide. Identity-based policies can be further categorized as inline policies or managed policies. Inline policies are embedded directly into a single user, group, or role. Managed policies are standalone policies that you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS managed policies and customer managed policies. To learn how to choose between a managed policy or an inline policy, see Choosing between managed policies and inline policies in the IAM User Guide. Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must specify a principal in a resource-based" }, { "data": "Principals can include accounts, users, roles, federated users, or AWS services. Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy. Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format. Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. To learn more about ACLs, see Access control list (ACL) overview in the Amazon Simple Storage Service Developer Guide. AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types. Permissions boundaries A permissions boundary is an advanced feature in which you set the maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role). You can set a permissions boundary for an entity. The resulting permissions are the intersection of an entity's identity-based policies and its permissions boundaries. Resource-based policies that specify the user or role in the Principal field are not limited by the permissions boundary. An explicit deny in any of these policies overrides the allow. For more information about permissions boundaries, see Permissions boundaries for IAM entities in the IAM User Guide. Service control policies (SCPs) SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. 
If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see How SCPs work in the AWS Organizations User Guide. Session policies Session policies are advanced policies that you pass as a parameter when you programmatically create a temporary session for a role or federated user. The resulting session's permissions are the intersection of the user or role's identity-based policies and the session policies. Permissions can also come from a resource-based policy. An explicit deny in any of these policies overrides the allow. For more information, see Session policies in the IAM User Guide. When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see Policy evaluation logic in the IAM User Guide." } ]
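To make the policy types above concrete, here is a minimal sketch of an identity-based policy that allows only the iam:GetRole action discussed earlier; the Sid, account ID, and role name are illustrative placeholders rather than values from this guide.
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetRoleExample",
      "Effect": "Allow",
      "Action": "iam:GetRole",
      "Resource": "arn:aws:iam::111122223333:role/example-role"
    }
  ]
}
```
Attached to a user, group, or role, this document permits reading that one role's details from the console, the AWS CLI, or the AWS API, and nothing else; an SCP or permissions boundary could still narrow the effective permissions further.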
{ "category": "Provisioning", "file_name": ".md", "project_name": "Kapitan", "subcategory": "Automation & Configuration" }
[ { "data": "Write your documentation in Markdown and create a professional static site in minutes searchable, customizable, in 60+ languages, for all devices. Focus on the content of your documentation and create a professional static site in minutes. No need to know HTML, CSS or JavaScript let Material for MkDocs do the heavy lifting for you. Serve your documentation with confidence Material for MkDocs automatically adapts to perfectly fit the available screen estate, no matter the type or size of the viewing device. Desktop. Tablet. Mobile. All great. Make it yours change the colors, fonts, language, icons, logo, and more with a few lines of configuration. Material for MkDocs can be easily extended and provides many options to alter appearance and behavior. Don't let your users wait get incredible value with a small footprint by using one of the fastest themes available with excellent performance, yielding optimal search engine rankings and happy users that return. Own your documentation's complete sources and outputs, guaranteeing both integrity and security no need to entrust the backbone of your product knowledge to third-party platforms. Retain full control. You're in good company choose a mature and actively maintained solution built with state-of-the-art Open Source technologies, trusted by more than 20.000 individuals and organizations. Licensed under MIT. Material for MkDocs makes your documentation instantly searchable with zero effort: say goodbye to costly third-party crawler-based solutions that can take hours to update. Ship your documentation with a highly customizable and blazing fast search running entirely in the user's browser at no extra cost. Even better: search inside code blocks, exclude specific sections or entire pages, boost important pages in the results and build searchable documentation that works offline. Learn more Some examples need more explanation than others, which is why Material for MkDocs offers a unique and elegant way to add rich text almost anywhere in a code block. Code annotations can host formatted text, images, diagrams, code blocks, call-outs, content tabs, even interactive elements basically everything that can be expressed in Markdown or HTML. Of course, code annotations work beautifully on mobile and other touch devices and can be printed. Learn more Make an impact on social media and increase engagement when sharing links to your documentation by leveraging the built-in social plugin. Material for MkDocs makes it effortless to generate a beautiful preview image for each page, which will drive more interested users to your Open Source or commercial project. While the social plugin uses what's already there, i.e. your project's name and logo, as well as each page's title and description, it's easy to customize preview images. Supercharge your technical writing by making better use of the processing power of the visual cortex: Material for MkDocs ships more than 10,000 icons and emojis, which can be used in Markdown and HTML with simple shortcodes and an easy-to-remember syntax. Add color to icons and animate them. Make it pop. Use our dedicated icon search to quickly find the perfect icon for almost every use case and add custom icon sets with minimal configuration. Get started By joining the Insiders program, you'll get immediate access to the latest features while also helping support the ongoing development of Material for MkDocs. Thanks to our awesome sponsors, this project is actively maintained and kept in good shape. 
Together, we can build documentation that simply works! Learn more" } ]
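As a rough sketch of the "few lines of configuration" mentioned above, a Material for MkDocs site is typically driven by an mkdocs.yml like the following; the site name, color, and logo path are placeholders, not values from this page.
```
# mkdocs.yml -- minimal illustrative configuration
site_name: My Project Docs
theme:
  name: material          # use the Material for MkDocs theme
  palette:
    primary: indigo       # change the primary color
  logo: assets/logo.png   # custom logo (placeholder path)
plugins:
  - search                # built-in client-side search
```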
{ "category": "Provisioning", "file_name": "docs.github.com.md", "project_name": "kiosk", "subcategory": "Automation & Configuration" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Provisioning", "file_name": "install-using-yaml.md", "project_name": "KubeDL", "subcategory": "Automation & Configuration" }
[ { "data": "From project root directory, run ``` kubectl apply -f config/crd/bases/``` A single yaml file including everything: deployment, rbac etc. ``` kubectl apply -f https://raw.githubusercontent.com/kubedl-io/kubedl/master/config/manager/allinone.yaml``` KubeDL controller is installed under kubedl-system namespace. Running the command from master branch uses the daily docker image. ``` kubectl apply -f https://raw.githubusercontent.com/kubedl-io/kubedl/master/console/dashboard.yaml``` The dashboard will list nodes. Hence, its service account requires the list node permission. Check the dashboard. ``` kubectl delete namespace kubedl-system``` ``` kubectl get crd | grep kubedl.io | cut -d ' ' -f 1 | xargs kubectl delete crd``` ``` kubectl delete clusterrole kubedl-leader-election-rolekubectl delete clusterrolebinding kubedl-manager-rolebinding``` KubeDL supports all kinds of jobs(tensorflow, pytorch etc.) in a single Kubernetes operator. You can selectively enable the kind of jobs to support. There are three options:" } ]
{ "category": "Provisioning", "file_name": "iam.md", "project_name": "kiosk", "subcategory": "Automation & Configuration" }
[ { "data": "This page provides an overview of authentication. All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users. It is assumed that a cluster-independent service manages normal users in the following ways: In this regard, Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call. Even though a normal user cannot be added via an API call, any user that presents a valid certificate signed by the cluster's certificate authority (CA) is considered authenticated. In this configuration, Kubernetes determines the username from the common name field in the 'subject' of the cert (e.g., \"/CN=bob\"). From there, the role based access control (RBAC) sub-system would determine whether the user is authorized to perform a specific operation on a resource. For more details, refer to the normal users topic in certificate request for more details about this. In contrast, service accounts are users managed by the Kubernetes API. They are bound to specific namespaces, and created automatically by the API server or manually through API calls. Service accounts are tied to a set of credentials stored as Secrets, which are mounted into pods allowing in-cluster processes to talk to the Kubernetes API. API requests are tied to either a normal user or a service account, or are treated as anonymous requests. This means every process inside or outside the cluster, from a human user typing kubectl on a workstation, to kubelets on nodes, to members of the control plane, must authenticate when making requests to the API server, or be treated as an anonymous user. Kubernetes uses client certificates, bearer tokens, or an authenticating proxy to authenticate API requests through authentication plugins. As HTTP requests are made to the API server, plugins attempt to associate the following attributes with the request: All values are opaque to the authentication system and only hold significance when interpreted by an authorizer. You can enable multiple authentication methods at once. You should usually use at least two methods: When multiple authenticator modules are enabled, the first module to successfully authenticate the request short-circuits evaluation. The API server does not guarantee the order authenticators run in. The system:authenticated group is included in the list of groups for all authenticated users. Integrations with other authentication protocols (LDAP, SAML, Kerberos, alternate x509 schemes, etc) can be accomplished using an authenticating proxy or the authentication webhook. Client certificate authentication is enabled by passing the --client-ca-file=SOMEFILE option to API server. The referenced file must contain one or more certificate authorities to use to validate client certificates presented to the API server. If a client certificate is presented and verified, the common name of the subject is used as the user name for the request. As of Kubernetes 1.4, client certificates can also indicate a user's group memberships using the certificate's organization fields. To include multiple group memberships for a user, include multiple organization fields in the certificate. For example, using the openssl command line tool to generate a certificate signing request: ``` openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj \"/CN=jbeda/O=app1/O=app2\" ``` This would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\". 
See Managing Certificates for how to generate a client cert. The API server reads bearer tokens from a file when given the --token-auth-file=SOMEFILE option on the command line. Currently, tokens last indefinitely, and the token list cannot be changed without restarting the API" }, { "data": "The token file is a csv file with a minimum of 3 columns: token, user name, user uid, followed by optional group names. If you have more than one group, the column must be double quoted e.g. ``` token,user,uid,\"group1,group2,group3\" ``` When using bearer token authentication from an http client, the API server expects an Authorization header with a value of Bearer <token>. The bearer token must be a character sequence that can be put in an HTTP header value using no more than the encoding and quoting facilities of HTTP. For example: if the bearer token is 31ada4fd-adec-460c-809a-9e56ceb75269 then it would appear in an HTTP header as shown below. ``` Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269 ``` To allow for streamlined bootstrapping for new clusters, Kubernetes includes a dynamically-managed Bearer token type called a Bootstrap Token. These tokens are stored as Secrets in the kube-system namespace, where they can be dynamically managed and created. Controller Manager contains a TokenCleaner controller that deletes bootstrap tokens as they expire. The tokens are of the form [a-z0-9]{6}.[a-z0-9]{16}. The first component is a Token ID and the second component is the Token Secret. You specify the token in an HTTP header as follows: ``` Authorization: Bearer 781292.db7bc3a58fc5f07e ``` You must enable the Bootstrap Token Authenticator with the --enable-bootstrap-token-auth flag on the API Server. You must enable the TokenCleaner controller via the --controllers flag on the Controller Manager. This is done with something like --controllers=*,tokencleaner. kubeadm will do this for you if you are using it to bootstrap a cluster. The authenticator authenticates as system:bootstrap:<Token ID>. It is included in the system:bootstrappers group. The naming and groups are intentionally limited to discourage users from using these tokens past bootstrapping. The user names and group can be used (and are used by kubeadm) to craft the appropriate authorization policies to support bootstrapping a cluster. Please see Bootstrap Tokens for in depth documentation on the Bootstrap Token authenticator and controllers along with how to manage these tokens with kubeadm. A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests. The plugin takes two optional flags: Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller. Bearer tokens are mounted into pods at well-known locations, and allow in-cluster processes to talk to the API server. Accounts may be explicitly associated with pods using the serviceAccountName field of a PodSpec. ``` apiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9 kind: Deployment metadata: name: nginx-deployment namespace: default spec: replicas: 3 template: metadata: spec: serviceAccountName: bob-the-bot containers: name: nginx image: nginx:1.14.2 ``` Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. 
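As an illustration of using such a token from outside the cluster, any HTTP client can present it in the Authorization header; the API server address, CA path, and namespace below are placeholders.
```
# TOKEN holds a service account token (for example, one issued by `kubectl create token`).
APISERVER=https://172.17.4.100:6443
curl --cacert /etc/kubernetes/ca.pem \
  -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/default/pods"
```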
To manually create a service account, use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace. ``` kubectl create serviceaccount jenkins ``` ``` serviceaccount/jenkins created ``` Create an associated token: ``` kubectl create token jenkins ``` ``` eyJhbGciOiJSUzI1NiIsImtp... ``` The created token is a signed JSON Web Token (JWT). The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a" }, { "data": "Normally these tokens are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well. Service accounts authenticate with the username system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT), and are assigned to the groups system:serviceaccounts and system:serviceaccounts:(NAMESPACE). OpenID Connect is a flavor of OAuth2 supported by some OAuth2 providers, notably Microsoft Entra ID, Salesforce, and Google. The protocol's main extension of OAuth2 is an additional field returned with the access token called an ID Token. This token is a JSON Web Token (JWT) with well known fields, such as a user's email, signed by the server. To identify the user, the authenticator uses the idtoken (not the accesstoken) from the OAuth2 token response as a bearer token. See above for how the token is included in a request. Log in to your identity provider Your identity provider will provide you with an accesstoken, idtoken and a refresh_token When using kubectl, use your id_token with the --token flag or add it directly to your kubeconfig kubectl sends your id_token in a header called Authorization to the API server The API server will make sure the JWT signature is valid Check to make sure the id_token hasn't expired Perform claim and/or user validation if CEL expressions are configured with AuthenticationConfiguration. Make sure the user is authorized Once authorized the API server returns a response to kubectl kubectl provides feedback to the user Since all of the data needed to validate who you are is in the id_token, Kubernetes doesn't need to \"phone home\" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges: To enable the plugin, configure the following flags on the API server: | Parameter | Description | Example | Required | |:--|:--|:-|:--| | --oidc-issuer-url | URL of the provider that allows the API server to discover public signing keys. Only URLs that use the https:// scheme are accepted. This is typically the provider's discovery URL, changed to have an empty path. | If the issuer's OIDC discovery URL is https://accounts.provider.example/.well-known/openid-configuration, the value should be https://accounts.provider.example | Yes | | --oidc-client-id | A client id that all tokens must be issued for. | kubernetes | Yes | | --oidc-username-claim | JWT claim to use as the user name. By default sub, which is expected to be a unique identifier of the end user. Admins can choose other claims, such as email or name, depending on their provider. However, claims other than email will be prefixed with the issuer URL to prevent naming clashes with other plugins. | sub | No | | --oidc-username-prefix | Prefix prepended to username claims to prevent clashes with existing names (such as system: users). For example, the value oidc: will create usernames like oidc:jane.doe. 
If this flag isn't provided and --oidc-username-claim is a value other than email the prefix defaults to ( Issuer URL )# where ( Issuer URL ) is the value of --oidc-issuer-url. The value - can be used to disable all prefixing. | oidc: | No | | --oidc-groups-claim | JWT claim to use as the user's group. If the claim is present it must be an array of strings. | groups | No | | --oidc-groups-prefix | Prefix prepended to group claims to prevent clashes with existing names (such as system: groups). For example, the value oidc: will create group names like oidc:engineering and oidc:infra. | oidc: | No | | --oidc-required-claim | A key=value pair that describes a required claim in the ID" }, { "data": "If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. | claim=value | No | | --oidc-ca-file | The path to the certificate for the CA that signed your identity provider's web certificate. Defaults to the host's root CAs. | /etc/kubernetes/ssl/kc-ca.pem | No | | --oidc-signing-algs | The signing algorithms accepted. Default is \"RS256\". | RS512 | No | JWT Authenticator is an authenticator to authenticate Kubernetes users using JWT compliant tokens. The authenticator will attempt to parse a raw ID token, verify it's been signed by the configured issuer. The public key to verify the signature is discovered from the issuer's public endpoint using OIDC discovery. The minimum valid JWT payload must contain the following claims: ``` { \"iss\": \"https://example.com\", // must match the issuer.url \"aud\": [\"my-app\"], // at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs. \"exp\": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC) \"<username-claim>\": \"user\" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression } ``` The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. The configuration file even allows you to specify CEL expressions to map claims to user attributes, and to validate claims and user information. The API server also automatically reloads the authenticators when the configuration file is modified. You can use apiserverauthenticationconfigcontrollerautomaticreloadlasttimestampseconds metric to monitor the last time the configuration was reloaded by the API server. You must specify the path to the authentication configuration using the --authentication-config flag on the API server. If you want to use command line flags instead of the configuration file, those will continue to work as-is. To access the new capabilities like configuring multiple authenticators, setting multiple audiences for an issuer, switch to using the configuration file. For Kubernetes v1.30, the structured authentication configuration file format is beta-level, and the mechanism for using that configuration is also beta. Provided you didn't specifically disable the StructuredAuthenticationConfiguration feature gate for your cluster, you can turn on structured authentication by specifying the --authentication-config command line argument to the kube-apiserver. An example of the structured authentication configuration file is shown below. ``` apiVersion: apiserver.config.k8s.io/v1beta1 kind: AuthenticationConfiguration jwt: issuer: url: https://example.com # Same as --oidc-issuer-url. 
discoveryURL: https://discovery.example.com/.well-known/openid-configuration certificateAuthority: <PEM encoded CA certificates> audiences: my-app # Same as --oidc-client-id. my-other-app audienceMatchPolicy: MatchAny claimValidationRules: claim: hd requiredValue: example.com expression: 'claims.hd == \"example.com\"' message: the hd claim must be set to example.com expression: 'claims.exp - claims.nbf <= 86400' message: total token lifetime must not exceed 24 hours claimMappings: username: claim: \"sub\" prefix: \"\" expression: 'claims.username + \":external-user\"' groups: claim: \"sub\" prefix: \"\" expression: 'claims.roles.split(\",\")' uid: claim: 'sub' expression: 'claims.sub' extra: key: 'example.com/tenant' valueExpression: 'claims.tenant' userValidationRules: expression: \"!user.username.startsWith('system:')\" message: 'username cannot used reserved system: prefix' expression: \"user.groups.all(group, !group.startsWith('system:'))\" message: 'groups cannot used reserved system: prefix' ``` Claim validation rule expression jwt.claimValidationRules[i].expression represents the expression which will be evaluated by CEL. CEL expressions have access to the contents of the token payload, organized into claims CEL variable. claims is a map of claim names (as strings) to claim values (of any type). User validation rule expression jwt.userValidationRules[i].expression represents the expression which will be evaluated by CEL. CEL expressions have access to the contents of userInfo, organized into user CEL variable. Refer to the UserInfo API documentation for the schema of user. Claim mapping expression jwt.claimMappings.username.expression, jwt.claimMappings.groups.expression, jwt.claimMappings.uid.expression" }, { "data": "represents the expression which will be evaluated by CEL. CEL expressions have access to the contents of the token payload, organized into claims CEL variable. claims is a map of claim names (as strings) to claim values (of any type). To learn more, see the Documentation on CEL Here are examples of the AuthenticationConfiguration with different token payloads. ``` apiVersion: apiserver.config.k8s.io/v1beta1 kind: AuthenticationConfiguration jwt: issuer: url: https://example.com audiences: my-app claimMappings: username: expression: 'claims.username + \":external-user\"' groups: expression: 'claims.roles.split(\",\")' uid: expression: 'claims.sub' extra: key: 'example.com/tenant' valueExpression: 'claims.tenant' userValidationRules: expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed. 
message: 'username cannot used reserved system: prefix' ``` ``` TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9CgKq1R94QKSQ5F6UgHMILz8OfmPKmX00wpwwNVGeevJ79ieX2V-W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7QgaHxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA ``` where the token payload is: ``` { \"aud\": \"kubernetes\", \"exp\": 1703232949, \"iat\": 1701107233, \"iss\": \"https://example.com\", \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\", \"nbf\": 1701107233, \"roles\": \"user,admin\", \"sub\": \"auth\", \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\", \"username\": \"foo\" } ``` The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user. ``` { \"username\": \"foo:external-user\", \"uid\": \"auth\", \"groups\": [ \"user\", \"admin\" ], \"extra\": { \"example.com/tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\" } } ``` ``` apiVersion: apiserver.config.k8s.io/v1beta1 kind: AuthenticationConfiguration jwt: issuer: url: https://example.com audiences: my-app claimValidationRules: expression: 'claims.hd == \"example.com\"' # the token below does not have this claim, so validation will fail. message: the hd claim must be set to example.com claimMappings: username: expression: 'claims.username + \":external-user\"' groups: expression: 'claims.roles.split(\",\")' uid: expression: 'claims.sub' extra: key: 'example.com/tenant' valueExpression: 'claims.tenant' userValidationRules: expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed. 
message: 'username cannot used reserved system: prefix' ``` ``` TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9CgKq1R94QKSQ5F6UgHMILz8OfmPKmX00wpwwNVGeevJ79ieX2V-W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7QgaHxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA ``` where the token payload is: ``` { \"aud\": \"kubernetes\", \"exp\": 1703232949, \"iat\": 1701107233, \"iss\": \"https://example.com\", \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\", \"nbf\": 1701107233, \"roles\": \"user,admin\", \"sub\": \"auth\", \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\", \"username\": \"foo\" } ``` The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. The API server will return 401 Unauthorized error. ``` apiVersion: apiserver.config.k8s.io/v1beta1 kind: AuthenticationConfiguration jwt: issuer: url: https://example.com audiences: my-app claimValidationRules: expression: 'claims.hd == \"example.com\"' message: the hd claim must be set to example.com claimMappings: username: expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation. groups: expression: 'claims.roles.split(\",\")' uid: expression: 'claims.sub' extra: key: 'example.com/tenant' valueExpression: 'claims.tenant' userValidationRules: expression: \"!user.username.startsWith('system:')\" # the username will be system:foo and expression will evaluate to false, so validation will fail. 
message: 'username cannot used reserved system: prefix' ``` ``` TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0.FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB-F0dCcgy91oCJeKaBk-8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwtIVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqaQ0uaYwgy8mD807h7sBcUMjNd215ffnFIHss-9zegH8GI1d9fiBf-g6zjkR1j987EP748khpQh9IxPjMJbSgGuH5x80YFuqgEWwq-aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg ``` where the token payload is: ``` { \"aud\": \"kubernetes\", \"exp\": 1703232949, \"hd\": \"example.com\", \"iat\": 1701113101, \"iss\": \"https://example.com\", \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\", \"nbf\": 1701113101, \"roles\": \"user,admin\", \"sub\": \"auth\", \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\", \"username\": \"foo\" } ``` The token with the above AuthenticationConfiguration will produce the following UserInfo object: ``` { \"username\": \"system:foo\", \"uid\": \"auth\", \"groups\": [ \"user\", \"admin\" ], \"extra\": { \"example.com/tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\" } } ``` which will fail user validation because the username starts with system:. The API server will return 401 Unauthorized error. Kubernetes does not provide an OpenID Connect Identity Provider. You can use an existing public OpenID Connect Identity Provider (such as Google, or others). Or, you can run your own Identity Provider, such as dex, Keycloak, CloudFoundry UAA, or Tremolo Security's OpenUnison. For an identity provider to work with Kubernetes it must: Support OpenID connect discovery The public key to verify the signature is discovered from the issuer's public endpoint using OIDC discovery. If you're using the authentication configuration file, the identity provider doesn't need to publicly expose the discovery" }, { "data": "You can host the discovery endpoint at a different location than the issuer (such as locally in the cluster) and specify the issuer.discoveryURL in the configuration file. Run in TLS with non-obsolete ciphers Have a CA signed certificate (even if the CA is not a commercial CA or is self signed) A note about requirement #3 above, requiring a CA signed certificate. If you deploy your own identity provider (as opposed to one of the cloud providers like Google or Microsoft) you MUST have your identity provider's web server certificate signed by a certificate with the CA flag set to TRUE, even if it is self signed. This is due to GoLang's TLS client implementation being very strict to the standards around certificate validation. If you don't have a CA handy, you can use the gencert script from the Dex team to create a simple CA and a signed certificate and key pair. Or you can use this similar script that generates SHA256 certs with a longer life and larger key size. Refer to setup instructions for specific systems: The first option is to use the kubectl oidc authenticator, which sets the id_token as a bearer token for all requests and refreshes the token once it expires. 
After you've logged into your provider, use kubectl to add your idtoken, refreshtoken, clientid, and clientsecret to configure the plugin. Providers that don't return an id_token as part of their refresh token response aren't supported by this plugin and should use \"Option 2\" below. ``` kubectl config set-credentials USER_NAME \\ --auth-provider=oidc \\ --auth-provider-arg=idp-issuer-url=( issuer url ) \\ --auth-provider-arg=client-id=( your client id ) \\ --auth-provider-arg=client-secret=( your client secret ) \\ --auth-provider-arg=refresh-token=( your refresh token ) \\ --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\ --auth-provider-arg=id-token=( your id_token ) ``` As an example, running the below command after authenticating to your identity provider: ``` kubectl config set-credentials mmosley \\ --auth-provider=oidc \\ --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \\ --auth-provider-arg=client-id=kubernetes \\ --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \\ --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\ --auth-provider-arg=idp-certificate-authority=/root/ca.pem \\ --auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tWp-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw ``` Which would produce the below configuration: ``` users: name: mmosley user: auth-provider: config: client-id: kubernetes client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5 id-token: eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tWp-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw idp-certificate-authority: /root/ca.pem idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq name: oidc ``` Once your idtoken expires, kubectl will attempt to refresh your idtoken using your refresh_token and clientsecret 
storing the new values for the refreshtoken and id_token in your .kube/config. The kubectl command lets you pass in a token using the --token option. Copy and paste the id_token into this option: ``` kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-vaB63jn-n9LGSCca6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TKyF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a78gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1I2ulrOVsYx01_yD35-rw get nodes ``` Webhook authentication is a hook for verifying bearer tokens. The configuration file uses the kubeconfig file format. Within the file, clusters refers to the remote service and users refers to the API server webhook. An example would be: ``` apiVersion: v1 kind: Config clusters: name: name-of-remote-authn-service cluster: certificate-authority: /path/to/ca.pem # CA for verifying the remote service. server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production. users: name: name-of-api-server user: client-certificate: /path/to/cert.pem # cert for the webhook plugin to use client-key: /path/to/key.pem # key matching the cert current-context: webhook contexts: context: cluster: name-of-remote-authn-service user: name-of-api-server name: webhook ``` When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the remote service. Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API" }, { "data": "Implementers should check the apiVersion field of the request to ensure correct deserialization, and must respond with a TokenReview object of the same version as the request. ``` { \"apiVersion\": \"authentication.k8s.io/v1\", \"kind\": \"TokenReview\", \"spec\": { \"token\": \"014fbff9a07c...\", \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"] } } ``` ``` { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { \"token\": \"014fbff9a07c...\", \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"] } } ``` The remote service is expected to fill the status field of the request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must return a response using the same TokenReview API version that it received. 
A successful validation of the bearer token would return: ``` { \"apiVersion\": \"authentication.k8s.io/v1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { \"username\": \"janedoe@example.com\", \"uid\": \"42\", \"groups\": [\"developers\", \"qa\"], \"extra\": { \"extrafield1\": [ \"extravalue1\", \"extravalue2\" ] } }, \"audiences\": [\"https://myserver.example.com\"] } } ``` ``` { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { \"username\": \"janedoe@example.com\", \"uid\": \"42\", \"groups\": [\"developers\", \"qa\"], \"extra\": { \"extrafield1\": [ \"extravalue1\", \"extravalue2\" ] } }, \"audiences\": [\"https://myserver.example.com\"] } } ``` An unsuccessful request would return: ``` { \"apiVersion\": \"authentication.k8s.io/v1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, \"error\": \"Credentials are expired\" } } ``` ``` { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, \"error\": \"Credentials are expired\" } } ``` The API server can be configured to identify users from request header values, such as X-Remote-User. It is designed for use in combination with an authenticating proxy, which sets the request header value. For example, with this configuration: ``` --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- ``` this request: ``` GET / HTTP/1.1 X-Remote-User: fido X-Remote-Group: dogs X-Remote-Group: dachshunds X-Remote-Extra-Acme.com%2Fproject: some-project X-Remote-Extra-Scopes: openid X-Remote-Extra-Scopes: profile ``` would result in this user info: ``` name: fido groups: dogs dachshunds extra: acme.com/project: some-project scopes: openid profile ``` In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are checked. WARNING: do not reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage. When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of system:anonymous and a group of system:unauthenticated. For example, on a server with token authentication configured, and anonymous access enabled, a request providing an invalid bearer token would receive a 401 Unauthorized error. A request providing no bearer token would be treated as an anonymous request. In 1.5.1-1.5.x, anonymous access is disabled by default, and can be enabled by passing the --anonymous-auth=true option to the API server. In 1.6+, anonymous access is enabled by default if an authorization mode other than AlwaysAllow is used, and can be disabled by passing the --anonymous-auth=false option to the API server. Starting in 1.6, the ABAC and RBAC authorizers require explicit authorization of the system:anonymous user or the system:unauthenticated group, so legacy policy rules that grant access to the user or group do not include anonymous users. A user can act as another user through impersonation headers. These let requests manually override the user info a request authenticates as. 
For example, an admin could use this feature to debug an authorization policy by temporarily impersonating another user and seeing if a request was denied. Impersonation requests first authenticate as the requesting user, then switch to the impersonated user" }, { "data": "The following HTTP headers can be used to performing an impersonation request: An example of the impersonation headers used when impersonating a user with groups: ``` Impersonate-User: jane.doe@example.com Impersonate-Group: developers Impersonate-Group: admins ``` An example of the impersonation headers used when impersonating a user with a UID and extra fields: ``` Impersonate-User: jane.doe@example.com Impersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com Impersonate-Extra-acme.com%2Fproject: some-project Impersonate-Extra-scopes: view Impersonate-Extra-scopes: development Impersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b ``` When using kubectl set the --as flag to configure the Impersonate-User header, set the --as-group flag to configure the Impersonate-Group header. ``` kubectl drain mynode ``` ``` Error from server (Forbidden): User \"clark\" cannot get nodes at the cluster scope. (get nodes mynode) ``` Set the --as and --as-group flag: ``` kubectl drain mynode --as=superman --as-group=system:masters ``` ``` node/mynode cordoned node/mynode drained ``` To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the \"impersonate\" verb on the kind of attribute being impersonated (\"user\", \"group\", \"uid\", etc.). For clusters that enable the RBAC authorization plugin, the following ClusterRole encompasses the rules needed to set user and group impersonation headers: ``` apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: impersonator rules: apiGroups: [\"\"] resources: [\"users\", \"groups\", \"serviceaccounts\"] verbs: [\"impersonate\"] ``` For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to use impersonation headers for the extra field \"scopes\" and for UIDs, a user should be granted the following role: ``` apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: scopes-and-uid-impersonator rules: apiGroups: [\"authentication.k8s.io\"] resources: [\"userextras/scopes\", \"uids\"] verbs: [\"impersonate\"] ``` The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take. ``` apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: limited-impersonator rules: apiGroups: [\"\"] resources: [\"users\"] verbs: [\"impersonate\"] resourceNames: [\"jane.doe@example.com\"] apiGroups: [\"\"] resources: [\"groups\"] verbs: [\"impersonate\"] resourceNames: [\"developers\",\"admins\"] apiGroups: [\"authentication.k8s.io\"] resources: [\"userextras/scopes\"] verbs: [\"impersonate\"] resourceNames: [\"view\", \"development\"] apiGroups: [\"authentication.k8s.io\"] resources: [\"uids\"] verbs: [\"impersonate\"] resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"] ``` k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials. This feature is intended for client side integrations with authentication protocols not natively supported by k8s.io/client-go (LDAP, Kerberos, OAuth2, SAML, etc.). 
The plugin implements the protocol specific logic, then returns opaque credentials to use. Almost all credential plugin use cases require a server side component with support for the webhook token authenticator to interpret the credential format produced by the client plugin. In a hypothetical use case, an organization would run an external service that exchanges LDAP credentials for user specific, signed tokens. The service would also be capable of responding to webhook token authenticator requests to validate the tokens. Users would be required to install a credential plugin on their workstation. To authenticate against the API: Credential plugins are configured through kubectl config files as part of the user fields. ``` apiVersion: v1 kind: Config users: name: my-user user: exec: command: \"example-client-go-exec-plugin\" apiVersion: \"client.authentication.k8s.io/v1\" env: name: \"FOO\" value: \"bar\" args: \"arg1\" \"arg2\" installHint: | example-client-go-exec-plugin is required to authenticate to the current cluster. It can be installed: On macOS: brew install example-client-go-exec-plugin On Ubuntu: apt-get install example-client-go-exec-plugin On Fedora: dnf install example-client-go-exec-plugin ... provideClusterInfo: true interactiveMode: Never clusters: name: my-cluster cluster: server: \"https://172.17.4.100:6443\" certificate-authority: \"/etc/kubernetes/ca.pem\" extensions: name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config extension: arbitrary: config this: can be provided via the KUBERNETESEXECINFO environment variable upon setting provideClusterInfo you: [\"can\", \"put\", \"anything\", \"here\"] contexts: name: my-cluster context: cluster: my-cluster user: my-user current-context: my-cluster ``` ``` apiVersion: v1 kind: Config users: name: my-user user: exec: command: \"example-client-go-exec-plugin\" apiVersion:" }, { "data": "env: name: \"FOO\" value: \"bar\" args: \"arg1\" \"arg2\" installHint: | example-client-go-exec-plugin is required to authenticate to the current cluster. It can be installed: On macOS: brew install example-client-go-exec-plugin On Ubuntu: apt-get install example-client-go-exec-plugin On Fedora: dnf install example-client-go-exec-plugin ... provideClusterInfo: true interactiveMode: Never clusters: name: my-cluster cluster: server: \"https://172.17.4.100:6443\" certificate-authority: \"/etc/kubernetes/ca.pem\" extensions: name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config extension: arbitrary: config this: can be provided via the KUBERNETESEXECINFO environment variable upon setting provideClusterInfo you: [\"can\", \"put\", \"anything\", \"here\"] contexts: name: my-cluster context: cluster: my-cluster user: my-user current-context: my-cluster ``` Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the binary /home/jane/bin/example-client-go-exec-plugin is executed. ``` name: my-user user: exec: command: \"./bin/example-client-go-exec-plugin\" apiVersion: \"client.authentication.k8s.io/v1\" interactiveMode: Never ``` The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. The executed command is passed an ExecCredential object as input via the KUBERNETESEXECINFO environment variable. 
This input contains helpful information like the expected API version of the returned ExecCredential object and whether or not the plugin can use stdin to interact with the user. When run from an interactive session (i.e., a terminal), stdin can be exposed directly to the plugin. Plugins should use the spec.interactive field of the input ExecCredential object from the KUBERNETESEXECINFO environment variable in order to determine if stdin has been provided. A plugin's stdin requirements (i.e., whether stdin is optional, strictly required, or never used in order for the plugin to run successfully) is declared via the user.exec.interactiveMode field in the kubeconfig (see table below for valid values). The user.exec.interactiveMode field is optional in client.authentication.k8s.io/v1beta1 and required in client.authentication.k8s.io/v1. | interactiveMode Value | Meaning | |:|:--| | Never | This exec plugin never needs to use standard input, and therefore the exec plugin will be run regardless of whether standard input is available for user input. | | IfAvailable | This exec plugin would like to use standard input if it is available, but can still operate if standard input is not available. Therefore, the exec plugin will be run regardless of whether stdin is available for user input. If standard input is available for user input, then it will be provided to this exec plugin. | | Always | This exec plugin requires standard input in order to run, and therefore the exec plugin will only be run if standard input is available for user input. If standard input is not available for user input, then the exec plugin will not be run and an error will be returned by the exec plugin runner. | To use bearer token credentials, the plugin returns a token in the status of the ExecCredential ``` { \"apiVersion\": \"client.authentication.k8s.io/v1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } } ``` ``` { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } } ``` Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close existing connections with the server to force a new TLS handshake. If specified, clientKeyData and clientCertificateData must both must be present. clientCertificateData may contain additional intermediate certificates to send to the server. ``` { \"apiVersion\": \"client.authentication.k8s.io/v1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"--BEGIN CERTIFICATE--\\n...\\n--END CERTIFICATE--\", \"clientKeyData\": \"--BEGIN RSA PRIVATE" }, { "data": "RSA PRIVATE KEY--\" } } ``` ``` { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"--BEGIN CERTIFICATE--\\n...\\n--END CERTIFICATE--\", \"clientKeyData\": \"--BEGIN RSA PRIVATE KEY--\\n...\\n--END RSA PRIVATE KEY--\" } } ``` Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp. 
Presence or absence of an expiry has the following impact:

- If an expiry is included, the bearer token and TLS credentials are cached until the expiry time is reached, the server responds with a 401 HTTP status code, or the process exits.
- If an expiry is omitted, the bearer token and TLS credentials are cached until the server responds with a 401 HTTP status code or the process exits.

```
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token",
    "expirationTimestamp": "2018-03-05T17:30:20-08:00"
  }
}
```

To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. The plugin is then supplied this cluster-specific information in the KUBERNETES_EXEC_INFO environment variable. Information from this environment variable can be used to perform cluster-specific credential acquisition logic. The following ExecCredential manifest describes a sample of the cluster information:

```
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "spec": {
    "cluster": {
      "server": "https://172.17.4.100:6443",
      "certificate-authority-data": "LS0t...",
      "config": {
        "arbitrary": "config",
        "this": "can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo",
        "you": ["can", "put", "anything", "here"]
      }
    },
    "interactive": true
  }
}
```

If your cluster has the SelfSubjectReview API enabled, you can use it to find out how your Kubernetes cluster maps your authentication information to identify you as a client. This works whether you are authenticating as a user (typically representing a real person) or as a ServiceAccount.

SelfSubjectReview objects do not have any configurable fields. On receiving a request, the Kubernetes API server fills in the status with the user attributes and returns it to the user.

Request example (the body would be a SelfSubjectReview):

```
POST /apis/authentication.k8s.io/v1/selfsubjectreviews
```

```
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "SelfSubjectReview"
}
```

Response example:

```
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "SelfSubjectReview",
  "status": {
    "userInfo": {
      "name": "jane.doe",
      "uid": "b6c7cfd4-f166-11ec-8ea0-0242ac120002",
      "groups": [
        "viewers",
        "editors",
        "system:authenticated"
      ],
      "extra": {
        "provider_id": ["token.company.example"]
      }
    }
  }
}
```

For convenience, the kubectl auth whoami command is also available.
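The same request can be made programmatically. The hedged sketch below assumes a recent k8s.io/client-go release in which the typed AuthenticationV1().SelfSubjectReviews() client is available and a cluster that serves authentication.k8s.io/v1; the kubeconfig loading via the KUBECONFIG environment variable is a simplification for illustration.

```go
// Sketch: create a SelfSubjectReview and print how the cluster identifies us.
package main

import (
	"context"
	"fmt"
	"os"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// A real tool would use the standard kubeconfig loading rules;
	// reading KUBECONFIG directly keeps the sketch short.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The request body carries no configurable fields; the server fills status.
	review, err := clientset.AuthenticationV1().SelfSubjectReviews().Create(
		context.TODO(), &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("username: %s\ngroups: %v\nextra: %v\n",
		review.Status.UserInfo.Username,
		review.Status.UserInfo.Groups,
		review.Status.UserInfo.Extra)
}
```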
Executing this command produces output like the following (the exact user attributes shown will differ):

Simple output example:

```
ATTRIBUTE         VALUE
Username          jane.doe
Groups            [system:authenticated]
```

Complex example, including extra attributes:

```
ATTRIBUTE         VALUE
Username          jane.doe
UID               b79dbf30-0c6a-11ed-861d-0242ac120002
Groups            [students teachers system:authenticated]
Extra: skills     [reading learning]
Extra: subjects   [math sports]
```

By providing the output flag, it is also possible to print the JSON or YAML representation of the result:

```
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "SelfSubjectReview",
  "status": {
    "userInfo": {
      "username": "jane.doe",
      "uid": "b79dbf30-0c6a-11ed-861d-0242ac120002",
      "groups": [
        "students",
        "teachers",
        "system:authenticated"
      ],
      "extra": {
        "skills": [
          "reading",
          "learning"
        ],
        "subjects": [
          "math",
          "sports"
        ]
      }
    }
  }
}
```

```
apiVersion: authentication.k8s.io/v1
kind: SelfSubjectReview
status:
  userInfo:
    username: jane.doe
    uid: b79dbf30-0c6a-11ed-861d-0242ac120002
    groups:
    - students
    - teachers
    - system:authenticated
    extra:
      skills:
      - reading
      - learning
      subjects:
      - math
      - sports
```

This feature is particularly useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or an authenticating proxy.

By default, all authenticated users can create SelfSubjectReview objects when the APISelfSubjectReview feature is enabled; this is allowed by the system:basic-user cluster role. You can only make SelfSubjectReview requests if the API server serves the SelfSubjectReview API (on older releases this requires the APISelfSubjectReview feature gate and the corresponding authentication.k8s.io API version to be enabled) and if you are authorized to create SelfSubjectReview objects, which the system:basic-user cluster role grants by default.
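If you are unsure whether your credentials are permitted to make this request, one way to check is with the long-standing SelfSubjectAccessReview API. The hedged sketch below reuses the same client setup as the previous example and simply asks whether the current identity may create selfsubjectreviews; it is an illustration, not part of the SelfSubjectReview feature itself.

```go
// Sketch: ask the cluster whether we may create SelfSubjectReview objects.
package main

import (
	"context"
	"fmt"
	"os"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	check := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Group:    "authentication.k8s.io",
				Resource: "selfsubjectreviews",
				Verb:     "create",
			},
		},
	}
	result, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), check, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("can create SelfSubjectReview:", result.Status.Allowed)
}
```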