{ "category": "Runtime", "file_name": "upgrade.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "Flannel upgrade/downgrade procedure There are different ways of changing flannel version in the running cluster: Pros: Cleanest way of managing resources of the flannel deployment and no manual validation required as long as no additional resources was created by administrators/operators Cons: Massive networking outage within a cluster during the version change Delete all the flannel resources using kubectl ```bash kubectl -n kube-flannel delete daemonset kube-flannel-ds kubectl -n kube-flannel delete configmap kube-flannel-cfg kubectl -n kube-flannel delete serviceaccount flannel kubectl delete clusterrolebinding.rbac.authorization.k8s.io flannel kubectl delete clusterrole.rbac.authorization.k8s.io flannel kubectl delete namespace kube-flannel ``` Install the newer version of flannel and reboot the nodes Pros: Less disruptive way of changing flannel version, easier to do Cons: Some version may have changes which can't be just replaced and may need resources cleanup and/or rename, manual resources comparison required If the update is done from newer version as 0.20.2 it can be done using kubectl ```bash kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml ``` In case of error on the labeling follow the previous way. From version 0.21.4 flannel is deployed on an helm repository at `https://flannel-io.github.io/flannel/` it will be possible to manage the update directly with helm. ```bash helm upgrade flannel --set podCidr=\"10.244.0.0/16\" --namespace kube-flannel flannel/flannel ```" } ]
{ "category": "Runtime", "file_name": "VERSIONING.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "This document describes the versioning policy for this repository. This policy is designed so the following goals can be achieved. Users are provided a codebase of value that is stable and secure. Versioning of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be used. Versions will comply with [semver 2.0](https://semver.org/spec/v2.0.0.html) with the following exceptions. New methods may be added to exported API interfaces. All exported interfaces that fall within this exception will include the following paragraph in their public documentation. > Warning: methods may be added to this interface in minor releases. If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/otel/v2`, `require go.opentelemetry.io/otel/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/otel/v2/trace\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/otel/v2@v2.0.1`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. Modules will be used to encapsulate signals and components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API will be versioned with a major version greater than `v0`. The decision to make a module stable will be made on a case-by-case basis by the maintainers of this project. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. All stable modules that use the same major version number will use the same entire version number. Stable modules may be released with an incremented minor or patch version even though that module has not been changed, but rather so that it will remain at the same version as other stable modules that did undergo change. When an experimental module becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable modules as well as the newly stable module being released. Versioning of the associated [contrib repository](https://github.com/open-telemetry/opentelemetry-go-contrib) of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be" }, { "data": "Versions will comply with . 
If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/contrib/instrumentation/host/v2`, `require go.opentelemetry.io/contrib/instrumentation/host/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/contrib/instrumentation/host/v2\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/contrib/instrumentation/host/v2@v2.0.1`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. In addition to public APIs, telemetry produced by stable instrumentation will remain stable and backwards compatible. This is to avoid breaking alerts and dashboard. Modules will be used to encapsulate instrumentation, detectors, exporters, propagators, and any other independent sets of related components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API and telemetry will be versioned with a major version greater than `v0`. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. Stable contrib modules cannot depend on experimental modules from this project. All stable contrib modules of the same major version with this project will use the same entire version as this project. Stable modules may be released with an incremented minor or patch version even though that module's code has not been changed. Instead the only change that will have been included is to have updated that modules dependency on this project's stable APIs. When an experimental module in contrib becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable contrib modules, this project's modules, and the newly stable module being released. Contrib modules will be kept up to date with this project's releases. Due to the dependency contrib modules will implicitly have on this project's modules the release of stable contrib modules to match the released version number will be staggered after this project's release. There is no explicit time guarantee for how long after this projects release the contrib release will be. Effort should be made to keep them as close in time as" }, { "data": "No additional stable release in this project can be made until the contrib repository has a matching stable release. No release can be made in the contrib repository after this project's stable release except for a stable release of the contrib repository. GitHub releases will be made for all releases. Go modules will be made available at Go package mirrors. To better understand the implementation of the above policy the following example is provided. 
This project is simplified to include only the following modules and their versions: `otel`: `v0.14.0` `otel/trace`: `v0.14.0` `otel/metric`: `v0.14.0` `otel/baggage`: `v0.14.0` `otel/sdk/trace`: `v0.14.0` `otel/sdk/metric`: `v0.14.0` These modules have been developed to a point where the `otel/trace`, `otel/baggage`, and `otel/sdk/trace` modules have reached a point that they should be considered for a stable release. The `otel/metric` and `otel/sdk/metric` are still under active development and the `otel` module depends on both `otel/trace` and `otel/metric`. The `otel` package is refactored to remove its dependencies on `otel/metric` so it can be released as stable as well. With that done the following release candidates are made: `otel`: `v1.0.0-RC1` `otel/trace`: `v1.0.0-RC1` `otel/baggage`: `v1.0.0-RC1` `otel/sdk/trace`: `v1.0.0-RC1` The `otel/metric` and `otel/sdk/metric` modules remain at `v0.14.0`. A few minor issues are discovered in the `otel/trace` package. These issues are resolved with some minor, but backwards incompatible, changes and are released as a second release candidate: `otel`: `v1.0.0-RC2` `otel/trace`: `v1.0.0-RC2` `otel/baggage`: `v1.0.0-RC2` `otel/sdk/trace`: `v1.0.0-RC2` Notice that all module version numbers are incremented to adhere to our versioning policy. After these release candidates have been evaluated to satisfaction, they are released as version `v1.0.0`. `otel`: `v1.0.0` `otel/trace`: `v1.0.0` `otel/baggage`: `v1.0.0` `otel/sdk/trace`: `v1.0.0` Since both the `go` utility and the Go module system support [the semantic versioning definition of precedence](https://semver.org/spec/v2.0.0.html#spec-item-11), this release will correctly be interpreted as the successor to the previous release candidates. Active development of this project continues. The `otel/metric` module now has backwards incompatible changes to its API that need to be released and the `otel/baggage` module has a minor bug fix that needs to be released. The following release is made: `otel`: `v1.0.1` `otel/trace`: `v1.0.1` `otel/metric`: `v0.15.0` `otel/baggage`: `v1.0.1` `otel/sdk/trace`: `v1.0.1` `otel/sdk/metric`: `v0.15.0` Notice that, again, all stable module versions are incremented in unison and the `otel/sdk/metric` package, which depends on the `otel/metric` package, also bumped its version. This bump of the `otel/sdk/metric` package makes sense given their coupling, though it is not explicitly required by our versioning policy. As we progress, the `otel/metric` and `otel/sdk/metric` packages have reached a point where they should be evaluated for stability. The `otel` module is reintegrated with the `otel/metric` package and the following release is made: `otel`: `v1.1.0-RC1` `otel/trace`: `v1.1.0-RC1` `otel/metric`: `v1.1.0-RC1` `otel/baggage`: `v1.1.0-RC1` `otel/sdk/trace`: `v1.1.0-RC1` `otel/sdk/metric`: `v1.1.0-RC1` All the modules are evaluated and determined to a viable stable release. They are then released as version `v1.1.0` (the minor version is incremented to indicate the addition of new signal). `otel`: `v1.1.0` `otel/trace`: `v1.1.0` `otel/metric`: `v1.1.0` `otel/baggage`: `v1.1.0` `otel/sdk/trace`: `v1.1.0` `otel/sdk/metric`: `v1.1.0`" } ]
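To make the module-path rules concrete, here is how a consumer would fetch modules under this policy; this is purely illustrative and reuses the module names and versions quoted in the document rather than real published releases:

```shell
# v0/v1 modules: no major-version suffix in the module path
go get go.opentelemetry.io/otel@v1.1.0
# v2+ modules: the /vN suffix is part of the module path, while @vN.x.y selects the release
go get go.opentelemetry.io/otel/v2@v2.0.1
```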
{ "category": "Runtime", "file_name": "underlay_cni_service.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "English | At present, most Underlay-type CNIs (such as Macvlan, IPVlan, Sriov-CNI, etc.) In the community are generally connected to the underlying network, and often do not natively support accessing the Service of the cluster. This is mostly because underlay Pod access to the Service needs to be forwarded through the gateway of the switch. However, there is no route to the Service on the gateway, so the packets accessing the Service cannot be routed correctly, resulting in packet loss. Spiderpool provides the following two solutions to solve the problem of Underlay CNI accessing Service: Use `kube-proxy` to access Service Use `cgroup eBPF` to access Service Both of these ways solve the problem that Underlay CNI cannot access Service, but the implementation principle is somewhat different. Below we will introduce these two ways: Spiderpool has a built-in plugin called `coordinator`, which helps us seamlessly integrate with `kube-proxy` to achieve Underlay CNI access to Service. Depending on different scenarios, the `coordinator` can run in either `underlay` or `overlay` mode. Although the implementation methods are slightly different, the core principle is to hijack the traffic of Pods accessing Services onto the host network protocol stack and then forward it through the IPtables rules created by Kube-proxy. The following is a brief introduction to the data forwarding process flowchart: Under this mode, the coordinator plugin will create a pair of Veth devices, with one end placed in the host and the other end placed in the network namespace of the Pod. Then set some routing rules inside the Pod to forward access to ClusterIP from the veth device. The coordinator defaults to auto mode, which will automatically determine whether to run in underlay or overlay mode. You only need to inject an annotation into the Pod: `v1.multus-cni.io/default-network: kube-system/<MultusCRNAME>`. After creating a Pod in Underlay mode, we enter the Pod and check the routing information: ```shell root@controller:~# kubectl exec -it macvlan-underlay-5496bb9c9b-c7rnp sh kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. 5: veth0@if428513: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 4a:fe:19:22:65:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::48fe:19ff:fe22:6505/64 scope link validlft forever preferredlft forever default via 10.6.0.1 dev eth0 10.6.0.0/16 dev eth0 proto kernel scope link src 10.6.212.241 10.6.212.101 dev veth0 scope link 10.233.64.0/18 via 10.6.212.101 dev veth0 ``` 10.6.212.101 dev veth0 scope link: `10.6.212.101` is the node's IP, which ensure where the Pod access the same node via `veth0`. 10.233.64.0/18 via 10.6.212.101 dev veth0: 10.233.64.0/18 is cluster service subnet, which ensure the Pod access ClusterIP via `veth0`. This solution heavily relies on the `MASQUERADE` of kube-proxy, otherwise the reply packets will be directly forwarded to the source Pod, and if they pass through some security devices, the packets will be" }, { "data": "Therefore, in some special scenarios, we need to set `masqueradeAll` of kube-proxy to true. By default, the underlay subnet of a Pod is different from the clusterCIDR of the cluster, so there is no need to enable `masqueradeAll`, and access between them will be SNATed. If the underlay subnet of a Pod is the same as the clusterCIDR of the cluster, then we must set `masqueradeAll` to true. 
Configuring `coordinator` as Overlay mode can also solve the problem of Underlay CNI accessing Service. The traditional Overlay type (such as and etc.) CNI has perfectly solved the access to Service problem. We can use it to help Underlay Pods access Service. We can attach multiple network cards to the Pod, `eth0` for creating by Overlay CNI, `net1` for creating by Underlay CNI, and set up policy routing table items through `coordinator` to ensure that when a Pod accesses Service, it forwards from `eth0`, and replies are also forwarded to `eth0`. By default, the value of mode is auto(spidercoordinator CR spec.mode is auto), `coordinator` will automatically determine whether the current CNI call is not `eth0`. If it's not, confirm that there is no `veth0` network card in the Pod, then automatically determine it as overlay mode. In overlay mode, Spiderpool will automatically synchronize the cluster default CNI Pod subnets, which are used to set routes in multi-network card Pods to enable it to communicate normally with Pods created by the default CNI when it accesses Service from `eth0`. This configuration corresponds to `spidercoordinator.spec.podCIDRType`, the default is `auto`, optional values: [\"auto\",\"calico\",\"cilium\",\"cluster\",\"none\"] These routes are injected at the start of the Pod, and if the related CIDR changes, it cannot automatically take effect on already running Pods, this requires restarting the Pod to take effect. For more details, please refer to When creating a Pod in Overlay mode and entering the Pod network command space, view the routing information: ```shell root@controller:~# kubectl exec -it macvlan-overlay-97bf89fdd-kdgrb sh kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. 0: from all lookup local 32765: from 10.6.212.227 lookup 100 32766: from all lookup main 32767: from all lookup default default via 169.254.1.1 dev eth0 10.6.212.102 dev eth0 scope link 10.233.0.0/18 via 10.6.212.102 dev eth0 10.233.64.0/18 via 10.6.212.102 dev eth0 169.254.1.1 dev eth0 scope link default via 10.6.0.1 dev net1 10.6.0.0/16 dev net1 proto kernel scope link src 10.6.212.227 ``` 32762: from all to 10.233.64.0/18 lookup 100: Ensure that when Pods access ClusterIP, they go through table 100 and are forwarded out from `eth0`. In the default configuration: Except for the default route, all routes are retained in the Main table, but the default route for 'net1' is moved to table" }, { "data": "These policy routes ensure that Underlay Pods can also normally access Service in multi-network card scenarios. In Spiderpool, we hijack the traffic of Pods accessing Services through a `coordinator` that forwards it to the host and then through the iptables rules set up by the host's Kube-proxy. This can solve the problem but may extend the data access path and cause some performance loss. The open-source CNI project, Cilium, supports replacing the kube-proxy system component entirely with eBPF technology. It can help us resolve Service addresses. When pod accessing a Service, the Service address will be directly resolved by the eBPF program mounted by Cilium on the target Pod, so that the source Pod can directly initiate access to the target Pod without going through the host's network protocol stack. This greatly shortens the access path and achieves acceleration in accessing Service. With the power of Cilium, we can also implement acceleration in accessing Service under the Underlay CNI through it. 
After testing, compared with kube-proxy manner, cgroup eBPF solution has . The following steps demonstrate how to accelerate access to a Service on a cluster with 2 nodes based on Macvlan CNI + Cilium: NOTE: Please ensure that the kernel version of the cluster nodes is at least greater than 4.19 Prepare a cluster without the kube-proxy component installed in advance. If kube-proxy is already installed, you can refer to the following commands to remove the kube-proxy component: ```shell ~# kubectl delete ds -n kube-system kube-proxy ~# # run the command on every node ~# iptables-save | grep -v KUBE | iptables-restore ``` To install the Cilium component, make sure to enable the kube-proxy replacement feature: ```shell ~# helm repo add cilium https://helm.cilium.io ~# helm repo update ~# APISERVERIP=<yourapiserver_ip> ~# # Kubeadm default is 6443 ~# APISERVERPORT=<yourapiserver_port> ~# helm install cilium cilium/cilium --version 1.14.3 \\ --namespace kube-system \\ --set kubeProxyReplacement=true \\ --set k8sServiceHost=${APISERVERIP} \\ --set k8sServicePort=${APISERVERPORT} ``` The installation is complete, check the pod's state: ```shell cilium-2r6s5 1/1 Running 0 15m cilium-lr9lx 1/1 Running 0 15m cilium-operator-5ff9f86dfd-lrk6r 1/1 Running 0 15m cilium-operator-5ff9f86dfd-sb695 1/1 Running 0 15m ``` To install Spiderpool, see to install Spiderpool: ```shell ~# helm install spiderpool spiderpool/spiderpool -n kube-system \\ --set multus.multusCNI.defaultCniCRName=\"macvlan-conf\" \\ --set coordinator.podCIDRType=none ``` set coordinator.podCIDRType=none, the spiderpool will not get the cluster's ServiceCIDR. Service-related routes are also not injected when pods are created. Access to the Service in this way is entirely dependent on Cilium kube-proxy Replacement. 
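Before relying on the eBPF path, it can help to confirm that kube-proxy replacement is actually active; a quick check, assuming the Cilium agent DaemonSet is named `cilium` as in the pod listing above:

```shell
# the status output should report kube-proxy replacement as enabled
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i kubeproxyreplacement
```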
show the installation of Spiderpool: ```shell ~# kubectl get pod -n kube-system spiderpool-agent-9sllh 1/1 Running 0 1m spiderpool-agent-h92bv 1/1 Running 0 1m spiderpool-controller-7df784cdb7-bsfwv 1/1 Running 0 1m spiderpool-init 0/1 Completed 0 1m ``` Create a MacVLAN-related Multus configuration and create a companion IPPools resource: ```shell cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: v4-pool spec: gateway: 172.81.0.1 ips: 172.81.0.100-172.81.0.120 subnet: 172.81.0.0/16 apiVersion:" }, { "data": "kind: SpiderMultusConfig metadata: name: macvlan-ens192 namespace: kube-system spec: cniType: macvlan enableCoordinator: true macvlan: master: \"ens192\" ippools: ipv4: [\"v4-pool\"] EOF ``` needs to ensure that ens192 exists on the cluster nodes recommends setting enableCoordinator to true, which can resolve issues with pod health detection Create a set of cross-node DaemonSet apps for testing: ```shell ANNOTATION_MULTUS=\"v1.multus-cni.io/default-network: kube-system/macvlan-ens192\" NAME=ipvlan cat <<EOF | kubectl apply -f - apiVersion: apps/v1 kind: DaemonSet metadata: name: ${NAME} labels: app: $NAME spec: selector: matchLabels: app: $NAME template: metadata: name: $NAME labels: app: $NAME annotations: ${ANNOTATION_MULTUS} spec: containers: name: test-app image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP apiVersion: v1 kind: Service metadata: name: ${NAME} spec: ports: name: http port: 80 protocol: TCP targetPort: 80 selector: app: ${NAME} type: ClusterIP EOF ``` Verify the connectivity of the access service and see if the performance is improved: ```shell ~# kubectl exec -it ipvlan-test-55c97ccfd8-kd4vj sh kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. / # curl 10.233.42.25 -I HTTP/1.1 200 OK Server: nginx Date: Fri, 20 Oct 2023 07:52:13 GMT Content-Type: text/html Content-Length: 4055 Last-Modified: Thu, 02 Mar 2023 10:57:12 GMT Connection: keep-alive ETag: \"64008108-fd7\" Accept-Ranges: bytes ``` Open another terminal, enter the network space of the pod, and use the `tcpdump` tool to see that when the packet accessing the service is sent from the pod network namespace, the destination address has been resolved to the target pod address: ```shell ~# tcpdump -nnev -i eth0 tcp and port 80 tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 10.6.185.218.43550 > 10.6.185.210.80: Flags [S], cksum 0x87e7 (incorrect -> 0xe534), seq 1391704016, win 64240, options [mss 1460,sackOK,TS val 2667940841 ecr 0,nop,wscale 7], length 0 10.6.185.210.80 > 10.6.185.218.43550: Flags [S.], cksum 0x9d1a (correct), seq 2119742376, ack 1391704017, win 65160, options [mss 1460,sackOK,TS val 3827707465 ecr 2667940841,nop,wscale 7], length 0 ``` `10.6.185.218` is the IP of the source pod and `10.6.185.210` is the IP of the destination pod. Before and after using the `sockperf` tool to test the Cilium acceleration, the data comparison of pods accessing ClusterIP across nodes is obtained: | | latency(usec) | RPS | |||| | with kube-proxy | 36.763 | 72254.34 | | without kube-proxy | 27.743 | 107066.38 | According to the results, after Cilium kube-proxy replacement, access to the service is accelerated by about 30%. For more test data, please refer to There are two solutions to the Underlay CNI Access Service. 
The kube-proxy approach is the more common choice and works reliably in most environments. cgroup eBPF is an alternative way for an Underlay CNI to reach Services and it accelerates Service access; although it comes with certain restrictions and a higher barrier to entry, it can meet users' needs in specific scenarios." } ]
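The latency/RPS comparison above was gathered with `sockperf`; a rough sketch of such a measurement is shown below. The port, duration and exact flags are assumptions, and `<cluster-ip>` stands for the Service address of the test workload:

```shell
# inside the server pod behind the Service
sockperf server --tcp -p 8080
# inside a client pod on another node, measuring latency against the ClusterIP
sockperf ping-pong --tcp -i <cluster-ip> -p 8080 -t 30
```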
{ "category": "Runtime", "file_name": "ROADMAP.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "This document lists the new features being considered for the future. The intention is for Antrea contributors and users to know what features could come in the near future, and to share feedback and ideas. Priorities for the project may change over time and so this roadmap is likely to evolve. A feature that is not listed now does not mean it will not be considered for Antrea. We definitely welcome suggestions and ideas from everyone about the roadmap and Antrea features. Reach us through Issues, Slack and / or Google Group! Antrea is coming in We are graduating some popular features to Beta or GA, deprecating some legacy APIs, dropping support for old K8s versions (< 1.19) to improve support for newer ones, and more! This is a big milestone for the project, stay tuned! We have a few things planned to improve basic usability: provide separate container images for the Agent and Controller: this will reduce image size and speed up deployment of new Antrea versions. support for installation and upgrade using the antctl CLI: this will provide an alternative installation method and antctl will ensure that Antrea components are upgraded in the right order to minimize workload disruption. CLI tools to facilitate migration from another CNI: we will take care of provisioning the correct network resources for your existing workloads. We are working on adding BGP support to the Antrea Agent, as it has been a much requested feature. Take a look at if this is something you are interested in. Antrea . However, a few features including: Egress, NodePortLocal, IPsec encryption are not supported for Windows yet. We will continue to add more features for Windows (starting with Egress) and aim for feature parity with Linux. We encourage users to reach out if they would like us to prioritize a specific feature. While the installation procedure has improved significantly since we first added Windows support, we plan to keep on streamlining the procedure (more automation) and on improving the user documentation. Antrea provides a comprehensive network policy model, which builds upon K8s Network Policies and provides many additional capabilities. One of them is the ability to define policy rules using domain names" }, { "data": "We think there is some room to improve user experience with this feature, and we are working on making it more stable. is working on to extend the base K8s NetworkPolicy resource. We are closely monitoring the upstream work and implementing these APIs as their development matures. Antrea comes with many tools for network diagnostics and observability. You may already be familiar with Traceflow, which lets you trace a single packet through the Antrea network. We plan on also providing users with the ability to capture live traffic and export it in PCAP format. Think tcpdump, but for K8s and through a dedicated Antrea API! We recently added the SecondaryNetwork feature, which supports provisioning additional networks for Pods, using the same constructs made popular by . However, at the moment, options for network \"types\" are limited. We plan on supporting new use cases (e.g., secondary network overlays, network acceleration with DPDK), as well as on improving user experience for this feature (with some useful documentation). Support for L7 NetworkPolicies was added in version 1.10, providing the ability to select traffic based on the application-layer context. 
However, the feature currently only supports HTTP and TLS traffic, and we plan to extend support to other protocols, such as DNS. Antrea can federate multiple K8s clusters, but this feature (introduced in version 1.7) is still considered Alpha today. Most of the functionality is already there (multi-cluster Services, cross-cluster connectivity, and multi-cluster NetworkPolicies), but we think there is some room for improvement when it comes to stability and usability. We are working on a framework to empower contributors and users to benchmark the performance of Antrea at scale. As service meshes start introducing alternatives to the sidecar approach, we believe there is an opportunity to improve the synergy between the K8s network plugin and the service mesh provider. In particular, we are looking at how Antrea can integrate with the new Istio ambient data plane mode. Take a look at for more information. While today the Antrea Controller can scale to 1000s of K8s Nodes and 100,000 Pods, and failover to a new replica in case of failure can happen in under a minute, we believe we should still investigate the possibility of deploying multiple replicas for the Controller (Active-Active or Active-Standby), to enable horizontal scaling and achieve high-availability with very quick failover. Horizontal scaling could help reduce the memory footprint of each Controller instance for very large K8s clusters." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_egress_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List egress policy entries List egress policy entries. ``` cilium-dbg bpf egress list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the egress routing rules" } ]
{ "category": "Runtime", "file_name": "GettingStarted.md", "project_name": "CNI-Genie", "subcategory": "Cloud Native Network" }
[ { "data": "Linux box with We tested on Ubuntu 14.04 & 16.04 Docker installed Kubernetes cluster running with CNI enabled One easy way to bring up a cluster is to use : We tested on Kubernetes 1.5, 1.6, 1.7, 1.8 Till 1.7 version: ``` $ kubeadm init --use-kubernetes-version=v1.7.0 --pod-network-cidr=10.244.0.0/16 ``` Version 1.8 onwards: ``` $ kubeadm init --pod-network-cidr=10.244.0.0/16 ``` Next steps: ``` $ mkdir -p $HOME/.kube $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config $ sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` To schedule pods on the master, e.g. for a single-machine Kubernetes cluster, Till 1.7 version, run: ``` $ kubectl taint nodes --all dedicated- ``` Version 1.8 onwards, run: ``` $ kubectl taint nodes --all node-role.kubernetes.io/master- ``` One (or more) CNI plugin(s) installed, e.g., Calico, Weave, Flannel Use this to install Calico Use this to install Weave Use this to install Flannel We install genie as a Docker Container on every node Till Kubernetes 1.7 version: ``` $ kubectl apply -f https://raw.githubusercontent.com/cni-genie/CNI-Genie/master/conf/1.5/genie.yaml ``` Kubernetes 1.8 version onwards: ``` $ kubectl apply -f https://raw.githubusercontent.com/cni-genie/CNI-Genie/master/releases/v2.0/genie.yaml ``` Note that you should install genie first before making changes to the source. This ensures genie conf file is generated successfully. After making changes to source, build genie binary by running: ``` $ make all ``` Place \"genie\" binary from dest/ into /opt/cni/bin/ directory. ``` $ cp dist/genie /opt/cni/bin/genie ``` To run ginkgo tests for CNI-Genie run the following command: If Kubernetes cluster is 1.7+ ``` $ make test testKubeVersion=1.7 testKubeConfig=/root/admin.conf ``` If Kubernetes cluster is 1.5.x ``` $ make test testKubeVersion=1.5 ``` For now Genie logs are stored in /var/log/syslog To see the logs: ``` $ cat /dev/null > /var/log/syslog $ tail -f /var/log/syslog | grep 'CNI' ``` Note: one a single node cluster, after your Kubernetes master is initialized successfully, make sure you are able to schedule pods on the master by running: ``` $ kubectl taint nodes --all node-role.kubernetes.io/master- ``` Note: most plugins use differenet installation files for Kuberenetes 1.5, 1.6, 1.7 & 1.8. Make sure you use the right one!" } ]
{ "category": "Runtime", "file_name": "tencentcloud-vpc-backend.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "There are only two differences between the usage method and Alibaba Cloud: Tencent Cloud needs to create a routing table, while Alibaba Cloud creates a switch In network/config, backend-type is \"tencent-vpc\"" } ]
{ "category": "Runtime", "file_name": "topology.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "Kilo allows the topology of the encrypted network to be customized. A cluster administrator can specify whether the encrypted network should be a full mesh between every node, or if the mesh should be between distinct pools of nodes that communicate directly with one another. This allows the encrypted network to serve several purposes, for example: on cloud providers with unsecured private networks, a full mesh can be created between the nodes to secure all cluster traffic; nodes running in different cloud providers can be joined into a single cluster by creating one link between the two clouds; more generally, links that are insecure can be encrypted while links that are secure can remain fast and unencapsulated. By default, Kilo creates a mesh between the different logical locations in the cluster, e.g. data-centers, cloud providers, etc. Kilo will try to infer the location of the node using the node label. Additionally, Kilo supports using a custom topology label by setting the command line flag `--topology-label=<label>`. If this label is not set, then the node annotation can be used. For example, in order to join nodes in Google Cloud and AWS into a single cluster, an administrator could use the following snippet to annotate all nodes with `GCP` in the name: ```shell for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location=\"gcp\"; done ``` In this case, Kilo would do the following: group all the nodes with the `GCP` annocation into a logical location; group all the nodes without an annotation would be grouped into default location; and elect a leader in each location and create a link between them. Analyzing the cluster with `kgctl` would produce a result like: ```shell kgctl graph | circo -Tsvg > cluster.svg ``` <img" }, { "data": "/> Creating a full mesh is a logical reduction of the logical mesh where each node is in its own group. Kilo provides a shortcut for this topology in the form of a command line flag: `--mesh-granularity=full`. When the `full` mesh granularity is specified, Kilo configures the network so that all inter-node traffic is encrypted with WireGuard. Analyzing the cluster with `kgctl` would produce a result like: ```shell kgctl graph | circo -Tsvg > cluster.svg ``` <img src=\"./graphs/full-mesh.svg\" /> The `kilo.squat.ai/location` annotation can be used to create cluster mixing some fully meshed nodes and some nodes grouped by logical location. For example, if a cluster contained a set of nodes in Google cloud and a set of nodes with no secure private network, e.g. some bare metal nodes, then the nodes in Google Cloud could be placed in one logical group while the bare metal nodes could form a full mesh. 
This could be accomplished by running: ```shell for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location=\"gcp\"; done for node in $(kubectl get nodes | tail -n +2 | grep -v gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location=\"$node\"; done ``` Analyzing the cluster with `kgctl` would produce a result like: ```shell kgctl graph | circo -Tsvg > cluster.svg ``` <img src=\"./graphs/mixed.svg\" /> If the cluster also had nodes in AWS, then the following snippet could be used: ```shell for node in $(kubectl get nodes | grep -i aws | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location=\"aws\"; done for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location=\"gcp\"; done for node in $(kubectl get nodes | tail -n +2 | grep -v aws | grep -v gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location=\"$node\"; done ``` This would in turn produce a graph like: ```shell kgctl graph | circo -Tsvg > cluster.svg ``` <img src=\"./graphs/complex.svg\" />" } ]
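After annotating, it can be useful to verify which location each node ended up in; a small helper sketch reading the `kilo.squat.ai/location` annotation used in the snippets above (the backslash-escaped dots are jsonpath syntax for a key that contains dots):

```shell
# print each node together with its Kilo location annotation (empty if unset)
for node in $(kubectl get nodes -o name); do
  echo -n "$node: "
  kubectl get "$node" -o jsonpath='{.metadata.annotations.kilo\.squat\.ai/location}'
  echo
done
```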
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "K8up", "subcategory": "Cloud Native Storage" }
[ { "data": "K8up observes the . The code of conduct is overseen by the K8up project maintainers. Possible code of conduct violations should be emailed to the project maintainers cncf-k8up-maintainers@lists.cncf.io." } ]
{ "category": "Runtime", "file_name": "host-cluster.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Host Storage Cluster A host storage cluster is one where Rook configures Ceph to store data directly on the host. The Ceph mons will store the metadata on the host (at a path defined by the `dataDirHostPath`), and the OSDs will consume raw devices or partitions. The Ceph persistent data is stored directly on a host path (Ceph Mons) and on raw devices (Ceph OSDs). To get you started, here are several example of the Cluster CR to configure the host. For the simplest possible configuration, this example shows that all devices or partitions should be consumed by Ceph. The mons will store the metadata on the host node under `/var/lib/rook`. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: false storage: useAllNodes: true useAllDevices: true ``` More commonly, you will want to be more specific about which nodes and devices where Rook should configure the storage. The placement settings are very flexible to add node affinity, anti-affinity, or tolerations. For more options, see the . In this example, Rook will only configure Ceph daemons to run on nodes that are labeled with `role=rook-node`, and more specifically the OSDs will only be created on nodes labeled with `role=rook-osd-node`. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: false dashboard: enabled: true storage: useAllNodes: true useAllDevices: true deviceFilter: sdb placement: all: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: role operator: In values: rook-node osd: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: role operator: In values: rook-osd-node ``` If you need fine-grained control for every node and every device that is being configured, individual nodes and their config can be specified. In this example, we see that specific node names and devices can be specified. !!! hint Each node's 'name' field should match their 'kubernetes.io/hostname' label. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: false dashboard: enabled: true storage: useAllNodes: false useAllDevices: false deviceFilter: config: metadataDevice: databaseSizeMB: \"1024\" # this value can be removed for environments with normal sized disks (100 GB or larger) nodes: name: \"172.17.4.201\" devices: # specific devices to use for storage can be specified for each node name: \"sdb\" # Whole storage device name: \"sdc1\" # One specific partition. Should not have a file system on it. name: \"/dev/disk/by-id/ata-ST4000DM004-XXXX\" # both device name and explicit udev links are supported config: # configuration can be specified at the node level which overrides the cluster level config name: \"172.17.4.301\" deviceFilter: \"^sd.\" ```" } ]
{ "category": "Runtime", "file_name": "overview.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Using the command-line interface tool (CLI) can achieve convenient and fast cluster management. With this tool, you can view the status of the cluster and each node, and manage each node, volume, and user. ::: tip Note With the continuous improvement of the CLI, 100% coverage of the interface functions of each node in the cluster will eventually be achieved. ::: After downloading the CubeFS source code, run the `build.sh` file in the `cubefs/cli` directory to generate the `cfs-cli` executable. At the same time, a configuration file named `.cfs-cli.json` will be generated in the `root` directory. Modify the master address to the master address of the current cluster. You can also use the `./cfs-cli config info` and `./cfs-cli config set` commands to view and set the configuration file. In the `cubefs/cli` directory, run the command `./cfs-cli --help` or `./cfs-cli -h` to get the CLI help document. The CLI is mainly divided into some types of management commands: | Command | Description | |--|| | cfs-cli cluster | Cluster management | | cfs-cli metanode | MetaNode management | | cfs-cli datanode | DataNode management | | cfs-cli datapartition | Data Partition management | | cfs-cli metapartition | Meta Partition management | | cfs-cli config | Configuration management | | cfs-cli volume, vol | Volume management | | cfs-cli user | User management | | cfs-cli nodeset | Nodeset management | | cfs-cli quota | Quota management |" } ]
{ "category": "Runtime", "file_name": "CHANGES.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "restored behavior as <= v3.9.0 with option to change path strategy using TrimRightSlashEnabled. introduced MergePathStrategy to be able to revert behaviour of path concatenation to 3.9.0 see comment in Readme how to customize this behaviour. fix broken 3.10.0 by using path package for joining paths changed tokenizer to match std route match behavior; do not trimright the path (#511) Add MIME_ZIP (#512) Add MIMEZIP and HEADERContentDisposition (#513) Changed how to get query parameter issue #510 add support for http.Handler implementations to work as FilterFunction, issue #504 (thanks to https://github.com/ggicci) use exact matching of allowed domain entries, issue #489 (#493) this changes fixes [security] Authorization Bypass Through User-Controlled Key by changing the behaviour of the AllowedDomains setting in the CORS filter. To support the previous behaviour, the CORS filter type now has a AllowedDomainFunc callback mechanism which is called when a simple domain match fails. add test and fix for POST without body and Content-type, issue #492 (#496) [Minor] Bad practice to have a mix of Receiver types. (#491) restored FilterChain (#482 by SVilgelm) fix problem with contentEncodingEnabled setting (#479) feat(parameter): adds additional openapi mappings (#478) add support for vendor extensions (#477 thx erraggy) fix removing absent route from webservice (#472) fix handling no match access selected path remove obsolete field add check for wildcard (#463) in CORS add access to Route from Request, issue #459 (#462) Added OPTIONS to WebService Fixed duplicate compression in dispatch. #449 Added check on writer to prevent compression of response twice. #447 Enable content encoding on Handle and ServeHTTP (#446) List available representations in 406 body (#437) Convert to string using rune() (#443) 405 Method Not Allowed must have Allow header (#436) (thx Bracken <abdawson@gmail.com>) add field allowedMethodsWithoutContentType (#424) support describing response headers (#426) fix openapi examples (#425) v3.0.0 fix: use request/response resulting from filter chain add Go module Module consumer should use github.com/emicklei/go-restful/v3 as import path v2.10.0 support for Custom Verbs (thanks Vinci Xu <277040271@qq.com>) fixed static example (thanks Arthur <yang_yapo@126.com>) simplify code (thanks Christian Muehlhaeuser <muesli@gmail.com>) added JWT HMAC with SHA-512 authentication code example (thanks Amim Knabben <amim.knabben@gmail.com>) v2.9.6 small optimization in filter code v2.11.1 fix WriteError return value (#415) v2.11.0 allow prefix and suffix in path variable expression (#414) v2.9.6 support google custome verb (#413) v2.9.5 fix panic in Response.WriteError if err == nil v2.9.4 fix issue #400 , parsing mime type quality Route Builder added option for contentEncodingEnabled (#398) v2.9.3 Avoid return of 415 Unsupported Media Type when request body is empty (#396) v2.9.2 Reduce allocations in per-request methods to improve performance (#395) v2.9.1 Fix issue with default responses and invalid status code 0. 
(#393)" }, { "data": "add per Route content encoding setting (overrides container setting) v2.8.0 add Request.QueryParameters() add json-iterator (via build tag) disable vgo module (until log is moved) v2.7.1 add vgo module v2.6.1 add JSONNewDecoderFunc to allow custom JSON Decoder usage (go 1.10+) v2.6.0 Make JSR 311 routing and path param processing consistent Adding description to RouteBuilder.Reads() Update example for Swagger12 and OpenAPI 2017-09-13 added route condition functions using `.If(func)` in route building. 2017-02-16 solved issue #304, make operation names unique 2017-01-30 [IMPORTANT] For swagger users, change your import statement to: swagger \"github.com/emicklei/go-restful-swagger12\" moved swagger 1.2 code to go-restful-swagger12 created TAG 2.0.0 2017-01-27 remove defer request body close expose Dispatch for testing filters and Routefunctions swagger response model cannot be array created TAG 1.0.0 2016-12-22 (API change) Remove code related to caching request content. Removes SetCacheReadEntity(doCache bool) 2016-11-26 Default change! now use CurlyRouter (was RouterJSR311) Default change! no more caching of request content Default change! do not recover from panics 2016-09-22 fix the DefaultRequestContentType feature 2016-02-14 take the qualify factor of the Accept header mediatype into account when deciding the contentype of the response add constructors for custom entity accessors for xml and json 2015-09-27 rename new WriteStatusAnd... to WriteHeaderAnd... for consistency 2015-09-25 fixed problem with changing Header after WriteHeader (issue 235) 2015-09-14 changed behavior of WriteHeader (immediate write) and WriteEntity (no status write) added support for custom EntityReaderWriters. 2015-08-06 add support for reading entities from compressed request content use sync.Pool for compressors of http response and request body add Description to Parameter for documentation in Swagger UI 2015-03-20 add configurable logging 2015-03-18 if not specified, the Operation is derived from the Route function 2015-03-17 expose Parameter creation functions make trace logger an interface fix OPTIONSFilter customize rendering of ServiceError JSR311 router now handles wildcards add Notes to Route 2014-11-27 (api add) PrettyPrint per response. (as proposed in #167) 2014-11-12 (api add) ApiVersion(.) for documentation in Swagger UI 2014-11-10 (api change) struct fields tagged with \"description\" show up in Swagger UI 2014-10-31 (api change) ReturnsError -> Returns (api add) RouteBuilder.Do(aBuilder) for DRY use of RouteBuilder fix swagger nested structs sort Swagger response messages by code 2014-10-23 (api add) ReturnsError allows you to document Http codes in swagger fixed problem with greedy CurlyRouter (api add) Access-Control-Max-Age in CORS add tracing functionality (injectable) for debugging purposes support JSON parse 64bit int fix empty parameters for swagger WebServicesUrl is now optional for swagger fixed duplicate AccessControlAllowOrigin in CORS (api change) expose ServeMux in container (api add) added AllowedDomains in CORS (api add) ParameterNamed for detailed documentation 2014-04-16 (api add) expose constructor of Request for" }, { "data": "2014-06-27 (api add) ParameterNamed gives access to a Parameter definition and its data (for further specification). (api add) SetCacheReadEntity allow scontrol over whether or not the request body is being cached (default true for compatibility reasons). 
2014-07-03 (api add) CORS can be configured with a list of allowed domains 2014-03-12 (api add) Route path parameters can use wildcard or regular expressions. (requires CurlyRouter) 2014-02-26 (api add) Request now provides information about the matched Route, see method SelectedRoutePath 2014-02-17 (api change) renamed parameter constants (go-lint checks) 2014-01-10 (api add) support for CloseNotify, see http://golang.org/pkg/net/http/#CloseNotifier 2014-01-07 (api change) Write* methods in Response now return the error or nil. added example of serving HTML from a Go template. fixed comparing Allowed headers in CORS (is now case-insensitive) 2013-11-13 (api add) Response knows how many bytes are written to the response body. 2013-10-29 (api add) RecoverHandler(handler RecoverHandleFunction) to change how panic recovery is handled. Default behavior is to log and return a stacktrace. This may be a security issue as it exposes sourcecode information. 2013-10-04 (api add) Response knows what HTTP status has been written (api add) Request can have attributes (map of string->interface, also called request-scoped variables 2013-09-12 (api change) Router interface simplified Implemented CurlyRouter, a Router that does not use|allow regular expressions in paths 2013-08-05 add OPTIONS support add CORS support 2013-08-27 fixed some reported issues (see github) (api change) deprecated use of WriteError; use WriteErrorString instead 2014-04-15 (fix) v1.0.1 tag: fix Issue 111: WriteErrorString 2013-08-08 (api add) Added implementation Container: a WebServices collection with its own http.ServeMux allowing multiple endpoints per program. Existing uses of go-restful will register their services to the DefaultContainer. (api add) the swagger package has be extended to have a UI per container. if panic is detected then a small stack trace is printed (thanks to runner-mei) (api add) WriteErrorString to Response Important API changes: (api remove) package variable DoNotRecover no longer works ; use restful.DefaultContainer.DoNotRecover(true) instead. (api remove) package variable EnableContentEncoding no longer works ; use restful.DefaultContainer.EnableContentEncoding(true) instead. 2013-07-06 (api add) Added support for response encoding (gzip and deflate(zlib)). This feature is disabled on default (for backwards compatibility). Use restful.EnableContentEncoding = true in your initialization to enable this feature. 2013-06-19 (improve) DoNotRecover option, moved request body closer, improved ReadEntity 2013-06-03 (api change) removed Dispatcher interface, hide PathExpression changed receiver names of type functions to be more idiomatic Go 2013-06-02 (optimize) Cache the RegExp compilation of Paths. 2013-05-22 (api add) Added support for request/response filter functions 2013-05-18 (api add) Added feature to change the default Http Request Dispatch function (travis cline) (api change) Moved Swagger Webservice to swagger package (see example restful-user) [2012-11-14 .. 2013-05-18> See https://github.com/emicklei/go-restful/commits 2012-11-14 Initial commit" } ]
{ "category": "Runtime", "file_name": "BUILD.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "This document provides instructions on how to build and run Kanister locally. Kanister is a data management framework written in Go. It allows users to express data protection workflows using blueprints and actionsets. These resources are defined as Kubernetes , following the operator pattern. ](https://docs.kanister.io/architecture.html) `build` - A collection of shell scripts used by the Makefile targets to build, test and package Kanister `cmd` - Go `main` packages containing the source of the `controller`, `kanctl` and `kando` executables `docker` - A collection of Dockerfiles for build and demos `docs` - Source of the documentation at docs.kanister.io `examples` - A collection of example blueprints to show how Kanister works with different data services `graphic` - Image files used in documentation `helm` - Helm chart for the Kanister operator `pkg` - Go library packages used by Kanister The provides a set of targets to help simplify the build tasks. To ensure cross-platform consistency, many of these targets use Docker to spawn build containers based on the `ghcr.io/kanisterio/build` public image. For `make test` to succeed, a valid `kubeconfig` file must be found at `$HOME/.kube/config`. See the Docker command that runs `make test` . Use the `check` target to ensure your development environment has the necessary development tools: ```sh make check ``` The following targets can be used to lint, test and build the Kanister controller: ```sh make golint make test make build-controller ``` To build kanister tools (kanctl and kando), use the following conmmand: ```sh make build GOBORING=true BIN=<kanctl|kando> ARCH=<arm64|amd64> ``` This will build a selected binary `BIN` for a selected architecture `ARCH`. To build the controller OCI image: ```sh make release-controller \\ IMAGE=<yourregistry>/<yourcontroller_image> \\ VERSION=<yourimagetag> ``` Update the `IMAGE` variable to reference the image registry you'd like to push your image to. You must have write permissions on the registry. If `IMAGE` is not specified, the Makefile will use the default of `kanisterio/controller`. The `VERSION` variable is useful for versioning your image with a custom tag. If `VERSION` is not specified, the Makefile will auto-generate one for your image. For example, the following command will build and push your image to the registry at `ghcr.io/myregistry/kanister`, with the tag `20221003`: ```sh make release-controller \\ IMAGE=ghcr.io/myregistry/kanister \\ VERSION=20221003 ``` You can test your Kanister controller locally by using Helm to deploy the local Helm chart: ```sh helm install kanister ./helm/kanister-operator \\ --create-namespace \\ --namespace kanister \\ --set image.repository=<yourregistry>/<yourcontroller_image> \\ --set image.tag=<yourimagetag> ``` Subsequent changes to your Kanister controller can be applied using the `helm upgrade` command: ```sh helm upgrade kanister ./helm/kanister-operator \\ --namespace kanister \\ --set image.repository=<yourregistry>/<yourcontroller_image> \\ --set image.tag=<yournewimage_tag> ``` Most of the Makefile targets can work in a non-Docker development setup, by setting the `DOCKER_BUILD` variable to `false`. Kanister is using `check` library to extend go testing capabilities: https://github.com/kastenhq/check It's recommended to write new tests using this library for consistency. 
`make test` runs all tests in the" }, { "data": "It's possible to run a specific test with `TEST_FILTER` environment variable: ``` make tests TEST_FILTER=OutputTestSuite ``` This variable will be passed to `-check.f` flag and supports regex filters. To run tests for specific package you can run `go test` in that package directory. It's recommended to do that in build image shell, you can run it with `make shell`. The `check` library handles arguments differently from standard `go test` to run specific test, you can use `-check.f <test regex>` to filter test (or suite) names to increase verbosity, you can use `-check.v` or `-check.vv` to controll how many suites from the package run in parallel, you can use `-check.suitep <number>` See https://github.com/kastenhq/check and https://github.com/kastenhq/check/blob/v1/run.go#L30 for more information The source of the documentation is found in the `docs` folder. They are written in the format. To rebuild the documentation: ```sh make docs ``` The `docs` target uses the `ghcr.io/kanisterio/docker-sphinx` public image to generate the HTML documents and store them in your local `/docs/_build/html` folder. We have started experimenting, and will soon fully transition, to using to generate Kanister documentation. This requires the documentation files to be written in , along with some . This new documentation system offers a live-dev server that will dynamically render Markdown documentation files as you are making changes to them on your local machine/branch. To start this development server, place yourself in the `docs_new` folder, then run the following commands: ```sh pnpm install pnpm run docs:dev ``` To render/build the docs locally (it will generate static assets, like HTML pages, Javascript/CSS files, etc.), use this command: ```sh pnpm run docs:build ``` To start a local webserver that you can use to preview the documentation that has been rendered by the command above, use this command: ```sh pnpm run docs:preview ``` If you have new blueprints that you think will benefit the community, feel free to add them to the `examples` folder via pull requests. Use the existing folder layout as a guide. Be sure to include a comprehensive README.md to demonstrate end-to-end usage. Kanister can be extended with custom Kanister Functions. All the functions are written in Go. They are located in the `pkg/function` folder. Take a look at this to see how to write a new Kanister Function. Don't forget to update the documentation at `docs/functions.rst` with configuration information and examples to show off your new function. The Kanister build image is used to build and push a new Kanister build image (`ghcr.io/kanisterio/build`). It is an on-demand workflow and needs to be run manually. This workflow expects `image tag` value as an input. The author updating the build image tag must raise a separate PR to update this value for it to be used in the build process. It should be set it . The MongoDB Atlas image is used to build and push a new Atlas tools image (`ghcr.io/kanisterio/mongodb-atlas`). It is an on-demand workflow and needs to be run manually when there are changes in . This workflow expects `image tag` value as an input." } ]
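Tying the filtering options above together, one way to iterate on a single suite from inside the build container; the package path is illustrative and `OutputTestSuite` simply reuses the example above:

```shell
# 1) drop into the build image shell
make shell
# 2) inside that shell, run one suite from one package with extra verbosity
go test ./pkg/output/... -check.f OutputTestSuite -check.vv
```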
{ "category": "Runtime", "file_name": "reliability.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | |||-|-|--|-| | R00001 | Successfully run a pod when the Spiderpool controller is restarting | p2 | | done | | | R00002 | Successfully run a pod when the ETCD is restarting | p3 | | done | | | R00003 | Successfully run a pod when the API-server is restarting | p2 | | done | | | R00004 | Successfully run a pod when the Spiderpool agent is restarting | p2 | | done | | | R00005 | Successfully run a pod when the coreDns is restarting | p3 | | done | | | R00006 | Successfully recover a pod whose original node is power-off | p2 | | done | | | R00007 | Spiderpool Controller active/standby switching is normal | p2 | | done | |" } ]
{ "category": "Runtime", "file_name": "split-brain.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "This document contains steps to recover from a file split-brain. It can be obtained either by The command `gluster volume heal info split-brain`. Identify the files for which file operations performed from the client keep failing with Input/Output error. In case of VMs, they need to be powered-off. This is done by observing the afr changelog extended attributes of the file on the bricks using the getfattr command; then identifying the type of split-brain (data split-brain, metadata split-brain, entry split-brain or split-brain due to gfid-mismatch); and finally determining which of the bricks contains the 'good copy' of the file. ``` getfattr -d -m . -e hex <file-path-on-brick> ``` It is also possible that one brick might contain the correct data while the other might contain the correct metadata. ``` setfattr -n <attribute-name> -v <attribute-value> <file-path-on-brick> ``` ``` ls -l <file-path-on-gluster-mount> ``` Detailed Instructions for steps 3 through 5: =========================================== To understand how to resolve split-brain we need to know how to interpret the afr changelog extended attributes. Execute `getfattr -d -m . -e hex <file-path-on-brick>` Example: ``` [root@store3 ~]# getfattr -d -e hex -m. brick-a/file.txt \\#file: brick-a/file.txt security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000 trusted.afr.vol-client-2=0x000000000000000000000000 trusted.afr.vol-client-3=0x000000000200000000000000 trusted.gfid=0x307a5c9efddd4e7c96e94fd4bcdcbd1b ``` The extended attributes with `trusted.afr.<volname>-client-<subvolume-index>` are used by afr to maintain changelog of the file.The values of the `trusted.afr.<volname>-client-<subvolume-index>` are calculated by the glusterfs client (fuse or nfs-server) processes. When the glusterfs client modifies a file or directory, the client contacts each brick and updates the changelog extended attribute according to the response of the brick. `subvolume-index` is nothing but (brick number - 1) in `gluster volume info <volname>` output. Example: ``` [root@pranithk-laptop ~]# gluster volume info vol Volume Name: vol Type: Distributed-Replicate Volume ID: 4f2d7849-fbd6-40a2-b346-d13420978a01 Status: Created Number of Bricks: 4 x 2 = 8 Transport-type: tcp Bricks: brick-a: pranithk-laptop:/gfs/brick-a brick-b: pranithk-laptop:/gfs/brick-b brick-c: pranithk-laptop:/gfs/brick-c brick-d: pranithk-laptop:/gfs/brick-d brick-e: pranithk-laptop:/gfs/brick-e brick-f: pranithk-laptop:/gfs/brick-f brick-g: pranithk-laptop:/gfs/brick-g brick-h: pranithk-laptop:/gfs/brick-h ``` In the example above: ``` Brick | Replica set | Brick subvolume index -/gfs/brick-a | 0 | 0 -/gfs/brick-b | 0 | 1 -/gfs/brick-c | 1 | 2 -/gfs/brick-d | 1 | 3 -/gfs/brick-e | 2 | 4 -/gfs/brick-f | 2 | 5 -/gfs/brick-g | 3 | 6 -/gfs/brick-h | 3 | 7 ``` Each file in a brick maintains the changelog of itself and that of the files present in all the other bricks in it's replica set as seen by that brick. 
In the example volume given above, all files in brick-a will have 2 entries, one for itself and the other for the file present in it's replica pair, i.e.brick-b: ``` trusted.afr.vol-client-0=0x000000000000000000000000 -->changelog for itself (brick-a) trusted.afr.vol-client-1=0x000000000000000000000000 -->changelog for brick-b as seen by brick-a ``` Likewise, all files in brick-b will have: ```" }, { "data": "-->changelog for brick-a as seen by brick-b trusted.afr.vol-client-1=0x000000000000000000000000 -->changelog for itself (brick-b) ``` The same can be extended for other replica pairs. Interpreting Changelog (roughly pending operation count) Value: Each extended attribute has a value which is 24 hexa decimal digits. First 8 digits represent changelog of data. Second 8 digits represent changelog of metadata. Last 8 digits represent Changelog of directory entries. Pictorially representing the same, we have: ``` 0x 000003d7 00000001 00000000 | | | | | \\_ changelog of directory entries | \\_ changelog of metadata \\ _ changelog of data ``` For Directories metadata and entry changelogs are valid. For regular files data and metadata changelogs are valid. For special files like device files etc metadata changelog is valid. When a file split-brain happens it could be either data split-brain or meta-data split-brain or both. When a split-brain happens the changelog of the file would be something like this: Example:(Lets consider both data, metadata split-brain on same file). ``` [root@pranithk-laptop vol]# getfattr -d -m . -e hex /gfs/brick-?/a getfattr: Removing leading '/' from absolute path names \\#file: gfs/brick-a/a trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000003d70000000100000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 \\#file: gfs/brick-b/a trusted.afr.vol-client-0=0x000003b00000000100000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 ``` The first 8 digits of trusted.afr.vol-client-0 are all zeros (0x00000000................), and the first 8 digits of trusted.afr.vol-client-1 are not all zeros (0x000003d7................). So the changelog on /gfs/brick-a/a implies that some data operations succeeded on itself but failed on /gfs/brick-b/a. The second 8 digits of trusted.afr.vol-client-0 are all zeros (0x........00000000........), and the second 8 digits of trusted.afr.vol-client-1 are not all zeros (0x........00000001........). So the changelog on /gfs/brick-a/a implies that some metadata operations succeeded on itself but failed on /gfs/brick-b/a. The first 8 digits of trusted.afr.vol-client-0 are not all zeros (0x000003b0................), and the first 8 digits of trusted.afr.vol-client-1 are all zeros (0x00000000................). So the changelog on /gfs/brick-b/a implies that some data operations succeeded on itself but failed on /gfs/brick-a/a. The second 8 digits of trusted.afr.vol-client-0 are not all zeros (0x........00000001........), and the second 8 digits of trusted.afr.vol-client-1 are all zeros (0x........00000000........). So the changelog on /gfs/brick-b/a implies that some metadata operations succeeded on itself but failed on /gfs/brick-a/a. Since both the copies have data, metadata changes that are not on the other file, it is in both data and metadata split-brain. Deciding on the correct copy: -- The user may have to inspect stat,getfattr output of the files to decide which metadata to retain and contents of the file to decide which data to retain. 
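A short, illustrative way to inspect both copies before deciding which one to keep, reusing the brick paths from the example above (adjust them to your own bricks):

```
# Compare ownership, mode and timestamps of the two copies
stat /gfs/brick-a/a /gfs/brick-b/a

# Compare the file contents
md5sum /gfs/brick-a/a /gfs/brick-b/a

# Re-check the changelog extended attributes on each brick
getfattr -d -m . -e hex /gfs/brick-a/a
getfattr -d -m . -e hex /gfs/brick-b/a
```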
Continuing with the example above, lets say we want to retain the data of /gfs/brick-a/a and metadata of" }, { "data": "Resetting the relevant changelogs to resolve the split-brain: For resolving data-split-brain: We need to change the changelog extended attributes on the files as if some data operations succeeded on /gfs/brick-a/a but failed on /gfs/brick-b/a. But /gfs/brick-b/a should NOT have any changelog which says some data operations succeeded on /gfs/brick-b/a but failed on /gfs/brick-a/a. We need to reset the data part of the changelog on trusted.afr.vol-client-0 of /gfs/brick-b/a. For resolving metadata-split-brain: We need to change the changelog extended attributes on the files as if some metadata operations succeeded on /gfs/brick-b/a but failed on /gfs/brick-a/a. But /gfs/brick-a/a should NOT have any changelog which says some metadata operations succeeded on /gfs/brick-a/a but failed on /gfs/brick-b/a. We need to reset metadata part of the changelog on trusted.afr.vol-client-1 of /gfs/brick-a/a So, the intended changes are: On /gfs/brick-b/a: For trusted.afr.vol-client-0 0x000003b00000000100000000 to 0x000000000000000100000000 (Note that the metadata part is still not all zeros) Hence execute `setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a` On /gfs/brick-a/a: For trusted.afr.vol-client-1 0x0000000000000000ffffffff to 0x000003d70000000000000000 (Note that the data part is still not all zeros) Hence execute `setfattr -n trusted.afr.vol-client-1 -v 0x000003d70000000000000000 /gfs/brick-a/a` Thus after the above operations are done, the changelogs look like this: ``` [root@pranithk-laptop vol]# getfattr -d -m . -e hex /gfs/brick-?/a getfattr: Removing leading '/' from absolute path names \\#file: gfs/brick-a/a trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000003d70000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 \\#file: gfs/brick-b/a trusted.afr.vol-client-0=0x000000000000000100000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 ``` Triggering Self-heal: Perform `ls -l <file-path-on-gluster-mount>` to trigger healing. Fixing Directory entry split-brain: Afr has the ability to conservatively merge different entries in the directories when there is a split-brain on directory. If on one brick directory 'd' has entries '1', '2' and has entries '3', '4' on the other brick then afr will merge all of the entries in the directory to have '1', '2', '3', '4' entries in the same directory. (Note: this may result in deleted files to re-appear in case the split-brain happens because of deletion of files on the directory) Split-brain resolution needs human intervention when there is at least one entry which has same file name but different gfid in that directory. Example: On brick-a the directory has entries '1' (with gfid g1), '2' and on brick-b directory has entries '1' (with gfid g2) and '3'. These kinds of directory split-brains need human intervention to resolve. The user needs to remove either file '1' on brick-a or the file '1' on brick-b to resolve the split-brain. In addition, the corresponding gfid-link file also needs to be removed.The gfid-link files are present in the .glusterfs folder in the top-level directory of the brick. 
If the gfid of the file is 0x307a5c9efddd4e7c96e94fd4bcdcbd1b (the trusted.gfid extended attribute obtained from the getfattr command earlier), the gfid-link file can be found at `/gfs/brick-a/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b`. Before deleting the gfid-link, we have to ensure that there are no hard links to the file present on that brick. If hard links exist, they must be deleted as well." } ]
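For illustration, the cleanup for the directory example above might look like the following, where `/gfs/brick-a/d/1` is a hypothetical path for the entry being discarded; verify the hard-link count before deleting anything:

```
# Show the link count and find any other hard links on the brick
stat -c '%h %n' /gfs/brick-a/d/1
find /gfs/brick-a -samefile /gfs/brick-a/d/1

# Remove the unwanted entry and its gfid-link file under .glusterfs
rm /gfs/brick-a/d/1
rm /gfs/brick-a/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b
```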
{ "category": "Runtime", "file_name": "storage_volumes.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "(howto-storage-volumes)= See the following sections for instructions on how to create, configure, view and resize {ref}`storage-volumes`. When you create an instance, Incus automatically creates a storage volume that is used as the root disk for the instance. You can add custom storage volumes to your instances. Such custom storage volumes are independent of the instance, which means that they can be backed up separately and are retained until you delete them. Custom storage volumes with content type `filesystem` can also be shared between different instances. See {ref}`storage-volumes` for detailed information. Use the following command to create a custom storage volume of type `block` or `filesystem` in a storage pool: incus storage volume create <poolname> <volumename> [configuration_options...] See the {ref}`storage-drivers` documentation for a list of available storage volume configuration options for each driver. By default, custom storage volumes use the `filesystem` {ref}`content type <storage-content-types>`. To create a custom storage volume with the content type `block`, add the `--type` flag: incus storage volume create <poolname> <volumename> --type=block [configuration_options...] To add a custom storage volume on a cluster member, add the `--target` flag: incus storage volume create <poolname> <volumename> --target=<clustermember> [configurationoptions...] ```{note} For most storage drivers, custom storage volumes are not replicated across the cluster and exist only on the member for which they were created. This behavior is different for Ceph-based storage pools (`ceph` and `cephfs`) and clustered LVM (`lvmcluster`), where volumes are available from any cluster member. ``` To create a custom storage volume of type `iso`, use the `import` command instead of the `create` command: incus storage volume import <poolname> <isopath> <volume_name> --type=iso (storage-attach-volume)= After creating a custom storage volume, you can add it to one or more instances as a {ref}`disk device <devices-disk>`. The following restrictions apply: Custom storage volumes of {ref}`content type <storage-content-types>` `block` or `iso` cannot be attached to containers, but only to virtual machines. To avoid data corruption, storage volumes of {ref}`content type <storage-content-types>` `block` should never be attached to more than one virtual machine at a time. Storage volumes of {ref}`content type <storage-content-types>` `iso` are always read-only, and can therefore be attached to more than one virtual machine at a time without corrupting data. File system storage volumes can't be attached to virtual machines while they're running. 
For custom storage volumes with the content type `filesystem`, use the following command, where `<location>` is the path for accessing the storage volume inside the instance (for example, `/data`): incus storage volume attach <poolname> <filesystemvolumename> <instancename> <location> Custom storage volumes with the content type `block` do not take a location: incus storage volume attach <poolname> <blockvolumename> <instancename> By default, the custom storage volume is added to the instance with the volume name as the {ref}`device <devices>`" }, { "data": "If you want to use a different device name, you can add it to the command: incus storage volume attach <poolname> <filesystemvolumename> <instancename> <device_name> <location> incus storage volume attach <poolname> <blockvolumename> <instancename> <device_name> The command is a shortcut for adding a disk device to an instance. Alternatively, you can add a disk device for the storage volume in the usual way: incus config device add <instancename> <devicename> disk pool=<poolname> source=<volumename> [path=<location>] When using this way, you can add further configuration to the command if needed. See {ref}`disk device <devices-disk>` for all available device options. (storage-configure-IO)= When you attach a storage volume to an instance as a {ref}`disk device <devices-disk>`, you can configure I/O limits for it. To do so, set the `limits.read`, `limits.write` or `limits.max` properties to the corresponding limits. See the {ref}`devices-disk` reference for more information. The limits are applied through the Linux `blkio` cgroup controller, which makes it possible to restrict I/O at the disk level (but nothing finer grained than that). ```{note} Because the limits apply to a whole physical disk rather than a partition or path, the following restrictions apply: Limits will not apply to file systems that are backed by virtual devices (for example, device mapper). If a file system is backed by multiple block devices, each device will get the same limit. If two disk devices that are backed by the same disk are attached to the same instance, the limits of the two devices will be averaged. ``` All I/O limits only apply to actual block device access. Therefore, consider the file system's own overhead when setting limits. Access to cached data is not affected by the limit. (storage-volume-special)= Instead of attaching a custom volume to an instance as a disk device, you can also use it as a special kind of volume to store {ref}`backups <backups>` or {ref}`images <about-images>`. To do so, you must set the corresponding {ref}`server configuration <server-options-misc>`: To use a custom volume to store the backup tarballs: incus config set storage.backupsvolume <poolname>/<volume_name> To use a custom volume to store the image tarballs: incus config set storage.imagesvolume <poolname>/<volume_name> (storage-configure-volume)= See the {ref}`storage-drivers` documentation for the available configuration options for each storage driver. 
Use the following command to set configuration options for a storage volume: incus storage volume set <poolname> [<volumetype>/]<volume_name> <key> <value> The default {ref}`storage volume type <storage-volume-types>` is `custom`, so you can leave out the `<volume_type>/` when configuring a custom storage" }, { "data": "For example, to set the size of your custom storage volume `my-volume` to 1 GiB, use the following command: incus storage volume set my-pool my-volume size=1GiB To set the snapshot expiry time for your virtual machine `my-vm` to one month, use the following command: incus storage volume set my-pool virtual-machine/my-vm snapshots.expiry 1M You can also edit the storage volume configuration by using the following command: incus storage volume edit <poolname> [<volumetype>/]<volume_name> (storage-configure-vol-default)= You can define default volume configurations for a storage pool. To do so, set a storage pool configuration with a `volume` prefix, thus `volume.<VOLUME_CONFIGURATION>=<VALUE>`. This value is then used for all new storage volumes in the pool, unless it is set explicitly for a volume or an instance. In general, the defaults set on a storage pool level (before the volume was created) can be overridden through the volume configuration, and the volume configuration can be overridden through the instance configuration (for storage volumes of {ref}`type <storage-volume-types>` `container` or `virtual-machine`). For example, to set a default volume size for a storage pool, use the following command: incus storage set [<remote>:]<pool_name> volume.size <value> You can display a list of all available storage volumes in a storage pool and check their configuration. To list all available storage volumes in a storage pool, use the following command: incus storage volume list <pool_name> To display the storage volumes for all projects (not only the default project), add the `--all-projects` flag. The resulting table contains the {ref}`storage volume type <storage-volume-types>` and the {ref}`content type <storage-content-types>` for each storage volume in the pool. ```{note} Custom storage volumes might use the same name as instance volumes (for example, you might have a container named `c1` with a container storage volume named `c1` and a custom storage volume named `c1`). Therefore, to distinguish between instance storage volumes and custom storage volumes, all instance storage volumes must be referred to as `<volumetype>/<volumename>` (for example, `container/c1` or `virtual-machine/vm`) in commands. ``` To show detailed configuration information about a specific volume, use the following command: incus storage volume show <poolname> [<volumetype>/]<volume_name> To show state information about a specific volume, use the following command: incus storage volume info <poolname> [<volumetype>/]<volume_name> In both commands, the default {ref}`storage volume type <storage-volume-types>` is `custom`, so you can leave out the `<volume_type>/` when displaying information about a custom storage volume. If you need more storage in a volume, you can increase the size of your storage volume. In some cases, it is also possible to reduce the size of a storage volume. To resize a storage volume, set its size configuration: incus storage volume set <poolname> <volumename> size <new_size> ```{important} Growing a storage volume usually works (if the storage pool has sufficient storage). Shrinking a storage volume is only possible for storage volumes with content type `filesystem`. 
It is not guaranteed to work though, because you cannot shrink storage below its current used size. Shrinking a storage volume with content type `block` is not possible. ```" } ]
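An end-to-end sketch stringing the commands on this page together: create a custom volume, attach it as a disk device with an I/O limit, then grow it. The instance name `c1`, device name `data` and the sizes are placeholders:

```
incus storage volume create my-pool data-vol size=10GiB

# Attach as an explicit disk device so device-level I/O limits can be set
incus config device add c1 data disk pool=my-pool source=data-vol path=/data limits.read=30MB

# Grow the volume later (shrinking only works for filesystem volumes, and only above the used size)
incus storage volume set my-pool data-vol size 20GiB
```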
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_srv6_vrf.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List SRv6 VRF mappings List SRv6 VRF mappings. ``` cilium-dbg bpf srv6 vrf [flags] ``` ``` -h, --help help for vrf -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the SRv6 routing rules" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.14.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Stop using `projects.registry.vmware.com` for user-facing images. (, [@antoninbas]) Persist TLS certificate and key of antrea-controller and periodically sync the CA cert to improve robustness. (, [@tnqn]) Disable cgo for all Antrea binaries. (, [@antoninbas]) Disable `libcapng` to make logrotate run as root in UBI images to fix an OVS crash issue. (, [@xliuxu]) Fix nil pointer dereference when ClusterGroup/Group is used in NetworkPolicy controller. (, [@tnqn]) Fix race condition in agent Traceflow controller when a tag is associated again with a new Traceflow before the old Traceflow deletion event is processed. (, [@tnqn]) Change the maximum flags from 7 to 255 to fix the wrong TCP flags validation issue in Traceflow CRD. (, [@gran-vmv]) Update maximum number of buckets to 700 in OVS group add/insert_bucket message. (, [@hongliangl]) Use 65000 MTU upper bound for interfaces in encap mode in case of large packets being dropped unexpectedly. (, [@antoninbas]) Enable IPv4/IPv6 forwarding on demand automatically to eliminate the need for user intervention or dependencies on other components. (, [@tnqn]) Store NetworkPolicy in filesystem as fallback data source to let antrea-agent fallback to use the files if it can't connect to antrea-controller on startup. (, [@tnqn]) Support Local ExternalTrafficPolicy for Services with ExternalIPs when Antrea proxyAll mode is enabled. (, [@tnqn]) Enable Pod network after realizing initial NetworkPolicies to avoid traffic from/to Pods bypassing NetworkPolicy when antrea-agent restarts. (, [@tnqn]) Fix Clean-AntreaNetwork.ps1 invocation in Prepare-AntreaAgent.ps1 for containerized OVS on Windows. (, [@antoninbas]) Add missing space to kubelet args in Prepare-Node.ps1 so that kubelet can start successfully on Windows. (, [@antoninbas]) Update Windows OVS download link to remove the redundant certificate to fix OVS driver installation failure. (, [@XinShuYang]) Add DHCP IP retries in PrepareHNSNetwork on Windows to fix the potential race condition issue where acquiring a DHCP IP address may fail after CreateHNSNetwork. (, [@XinShuYang]) Fix `antctl trace-packet` command failure which is caused by arguments missing issue. (, [@luolanzone]) Fix incorrect MTU configurations for the WireGuard encryption mode and GRE tunnel mode. ( , [@hjiajing] [@tnqn]) Fix the CrashLookBackOff issue when using the UBI-based image. (, [@antoninbas]) Skip enforcement of ingress NetworkPolicies rules for hairpinned Service traffic (Pod accessing itself via a Service). ( , [@GraysonWu]) Set net.ipv4.conf.antrea-gw0.arp_announce to 1 to fix an ARP request leak when a Node or hostNetwork Pod accesses a local Pod and AntreaIPAM is enabled. (, [@gran-vmv]) Add rate-limit config to Egress to specify the rate limit of north-south egress traffic of this" }, { "data": "(, [@GraysonWu]) Add `IPAllocated` and `IPAssigned` conditions to Egress status to improve Egress visibility. (, [@AJPL88] [@tnqn]) Add goroutine stack dump in `SupportBundle` for both Antrea Agent and Antrea Controller. (, [@aniketraj1947]) Add \"X-Load-Balancing-Endpoint-Weight\" header to AntreaProxy Service healthcheck. (, [@hongliangl]) Add log rotation configuration in Antrea Agent config for audit logs. ( , [@antoninbas] [@mengdie-song]) Add GroupMembers API Pagination support to Antrea Go clientset. (, [@qiyueyao]) Add Namespaced Group Membership API for Antrea Controller. (, [@qiyueyao]) Support Pod secondary interfaces on VLAN network. 
( , [@jianjuns]) Enable Windows OVS container to run on pristine host environment, without requiring some dependencies to be installed manually ahead of time. (, [@NamanAg30]) Update `Install-WindowsCNI-Containerd.ps1` script to make it compatible with containerd 1.7. (, [@NamanAg30]) Add a new all-in-one manifest for the Multi-cluster leader cluster, and update the Multi-cluster user guide. ( , [@luolanzone]) Clean up auto-generated resources in leader and member clusters when a ClusterSet is deleted, and recreate resources when a member cluster rejoins the ClusterSet. ( , [@luolanzone]) Multiple APIs are promoted from beta to GA. The corresponding feature gates are removed from Antrea config files. Promote feature gate EndpointSlice to GA. (, [@hongliangl]) Promote feature gate NodePortLocal to GA. (, [@hjiajing]) Promote feature gate AntreaProxy to GA, and add an option `antreaProxy.enable` to allow users to disable this feature. (, [@hongliangl]) Make antrea-controller not tolerate Node unreachable to speed up the failover process. (, [@tnqn]) Improve `antctl get featuregates` output. (, [@cr7258]) Increase the rate limit setting of `PacketInMeter` and the size of `PacketInQueue`. (, [@GraysonWu]) Add `hostAliases` to Helm values for Flow Aggregator. (, [@yuntanghsu]) Decouple Audit logging from AntreaPolicy feature gate to enable logging for NetworkPolicy when AntreaPolicy is disabled. (, [@qiyueyao]) Change Traceflow CRD validation to webhook validation. (, [@shi0rik0]) Stop using `/bin/sh` and invoke the binary directly for OVS commands in Antrea Agent. (, [@antoninbas]) Install flows for nested Services in `EndpointDNAT` only when Antrea Multi-cluster is enabled. (, [@hongliangl]) Make rate-limiting of PacketIn messages configurable; the same rate-limit value applies to each feature that is dependent on PacketIn messages (e.g, Traceflow) but the limit is enforced independently for each feature. (, [@GraysonWu]) Change the default flow's action to `drop` in `ARPSpoofGuardTable` to effectively prevent ARP spoofing. (, [@hongliangl]) Remove auto-generated suffix from ConfigMap names, and add config checksums as Deployment annotations in Windows manifests, to avoid stale ConfigMaps when updating Antrea while preserving automatic rolling of" }, { "data": "(, [@Atish-iaf]) Add a ClusterSet deletion webhook for the leader cluster to reject ClusterSet deletion if there is any MemberClusterAnnounce. (, [@luolanzone]) Update Go version to v1.21. (, [@antoninbas]) Remove the dependency of the MulticastGroup API on the NetworkPolicyStats feature gate, to fix the empty list issue when users run `kubectl get multicastgroups` even when the Multicast is enabled. (, [@ceclinux]) Fix `antctl tf` CLI failure when the Traceflow is using an IPv6 address. (, [@Atish-iaf]) Fix a deadlock issue in NetworkPolicy Controller which causes a FQDN resolution failure. ( , [@Dyanngg] [@tnqn]) Fix NetworkPolicy span calculation to avoid out-dated data when multiple NetworkPolicies have the same selector. (, [@tnqn]) Use the first matching address when getting Node address to find the correct transport interface. (, [@xliuxu]) Fix rollback invocation after CmdAdd failure in CNI server and improve logging. (, [@antoninbas]) Add error log when Antrea network's MTU exceeds Suricata's maximum supported value. (, [@hongliangl]) Do not delete IPv6 link-local route in route reconciler to fix cross-Node Pod traffic or Pod-to-external traffic. 
(, [@wenyingd]) Do not apply Egress to traffic destined for ServiceCIDRs to avoid performance issue and unexpected behaviors. (, [@tnqn]) Unify TCP and UDP DNS interception flows to fix invalid flow matching for DNS responses. (, [@GraysonWu]) Fix the burst setting of the `PacketInQueue` to reduce the DNS response delay when a Pod has any FQDN policy applied. (, [@tnqn]) Fix SSL library downloading failure in Install-OVS.ps1 on Windows. (, [@XinShuYang]) Do not attempt to join Windows antrea-agents to the memberlist cluster to avoid misleading error logs. (, [@tnqn]) Fix an issue that antctl proxy is not using the user specified port. (, [@tnqn]) Enable IPv6 on OVS internal port if needed in bridging mode to fix agent crash issue when IPAM is enabled. (, [@antoninbas]) Fix missing protocol in Service when processing ANP named ports to ensure rule can be enforced correctly in OVS. (, [@Dyanngg]) Fix error log when agent fails to connect to K8s API. (, [@tnqn]) Fix a bug that ClusterSet status is not updated in Antrea Multi-cluster. (, [@luolanzone]) Fix an Antrea Controller crash issue in handling empty Pod labels for LabelIdentity when the config enableStretchedNetworkPolicy is enabled for Antrea Multi-cluster. ( , [@Dyanngg]) Always initialize `ovsmeterpacketdroppedcount` metrics to fix a bug that the metrics are not showing up if OVS Meter is not supported on the system. (, [@tnqn]) Skip starting modules which are not required by VM Agent to fix logs flood due to RBAC warning. (, [@mengdie-song])" } ]
{ "category": "Runtime", "file_name": "multiple-storage-types-support.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Rook is designed to provide orchestration and management for distributed storage systems to run in cloud-native environments, however only is currently supported. Rook could be beneficial to a wider audience if support for orchestrating more storage backends was implemented. For instance, in addition to the existing support for Ceph, we should consider support for as the next storage backend supported by Rook. Minio is a popular distributed object storage solution that could benefit from a custom controller to orchestrate, configure and manage it in cloud-native environments. This design document aims to describe how this can be accomplished through common storage abstractions and custom controllers for more types of storage providers. To design and describe how Rook will support multiple storage backends Consider options and recommend architecture for hosting multiple controllers/operators Provide basic guidance on how new storage controllers would integrate with Rook Define common abstractions and types across storage providers : Kubernetes extension that enables lightweight lambda (functional) controllers To provide a native experience in Kubernetes, Rook has so far defined new storage types with and implemented the to manage the instances of those types that users create. When deciding how to expand the experience provided by Rook, we should reevaluate the most current options for extending Kubernetes in order to be confident in our architecture. This is especially important because changing the architecture later on will be more difficult the more storage types we have integrated into Rook. is the most feature rich and complete extension option for Kubernetes, but it is also the most complicated to deploy and manage. Basically, it allows you to extend the Kubernetes API with your own API server that behaves just like the core API server does. This approach offers a complete and powerful feature set such as rich validation, API versioning, custom business logic, etc. However, using an extension apiserver has some disruptive drawbacks: etcd must also be deployed for storage of its API objects, increasing the complexity and adding another point of failure. This can be avoided with using a CRD to store its API objects but that is awkward and exposes internal storage to the user. development cost would be significant to get the deployment working reliably in supported Rook environments and to port our existing CRDs to extension types. breaking change for Rook users with no clear migration path, which would be very disruptive to our current user base. CRDs are what Rook is already using to extend Kubernetes. They are a limited extension mechanism that allows the definition of custom types, but lacks the rich features of API aggregation. For example, validation of a users CRD is only at the schema level and simple property value checks that are available via the . Also, there is currently no versioning (conversion) support. However, CRDs are being actively supported by the community and more features are being added to CRDs going forward (e.g., a ). Finally, CRDs are not a breaking change for Rook users since they are already in use today. The controllers are the entities that will perform orchestration and deployment tasks to ensure the users desired state is made a reality within their cluster. There are a few options for deploying and hosting the controllers that will perform this work, as explained in the following sections. 
Multiple controllers could be deployed within a single process. For instance, Rook could run one controller per domain of expertise, e.g., ceph-controller, minio-controller," }, { "data": "Controllers would all watch the same custom resource types via a `SharedInformer` and respond to events via a `WorkQueue` for efficiency and reduced burden on core apiserver. Even though all controllers are watching, only the applicable controller responds to and handles an event. For example, the user runs `kubectl create cluster.yaml` to create a `cluster.rook.io` instance that has Ceph specific properties. All controllers will receive the created event via the `SharedInformer`, but only the Ceph controller will queue and handle it. We can consider only loading the controllers the user specifically asks for, perhaps via an environment variable in `operator.yaml`. Note that this architecture can be used with either API aggregation or CRDs. Slightly easier developer burden for new storage backends since there is no need to create a new deployment to host their controller. Less resource burden on K8s cluster since watchers/caches are being shared. Easier migration path to API aggregation in the future, if CRDs usage is continued now. All controllers must use same base image since they are all running in the same process. If a controller needs to access a backend specific tool then they will have to schedule a job that invokes the appropriate image. This is similar to execing a new process but at the cluster level. Note this only applies to the controller, not to the backend's daemons. Those daemons will be running a backend specific image and can directly `exec` to their tools. Each storage backend could have their own operator pod that hosts only that backend's controller, e.g., `ceph-operator.yaml`, `minio-operator.yaml`, etc. The user would decide which operators to deploy based on what storage they want to use. Each operator pod would watch the same custom resource types with their own individual watchers. Each operator can use their own image, meaning they have direct access (through `exec`) to their backend specific tools. Runtime isolation, one flaky backend does not impact or cause downtime for other backends. Privilege isolation, each backend could define their own service account and RBAC that is scoped to just their needs. More difficult migration path to API aggregation in the future. Potentially more resource usage and load on Kubernetes API since watchers will not be shared, but this is likely not an issue since users will deploy only the operator they need. Slightly more developer burden as all backends have to write their own deployment/host to manage their individual pod. For storage backends that fit the patterns that supports (`CompositeController` and `DecoratorController`), this could be an option to incorporate into Rook. Basically, a storage backend defines their custom types and the parent/child relationships between them. The metacontroller handles all the K8s API interactions and regularly calls into storage backend defined \"hooks\". The storage backend is given JSON representing the current state in K8s types and then returns JSON defining in K8s types what the desired state should be. The metacontroller then makes that desired state a reality via the K8s API. This pattern does allow for fairly complicated stateful apps (e.g. 
) that have well defined parent/children hierarchies, and can allow for the storage backend operator to perform \"imperative\" operations to manipulate cluster state by launching Jobs. CRDs with an operator pod per backend: This will not be a breaking change for our current users and does not come with the deployment complexity of API aggregation. It would provide each backend's operator the freedom to easily invoke their own tools that are packaged in their own specific image, avoiding unnecessary image bloat. It also provides both resource and privilege isolation for each backend. We would accept the burden of limited CRD functionality (which is improving in the future" }, { "data": "We should also consider integrating metacontroller's functionality for storage backends that are compatible and can benefit from its patterns. Each storage backend can make this decision independently. Custom resources in Kubernetes use the following naming and versioning convention: Group: A collection of several related types that are versioned together and can be enabled/disabled as a unit in the API (e.g., `ceph.rook.io`) Version: The API version of the group (e.g., `v1alpha1`) Kind: The specific type within the API group (e.g., `cluster`) Putting this together with an example, the `cluster` kind from the `ceph.rook.io` API group with a version of `v1alpha1` would be referred to in full as `cluster.ceph.rook.io/v1alpha1`. Versioning of custom resources defined by Rook is important, and we should carefully consider a design that allows resources to be versioned in a sensible way. Let's first review some properties of Rook's resources and versioning scheme that are desirable and we should aim to satisfy with this design: Storage backends should be independently versioned, so their maturity can be properly conveyed. For example, the initial implementation of a new storage backend should not be forced to start at a stable `v1` version. CRDs should mostly be defined only for resources that can be instantiated. If the user can't create an instance of the resource, then it's likely better off as a `*Spec` type that can potentially be reused across many types. Reuse of common types is a very good thing since it unifies the experience across storage types and it reduces the duplication of effort and code. Commonality and sharing of types and implementations is important and is another way Rook provides value to the greater storage community beyond the operators that it implements. Note that it is not a goal to define a common abstraction that applies to the top level storage backends themselves, for instance a single `Cluster` type that covers both Ceph and Minio. We should not be trying to force each backend to look the same to storage admins, but instead we should focus on providing the common abstractions and implementations that storage providers can build on top of. This idea will become more clear in the following sections of this document. With the intent for Rook's resources to fulfill the desirable properties mentioned above, we propose the following API groups: `rook.io`: common abstractions and implementations, in the form of `Spec` types, that have use across multiple storage backends and types. For example, storage, network information, placement, and resource usage. `ceph.rook.io`: Ceph specific `Cluster` CRD type that the user can instantiate to have the Ceph controller deploy a Ceph cluster or Ceph resources for them. This Ceph specific API group allows Ceph types to be versioned independently. 
`nexenta.rook.io`: Similar, but for Nexenta. With this approach, the user experience to create a cluster would look like the following in `yaml`, where they are declaring and configuring a Ceph specific CRD type (from the `ceph.rook.io` API group), but with many common `*Spec` types that provide configuration and logic that is reusable across storage providers. `ceph-cluster.yaml`: ```yaml apiVersion: ceph.rook.io/v1 kind: Cluster spec: mon: count: 3 allowMultiplePerNode: false network: placement: resources: storage: deviceFilter: \"^sd.\" config: storeType: \"bluestore\" databaseSizeMB: \"1024\" ``` Our `golang` strongly typed definitions would look like the following, where the Ceph specific `Cluster` CRD type has common `*Spec` fields. `types.go`: ```go package v1alpha1 // \"github.com/rook/rook/pkg/apis/ceph.rook.io/v1alpha1\" import ( metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) type Cluster struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata\"` Spec ClusterSpec `json:\"spec\"` Status ClusterStatus `json:\"status\"` } type ClusterSpec struct { Storage rook.StorageScopeSpec `json:\"storage\"` Network rook.NetworkSpec `json:\"network\"` Placement rook.PlacementSpec `json:\"placement\"` Resources rook.ResourceSpec `json:\"resources\"` Mon" }, { "data": "`json:\"mon\"` } ``` Similar to how we will not try to force commonality by defining a single `Cluster` type across all backends, we will also not define single types that define the deployment and configuration of a backend's storage concepts. For example, both Ceph and Minio present object storage. Both Ceph and Nexenta present shared file systems. However, the implementation details for what components and configuration comprise these storage presentations is very provider specific. Therefore, it is not reasonable to define a common CRD that attempts to normalize how all providers deploy their object or file system presentations. Any commonality that can be reasonably achieved should be in the form of reusable `*Spec` types and their associated libraries. Each provider can make a decision about how to expose their storage concepts. They could be defined as instantiable top level CRDs or they could be defined as collections underneath the top level storage provider CRD. Below are terse examples to demonstrate the two options. Top-level CRDs: ```yaml apiVersion: ceph.rook.io/v1 kind: Cluster spec: ... apiVersion: ceph.rook.io/v1 kind: Pool spec: ... apiVersion: ceph.rook.io/v1 kind: Filesystem spec: ... ``` Collections under storage provider CRD: ```yaml apiVersion: ceph.rook.io/v1 kind: Cluster spec: pools: name: replicaPool replicated: size: 1 name: ecPool erasureCoded: dataChunks: 2 codingChunks: 1 filesystems: name: filesystem1 metadataServer: activeCount: 1 ``` The `StorageScopeSpec` type defines the boundaries or \"scope\" of the resources that comprise the backing storage substrate for a cluster. This could be devices, filters, directories, nodes, , and others. There are user requested means of selecting storage that Rook doesn't currently support that could be included in this type, such as the ability to select a device by path instead of by name, e.g. `/dev/disk/by-id/`. Also, wildcards/patterns/globbing should be supported on multiple resource types, removing the need for the current `useAllNodes` and `useAllDevices` boolean fields. By encapsulating this concept as its own type, it can be reused within other custom resources of Rook. 
For instance, this would enable Rook to support storage of other types that could benefit from orchestration in cloud-native environments beyond distributed storage systems. `StorageScopeSpec` could also provide an abstraction from such details as device name changes across reboots. Most storage backends have a need to specify configuration options at the node and device level. Since the `StorageScopeSpec` type is already defining a node/device hierarchy for the cluster, it would be desirable for storage backends to include their configuration options within this same hierarchy, as opposed to having to repeat the hierarchy again elsewhere in the spec. However, this isn't completely straight forward because the `StorageScopeSpec` type is a common abstraction and does not have knowledge of specific storage backends. A solution for this would be to allow backend specific properties to be defined inline within a `StorageScopeSpec` as key/value pairs. This allows for arbitrary backend properties to be inserted at the node and device level while still reusing the single StorageScopeSpec abstraction, but it means that during deserialization these properties are not strongly typed. They would be deserialized into a golang `map[string]string`. However, an operator with knowledge of its specific backend's properties could then take that map and deserialize it into a strong type. The yaml would like something like this: ```yaml nodes: name: nodeA config: storeType: \"bluestore\" devices: name: \"sda\" config: storeType: \"filestore\" ``` Note how the Ceph specific properties at the node and device level are string key/values and would be deserialized that way instead of to strong" }, { "data": "For example, this is what the golang struct would look like: ```go type StorageScopeSpec struct { Nodes []Node `json:\"nodes\"` } type Node struct { Name string `json:\"name\"` Config map[string]string `json:\"config\"` Devices []Device `json:\"devices\"` } type Device struct { Name string `json:\"name\"` Config map[string]string `json:\"config\"` } ``` After Kubernetes has done the general deserialization of the `StorageScopeSpec` into a strong type with weakly typed maps of backend specific config properties, the Ceph operator could easily convert this map into a strong config type that is has knowledge of. Other backend operators could do a similar thing for their node/device level config. As previously mentioned, the `rook.io` API group will also define some other useful `*Spec` types: `PlacementSpec`: Defines placement requirements for components of the storage provider, such as node and pod affinity. This is similar to the existing , but in a generic way that is reusable by all storage providers. A `PlacementSpec` will essentially be a map of placement information structs that are indexed by component name. `NetworkSpec`: Defines the network configuration for the storage provider, such as `hostNetwork`. `ResourceSpec`: Defines the resource usage of the provider, allowing limits on CPU and memory, similar to the existing . Rook and the greater community would also benefit from additional types and abstractions. We should work on defining those further, but it is out of scope for this design document that is focusing on support for multiple storage backends. 
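To make the node- and device-level configuration flow above concrete, the sketch below shows how such key/value overrides could be supplied by a user and applied with kubectl. The API group, kind and field names follow the design examples in this document and are illustrative only, not a released schema; the fuller `ceph-cluster.yaml` appendix that follows is along the same lines.

```sh
kubectl apply -f - <<'EOF'
apiVersion: ceph.rook.io/v1alpha1
kind: Cluster
metadata:
  name: ceph
  namespace: rook-ceph
spec:
  storage:
    deviceFilter: "^sd."
    config:
      storeType: bluestore        # cluster-wide default, parsed from map[string]string
    nodes:
    - name: nodeA
      config:
        storeType: filestore      # node-level override
      devices:
      - name: sda
        config:
          metadataDevice: nvme01  # device-level override
EOF
```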
Some potential ideas for additional types to support in Rook: Quality of Service (QoS), resource consumption (I/O and storage limits) As more storage backends are integrated into Rook, it is preferable that all source code lives within the single `rook/rook` repository. This has a number of benefits such as easier sharing of build logic, developer iteration when updating shared code, and readability of the full source. Multiple container images can easily be built from the single source repository, similar to how `rook/rook` and `rook/toolbox` are currently built from the same repository. `rook/rook` image: defines all custom resource types, generated clients, general cluster discovery information (disks), and any storage operators that do not have special tool dependencies. backend specific images: to avoid image bloat in the main `rook/rook` image, each backend will have their own image that contains all of their daemons and tools. These images will be used for the various daemons/components of each backend, e.g. `rook/ceph`, `rook/minio`, etc. Each storage provider has its own Rook repo. Rook will enable storage providers to integrate their solutions with cloud-native environments by providing a framework of common abstractions and implementations that helps providers efficiently build reliable and well tested storage controllers. `StorageScopeSpec`, `PlacementSpec`, `ResourceSpec`, `NetworkSpec`, etc. Each storage provider will be versioned independently with its own API Group (e.g., `ceph.rook.io`) and its own instantiable CRD type(s). Each storage provider will have its own operator pod that performs orchestration and management of the storage resources. This operator will use a provider specific container image with any special tools needed. This section contains concrete examples of storage clusters as a user would define them using yaml. In addition to distributed storage clusters, we will be considering support for additional storage types in the near future. `ceph-cluster.yaml`: ```yaml apiVersion: ceph.rook.io/v1alpha1 kind: Cluster metadata: name: ceph namespace: rook-ceph spec: mon: count: 3 allowMultiplePerNode: false network: hostNetwork: false placement: name: \"mon\" nodeAffinity: podAffinity: podAntiAffinity: tolerations: resources: name: osd limits: memory: \"1024Mi\" requests: cpu: \"500m\" memory: \"1024Mi\" storage: deviceFilter: \"^sd.\" location: config: storeConfig: storeType: bluestore databaseSizeMB: \"1024\" metadataDevice: nvme01 directories: path: /rook/storage-dir nodes: name: \"nodeA\" directories: path: \"/rook/storage-dir\" config: # ceph specific config at the directory level via key/value pairs storeType:" } ]
{ "category": "Runtime", "file_name": "yum.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "You can use the yum tool to quickly deploy and start the CubeFS cluster in CentOS 7+ operating system. The RPM dependencies of this tool can be installed with the following command: ::: tip Note The cluster is managed through Ansible, please make sure that Ansible has been deployed. ::: ``` bash $ yum install https://ocs-cn-north1.heytapcs.com/cubefs/rpm/3.3.0/cfs-install-3.3.0-el7.x86_64.rpm $ cd /cfs/install $ tree -L 3 . install_cfs.yml install.sh iplist src template client.json.j2 create_vol.sh.j2 datanode.json.j2 grafana grafana.ini init.sh provisioning master.json.j2 metanode.json.j2 objectnode.json.j2 ``` You can modify the parameters of the CubeFS cluster in the iplist file according to the actual environment. `master`, `datanode`, `metanode`, `objectnode`, `monitor`, `client` contain the IP addresses of each module member. The `cfs:vars` module defines the SSH login information of all nodes, and the login name and password of all nodes in the cluster need to be unified in advance. Defines the startup parameters of each Master node. | Parameter | Type | Description | Required | |-|--||-| | master_clusterName | string | Cluster name | Yes | | master_listen | string | Port number for http service listening | Yes | | master_prof | string | Port number for golang pprof | Yes | | master_logDir | string | Directory for storing log files | Yes | | master_logLevel | string | Log level, default is info | No | | master_retainLogs | string | How many raft logs to keep | Yes | | master_walDir | string | Directory for storing raft wal logs | Yes | | master_storeDir | string | Directory for storing RocksDB data. This directory must exist. If the directory does not exist, the service cannot be started. | Yes | | master_exporterPort | int | Port for prometheus to obtain monitoring data | No | | master_metaNodeReservedMem | string | Reserved memory size for metadata nodes. If the remaining memory is less than this value, the MetaNode becomes read-only. Unit: bytes, default value: 1073741824 | No | For more configuration information, please refer to . Defines the startup parameters of each DataNode. | Parameter | Type | Description | Required | ||--|--|-| | datanode_listen | string | Port for DataNode to start TCP listening as a server | Yes | | datanode_prof | string | Port used by DataNode to provide HTTP interface | Yes | | datanode_logDir | string | Path to store logs | Yes | | datanode_logLevel | string | Log level. The default is info. | No | | datanode_raftHeartbeat | string | Port used by RAFT to send heartbeat messages between nodes | Yes | | datanode_raftReplica | string | Port used by RAFT to send log messages | Yes | | datanode_raftDir | string | Path to store RAFT debugging logs. The default is the binary file startup path. | No | | datanode_exporterPort | string | Port for the monitoring system to collect data | No | | datanode_disks | string array | Format: PATH:RETAIN, PATH: disk mount path, RETAIN: the minimum reserved space under this path, and the disk is considered full if the remaining space is less than this value. Unit: bytes. (Recommended value: 20G~50G) | Yes | For more configuration information, please refer to . Defines the startup parameters of the" }, { "data": "| Parameter | Type | Description | Required | |-|--|--|-| | metanode_listen | string | Port for listening and accepting requests | Yes | | metanode_prof | string | Debugging and administrator API interface | Yes | | metanode_logLevel | string | Log level. The default is info. 
| No | | metanode_metadataDir | string | Directory for storing metadata snapshots | Yes | | metanode_logDir | string | Directory for storing logs | Yes | | metanode_raftDir | string | Directory for storing raft wal logs | Yes | | metanode_raftHeartbeatPort | string | Port for raft heartbeat communication | Yes | | metanode_raftReplicaPort | string | Port for raft data transmission | Yes | | metanode_exporterPort | string | Port for prometheus to obtain monitoring data | No | | metanode_totalMem | string | Maximum available memory. This value needs to be higher than the value of metaNodeReservedMem in the master configuration. Unit: bytes. | Yes | For more configuration information, please refer to . Defines the startup parameters of the ObjectNode. | Parameter | Type | Description | Required | |-|--|-|-| | objectnode_listen | string | IP address and port number for http service listening | Yes | | objectnode_domains | string array | Configure domain names for S3-compatible interfaces to support DNS-style access to resources. Format: `DOMAIN` | No | | objectnode_logDir | string | Path to store logs | Yes | | objectnode_logLevel | string | Log level. The default is `error`. | No | | objectnode_exporterPort | string | Port for prometheus to obtain monitoring data | No | | objectnode_enableHTTPS | string | Whether to support the HTTPS protocol | Yes | For more configuration information, please refer to . Defines the startup parameters of the FUSE client. | Parameter | Type | Description | Required | ||--|--|-| | client_mountPoint | string | Mount point | Yes | | client_volName | string | Volume name | No | | client_owner | string | Volume owner | Yes | | client_SizeGB | string | If the volume does not exist, a volume of this size will be created. Unit: GB. | No | | client_logDir | string | Path to store logs | Yes | | client_logLevel | string | Log level: debug, info, warn, error, default is info. | No | | client_exporterPort | string | Port for prometheus to obtain monitoring data | Yes | | client_profPort | string | Port for golang pprof debugging | No | For more configuration information, please refer to . ``` yaml [master] 10.196.59.198 10.196.59.199 10.196.59.200 [datanode] ... [cfs:vars] ansiblesshport=22 ansiblesshuser=root ansiblesshpass=\"password\" ... ... ... datanode_disks = '\"/data0:10737418240\",\"/data1:10737418240\"' ... ... metanode_totalMem = \"28589934592\" ... ... ``` ::: tip Note CubeFS supports mixed deployment. If mixed deployment is adopted, pay attention to modifying the port configuration of each module to avoid port conflicts. ::: Use the install.sh script to start the CubeFS cluster, and make sure to start the Master first. ``` bash $ bash install.sh -h Usage: install.sh -r | --role [datanode | metanode | master | objectnode | client | all | createvol ] $ bash install.sh -r master $ bash install.sh -r metanode $ bash install.sh -r datanode $ bash install.sh -r objectnode $ bash install.sh -r createvol $ bash install.sh -r client ``` After all roles are started, you can log in to the node where the client role is located to verify whether the mount point /cfs/mountpoint has been mounted to the CubeFS file system." } ]
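A quick way to confirm the mount from the client node; `/cfs/mountpoint` is the default mount point used in this guide:

``` bash
df -hT /cfs/mountpoint

mount | grep /cfs/mountpoint

# Basic read/write smoke test
touch /cfs/mountpoint/.cubefs-smoke && rm /cfs/mountpoint/.cubefs-smoke
```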
{ "category": "Runtime", "file_name": "USERS.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "Sharing experiences and learning from other users is essential. If you are using Spiderpool or it is integrated into your product, service, or platform, please consider adding yourself as a user with a quick description of your use case by opening a pull request to this file and adding a section describing your usage of Spiderpool. | orgnization | description | ||| | a bank in ShangHai, China | a cluster with a size of 2000+ pods, production environment | | a bank in SiChuan, China | about 80 clusters , not more thant 100 nodes for each cluster | | a broker in ShangHai, China | a cluster with a size of about 50 nodes | | a broker in ShangHai, China | a cluster with a size of about 50 nodes | | VIVO, a smart phone vendor in China | leverage Spiderpool in AI clusters for LLM | | a telcom company of Beijing branch, China | leverage Spiderpool in development environment | | a telcom company of Suzhou branch, China | leverage Spiderpool in development environment |" } ]
{ "category": "Runtime", "file_name": "traceflow-guide.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Antrea supports using Traceflow for network diagnosis. It can inject a packet into OVS on a Node and trace the forwarding path of the packet across Nodes, and it can also trace a matched packet of real traffic from or to a Pod. In either case, a Traceflow operation is triggered by a Traceflow CRD which specifies the type of Traceflow, the source and destination of the packet to trace, and the headers of the packet. And the Traceflow results will be populated to the `status` field of the Traceflow CRD, which include the observations of the trace packet at various observations points in the forwarding path. Besides creating the Traceflow CRD using kubectl, users can also start a Traceflow using `antctl`, or from the . When using the Antrea web UI, the Traceflow results can be visualized using a graph. <!-- toc --> - - - <!-- /toc --> The Traceflow feature is enabled by default since Antrea version v0.11. In order to use a Service as the destination in traces, AntreaProxy (also enabled by default since v0.11) is required. You can choose to use `kubectl` together with a YAML file, the `antctl traceflow` command, or the Antrea UI to start a new trace. When starting a new trace, you can provide the following information which will be used to build the trace packet: source Pod destination Pod, Service or destination IP address transport protocol (TCP/UDP/ICMP) transport ports You can start a new trace by creating Traceflow CRD via kubectl and a YAML file which contains the essential configuration of Traceflow CRD. An example YAML file of Traceflow CRD might look like this: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: Traceflow metadata: name: tf-test spec: source: namespace: default pod: tcp-sts-0 destination: namespace: default pod: tcp-sts-2 packet: ipHeader: # If ipHeader/ipv6Header is not set, the default value is IPv4+ICMP. protocol: 6 # Protocol here can be 6 (TCP), 17 (UDP) or 1 (ICMP), default value is 1 (ICMP) transportHeader: tcp: srcPort: 10000 # Source port for TCP/UDP. If omitted, a random port will be used. dstPort: 80 # Destination port needs to be set when Protocol is TCP/UDP. flags: 2 # Construct a SYN packet: 2 is also the default value when the flags field is omitted. ``` The CRD above starts a new trace from port 10000 of source Pod named `tcp-sts-0` to port 80 of destination Pod named `tcp-sts-2` using TCP protocol. Antrea Traceflow supports IPv6 traffic. An example YAML file of Traceflow CRD might look like this: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: Traceflow metadata: name: tf-test-ipv6 spec: source: namespace: default pod: tcp-sts-0 destination: namespace: default pod: tcp-sts-2 packet: ipv6Header: # ipv6Header MUST be set to run Traceflow in IPv6, and ipHeader will be ignored when ipv6Header set. nextHeader: 58 # Protocol here can be 6 (TCP), 17 (UDP) or 58 (ICMPv6), default value is 58 (ICMPv6) ``` The CRD above starts a new trace from source Pod named `tcp-sts-0` to destination Pod named `tcp-sts-2` using ICMPv6 protocol. Starting from Antrea version" }, { "data": "you can trace a packet of the real traffic from or to a Pod, instead of the injected packet. To start such a Traceflow, add `liveTraffic: true` to the Traceflow `spec`. Then, the first packet of the first connection that matches the Traceflow spec will be traced (connections opened before the Traceflow was initiated will be ignored), and the headers of the packet will be captured and reported in the `status` field of the Traceflow CRD, in addition to the observations. 
A live-traffic Traceflow requires only one of `source` and `destination` to be specified. When `source` or `destination` is not specified, it means that a packet can be captured regardless of its source or destination. One of `source` and `destination` must be a Pod. When `source` is not specified, or is an IP address, only the receiver Node will capture the packet and trace it after the L2 forwarding observation point. This means that even if the source of the packet is on the same Node as the destination, no observations on the sending path will be reported for the Traceflow. By default, a live-traffic Traceflow (the same as a normal Traceflow) will timeout in 20 seconds, and if no matched packet captured before the timeout the Traceflow will fail. But you can specify a different timeout value, by adding `timeout: <value-in-seconds>` to the Traceflow `spec`. In some cases, it might be useful to capture the packets dropped by NetworkPolicies (inc. K8s NetworkPolicies or Antrea-native policies). You can add `droppedOnly: true` to the live-traffic Traceflow `spec`, then the first packet that matches the Traceflow spec and is dropped by a NetworkPolicy will be captured and traced. The following example is a live-traffic Traceflow that captures a dropped UDP packet to UDP port 1234 of Pod udp-server, within 1 minute: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: Traceflow metadata: name: tf-test spec: liveTraffic: true droppedOnly: true destination: namespace: default pod: udp-server packet: transportHeader: udp: dstPort: 1234 timeout: 60 ``` Please refer to the corresponding . Please refer to the for installation instructions. Once you can access the UI in your browser, navigate to the `Traceflow` page. You can always view Traceflow result directly via Traceflow CRD status and see if the packet is successfully delivered or somehow dropped by certain packet-processing stage. Antrea also provides a more user-friendly way by showing the Traceflow result via a trace graph when using the Antrea UI. Traceflow CRDs are meant for admins to troubleshoot and diagnose the network by injecting a packet from a source workload to a destination workload. Thus, access to manage these CRDs must be granted to subjects which have the authority to perform these diagnostic actions. On cluster initialization, Antrea grants the permissions to edit these CRDs with `admin` and the `edit` ClusterRole. In addition to this, Antrea also grants the permission to view these CRDs with the `view` ClusterRole. Cluster admins can therefore grant these ClusterRoles to any subject who may be responsible to troubleshoot the network. The admins may also decide to share the `view` ClusterRole to a wider range of subjects to allow them to read the traceflows that are active in the cluster." } ]
{ "category": "Runtime", "file_name": "list.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "You can list all rkt pods. ``` $ rkt list UUID APP IMAGE NAME STATE CREATED STARTED NETWORKS 5bc080ca redis redis running 2 minutes ago 41 seconds ago default:ip4=172.16.28.7 etcd coreos.com/etcd:v2.0.9 3089337c nginx nginx exited 9 minutes ago 2 minutes ago ``` You can view the full UUID as well as the image's ID by using the `--full` flag. ``` $ rkt list --full UUID APP IMAGE NAME IMAGE ID STATE CREATED STARTED NETWORKS 5bc080cav-9e03-480d-b705-5928af396cc5 redis redis sha512-91e98d7f1679 running 2016-01-25 17:42:32.563 +0100 CET 2016-01-25 17:44:05.294 +0100 CET default:ip4=172.16.28.7 etcd coreos.com/etcd:v2.0.9 sha512-a03f6bad952b 3089337c4-8021-119b-5ea0-879a7c694de4 nginx nginx sha512-32ad6892f21a exited 2016-01-25 17:36:40.203 +0100 CET 2016-01-25 17:42:15.1 +0100 CET ``` | Flag | Default | Options | Description | | | | | | | `--full` | `false` | `true` or `false` | Use long output format | | `--no-legend` | `false` | `true` or `false` | Suppress a legend with the list | See the table with ." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_status.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Display status of daemon ``` cilium-dbg status [flags] ``` ``` --all-addresses Show all allocated addresses, not just count --all-clusters Show all clusters --all-controllers Show all controllers, not just failing --all-health Show all health status, not just failing --all-nodes Show all nodes, not just localhost --all-redirects Show all redirects --brief Only print a one-line status message -h, --help help for status -o, --output string json| yaml| jsonpath='{}' --timeout duration Sets the timeout to use when querying for health (default 30s) --verbose Equivalent to --all-addresses --all-controllers --all-nodes --all-redirects --all-clusters --all-health ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI" } ]
{ "category": "Runtime", "file_name": "resource-constraints.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Resource Constraints allow the rook components to be in specific Kubernetes Quality of Service (QoS) classes. For this the components started by rook need to have resource requests and/or limits set depending on which class the component(s) should be in. Ceph has recommendations for CPU and memory for each component. The recommendations can be found here: http://docs.ceph.com/docs/master/start/hardware-recommendations/ If limits are set too low, this is especially a danger on the memory side. When a Pod runs out of memory because of the limit it is OOM killed, there is a potential of data loss. The CPU limits would merely limit the \"throughput\" of the component when reaching it's limit. The resource constraints are defined in the rook Cluster, Filesystem and RGW CRDs. The default is to not set resource requirements, this translates to the `qosClass: BestEffort` (`qosClass` will be later on explained in ). The user is able to enable and disable the automatic resource algorithm as he wants. This algorithm has a global \"governance\" class for the Quality of Service (QoS) class to be aimed for. The key to choose the QoS class is named `qosClasss` and in the `resources` specification. The QoS classes are: `BestEffort` - No `requests` and `limits` are set (for testing). `Burstable` - Only `requests` requirements are set. `Guaranteed` - `requests` and `limits` are set to be equal. Additionally to allow the user to simply tune up/down the values without needing to set them a key named `multiplier` in the `resources` specification. This value is used as a multiplier for the calculated values. The OSDs are a special case that need to be covered by the algorithm. The algorithm needs to take the count of stores (devices and supplied directories) in account to calculate the correct amount of CPU and especially memory" }, { "data": "User defined values always overrule automatic calculated values by rook. The precedence of values is: User defined values (For OSD) per node Global values Example: If you specify a global value for memory for OSDs but set a memory value for a specific node, every OSD except the specific OSD on the node, will get the global value set. A Kubernetes resource requirement object looks like this: ```yaml requests: cpu: \"2\" memory: \"1Gi\" limits: cpu: \"3\" memory: \"2Gi\" ``` The key in the CRDs to set resource requirements is named `resources`. The following components will allow a resource requirement to be set for: `api` `agent` `mgr` `mon` `osd` The `mds` and `rgw` components are configured through the CRD that creates them. The `mds` are created through the `CephFilesystem` CRD and the `rgw` through the `CephObjectStore` CRD. There will be a `resources` section added to their respective specification to allow user defined requests/limits. To allow per node/OSD configuration of resource constraints, a key is added to the `storage.nodes` item. It is named `resources` and contain a resource requirement object (see above). The `rook-ceph-agent` may be utilized in some way to detect how many stores are used. The requests/limits configured are as an example and not to be used in production. 
```yaml apiVersion: v1 kind: Namespace metadata: name: rook-ceph apiVersion: rook.io/v1alpha1 kind: Cluster metadata: name: rook-ceph namespace: rook-ceph spec: dataDirHostPath: /var/lib/rook hostNetwork: false mon: count: 3 allowMultiplePerNode: false placement: resources: api: requests: cpu: \"500m\" memory: \"512Mi\" limits: memory: \"512Mi\" mgr: requests: cpu: \"500m\" memory: \"512Mi\" limits: memory: \"512Mi\" mon: requests: cpu: \"500m\" memory: \"512Mi\" limits: memory: \"512Mi\" osd: requests: cpu: \"500m\" memory: \"512Mi\" limits: memory: \"512Mi\" storage: useAllNodes: true useAllDevices: false deviceFilter: metadataDevice: location: storeConfig: storeType: bluestore nodes: name: \"172.17.4.101\" directories: path: \"/rook/storage-dir\" resources: requests: memory: \"512Mi\" name: \"172.17.4.201\" devices: name: \"sdb\" name: \"sdc\" storeConfig: storeType: bluestore resources: requests: memory: \"512Mi\" cpu: \"1\" ```" } ]
{ "category": "Runtime", "file_name": "uninstall.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "English | Delete helm release ``` $ helm uninstall fabedge -n fabedge ``` Delete other resources ``` $ kubectl -n fabedge delete cm --all $ kubectl -n fabedge delete pods --all $ kubectl -n fabedge delete secret --all $ kubectl -n fabedge delete job.batch --all ``` Delete namespace ``` $ kubectl delete namespace fabedge ``` Delete all FabeEdge configuration file from all edge nodes ``` $ rm -f /etc/cni/net.d/fabedge.* ``` Delete all fabedge images on all nodes ``` $ docker images | grep fabedge | awk '{print $3}' | xargs -I{} docker rmi {} ``` Delete CustomResourceDefinition ``` $ kubectl delete CustomResourceDefinition \"clusters.fabedge.io\" $ kubectl delete CustomResourceDefinition \"communities.fabedge.io\" $ kubectl delete ClusterRole \"fabedge-operator\" $ kubectl delete ClusterRoleBinding \"fabedge-operator\" ```" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.5.5.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "This latest stable version of Longhorn 1.5 introduces several improvements and bug fixes that are intended to improve system quality, resilience, and stability. The Longhorn team appreciates your contributions and anticipates receiving feedback regarding this release. [!NOTE] For more information about release-related terminology, see . [!IMPORTANT] Ensure that your cluster is running Kubernetes v1.21 or later before installing Longhorn v1.5.5. You can install Longhorn using a variety of tools, including Rancher, Kubectl, and Helm. For more information about installation methods and requirements, see in the Longhorn documentation. [!IMPORTANT] Ensure that your cluster is running Kubernetes v1.21 or later before upgrading from Longhorn v1.4.x or v1.5.x (< v1.5.5) to v1.5.5. Longhorn only allows upgrades from supported versions. For more information about upgrade paths and procedures, see in the Longhorn documentation. For information about important changes, including feature incompatibility, deprecation, and removal, see in the Longhorn documentation. For information about issues identified after this release, see . - @PhanLe1010 @chriscchien - @derekbit @chriscchien - @shuo-wu @chriscchien - @james-munson @chriscchien - @c3y1huang @roger-ryao - @james-munson @roger-ryao Security issues in latest longhorn docker images - @c3y1huang @chriscchien - @derekbit @chriscchien - @yangchiu @ejweber - @shuo-wu @chriscchien - @ChanYiLin @chriscchien - @ejweber @roger-ryao - @mantissahz @roger-ryao - @ejweber @roger-ryao - @ejweber @chriscchien - @Vicente-Cheng @roger-ryao - @james-munson @chriscchien - @ChanYiLin @chriscchien - @ejweber @roger-ryao - @james-munson @roger-ryao - @c3y1huang @roger-ryao - @ChanYiLin - @ChanYiLin @roger-ryao @ChanYiLin @PhanLe1010 @Vicente-Cheng @c3y1huang @chriscchien @derekbit @ejweber @innobead @james-munson @mantissahz @roger-ryao @shuo-wu @yangchiu" } ]
{ "category": "Runtime", "file_name": "handling-page-faults-on-snapshot-resume.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Firecracker allows for a better management of the microVM's memory loading by letting users choose between relying on host OS to handle the page faults when resuming from a snapshot, or having a dedicated userspace process for dealing with page faults, with the help of . When resuming a microVM from a snapshot, loading the snapshotted guest's memory (which is file-backed) into RAM is usually kernel's responsibility and is handled on a per-page-fault basis. Each time the guest touches a page that is not already in Firecracker's process memory, a page fault occurs, which triggers a context switch and IO operation in order to bring that page into RAM. Depending on the use case, doing this for every page can be time-consuming. Userfaultfd is a mechanism that passes that responsibility of handling page fault events from kernel space to user space. In order to be able to interact with this mechanism, userspace needs to firstly obtain an userfault file descriptor object (UFFD). For (host) kernels 4.14 and 5.10 UFFD objects are created by calling into . For kernel 6.1, UFFD is created through the `/dev/userfaultfd` device. Access to `/dev/userfaultfd` is managed by file system permissions, so the Firecracker process needs to have proper permissions to create the UFFD object. When `/dev/userfaultfd` is present on the host system, jailer makes it available inside the jail and Firecracker process can use it without any further configuration. If a user is not using Firecracker along with the jailer, they should manage manually permissions to `/dev/userfaultfd`. For example, on systems that rely on access control lists (ACLs), this can be achieved by: ```bash sudo setfacl -m u:${USER}:rw /dev/userfaultfd ``` Next, the memory address range must be registered with the userfault file descriptor so that the userfault object can monitor page faults occurring for those addresses. After this, the user space process can start reading and serving events via the userfault file descriptor. These events will contain the address that triggered the fault. The fault-handling thread can choose to handle these events using these . In the flow described above, there are two userspace processes that interact with each other in order to handle page faults: Firecracker process and the page fault handler. Please note that users are responsible for writing the page fault handler process to monitor userfaultfd events and handle those events. Below is the interaction flow between Firecracker and the page fault handler (designed by the users): Page fault handler binds and listens on a unix domain socket in order to be able to communicate with the Firecracker process. Please note that when using the Jailer, the page fault handler process, UDS and memory file must reside inside the jail. The UDS must only be accessible to Firecracker and the page fault handler. PUT snapshot/load API call is issued towards Firecracker's API thread. The request encapsulates in its body the path to the unix domain socket that page fault handler listens to in order to communicate with Firecracker. Firecracker process creates the userfault object and obtains the userfault file descriptor. The page fault handler privately mmaps the contents of the guest memory file. Firecracker anonymously mmaps memory based on the memory description found in the microVM state file and registers the memory regions with the userfault object in order for the userfaultfd to be aware of page fault events on these addresses. 
Firecracker then connects to the socket previously opened by the page fault" }, { "data": "Firecracker passes the userfault file descriptor and the guest memory layout (e.g. dimensions of each memory region, and their in KiB) to the page fault handler process through the socket. After sending the necessary information to the page fault handler, Firecracker continues with the normal cycle to restore from snapshot. It reads from the microVM state file the relevant serialized components and loads them into memory. Page faults that occur while Firecracker is touching guest memory are handled by the page fault handler process, which listens for events on the userfault file descriptor that Firecracker previously sent. When a page fault event happens, the page fault handler issues `UFFDIO_COPY` to load the previously mmaped file contents into the correspondent memory region. After Firecracker sends the payload (i.e. mem mappings and file descriptor), no other communication happens on the UDS socket (or otherwise) between Firecracker and the page fault handler process. The balloon device allows the host to reclaim memory from a microVM. For more details on balloon, please refer to . When the balloon device asks for removal of a memory range, Firecracker calls `madvise` with the `MADV_DONTNEED` flag in order to let the kernel know that it can free up memory found in that specific area. On such a system call, the userfaultfd interface sends `UFFDEVENTREMOVE`. When implementing the logic for the page fault handler, users must identify events of type `UFFDEVENTREMOVE` and handle them by zeroing out those pages. This is because the memory is removed, but the area still remains monitored by userfaultfd. After a cycle of inflation and deflation, page faults might happen again for memory ranges that have been removed by balloon (and subsequently zeroed out by the page fault handler). In such a case, the page fault handler process must zero out the faulted page (instead of bringing it from file), as recommended by . In case of a compromised balloon driver, the page fault handler can get flooded with `UFFDEVENTREMOVE`. We recommend using the jailer's built-in cgroup functionality as defense in depth, in order to limit resource usage of the Firecracker process. If the handler process crashes while Firecracker is resuming the snapshot, Firecracker will hang when a page fault occurs. This is because Firecracker is designed to wait for the requested page to be made available. If the page fault handler process is no longer around when this happens, Firecracker will wait forever. Users are expected to monitor the page fault handler's status or gather metrics of hanged Firecracker process and implement a recycle mechanism if necessary. It is the page fault handler process's responsibility to handle any errors that might occur and also send signals to Firecracker process to inform it of any crashes/exits. The page fault handler can fetch Firecracker's PID through `getsockopt` call with `SO_PEERCRED` option, which fetches credentials of the peer process that is connected to the socket. The returned credentials contain: PID, GID and UID of the peer process (Firecracker in the page fault handler's case). We recommend that the page fault handler includes timeouts for waiting on Firecracker to connect to the UDS or send information over the UDS, in order to account for unexpected cases when Firecracker crashes before being able to connect/send data. An example of a handler process can be found . 
The process is designed to tackle faults on a certain address by loading into memory the entire region that the address belongs to, but users can choose any other behavior that suits their use case best." } ]
{ "category": "Runtime", "file_name": "handle-backup-of-volumes-by-resources-filters.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, Velero doesn't have one flexible way to handle volumes. If users want to skip the backup of volumes or only backup some volumes in different namespaces in batch, currently they need to use the opt-in and opt-out approach one by one, or use label-selector but if it has big different labels on each different related pod, which is cumbersome when they have lots of volumes to handle with. it would be convenient if Velero could provide policies to handle the backup of volumes just by `some specific volumes conditions`. As of Today, Velero has lots of filters to handle (backup or skip backup) resources including resources filters like `IncludedNamespaces, ExcludedNamespaces`, label selectors like `LabelSelector, OrLabelSelectors`, annotation like `backup.velero.io/must-include-additional-items` etc. But it's not enough flexible to handle volumes, we need one generic way to handle volumes. Introducing flexible policies to handle volumes, and do not patch any labels or annotations to the pods or volumes. We only handle volumes for backup and do not support restore. Currently, only handles volumes, and does not support other resources. Only environment-unrelated and platform-independent general volumes attributes are supported, do not support volumes attributes related to a specific environment. Users want to skip PV with the requirements: option to skip all PV data option to skip specified PV type (RBD, NFS) option to skip specified PV size option to skip specified storage-class First, Velero will provide the user with one YAML file template and all supported volume policies will be in. Second, writing your own configuration file by imitating the YAML template, it could be partial volume policies from the template. Third, create one configmap from your own configuration file, and the configmap should be in Velero install namespace. Fourth, create a backup with the command `velero backup create --resource-policies-configmap $policiesConfigmap`, which will reference the current backup to your volume policies. At the same time, Velero will validate all volume policies user imported, the backup will fail if the volume policies are not supported or some items could not be parsed. Fifth, the current backup CR will record the reference of volume policies configmap. Sixth, Velero first filters volumes by other current supported filters, at last, it will apply the volume policies to the filtered volumes to get the final matched volume to handle. 
The volume resources policies should contain a list of policies which is the combination of conditions and related `action`, when target volumes meet the conditions, the related `action` will take" }, { "data": "Below is the API Design for the user configuration: ```go type VolumeActionType string const Skip VolumeActionType = \"skip\" // Action defined as one action for a specific way of backup type Action struct { // Type defined specific type of action, it could be 'file-system-backup', 'volume-snapshot', or 'skip' currently Type VolumeActionType `yaml:\"type\"` // Parameters defined map of parameters when executing a specific action // +optional // +nullable Parameters map[string]interface{} `yaml:\"parameters,omitempty\"` } // VolumePolicy defined policy to conditions to match Volumes and related action to handle matched Volumes type VolumePolicy struct { // Conditions defined list of conditions to match Volumes Conditions map[string]interface{} `yaml:\"conditions\"` Action Action `yaml:\"action\"` } // ResourcePolicies currently defined slice of volume policies to handle backup type ResourcePolicies struct { Version string `yaml:\"version\"` VolumePolicies []VolumePolicy `yaml:\"volumePolicies\"` // we may support other resource policies in the future, and they could be added separately // OtherResourcePolicies: []OtherResourcePolicy } ``` The policies YAML config file would look like this: ```yaml version: v1 volumePolicies: conditions: capacity: \"0,100Gi\" csi: driver: aws.ebs.csi.driver fsType: ext4 storageClass: gp2 ebs-sc action: type: volume-snapshot parameters: volume-snapshot-timeout: \"6h\" conditions: capacity: \"0,100Gi\" storageClass: gp2 ebs-sc action: type: file-system-backup conditions: nfs: server: 192.168.200.90 action: type: file-system-backup conditions: nfs: {} action: type: skip conditions: csi: driver: aws.efs.csi.driver action: type: skip ``` The whole resource policies consist of groups of volume policies. For one specific volume policy which is a combination of one action and serval conditions. which means one action and serval conditions are the smallest unit of volume policy. Volume policies are a list and if the target volumes match the first policy, the latter will be ignored, which would reduce the complexity of matching volumes especially when there are multiple complex volumes policies. `Action` defined one action for a specific way of backup: if choosing `Kopia` or `Restic`, the action value would be `file-system-backup`. if choosing volume snapshot, the action value would be `volume-snapshot`. if choosing skip backup of volume, the action value would be `skip`, and it will skip backup of volume no matter is `file-system-backup` or `volume-snapshot`. The policies could be extended for later other ways of backup, which means it may have some other `Action` value that will be assigned in the future. Both `file-system-backup` `volume-snapshot`, and `skip` could be partially or fully configured in the YAML file. And configuration could take effect only for the related action. The conditions are serials of volume attributes, the matched Volumes should meet all the volume attributes in one conditions configuration. In Velero 1.11, we want to support the volume attributes listed below: capacity: matching volumes have the capacity that falls within this `capacity` range. storageClass: matching volumes those with specified `storageClass`, such as `gp2`, `ebs-sc` in eks. matching volumes that used specified volume sources. 
Parameters are optional for one specific action. For example, it could be `csi-snapshot-timeout: 6h` for CSI snapshot. One single condition in `Conditions` with a specific key and empty value, which means the value matches any value. For example, if the `conditions.nfs` is `{}`, it means if `NFS` is used as `persistentVolumeSource` in Persistent Volume will be skipped no matter what the NFS server or NFS Path is. The size of each single filter value should limit to 256 bytes in case of an unfriendly long variable assignment. For capacity for PV or size for Volume, the value should include the lower value and upper value concatenated by" }, { "data": "And it has several combinations below: \"0,5Gi\" or \"0Gi,5Gi\" which means capacity or size matches from 0 to 5Gi, including value 0 and value 5Gi \",5Gi\" which is equal to \"0,5Gi\" \"5Gi,\" which means capacity or size matches larger than 5Gi, including value 5Gi \"5Gi\" which is not supported and will be failed in validating configuration. Currently, resources policies are defined in `BackupSpec` struct, it will be more and more bloated with adding more and more filters which makes the size of `Backup` CR bigger and bigger, so we want to store the resources policies in configmap, and `Backup` CRD reference to current configmap. the `configmap` user created would be like this: ```yaml apiVersion: v1 data: policies.yaml: version: v1 volumePolicies: conditions: capacity: \"0,100Gi\" csi: driver: aws.ebs.csi.driver fsType: ext4 storageClass: gp2 ebs-sc action: type: volume-snapshot parameters: volume-snapshot-timeout: \"6h\" kind: ConfigMap metadata: creationTimestamp: \"2023-01-16T14:08:12Z\" name: backup01 namespace: velero resourceVersion: \"17891025\" uid: b73e7f76-fc9e-4e72-8e2e-79db717fe9f1 ``` A new variable `resourcePolices` would be added into `BackupSpec`, it's value is assigned with the current resources policy configmap ```yaml apiVersion: velero.io/v1 kind: Backup metadata: name: backup-1 spec: resourcePolices: refType: Configmap ref: backup01 ... ``` The configmap only stores those assigned values, not the whole resources policies. The name of the configmap is `$BackupName`, and it's in Velero install namespace. The life cycle of resource policies configmap is managed by the user instead of Velero, which could make it more flexible and easy to maintain. The resource policies configmap will remain in the cluster until the user deletes it. Unlike backup, the resource policies configmap will not sync to the new cluster. So if the user wants to use one resource policies that do not sync to the new cluster, the backup will fail with resource policies not found. One resource policies configmap could be used by multiple backups. If the backup referenced resource policies configmap is been deleted, it won't affect the already existing backups, but if the user wants to reference the deleted configmap to create one new backup, it will fail with resource policies not found. We want to introduce the version field in the YAML data to contain break changes. Therefore, we won't follow a semver paradigm, for example in v1.11 the data look like this: ```yaml version: v1 volumePolicies: .... ``` Hypothetically, in v1.12 we add new fields like clusterResourcePolicies, the version will remain as v1 b/c this change is backward compatible: ```yaml version: v1 volumePolicies: .... clusterResourcePolicies: .... ``` Suppose in v1.13, we have to introduce a break change, at this time we will bump up the version: ```yaml version: v2 volume-policies: .... 
``` We only support one version in Velero, so it won't be recognized if backup using a former version of YAML data. To manage the effort for maintenance, we will only support one version of the data in Velero. Suppose that there is one break change for the YAML data in Velero v1.13, we should bump up the config version to v2, and v2 is only supported in v1.13. For the existing data with version: v1, it should migrate them when the Velero startup, this won't hurt the existing backup schedule CR as it only references the" }, { "data": "To make the migration easier, the configmap for such resource filter policies should be labeled manually before Velero startup like this, Velero will migrate the labeled configmap. We only support migrating from the previous version to the current version in case of complexity in data format conversion, which users could regenerate configmap in the new YAML data version, and it is easier to do version control. ```yaml apiVersion: v1 kind: ConfigMap metadata: labels: velero.io/resource-filter-policies: \"true\" name: example namespace: velero data: ..... ``` As the resource policies configmap is referenced by backup CR, the policies in configmap are not so intuitive, so we need to integrate policies in configmap to the output of the command `velero backup describe`, and make it more readable. Currently, we have these resources filters: IncludedNamespaces ExcludedNamespaces IncludedResources ExcludedResources LabelSelector OrLabelSelectors IncludeClusterResources UseVolumeSnapshots velero.io/exclude-from-backup=true backup.velero.io/backup-volumes-excludes backup.velero.io/backup-volumes backup.velero.io/must-include-additional-items So it should be careful with the combination of volumes resources policies and the above resources filters. When volume resource policies conflict with the above resource filters, we should respect the above resource filters. For example, if the user used the opt-out approach to `backup.velero.io/backup-volumes-excludes` annotation on the pod and also defined include volume in volumes resources filters configuration, we should respect the opt-out approach to skip backup of the volume. If volume resource policies conflict with themselves, the first matched policy will be respect. This implementation should be included in Velero v1.11.0 Currently, in Velero v1.11.0 we only support `Action` `skip`, and support `file-system-backup` and `volume-snapshot` for the later version. And `Parameters` in `Action` is also not supported in v1.11.0, we will support in a later version. In Velero 1.11, we supported Conditions and format listed below: capacity ```yaml capacity: \"10Gi,100Gi\" // match volume has the size between 10Gi and 100Gi ``` storageClass ```yaml storageClass: // match volume has the storage class gp2 or ebs-sc gp2 ebs-sc ``` volume sources (currently only support below format and attributes) Specify the volume source name, the name could be `nfs`, `rbd`, `iscsi`, `csi` etc. 
```yaml nfs: {} // match any volume that has an nfs volume source csi: {} // match any volume that has a csi volume source ``` Specify details for the related volume source (currently we only support the csi driver filter and the nfs server or path filter) ```yaml csi: // match volumes that have a csi volume source using the `aws.efs.csi.driver` driver driver: aws.efs.csi.driver nfs: // match volumes that have an nfs volume source using the server and path below server: 192.168.200.90 path: /mnt/nfs ``` The conditions could also be extended in later versions; for example, we could further support filtering on other volume source details beyond NFS and CSI. Here we let the user define the YAML config file and store the resource policies in a configmap; alternatively, we could define a resource policies CRD and store the policies imported from the user-defined config file in the related CR. However, a CRD is more like a resource with status, whose lifecycle the Kubernetes API server manages across its different statuses, whereas a configmap is focused on storing data. Should we support more than one version of the filter policies configmap?" } ]
{ "category": "Runtime", "file_name": "upload-progress.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Volume snapshotter plugin are used by Velero to take snapshots of persistent volume contents. Depending on the underlying storage system, those snapshots may be available to use immediately, they may be uploaded to stable storage internally by the plugin or they may need to be uploaded after the snapshot has been taken. We would like for Velero to continue on to the next part of the backup as quickly as possible but we would also like the backup to not be marked as complete until it is a usable backup. We'd also eventually like to bring the control of upload under the control of Velero and allow the user to make decisions about the ultimate destination of backup data independent of the storage system they're using. AWS - AWS snapshots return quickly, but are then uploaded in the background and cannot be used until EBS moves the data into S3 internally. vSphere - The vSphere plugin takes a local snapshot and then the vSphere plugin uploads the data to S3. The local snapshot is usable before the upload completes. Restic - Does not go through the volume snapshot path. Restic backups will block Velero progress until completed. Enable monitoring of operations that continue after snapshotting operations have completed Keep non-usable backups (upload/persistence has not finished) from appearing as completed Minimize change to volume snapshot and BackupItemAction plugins Unification of BackupItemActions and VolumeSnapshotters In this model, movement of the snapshot to stable storage is under the control of the snapshot plugin. Decisions about where and when the snapshot gets moved to stable storage are not directly controlled by Velero. This is the model for the current VolumeSnapshot plugins. In this model, the snapshot is moved to external storage under the control of Velero. This enables Velero to move data between storage systems. This also allows backup partners to use Velero to snapshot data and then move the data into their backup repository. Velero currently has backup phases \"InProgress\" and \"Completed\". The backup moves to the Completed phase when all of the volume snapshots have completed and the Kubernetes metadata has been written into the object store. However, the actual data movement may be happening in the background after the backup has been marked \"Completed\". The backup is not actually a stable backup until the data has been persisted properly. In some cases (e.g. AWS) the backup cannot be restored from until the snapshots have been persisted. Once the snapshots have been taken, however, it is possible for additional backups to be made without interference. Waiting until all data has been moved before starting the next backup will slow the progress of the system without adding any actual benefit to the user. A new backup phase, \"Uploading\" will be introduced. When a backup has entered this phase, Velero is free to start another backup. The backup will remain in the \"Uploading\" phase until all data has been successfully moved to persistent storage. The backup will not fail once it reaches this phase, it will continuously retry moving the data. If the backup is deleted (cancelled), the plugins will attempt to delete the snapshots and stop the data movement - this may not be possible with all storage" }, { "data": "When a backup request is initially created, it is in the \"New\" phase. 
The next state is either \"InProgress\" or \"FailedValidation\" If the backup request is incorrectly formed, it goes to the \"FailedValidation\" phase and terminates When work on the backup begins, it moves to the \"InProgress\" phase. It remains in the \"InProgress\" phase until all pre/post execution hooks have been executed, all snapshots have been taken and the Kubernetes metadata and backup info is safely written to the object store plugin. In the current implementation, Restic backups will move data during the \"InProgress\" phase. In the future, it may be possible to combine a snapshot with a Restic (or equivalent) backup which would allow for data movement to be handled in the \"Uploading\" phase, The next phase is either \"Completed\", \"Uploading\", \"Failed\" or \"PartiallyFailed\". Backups which would have a final phase of \"Completed\" or \"PartiallyFailed\" may move to the \"Uploading\" state. A backup which will be marked \"Failed\" will go directly to the \"Failed\" phase. Uploads may continue in the background for snapshots that were taken by a \"Failed\" backup, but no progress will not be monitored or updated. When a \"Failed\" backup is deleted, all snapshots will be deleted and at that point any uploads still in progress should be aborted. The \"Uploading\" phase signifies that the main part of the backup, including snapshotting has completed successfully and uploading is continuing. In the event of an error during uploading, the phase will change to UploadingPartialFailure. On success, the phase changes to Completed. The backup cannot be restored from when it is in the Uploading state. The \"UploadingPartialFailure\" phase signifies that the main part of the backup, including snapshotting has completed, but there were partial failures either during the main part or during the uploading. The backup cannot be restored from when it is in the UploadingPartialFailure state. When a backup has had fatal errors it is marked as \"Failed\" This backup cannot be restored from. The \"Completed\" phase signifies that the backup has completed, all data has been transferred to stable storage and the backup is ready to be used in a restore. When the Completed phase has been reached it is safe to remove any of the items that were backed up. The \"PartiallyFailed\" phase signifies that the backup has completed and at least part of the backup is usable. Restoration from a PartiallyFailed backup will not result in a complete restoration but pieces may be available. When a BackupAction is executed, any SnapshotItemAction or VolumeSnapshot plugins will return snapshot IDs. The plugin should be able to provide status on the progress for the snapshot and handle cancellation of the upload if the snapshot is deleted. If the plugin is restarted, the snapshot ID should remain valid. When all snapshots have been taken and Kubernetes resources have been persisted to the ObjectStorePlugin the backup will either have fatal errors or will be at least partially usable. If the backup has fatal errors it will move to the \"Failed\" state and finish. If a backup fails, the upload will not be cancelled but it will not be monitored either. For backups in any phase, all snapshots will be deleted when the backup is" }, { "data": "Plugins will cancel any data movement and remove snapshots and other associated resources when the VolumeSnapshotter DeleteSnapshot method or DeleteItemAction Execute method is called. 
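As a quick way to observe these phase transitions while experimenting, one can watch the phase recorded on the Backup resource itself (a sketch; it assumes Velero is installed in the velero namespace and the backup is named backup-1): ```bash $ kubectl -n velero get backup backup-1 -o jsonpath='{.status.phase}' ``` Under this design the value would move from InProgress through Uploading (or UploadingPartialFailure) before settling on Completed, PartiallyFailed, or Failed.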
Velero will poll the plugins for status on the snapshots when the backup exits the \"InProgress\" phase and has no fatal errors. If any snapshots are not complete, the backup will move to either Uploading or UploadingPartialFailure or Failed. Post-snapshot operations may take a long time and Velero and its plugins may be restarted during this time. Once a backup has moved into the Uploading or UploadingPartialFailure phase, another backup may be started. While in the Uploading or UploadingPartialFailure phase, the snapshots and backup items will be periodically polled. When all of the snapshots and backup items have reported success, the backup will move to the Completed or PartiallyFailed phase, depending on whether the backup was in the Uploading or UploadingPartialFailure phase. The Backup resources will not be written to object storage until the backup has entered a final phase: Completed, Failed or PartialFailure InProgress backups will not have a `velero-backup.json` present in the object store. During reconciliation, backups which do not have a `velero-backup.json` object in the object store will be ignored. type UploadProgress struct { completed bool // True when the operation has completed, either successfully or with a failure err error // Set when the operation has failed itemsCompleted, itemsToComplete int64 // The number of items that have been completed and the items to complete // For a disk, an item would be a byte and itemsToComplete would be the // total size to transfer (may be less than the size of a volume if // performing an incremental) and itemsCompleted is the number of bytes // transferred. On successful completion, itemsCompleted and itemsToComplete // should be the same started, updated time.Time // When the upload was started and when the last update was seen. Not all // systems retain when the upload was begun, return Time 0 (time.Unix(0, 0)) // if unknown. } A new method will be added to the VolumeSnapshotter interface (details depending on plugin versioning spec) UploadProgress(snapshotID string) (UploadProgress, error) UploadProgress will report the current status of a snapshot upload. This should be callable at any time after the snapshot has been taken. In the event a plugin is restarted, if the snapshotID continues to be valid it should be possible to retrieve the progress. `error` is set if there is an issue retrieving progress. If the snapshot is has encountered an error during the upload, the error should be return in UploadProgress and error should be nil. Currently CSI snapshots and the Velero Plugin for vSphere are implemented as BackupItemAction plugins. The majority of BackupItemAction plugins do not take snapshots or upload data so rather than modify BackupItemAction we introduce a new plugins, SnapshotItemAction. SnapshotItemAction will be used in place of BackupItemAction for the CSI snapshots and the Velero Plugin for vSphere and will return a snapshot ID in addition to the item itself. The SnapshotItemAction plugin identifier as well as the Item and Snapshot ID will be stored in the `<backup-name>-itemsnapshots.json.gz`. When checking for progress, this info will be used to select the appropriate SnapshotItemAction plugin to query for" }, { "data": "NotApplicable should only be returned if the SnapshotItemAction plugin should not be handling the item. 
If the SnapshotItemAction plugin should handle the item but, for example, the item/snapshot ID cannot be found to report progress, a UploadProgress struct with the error set appropriately (in this case NotFound) should be returned. // SnapshotItemAction is an actor that snapshots an individual item being backed up (it may also do other operations on the item that is returned). type SnapshotItemAction interface { // AppliesTo returns information about which resources this action should be invoked for. // A BackupItemAction's Execute function will only be invoked on items that match the returned // selector. A zero-valued ResourceSelector matches all resources. AppliesTo() (ResourceSelector, error) // Execute allows the ItemAction to perform arbitrary logic with the item being backed up, // including mutating the item itself prior to backup. The item (unmodified or modified) // should be returned, along with an optional slice of ResourceIdentifiers specifying // additional related items that should be backed up. Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, snapshotID string, []ResourceIdentifier, error) // Progress Progress(input *SnapshotItemProgressInput) (UploadProgress, error) } // SnapshotItemProgressInput contains the input parameters for the SnapshotItemAction's Progress function. type SnapshotItemProgressInput struct { // Item is the item that was stored in the backup Item runtime.Unstructured // SnapshotID is the snapshot ID returned by SnapshotItemAction SnapshotID string // Backup is the representation of the restore resource processed by Velero. Backup *velerov1api.Backup } No changes to the existing format are introduced by this change. A `<backup-name>-itemsnapshots.json.gz` file will be added that contains the items and snapshot IDs returned by ItemSnapshotAction. Also, the creation of the `velero-backup.json` object will not occur until the backup moves to one of the terminal phases (Completed, PartiallyFailed, or Failed). Reconciliation should ignore backups that do not have a `velero-backup.json` object. The cluster that is creating the backup will have the Backup resource present and will be able to manage the backup before the backup completes. If the Backup resource is removed (e.g. Velero is uninstalled) before a backup completes and writes its `velero-backup.json` object, the other objects in the object store for the backup will be effectively orphaned. This can currently happen but the current window is much smaller. The itemsnapshots file is similar to the existing `<backup-name>-itemsnapshots.json.gz` Each snapshot taken via SnapshotItemAction will have a JSON record in the file. Exact format TBD. For systems such as EBS, a snapshot is not available until the storage system has transferred the snapshot to stable storage. CSI snapshots expose the readyToUse state that, in the case of EBS, indicates that the snapshot has been transferred to durable storage and is ready to be used. The CSI BackupItemProgress.Progress method will poll that field and when completed, return completion. The vSphere Plugin for Velero uploads snapshots to S3 in the background. This is also a BackupItemAction plugin, it will check the status of the Upload records for the snapshot and return progress. The backup workflow remains the same until we get to the point where the `velero-backup.json` object is written. At this point, we will queue the backup to a finalization go-routine. 
The next backup may then" }, { "data": "The finalization routine will run across all of the volume snapshots and call the UploadProgress method on each of them. It will then run across all items and call BackupItemProgress.Progress for any that match with a BackupItemProgress. If all snapshots and backup items have finished uploading (either successfully or failed), the backup will be completed and the backup will move to the appropriate terminal phase and upload the `velero-backup.json` object to the object store and the backup will be complete. If any of the snapshots or backup items are still being processed, the phase of the backup will be set to the appropriate phase (Uploading or UploadingPartialFailure). In the event of any of the upload progress checks return an error, the phase will move to UploadingPartialFailure. The backup will then be requeued and will be rechecked again after some time has passed. On restart, the Velero server will scan all Backup resources. Any Backup resources which are in the InProgress phase will be moved to the Failed phase. Any Backup resources in the Oploading or OploadingPartialFailure phase will be treated as if they have been requeued and progress checked and the backup will be requeued or moved to a terminal phase as appropriate. VolumeSnapshotter new plugin APIs BackupItemProgress new plugin interface New backup phases Defer uploading `velero-backup.json` AWS EBS plugin UploadProgress implementation Upload monitoring Implementation of `<backup-name>-itemsnapshots.json.gz` file Restart logic Change in reconciliation logic to ignore backups that have not completed CSI plugin BackupItemProgress implementation vSphere plugin BackupItemProgress implementation (vSphere plugin team) Futures are here for reference, they may change radically when actually implemented. Some storage systems have the ability to provide different levels of protection for snapshots. These are termed \"Fragile\" and \"Durable\". Currently, Velero expects snapshots to be Durable (they should be able to survive the destruction of the cluster and the storage it is using). In the future we would like the ability to take advantage of snapshots that are Fragile. For example, vSphere snapshots are Fragile (they reside in the same datastore as the virtual disk). The Velero Plugin for vSphere uses a vSphere local/fragile snapshot to get a consistent snapshot, then uploads the data to S3 to make it Durable. In the current design, upload progress will not be complete until the snapshot is ready to use and Durable. It is possible, however, to restore data from a vSphere snapshot before it has been made Durable, and this is a capability we'd like to expose in the future. Other storage systems implement this functionality as well. We will be moving the control of the data movement from the vSphere plugin into Velero. Some storage system, such as EBS, are only capable of creating Durable snapshots. There is no usable intermediate Fragile stage. For a Velero backup, users should be able to specify whether they want a Durable backup or a Fragile backup (Fragile backups may consume less resources, be quicker to restore from and are suitable for things like backing up a cluster before upgrading software). We can introduce three snapshot states - Creating, Fragile and Durable. A snapshot would be created with a desired state, Fragile or Durable. When the snapshot reaches the desired or higher state (e.g. 
request was for Fragile but snapshot went to Durable as on EBS), then the snapshot would be completed." } ]
{ "category": "Runtime", "file_name": "metrics.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "To enable the metrics exporter for CRI-O, either start `crio` with `--metrics-enable` or add the corresponding option to a config overwrite, for example `/etc/crio/crio.conf.d/01-metrics.conf`: ```toml [crio.metrics] enable_metrics = true ``` The metrics endpoint serves per default on port `9090` via HTTP. This can be changed via the `--metrics-port` command line argument or via the configuration file: ```toml metrics_port = 9090 ``` If CRI-O runs with enabled metrics, then this can be verified by querying the endpoint manually via . ```shell curl localhost:9090/metrics ``` It is also possible to serve the metrics via HTTPs, by providing an additional certificate and key: ```toml [crio.metrics] enable_metrics = true metrics_cert = \"/path/to/cert.pem\" metrics_key = \"/path/to/key.pem\" ``` Beside the , CRI-O provides the following additional metrics: <!-- markdownlint-disable MD013 MD033 --> | Metric Key | Possible Labels or Buckets | Type | Purpose | | | | | | | `criooperationstotal` | every CRI-O RPC\\* `operation` | Counter | Cumulative number of CRI-O operations by operation type. | | `criooperationslatencysecondstotal` | every CRI-O RPC\\* `operation`,<br><br>`networksetuppod` (CNI pod network setup time),<br><br>`networksetupoverall` (Overall network setup time) | Summary | Latency in seconds of CRI-O operations. Split-up by operation type. | | `criooperationslatency_seconds` | every CRI-O RPC\\* `operation` | Gauge | Latency in seconds of individual CRI calls for CRI-O operations. Broken down by operation type. | | `criooperationserrors_total` | every CRI-O RPC\\* `operation` | Counter | Cumulative number of CRI-O operation errors by operation type. | | `crioimagepullsbytestotal` | `mediatype`, `size`<br>sizes are in bucket of bytes for layer sizes of 1 KiB, 1 MiB, 10 MiB, 50 MiB, 100 MiB, 200 MiB, 300 MiB, 400 MiB, 500 MiB, 1 GiB, 10 GiB | Counter | Bytes transferred by CRI-O image pulls. | | `crioimagepullsskippedbytes_total` | `size`<br>sizes are in bucket of bytes for layer sizes of 1 KiB, 1 MiB, 10 MiB, 50 MiB, 100 MiB, 200 MiB, 300 MiB, 400 MiB, 500 MiB, 1 GiB, 10 GiB | Counter | Bytes skipped by CRI-O image pulls by name. The ratio of skipped bytes to total bytes can be used to determine cache reuse ratio. | | `crioimagepullssuccesstotal` | | Counter | Successful image pulls. | | `crioimagepullsfailuretotal` | `error` | Counter | Failed image pulls by their error category. | | `crioimagepullslayersize_{sum,count,bucket}` | buckets in byte for layer sizes of 1 KiB, 1 MiB, 10 MiB, 50 MiB, 100 MiB, 200 MiB, 300 MiB, 400 MiB, 500 MiB, 1 GiB, 10 GiB | Histogram | Bytes transferred by CRI-O image pulls per layer. | | `crioimagelayerreusetotal` | | Counter | Reused (not pulled) local image layer count by name. | | `criocontainersdroppedeventstotal` | | Counter | The total number of container events" }, { "data": "| | `criocontainersoom_total` | | Counter | Total number of containers killed because they ran out of memory (OOM). | | `criocontainersoomcounttotal` | `name` | Counter | Containers killed because they ran out of memory (OOM) by their name.<br>The label `name` can have high cardinality sometimes but it is in the interest of users giving them the ease to identify which container(s) are going into OOM state. Also, ideally very few containers should OOM keeping the label cardinality of `name` reasonably low. | | `criocontainersseccompnotifiercount_total` | `name`, `syscall` | Counter | Forbidden `syscall` count resulting in killed containers by `name`. 
| | `crioprocessesdefunct` | | Gauge | Total number of defunct processes in the node | <!-- markdownlint-enable MD013 MD033 --> Available CRI-O RPC's from the : `Attach`, `ContainerStats`, `ContainerStatus`, `CreateContainer`, `Exec`, `ExecSync`, `ImageFsInfo`, `ImageStatus`, `ListContainerStats`, `ListContainers`, `ListImages`, `ListPodSandbox`, `PodSandboxStatus`, `PortForward`, `PullImage`, `RemoveContainer`, `RemoveImage`, `RemovePodSandbox`, `ReopenContainerLog`, `RunPodSandbox`, `StartContainer`, `Status`, `StopContainer`, `StopPodSandbox`, `UpdateContainerResources`, `UpdateRuntimeConfig`, `Version` Available error categories for `crioimagepulls_failures`: `UNKNOWN`: The default label which gets applied if the error is not known `CONNECTION_REFUSED`: The local network is down or the registry refused the connection. `CONNECTION_TIMEOUT`: The connection timed out during the image download. `NOT_FOUND`: The registry does not exist at the specified resource `BLOB_UNKNOWN`: This error may be returned when a blob is unknown to the registry in a specified repository. This can be returned with a standard get or if a manifest references an unknown layer during upload. `BLOBUPLOADINVALID`: The blob upload encountered an error and can no longer proceed. `BLOBUPLOADUNKNOWN`: If a blob upload has been cancelled or was never started, this error code may be returned. `DENIED`: The access controller denied access for the operation on a resource. `DIGEST_INVALID`: When a blob is uploaded, the registry will check that the content matches the digest provided by the client. The error may include a detail structure with the key \"digest\", including the invalid digest string. This error may also be returned when a manifest includes an invalid layer digest. `MANIFESTBLOBUNKNOWN`: This error may be returned when a manifest blob is unknown to the registry. `MANIFEST_INVALID`: During upload, manifests undergo several checks ensuring validity. If those checks fail, this error may be returned, unless a more specific error is included. The detail will contain information the failed validation. `MANIFEST_UNKNOWN`: This error is returned when the manifest, identified by name and tag is unknown to the repository. `MANIFEST_UNVERIFIED`: During manifest upload, if the manifest fails signature verification, this error will be returned. `NAME_INVALID`: Invalid repository name encountered either during manifest. validation or any API operation. `NAME_UNKNOWN`: This is returned if the name used during an operation is unknown to the registry. `SIZE_INVALID`: When a layer is uploaded, the provided size will be checked against the uploaded content. If they do not match, this error will be" }, { "data": "`TAG_INVALID`: During a manifest upload, if the tag in the manifest does not match the uri tag, this error will be returned. `TOOMANYREQUESTS`: Returned when a client attempts to contact a service too many times. `UNAUTHORIZED`: The access controller was unable to authenticate the client. Often this will be accompanied by a Www-Authenticate HTTP response header indicating how to authenticate. `UNAVAILABLE`: Returned when a service is not available. `UNSUPPORTED`: The operation was unsupported due to a missing implementation or invalid set of parameters. The CRI-O metrics exporter can be used to provide a cluster wide scraping endpoint for Prometheus. It is possible to either build the container image manually via `make metrics-exporter` or directly consume the [available image on quay.io][4]. 
The deployment requires RBAC enabled within the target Kubernetes environment and creates a new ClusterRole to be able to list available nodes. Besides that, a new Role will be created to be able to update a config-map within the `cri-o-exporter` namespace. Please be aware that the exporter only works if the pod has access to the node IP from its namespace. This should generally work but might be restricted due to network configuration or policies. To deploy the metrics exporter within a new `cri-o-metrics-exporter` namespace, simply apply the `contrib/metrics-exporter/cluster.yaml` manifest from the root directory of this repository: ```shell kubectl create -f contrib/metrics-exporter/cluster.yaml ``` The `CRIOMETRICSPORT` environment variable is set by default to `\"9090\"` and can be used to customize the metrics port for the nodes. If the deployment is up and running, it should log the registered nodes as well as the creation of a new config-map: ```shell $ kubectl logs -f cri-o-metrics-exporter-65c9b7b867-7qmsb level=info msg=\"Getting cluster configuration\" level=info msg=\"Creating Kubernetes client\" level=info msg=\"Retrieving nodes\" level=info msg=\"Registering handler /master (for 172.1.2.0)\" level=info msg=\"Registering handler /node-0 (for 172.1.3.0)\" level=info msg=\"Registering handler /node-1 (for 172.1.3.1)\" level=info msg=\"Registering handler /node-2 (for 172.1.3.2)\" level=info msg=\"Registering handler /node-3 (for 172.1.3.3)\" level=info msg=\"Registering handler /node-4 (for 172.1.3.4)\" level=info msg=\"Updated scrape configs in configMap cri-o-metrics-exporter\" level=info msg=\"Wrote scrape configs to configMap cri-o-metrics-exporter\" level=info msg=\"Serving HTTP on :8080\" ``` The config-map now contains the scrape configs, which can be used for Prometheus: ```shell kubectl get cm cri-o-metrics-exporter -o yaml ``` ```yaml apiVersion: v1 data: config: | scrape_configs: job_name: \"cri-o-exporter-master\" scrape_interval: 1s metrics_path: /master static_configs: targets: [\"cri-o-metrics-exporter.cri-o-metrics-exporter\"] labels: instance: \"master\" job_name: \"cri-o-exporter-node-0\" scrape_interval: 1s metrics_path: /node-0 static_configs: targets: [\"cri-o-metrics-exporter.cri-o-metrics-exporter\"] labels: instance: \"node-0\" job_name: \"cri-o-exporter-node-1\" scrape_interval: 1s metrics_path: /node-1 static_configs: targets: [\"cri-o-metrics-exporter.cri-o-metrics-exporter\"] labels: instance: \"node-1\" job_name: \"cri-o-exporter-node-2\" scrape_interval: 1s metrics_path: /node-2 static_configs: targets: [\"cri-o-metrics-exporter.cri-o-metrics-exporter\"] labels: instance: \"node-2\" job_name: \"cri-o-exporter-node-3\" scrape_interval: 1s metrics_path: /node-3 static_configs: targets: [\"cri-o-metrics-exporter.cri-o-metrics-exporter\"] labels: instance: \"node-3\" job_name: \"cri-o-exporter-node-4\" scrape_interval: 1s metrics_path: /node-4 static_configs: targets: [\"cri-o-metrics-exporter.cri-o-metrics-exporter\"] labels: instance: \"node-4\" kind: ConfigMap metadata: creationTimestamp: \"2020-05-12T08:29:06Z\" name: cri-o-metrics-exporter namespace: cri-o-metrics-exporter resourceVersion: \"2862950\" selfLink: /api/v1/namespaces/cri-o-metrics-exporter/configmaps/cri-o-metrics-exporter uid: 1409804a-78a2-4961-8205-c5f383626b4b ``` If the scrape configuration has been added to the Prometheus server, then the provided dashboard can be set up, too." } ]
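If Prometheus scrapes the node endpoints directly instead of going through the exporter proxy, a job like the following can be added to the Prometheus configuration. This is only a minimal sketch: the node address and CA path are placeholders, and the `scheme`/`tls_config` settings apply only when `metrics_cert` and `metrics_key` are configured as shown above; for plain HTTP on port `9090` they can be dropped.

```yaml
scrape_configs:
  - job_name: "cri-o-direct"
    metrics_path: /metrics
    scheme: https                              # only when metrics_cert/metrics_key are configured
    tls_config:
      ca_file: /etc/prometheus/crio-ca.pem     # assumed path to the CA that signed metrics_cert
    static_configs:
      - targets: ["node-0.example.internal:9090"]   # placeholder node address and metrics port
```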
{ "category": "Runtime", "file_name": "dex.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "Dex is an identity service that uses OpenID Connect to drive authentication for apps. Dex acts as a portal to other identity providers through \"connectors.\" This lets dex defer authentication to LDAP servers, SAML providers, or established identity providers like GitHub, Google, and Active Directory. Clients write their authentication logic once to talk to dex, then dex handles the protocols for a given backend. Install Dex by following ``` ~ ./bin/dex serve dex.yaml time=\"2020-07-12T20:45:50Z\" level=info msg=\"config issuer: http://127.0.0.1:5556/dex\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config storage: sqlite3\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config static client: Example App\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config connector: mock\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config connector: local passwords enabled\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config response types accepted: [code token id_token]\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config using password grant connector: local\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config signing keys expire after: 3h0m0s\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"config id tokens valid for: 3h0m0s\" time=\"2020-07-12T20:45:50Z\" level=info msg=\"listening (http) on 0.0.0.0:5556\" ``` ``` ~ export MINIOIDENTITYOPENIDCLAIMNAME=name ~ export MINIOIDENTITYOPENIDCONFIGURL=http://127.0.0.1:5556/dex/.well-known/openid-configuration ~ minio server ~/test ``` ``` ~ go run web-identity.go -cid example-app -csec ZXhhbXBsZS1hcHAtc2VjcmV0 \\ -config-ep http://127.0.0.1:5556/dex/.well-known/openid-configuration \\ -cscopes groups,openid,email,profile ``` ``` ~ mc admin policy create admin allaccess.json ``` Contents of `allaccess.json` ```json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:*\" ], \"Resource\": [ \"arn:aws:s3:::*\" ] } ] } ``` You will be redirected to dex login screen - click \"Login with email\", enter username password username: admin@example.com password: password and then click \"Grant access\" On the browser now you shall see the list of buckets output, along with your temporary credentials obtained from MinIO. ``` { \"buckets\": [ \"dl.minio.equipment\", \"dl.minio.service-fulfillment\", \"testbucket\" ], \"credentials\": { \"AccessKeyID\": \"Q31CVS1PSCJ4OTK2YVEM\", \"SecretAccessKey\": \"rmDEOKARqKYmEyjWGhmhLpzcncyu7Jf8aZ9bjDic\", \"SessionToken\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJRMzFDVlMxUFNDSjRPVEsyWVZFTSIsImF0X2hhc2giOiI4amItZFE2OXRtZEVueUZaMUttNWhnIiwiYXVkIjoiZXhhbXBsZS1hcHAiLCJlbWFpbCI6ImFkbWluQGV4YW1wbGUuY29tIiwiZW1haWxfdmVyaWZpZWQiOnRydWUsImV4cCI6IjE1OTQ2MDAxODIiLCJpYXQiOjE1OTQ1ODkzODQsImlzcyI6Imh0dHA6Ly8xMjcuMC4wLjE6NTU1Ni9kZXgiLCJuYW1lIjoiYWRtaW4iLCJzdWIiOiJDaVF3T0dFNE5qZzBZaTFrWWpnNExUUmlOek10T1RCaE9TMHpZMlF4TmpZeFpqVTBOallTQld4dlkyRnMifQ.nrbzIJz99Om7TvJ04jnSTmhvlM7aR9hMM1Aqjp2ONJ1UKYCvegBLrTu6cYR968_OpmnAGJ8vkd7sIjUjtR4zbw\", \"SignerType\": 1 } } ``` Now you have successfully configured Dex IdP with MinIO. NOTE: Dex supports groups with external connectors so you can use `groups` as policy claim instead of `name`. ``` export MINIOIDENTITYOPENIDCLAIMNAME=groups ``` and add relevant policies on MinIO using `mc admin policy create myminio/ <group_name> group-access.json`" } ]
{ "category": "Runtime", "file_name": "feature_request.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "name: Feature request about: Suggest an idea for this project title: '[Feature Request] Title' labels: '' assignees: '' `[Author TODO: Why is this feature request important? What are the use cases? Please describe.]` `[Author TODO: A clear and concise description of how you would like the feature to work.]` `[Author TODO: A clear and concise description of any alternative solutions or features you have considered.]` `[Author TODO: How do you work around not having this feature?]` `[Author TODO: Add additional context about this feature request here.]` [ ] Have you searched the Firecracker Issues database for similar requests? [ ] Have you read all the existing relevant Firecracker documentation? [ ] Have you read and understood Firecracker's core tenets?" } ]
{ "category": "Runtime", "file_name": "functions.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "Kanister Functions are written in go and are compiled when building the controller. They are referenced by Blueprints phases. A Kanister Function implements the following go interface: ``` go // Func allows custom actions to be executed. type Func interface { Name() string Exec(ctx context.Context, args ...string) (mapstringinterface{}, error) RequiredArgs() []string Arguments() []string } ``` Kanister Functions are registered by the return value of `Name()`, which must be static. Each phase in a Blueprint executes a Kanister Function. The `Func` field in a `BlueprintPhase` is used to lookup a Kanister Function. After `BlueprintPhase.Args` are rendered, they are passed into the Kanister Function\\'s `Exec()` method. The `RequiredArgs` method returns the list of argument names that are required. And `Arguments` method returns the list of all the argument names that are supported by the function. The Kanister controller ships with the following Kanister Functions out-of-the-box that provide integration with Kubernetes: KubeExec is similar to running ``` bash kubectl exec -it --namespace <NAMESPACE> <POD> -c <CONTAINER> [CMD LIST...] ``` | Argument | Required | Type | Description | | - | :: | -- | -- | | namespace | Yes | string | namespace in which to execute | | pod | Yes | string | name of the pod in which to execute | | container | No | string | (required if pod contains more than 1 container) name of the container in which to execute | | command | Yes | []string | command list to execute | Example: ``` yaml func: KubeExec name: examplePhase args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: kanister-sidecar command: sh -c | echo \"Example\" ``` KubeExecAll is similar to running KubeExec on specified containers of given pods (all specified containers on given pods) in parallel. In the below example, the command is going to be executed in both the containers of the given pods. | Argument | Required | Type | Description | | - | :: | -- | -- | | namespace | Yes | string | namespace in which to execute | | pods | Yes | string | space separated list of names of pods in which to execute | | containers | Yes | string | space separated list of names of the containers in which to execute | | command | Yes | []string | command list to execute | Example: ``` yaml func: KubeExecAll name: examplePhase args: namespace: \"{{ .Deployment.Namespace }}\" pods: \"{{ index .Deployment.Pods 0 }} {{ index .Deployment.Pods 1 }}\" containers: \"container1 container2\" command: sh -c | echo \"Example\" ``` KubeTask spins up a new container and executes a command via a Pod. This allows you to run a new Pod from a Blueprint. | Argument | Required | Type | Description | | -- | :: | -- | -- | | namespace | No | string | namespace in which to execute (the pod will be created in controller's namespace if not specified) | | image | Yes | string | image to be used for executing the task | | command | Yes | []string | command list to execute | | podOverride | No | map[string]interface{} | specs to override default pod specs with | Example: ``` yaml func: KubeTask name: examplePhase args: namespace: \"{{ .Deployment.Namespace }}\" image: busybox podOverride: containers: name: container imagePullPolicy: IfNotPresent command: sh -c | echo \"Example\" ``` ScaleWorkload is used to scale up or scale down a Kubernetes" }, { "data": "It also sets the original replica count of the workload as output artifact with the key `originalReplicaCount`. 
The function only returns after the desired replica state is achieved: When reducing the replica count, wait until all terminating pods complete. When increasing the replica count, wait until all pods are ready. Currently the function supports Deployments, StatefulSets and DeploymentConfigs. It is similar to running ``` bash kubectl scale deployment <DEPLOYMENT-NAME> --replicas=<NUMBER OF REPLICAS> --namespace <NAMESPACE> ``` This can be useful if the workload needs to be shutdown before processing certain data operations. For example, it may be useful to use `ScaleWorkload` to stop a database process before restoring files. See for an example with new `ScaleWorkload` function. | Argument | Required | Type | Description | | | :: | - | -- | | namespace | No | string | namespace in which to execute | | name | No | string | name of the workload to scale | | kind | No | string | [deployment] or [statefulset] | | replicas | Yes | int | The desired number of replicas | | waitForReady | No | bool | Whether to wait for the workload to be ready before executing next steps. Default Value is `true` | Example of scaling down: ``` yaml func: ScaleWorkload name: examplePhase args: namespace: \"{{ .Deployment.Namespace }}\" name: \"{{ .Deployment.Name }}\" kind: deployment replicas: 0 ``` Example of scaling up: ``` yaml func: ScaleWorkload name: examplePhase args: namespace: \"{{ .Deployment.Namespace }}\" name: \"{{ .Deployment.Name }}\" kind: deployment replicas: 1 waitForReady: false ``` This function allows running a new Pod that will mount one or more PVCs and execute a command or script that manipulates the data on the PVCs. The function can be useful when it is necessary to perform operations on the data volumes that are used by one or more application containers. The typical sequence is to stop the application using ScaleWorkload, perform the data manipulation using PrepareData, and then restart the application using ScaleWorkload. ::: tip NOTE It is extremely important that, if PrepareData modifies the underlying data, the PVCs must not be currently in use by an active application container (ensure by using ScaleWorkload with replicas=0 first). For advanced use cases, it is possible to have concurrent access but the PV needs to have RWX mode enabled and the volume needs to use a clustered file system that supports concurrent access. ::: | Argument | Required | Type | Description | | -- | :: | -- | -- | | namespace | Yes | string | namespace in which to execute | | image | Yes | string | image to be used the command | | volumes | No | map[string]string | Mapping of `pvcName` to `mountPath` under which the volume will be available | | command | Yes | []string | command list to execute | | serviceaccount | No | string | service account info | | podOverride | No | map[string]interface{} | specs to override default pod specs with | ::: tip NOTE The `volumes` argument does not support `subPath` mounts so the data manipulation logic needs to be aware of any `subPath` mounts that may have been used when mounting a PVC in the primary application container. 
If `volumes` argument is not specified, all volumes belonging to the protected object will be mounted at the predefined path `/mnt/prepare_data/<pvcName>` ::: Example: ``` yaml func: ScaleWorkload name: ShutdownApplication args: namespace: \"{{ .Deployment.Namespace }}\" name: \"{{" }, { "data": "}}\" kind: deployment replicas: 0 func: PrepareData name: ManipulateData args: namespace: \"{{ .Deployment.Namespace }}\" image: busybox volumes: application-pvc-1: \"/data\" application-pvc-2: \"/restore-data\" command: sh -c | cp /restore-data/filetoreplace.data /data/file.data ``` This function backs up data from a container into any object store supported by Kanister. ::: tip WARNING The BackupData will be deprecated soon. We recommend using instead. However, and will continue to be available, ensuring you retain control over your existing backups. ::: ::: tip NOTE It is important that the application includes a `kanister-tools` sidecar container. This sidecar is necessary to run the tools that capture path on a volume and store it on the object store. ::: Arguments: | Argument | Required | Type | Description | | -- | :: | - | -- | | namespace | Yes | string | namespace in which to execute | | pod | Yes | string | pod in which to execute | | container | Yes | string | container in which to execute | | includePath | Yes | string | path of the data to be backed up | | backupArtifactPrefix | Yes | string | path to store the backup on the object store | | encryptionKey | No | string | encryption key to be used for backups | | insecureTLS | No | bool | enables insecure connection for data mover | Outputs: | Output | Type | Description | | | | -- | | backupTag | string | unique tag added to the backup | | backupID | string | unique snapshot id generated during backup | Example: ``` yaml actions: backup: outputArtifacts: backupInfo: keyValue: backupIdentifier: \"{{ .Phases.BackupToObjectStore.Output.backupTag }}\" phases: func: BackupData name: BackupToObjectStore args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: kanister-tools includePath: /mnt/data backupArtifactPrefix: s3-bucket/path/artifactPrefix ``` This function concurrently backs up data from one or more pods into an any object store supported by Kanister. ::: tip WARNING The BackupDataAll will be deprecated soon. However, and will continue to be available, ensuring you retain control over your existing backups. ::: ::: tip NOTE It is important that the application includes a `kanister-tools` sidecar container. This sidecar is necessary to run the tools that capture path on a volume and store it on the object store. 
::: Arguments: | Argument | Required | Type | Description | | -- | :: | - | -- | | namespace | Yes | string | namespace in which to execute | | pods | No | string | pods in which to execute (by default runs on all the pods) | | container | Yes | string | container in which to execute | | includePath | Yes | string | path of the data to be backed up | | backupArtifactPrefix | Yes | string | path to store the backup on the object store appended by pod name later | | encryptionKey | No | string | encryption key to be used for backups | | insecureTLS | No | bool | enables insecure connection for data mover | Outputs: | Output | Type | Description | | - | | -- | | BackupAllInfo | string | info about backup tag and identifier required for restore | Example: ``` yaml actions: backup: outputArtifacts: params: keyValue: backupInfo: \"{{ .Phases.backupToObjectStore.Output.BackupAllInfo }}\" phases: func: BackupDataAll name: BackupToObjectStore args: namespace: \"{{ .Deployment.Namespace }}\" container: kanister-tools includePath: /mnt/data backupArtifactPrefix: s3-bucket/path/artifactPrefix ``` This function restores data backed up by the" }, { "data": "It creates a new Pod that mounts the PVCs referenced by the specified Pod and restores data to the specified path. ::: tip NOTE It is extremely important that, the PVCs are not be currently in use by an active application container, as they are required to be mounted to the new Pod (ensure by using ScaleWorkload with replicas=0 first). For advanced use cases, it is possible to have concurrent access but the PV needs to have RWX mode enabled and the volume needs to use a clustered file system that supports concurrent access. ::: | Argument | Required | Type | Description | | -- | :: | -- | -- | | namespace | Yes | string | namespace in which to execute | | image | Yes | string | image to be used for running restore | | backupArtifactPrefix | Yes | string | path to the backup on the object store | | backupIdentifier | No | string | (required if backupTag not provided) unique snapshot id generated during backup | | backupTag | No | string | (required if backupIdentifier not provided) unique tag added during the backup | | restorePath | No | string | path where data is restored | | pod | No | string | pod to which the volumes are attached | | volumes | No | map[string]string | Mapping of [pvcName] to [mountPath] under which the volume will be available | | encryptionKey | No | string | encryption key to be used during backups | | insecureTLS | No | bool | enables insecure connection for data mover | | podOverride | No | map[string]interface{} | specs to override default pod specs with | ::: tip NOTE The `image` argument requires the use of `ghcr.io/kanisterio/kanister-tools` image since it includes the required tools to restore data from the object store. Between the `pod` and `volumes` arguments, exactly one argument must be specified. ::: Example: Consider a scenario where you wish to restore the data backed up by the function. We will first scale down the application, restore the data and then scale it back up. For this phase, we will use the `backupInfo` Artifact provided by backup function. 
``` yaml func: ScaleWorkload name: ShutdownApplication args: namespace: \\\"{{ .Deployment.Namespace }}\\\" name: \\\"{{ .Deployment.Name }}\\\" kind: Deployment replicas: 0 func: RestoreData name: RestoreFromObjectStore args: namespace: \\\"{{ .Deployment.Namespace }}\\\" pod: \\\"{{ index .Deployment.Pods 0 }}\\\" image: ghcr.io/kanisterio/kanister-tools: backupArtifactPrefix: s3-bucket/path/artifactPrefix backupTag: \\\"{{ .ArtifactsIn.backupInfo.KeyValue.backupIdentifier }}\\\" func: ScaleWorkload name: StartupApplication args: namespace: \\\"{{ .Deployment.Namespace }}\\\" name: \\\"{{ .Deployment.Name }}\\\" kind: Deployment replicas: 1 ``` This function concurrently restores data backed up by the function, on one or more pods. It concurrently runs a job Pod for each workload Pod, that mounts the respective PVCs and restores data to the specified path. ::: tip NOTE It is extremely important that, the PVCs are not be currently in use by an active application container, as they are required to be mounted to the new Pod (ensure by using ScaleWorkload with replicas=0 first). For advanced use cases, it is possible to have concurrent access but the PV needs to have RWX mode enabled and the volume needs to use a clustered file system that supports concurrent" }, { "data": "::: | Argument | Required | Type | Description | | -- | :: | -- | -- | | namespace | Yes | string | namespace in which to execute | | image | Yes | string | image to be used for running restore | | backupArtifactPrefix | Yes | string | path to the backup on the object store | | restorePath | No | string | path where data is restored | | pods | No | string | pods to which the volumes are attached | | encryptionKey | No | string | encryption key to be used during backups | | backupInfo | Yes | string | snapshot info generated as output in BackupDataAll function | | insecureTLS | No | bool | enables insecure connection for data mover | | podOverride | No | map[string]interface{} | specs to override default pod specs with | ::: tip NOTE The image argument requires the use of [ghcr.io/kanisterio/kanister-tools] image since it includes the required tools to restore data from the object store. Between the pod and volumes arguments, exactly one argument must be specified. ::: Example: Consider a scenario where you wish to restore the data backed up by the function. We will first scale down the application, restore the data and then scale it back up. We will not specify `pods` in args, so this function will restore data on all pods concurrently. For this phase, we will use the `params` Artifact provided by BackupDataAll function. ``` yaml func: ScaleWorkload name: ShutdownApplication args: namespace: \\\"{{ .Deployment.Namespace }}\\\" name: \\\"{{ .Deployment.Name }}\\\" kind: Deployment replicas: 0 func: RestoreDataAll name: RestoreFromObjectStore args: namespace: \\\"{{ .Deployment.Namespace }}\\\" image: ghcr.io/kanisterio/kanister-tools: backupArtifactPrefix: s3-bucket/path/artifactPrefix backupInfo: \\\"{{ .ArtifactsIn.params.KeyValue.backupInfo }}\\\" func: ScaleWorkload name: StartupApplication args: namespace: \\\"{{ .Deployment.Namespace }}\\\" name: \\\"{{ .Deployment.Name }}\\\" kind: Deployment replicas: 2 ``` This function copies data from the specified volume (referenced by a Kubernetes PersistentVolumeClaim) into an object store. 
This data can be restored into a volume using the `restoredata`{.interpreted-text role=\"ref\"} function ::: tip NOTE The PVC must not be in-use (attached to a running Pod) If data needs to be copied from a running workload without stopping it, use the function ::: Arguments: | Argument | Required | Type | Description | | | :: | -- | -- | | namespace | Yes | string | namespace the source PVC is in | | volume | Yes | string | name of the source PVC | | dataArtifactPrefix | Yes | string | path on the object store to store the data in | | encryptionKey | No | string | encryption key to be used during backups | | insecureTLS | No | bool | enables insecure connection for data mover | | podOverride | No | map[string]interface{} | specs to override default pod specs with | Outputs: | Output | Type | Description | | - | | -- | | backupID | string | unique snapshot id generated when data was copied | | backupRoot | string | parent directory location of the data copied from | | backupArtifactLocation | string | location in objectstore where data was copied | | backupTag | string | unique string to identify this data copy | Example: If the ActionSet `Object` is a PersistentVolumeClaim: ``` yaml func: CopyVolumeData args: namespace: \"{{ .PVC.Namespace }}\" volume: \"{{ .PVC.Name }}\" dataArtifactPrefix: s3-bucket-name/path ``` This function deletes the snapshot data backed up by the" }, { "data": "| Argument | Required | Type | Description | | -- | :: | -- | -- | | namespace | Yes | string | namespace in which to execute | | backupArtifactPrefix | Yes | string | path to the backup on the object store | | backupID | No | string | (required if backupTag not provided) unique snapshot id generated during backup | | backupTag | No | string | (required if backupID not provided) unique tag added during the backup | | encryptionKey | No | string | encryption key to be used during backups | | insecureTLS | No | bool | enables insecure connection for data mover | | podOverride | No | map[string]interface{} | specs to override default pod specs with | Example: Consider a scenario where you wish to delete the data backed up by the function. For this phase, we will use the `backupInfo` Artifact provided by backup function. ``` yaml func: DeleteData name: DeleteFromObjectStore args: namespace: \"{{ .Namespace.Name }}\" backupArtifactPrefix: s3-bucket/path/artifactPrefix backupTag: \"{{ .ArtifactsIn.backupInfo.KeyValue.backupIdentifier }}\" ``` This function concurrently deletes the snapshot data backed up by the BackupDataAll function. | Argument | Required | Type | Description | | -- | :: | -- | -- | | namespace | Yes | string | namespace in which to execute | | backupArtifactPrefix | Yes | string | path to the backup on the object store | | backupInfo | Yes | string | snapshot info generated as output in BackupDataAll function | | encryptionKey | No | string | encryption key to be used during backups | | reclaimSpace | No | bool | provides a way to specify if space should be reclaimed | | insecureTLS | No | bool | enables insecure connection for data mover | | podOverride | No | map[string]interface{} | specs to override default pod specs with | Example: Consider a scenario where you wish to delete all the data backed up by the function. For this phase, we will use the `params` Artifact provided by backup function. 
``` yaml func: DeleteDataAll name: DeleteFromObjectStore args: namespace: \"{{ .Namespace.Name }}\" backupArtifactPrefix: s3-bucket/path/artifactPrefix backupInfo: \"{{ .ArtifactsIn.params.KeyValue.backupInfo }}\" reclaimSpace: true ``` This function uses a new Pod to delete the specified artifact from an object store. | Argument | Required | Type | Description | | -- | :: | | -- | | artifact | Yes | string | artifact to be deleted from the object store | ::: tip NOTE The Kubernetes job uses the `ghcr.io/kanisterio/kanister-tools` image, since it includes all the tools required to delete the artifact from an object store. ::: Example: ``` yaml func: LocationDelete name: LocationDeleteFromObjectStore args: artifact: s3://bucket/path/artifact ``` This function is used to create snapshots of one or more PVCs associated with an application. It takes individual snapshot of each PVC which can be then restored later. It generates an output that contains the Snapshot info required for restoring PVCs. ::: tip NOTE Currently we only support PVC snapshots on AWS EBS. Support for more storage providers is coming soon! ::: Arguments: | Argument | Required | Type | Description | | | :: | - | -- | | namespace | Yes | string | namespace in which to execute | | pvcs | No | []string | list of names of PVCs to be backed up | | skipWait | No | bool | initiate but do not wait for the snapshot operation to complete | When no PVCs are specified in the `pvcs` argument above, all PVCs in use by a Deployment or StatefulSet will be backed up. Outputs: | Output | Type | Description | | - | | -- | | volumeSnapshotInfo | string | Snapshot info required while restoring the PVCs | Example: Consider a scenario where you wish to backup all PVCs of a" }, { "data": "The output of this phase is saved to an Artifact named `backupInfo`, shown below: ``` yaml actions: backup: outputArtifacts: backupInfo: keyValue: manifest: \"{{ .Phases.backupVolume.Output.volumeSnapshotInfo }}\" phases: func: CreateVolumeSnapshot name: backupVolume args: namespace: \"{{ .Deployment.Namespace }}\" ``` This function is used to wait for completion of snapshot operations initiated using the function. function. Arguments: | Argument | Required | Type | Description | | | :: | | -- | | snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function | This function is used to restore one or more PVCs of an application from the snapshots taken using the `createvolumesnapshot`{.interpreted-text role=\"ref\"} function. It deletes old PVCs, if present and creates new PVCs from the snapshots taken earlier. Arguments: | Argument | Required | Type | Description | | | :: | | -- | | namespace | Yes | string | namespace in which to execute | | snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function | Example: Consider a scenario where you wish to restore all PVCs of a deployment. We will first scale down the application, restore PVCs and then scale up. For this phase, we will make use of the backupInfo Artifact provided by the function. 
``` yaml func: ScaleWorkload name: shutdownPod args: namespace: \"{{ .Deployment.Namespace }}\" name: \"{{ .Deployment.Name }}\" kind: Deployment replicas: 0 func: CreateVolumeFromSnapshot name: restoreVolume args: namespace: \"{{ .Deployment.Namespace }}\" snapshots: \"{{ .ArtifactsIn.backupInfo.KeyValue.manifest }}\" func: ScaleWorkload name: bringupPod args: namespace: \"{{ .Deployment.Namespace }}\" name: \"{{ .Deployment.Name }}\" kind: Deployment replicas: 1 ``` This function is used to delete snapshots of PVCs taken using the function. Arguments: | Argument | Required | Type | Description | | | :: | | -- | | namespace | Yes | string | namespace in which to execute | | snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function | Example: ``` yaml func: DeleteVolumeSnapshot name: deleteVolumeSnapshot args: namespace: \"{{ .Deployment.Namespace }}\" snapshots: \"{{ .ArtifactsIn.backupInfo.KeyValue.manifest }}\" ``` This function get stats for the backed up data from the object store location ::: tip NOTE It is important that the application includes a `kanister-tools` sidecar container. This sidecar is necessary to run the tools that get the information from the object store. ::: Arguments: | Argument | Required | Type | Description | | -- | :: | | -- | | namespace | Yes | string | namespace in which to execute | | backupArtifactPrefix | Yes | string | path to the object store location | | backupID | Yes | string | unique snapshot id generated during backup | | mode | No | string | mode in which stats are expected | | encryptionKey | No | string | encryption key to be used for backups | Outputs: | Output | Type | Description | | -- | | -- | | mode | string | mode of the output stats | | fileCount| string | number of files in backup | | size | string | size of the number of files in backup | Example: ``` yaml actions: backupStats: outputArtifacts: backupStats: keyValue: mode: \"{{ .Phases.BackupDataStatsFromObjectStore.Output.mode }}\" fileCount: \"{{ .Phases.BackupDataStatsFromObjectStore.Output.fileCount }}\" size: \"{{ .Phases.BackupDataStatsFromObjectStore.Output.size }}\" phases: func: BackupDataStats name: BackupDataStatsFromObjectStore args: namespace: \"{{ .Deployment.Namespace }}\" backupArtifactPrefix: s3-bucket/path/artifactPrefix mode: restore-size backupID: \"{{ .ArtifactsIn.snapshot.KeyValue.backupIdentifier }}\" ``` This function creates RDS snapshot of running RDS" }, { "data": "Arguments: | Argument | Required | Type | Description | | - | :: | | -- | | instanceID | Yes | string | ID of RDS instance you want to create snapshot of | | dbEngine | No | string | Required in case of RDS Aurora instance. 
Supported DB Engines: `aurora` `aurora-mysql` and `aurora-postgresql` | Outputs: | Output | Type | Description | | - | - | -- | | snapshotID | string | ID of the RDS snapshot that has been created | | instanceID | string | ID of the RDS instance | | securityGroupID | []string | AWS Security Group IDs associated with the RDS instance | | allocatedStorage | string | Specifies the allocated storage size in gibibytes (GiB) | | dbSubnetGroup | string | Specifies the DB Subnet group associated with the RDS instance | Example: ``` yaml actions: backup: outputArtifacts: backupInfo: keyValue: snapshotID: \"{{ .Phases.createSnapshot.Output.snapshotID }}\" instanceID: \"{{ .Phases.createSnapshot.Output.instanceID }}\" securityGroupID: \"{{ .Phases.createSnapshot.Output.securityGroupID }}\" allocatedStorage: \"{{ .Phases.createSnapshot.Output.allocatedStorage }}\" dbSubnetGroup: \"{{ .Phases.createSnapshot.Output.dbSubnetGroup }}\" configMapNames: dbconfig phases: func: CreateRDSSnapshot name: createSnapshot args: instanceID: '{{ index .ConfigMaps.dbconfig.Data \"postgres.instanceid\" }}' ``` This function spins up a temporary RDS instance from the given snapshot, extracts database dump and uploads that dump to the configured object storage. Arguments: | Argument | Required | Type | Description | | -- | :: | - | -- | | instanceID | Yes | string | RDS db instance ID | | namespace | Yes | string | namespace in which to execute the Kanister tools pod for this function | | snapshotID | Yes | string | ID of the RDS snapshot | | dbEngine | Yes | string | one of the RDS db engines. Supported engine(s): `PostgreSQL` | | username | No | string | username of the RDS database instance | | password | No | string | password of the RDS database instance | | backupArtifactPrefix | No | string | path to store the backup on the object store | | databases | No | []string | list of databases to take backup of | | securityGroupID | No | []string | list of `securityGroupID` to be passed to temporary RDS instance | | dbSubnetGroup | No | string | DB Subnet Group to be passed to temporary RDS instance | ::: tip NOTE \\- If `databases` argument is not set, backup of all the databases will be taken. - If `securityGroupID` argument is not set, `ExportRDSSnapshotToLocation` will find out Security Group IDs associated with instance with `instanceID` and will pass the same. - If `backupArtifactPrefix` argument is not set, `instanceID` will be used as backupArtifactPrefix. - If `dbSubnetGroup` argument is not set, `default` DB Subnet group will be used. 
::: Outputs: | Output | Type | Description | | | - | -- | | snapshotID | string | ID of the RDS snapshot that has been created | | instanceID | string | ID of the RDS instance | | backupID | string | unique backup id generated during storing data into object storage | | securityGroupID | []string | AWS Security Group IDs associated with the RDS instance | Example: ``` yaml actions: backup: outputArtifacts: backupInfo: keyValue: snapshotID: \"{{ .Phases.createSnapshot.Output.snapshotID }}\" instanceID: \"{{ .Phases.createSnapshot.Output.instanceID }}\" securityGroupID: \"{{ .Phases.createSnapshot.Output.securityGroupID }}\" backupID: \"{{ .Phases.exportSnapshot.Output.backupID }}\" dbSubnetGroup: \"{{ .Phases.createSnapshot.Output.dbSubnetGroup }}\" configMapNames: dbconfig phases: func: CreateRDSSnapshot name: createSnapshot args: instanceID: '{{ index .ConfigMaps.dbconfig.Data \"postgres.instanceid\" }}' func: ExportRDSSnapshotToLocation name: exportSnapshot objects: dbsecret: kind: Secret name: '{{ index .ConfigMaps.dbconfig.Data \"postgres.secret\" }}' namespace: \"{{ .Namespace.Name }}\" args: namespace: \"{{ .Namespace.Name }}\" instanceID: \"{{ .Phases.createSnapshot.Output.instanceID }}\" securityGroupID: \"{{ .Phases.createSnapshot.Output.securityGroupID }}\" username: '{{ index .Phases.exportSnapshot.Secrets.dbsecret.Data \"username\" | toString }}' password: '{{ index" }, { "data": "\"password\" | toString }}' dbEngine: \"PostgreSQL\" databases: '{{ index .ConfigMaps.dbconfig.Data \"postgres.databases\" }}' snapshotID: \"{{ .Phases.createSnapshot.Output.snapshotID }}\" backupArtifactPrefix: test-postgresql-instance/postgres dbSubnetGroup: \"{{ .Phases.createSnapshot.Output.dbSubnetGroup }}\" ``` This function restores the RDS DB instance either from an RDS snapshot or from the data dump (if [snapshotID] is not set) that is stored in an object storage. ::: tip NOTE \\- If [snapshotID] is set, the function will restore RDS instance from the RDS snapshot. Otherwise backupID needs to be set to restore the RDS instance from data dump. - While restoring the data from RDS snapshot if RDS instance (where we have to restore the data) doesn\\'t exist, the RDS instance will be created. But if the data is being restored from the Object Storage (data dump) and the RDS instance doesn\\'t exist new RDS instance will not be created and will result in an error. ::: Arguments: | Argument | Required | Type | Description | | -- | :: | - | -- | | instanceID | Yes | string | RDS db instance ID | | snapshotID | No | string | ID of the RDS snapshot | | username | No | string | username of the RDS database instance | | password | No | string | password of the RDS database instance | | backupArtifactPrefix | No | string | path to store the backup on the object store | | backupID | No | string | unique backup id generated during storing data into object storage | | securityGroupID | No | []string | list of `securityGroupID` to be passed to restored RDS instance | | namespace | No | string | namespace in which to execute. Required if `snapshotID` is nil | | dbEngine | No | string | one of the RDS db engines. Supported engines: `PostgreSQL`, `aurora`, `aurora-mysql` and `aurora-postgresql`. Required if `snapshotID` is nil or Aurora is run in RDS instance | | dbSubnetGroup | No | string | DB Subnet Group to be passed to restored RDS instance | ::: tip NOTE \\- If `snapshotID` is not set, restore will be done from data dump. In that case `backupID` [arg] is required. 
- If `securityGroupID` argument is not set, `RestoreRDSSnapshot` will find out Security Group IDs associated with instance with `instanceID` and will pass the same. - If `dbSubnetGroup` argument is not set, `default` DB Subnet group will be used. ::: Outputs: | Output | Type | Description | | - | | -- | | endpoint| string | endpoint of the RDS instance | Example: ``` yaml restore: inputArtifactNames: backupInfo kind: Namespace phases: func: RestoreRDSSnapshot name: restoreSnapshots objects: dbsecret: kind: Secret name: '{{ index .ConfigMaps.dbconfig.Data \"postgres.secret\" }}' namespace: \"{{ .Namespace.Name }}\" args: namespace: \"{{ .Namespace.Name }}\" backupArtifactPrefix: test-postgresql-instance/postgres instanceID: \"{{ .ArtifactsIn.backupInfo.KeyValue.instanceID }}\" backupID: \"{{ .ArtifactsIn.backupInfo.KeyValue.backupID }}\" securityGroupID: \"{{ .ArtifactsIn.backupInfo.KeyValue.securityGroupID }}\" username: '{{ index .Phases.restoreSnapshots.Secrets.dbsecret.Data \"username\" | toString }}' password: '{{ index .Phases.restoreSnapshots.Secrets.dbsecret.Data \"password\" | toString }}' dbEngine: \"PostgreSQL\" dbSubnetGroup: \"{{ .ArtifactsIn.backupInfo.KeyValue.dbSubnetGroup }}\" ``` This function deletes the RDS snapshot by the [snapshotID]. Arguments: | Argument | Required | Type | Description | | - | :: | | -- | | snapshotID | No | string | ID of the RDS snapshot | Example: ``` yaml actions: delete: kind: Namespace inputArtifactNames: backupInfo phases: func: DeleteRDSSnapshot name: deleteSnapshot args: snapshotID: \"{{ .ArtifactsIn.backupInfo.KeyValue.snapshotID }}\" ``` This function is used to create or delete Kubernetes" }, { "data": "Arguments: | Argument | Required | Type | Description | | | :: | | -- | | operation | Yes | string | `create` or `delete` Kubernetes resource | | namespace | No | string | namespace in which the operation is executed | | spec | No | string | resource spec that needs to be created | | objectReference | No | map[string]interface{} | object reference for delete operation | Example: ``` yaml func: KubeOps name: createDeploy args: operation: create namespace: \"{{ .Deployment.Namespace }}\" spec: |- apiVersion: apps/v1 kind: Deployment metadata: name: \"{{ .Deployment.Name }}\" spec: replicas: 1 selector: matchLabels: app: example template: metadata: labels: app: example spec: containers: image: busybox imagePullPolicy: IfNotPresent name: container ports: containerPort: 80 name: http protocol: TCP func: KubeOps name: deleteDeploy args: operation: delete objectReference: apiVersion: \"{{ .Phases.createDeploy.Output.apiVersion }}\" group: \"{{ .Phases.createDeploy.Output.group }}\" resource: \"{{ .Phases.createDeploy.Output.resource }}\" name: \"{{ .Phases.createDeploy.Output.name }}\" namespace: \"{{ .Phases.createDeploy.Output.namespace }}\" ``` This function is used to wait on a Kubernetes resource until a desired state is reached. The wait condition is defined in a Go template syntax. Arguments: | Argument | Required | Type | Description | | - | :: | | -- | | timeout | Yes | string | wait timeout | | conditions | Yes | map[string]interface{} | keys should be `allOf` and/or `anyOf` with value as `[]Condition` | `Condition` struct: ``` yaml condition: \"Go template condition that returns true or false\" objectReference: apiVersion: \"Kubernetes resource API version\" resource: \"Type of resource to wait for\" name: \"Name of the resource\" ``` The Go template conditions can be validated using kubectl commands with `-o go-template` flag. E.g. 
To check if the Deployment is ready, the following Go template syntax can be used with kubectl command ``` bash kubectl get deploy -n $NAMESPACE $DEPLOY_NAME \\ -o go-template='{{ $available := false }}{{ range $condition := $.status.conditions }}{{ if and (eq .type \"Available\") (eq .status \"True\") }}{{ $available = true }}{{ end }}{{ end }}{{ $available }}' ``` The same Go template can be used as a condition in the WaitV2 function. Example: ``` yaml func: WaitV2 name: waitForDeploymentReady args: timeout: 5m conditions: anyOf: condition: '{{ $available := false }}{{ range $condition := $.status.conditions }}{{ if and (eq .type \"Available\") (eq .status \"True\") }}{{ $available = true }}{{ end }}{{ end }}{{ $available }}' objectReference: apiVersion: \"v1\" group: \"apps\" name: \"{{ .Object.metadata.name }}\" namespace: \"{{ .Object.metadata.namespace }}\" resource: \"deployments\" ``` This function is used to wait on a Kubernetes resource until a desired state is reached. Arguments: | Argument | Required | Type | Description | | - | :: | | -- | | timeout | Yes | string | wait timeout | | conditions | Yes | map[string]interface{} | keys should be `allOf` and/or `anyOf` with value as `[]Condition` | `Condition` struct: ``` yaml condition: \"Go template condition that returns true or false\" objectReference: apiVersion: \"Kubernetes resource API version\" resource: \"Type of resource to wait for\" name: \"Name of the resource\" ``` ::: tip NOTE We can refer to the object key-value in Go template condition with the help of a `$` prefix JSON-path syntax. ::: Example: ``` yaml func: Wait name: waitNsReady args: timeout: 60s conditions: allOf: condition: '{{ if (eq \"{ $.status.phase }\" \"Invalid\")}}true{{ else }}false{{ end }}' objectReference: apiVersion: v1 resource: namespaces name: \"{{ .Namespace.Name }}\" condition: '{{ if (eq \"{ $.status.phase }\" \"Active\")}}true{{ else }}false{{ end }}' objectReference: apiVersion: v1 resource: namespaces name: \"{{ .Namespace.Name }}\" ``` This function is used to create CSI VolumeSnapshot for a PersistentVolumeClaim. By default, it waits for the VolumeSnapshot to be" }, { "data": "Arguments: | Argument | Required | Type | Description | | -- | :: | -- | -- | | name | No | string | name of the VolumeSnapshot. 
Default value is `<pvc>-snapshot-<random-alphanumeric-suffix>` | | pvc | Yes | string | name of the PersistentVolumeClaim to be captured | | namespace | Yes | string | namespace of the PersistentVolumeClaim and resultant VolumeSnapshot | | snapshotClass | Yes | string | name of the VolumeSnapshotClass | | labels | No | map[string]string | labels for the VolumeSnapshot | Outputs: | Output | Type | Description | | | | -- | | name | string | name of the CSI VolumeSnapshot | | pvc | string | name of the captured PVC | | namespace | string | namespace of the captured PVC and VolumeSnapshot | | restoreSize | string | required memory size to restore PVC | | snapshotContent | string | name of the VolumeSnapshotContent | Example: ``` yaml actions: backup: outputArtifacts: snapshotInfo: keyValue: name: \"{{ .Phases.createCSISnapshot.Output.name }}\" pvc: \"{{ .Phases.createCSISnapshot.Output.pvc }}\" namespace: \"{{ .Phases.createCSISnapshot.Output.namespace }}\" restoreSize: \"{{ .Phases.createCSISnapshot.Output.restoreSize }}\" snapshotContent: \"{{ .Phases.createCSISnapshot.Output.snapshotContent }}\" phases: func: CreateCSISnapshot name: createCSISnapshot args: pvc: \"{{ .PVC.Name }}\" namespace: \"{{ .PVC.Namespace }}\" snapshotClass: do-block-storage ``` This function creates a pair of CSI `VolumeSnapshot` and `VolumeSnapshotContent` resources, assuming that the underlying real storage volume snapshot already exists. The deletion behavior is defined by the `deletionPolicy` property (`Retain`, `Delete`) of the snapshot class. For more information on pre-provisioned volume snapshots and snapshot deletion policy, see the Kubernetes . Arguments: | Argument | Required | Type | Description | | -- | :: | | -- | | name | Yes | string | name of the new CSI `VolumeSnapshot` | | namespace | Yes | string | namespace of the new CSI `VolumeSnapshot` | | driver | Yes | string | name of the CSI driver for the new CSI `VolumeSnapshotContent` | | handle | Yes | string | unique identifier of the volume snapshot created on the storage backend used as the source of the new `VolumeSnapshotContent` | | snapshotClass | Yes | string | name of the `VolumeSnapshotClass` to use | Outputs: | Output | Type | Description | | | | -- | | name | string | name of the new CSI `VolumeSnapshot` | | namespace | string | namespace of the new CSI `VolumeSnapshot` | | restoreSize | string | required memory size to restore the volume | | snapshotContent | string | name of the new CSI `VolumeSnapshotContent` | Example: ``` yaml actions: createStaticSnapshot: phases: func: CreateCSISnapshotStatic name: createCSISnapshotStatic args: name: volume-snapshot namespace: default snapshotClass: csi-hostpath-snapclass driver: hostpath.csi.k8s.io handle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 ``` This function restores a new PersistentVolumeClaim using CSI VolumeSnapshot. 
Arguments: | Argument | Required | Type | Description | | - | :: | -- | -- | | name | Yes | string | name of the VolumeSnapshot | | pvc | Yes | string | name of the new PVC | | namespace | Yes | string | namespace of the VolumeSnapshot and resultant PersistentVolumeClaim | | storageClass | Yes | string | name of the StorageClass | | restoreSize | Yes | string | required memory size to restore" }, { "data": "Must be greater than zero | | accessModes | No | []string | access modes for the underlying PV (Default is `[\"ReadWriteOnce\"]`) | | volumeMode | No | string | mode of volume (Default is `\"Filesystem\"`) | | labels | No | map[string]string | optional labels for the PersistentVolumeClaim | ::: tip NOTE Output artifact `snapshotInfo` from `CreateCSISnapshot` function can be used as an input artifact in this function. ::: Example: ``` yaml actions: restore: inputArtifactNames: snapshotInfo phases: func: RestoreCSISnapshot name: restoreCSISnapshot args: name: \"{{ .ArtifactsIn.snapshotInfo.KeyValue.name }}\" pvc: \"{{ .ArtifactsIn.snapshotInfo.KeyValue.pvc }}-restored\" namespace: \"{{ .ArtifactsIn.snapshotInfo.KeyValue.namespace }}\" storageClass: do-block-storage restoreSize: \"{{ .ArtifactsIn.snapshotInfo.KeyValue.restoreSize }}\" accessModes: [\"ReadWriteOnce\"] volumeMode: \"Filesystem\" ``` This function deletes a VolumeSnapshot from given namespace. Arguments: | Argument | Required | Type | Description | | | :: | | -- | | name | Yes | string | name of the VolumeSnapshot | | namespace | Yes | string | namespace of the VolumeSnapshot | ::: tip NOTE Output artifact `snapshotInfo` from `CreateCSISnapshot` function can be used as an input artifact in this function. ::: Example: ``` yaml actions: delete: inputArtifactNames: snapshotInfo phases: func: DeleteCSISnapshot name: deleteCSISnapshot args: name: \"{{ .ArtifactsIn.snapshotInfo.KeyValue.name }}\" namespace: \"{{ .ArtifactsIn.snapshotInfo.KeyValue.namespace }}\" ``` This function deletes an unbounded `VolumeSnapshotContent` resource. It has no effect on bounded `VolumeSnapshotContent` resources, as they would be protected by the CSI controller. Arguments: | Argument | Required | Type | Description | | -- | :: | | -- | | name | Yes | string | name of the `VolumeSnapshotContent` | Example: ``` yaml actions: deleteVSC: phases: func: DeleteCSISnapshotContent name: deleteCSISnapshotContent args: name: \"test-snapshot-content-content-dfc8fa67-8b11-4fdf-bf94-928589c2eed8\" ``` This function backs up data from a container into any object store supported by Kanister using Kopia Repository Server as data mover. ::: tip NOTE It is important that the application includes a `kanister-tools` sidecar container. This sidecar is necessary to run the tools that back up the volume and store it on the object store. Additionally, in order to use this function, a RepositoryServer CR is needed while creating the . ::: Arguments: | Argument | Required | Type | Description | | | :: | | -- | | namespace | Yes | string | namespace of the container that you want to backup the data of | | pod | Yes | string | pod name of the container that you want to backup the data of | | container | Yes | string | name of the kanister sidecar container | | includePath | Yes | string | path of the data to be backed up | | snapshotTags | No | string | custom tags to be provided to the kopia snapshots | | repositoryServerUserHostname| No | string | user's hostname to access the kopia repository server. 
Hostname would be available in the user access credential secret | Outputs: | Output | Type | Description | | -- | | -- | | backupID | string | unique snapshot id generated during backup | | size | string | size of the backup | | phySize | string | physical size of the backup | Example: ``` yaml actions: backup: outputArtifacts: backupIdentifier: keyValue: id: \"{{ .Phases.backupToS3.Output.backupID }}\" phases: func: BackupDataUsingKopiaServer name: backupToS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: kanister-tools includePath: /mnt/data ``` This function restores data backed up by the `BackupDataUsingKopiaServer` function. It creates a new Pod that mounts the PVCs referenced by the Pod specified in the function argument and restores data to the specified path. ::: tip NOTE It is extremely important that, the PVCs are not currently in use by an active application container, as they are required to be mounted to the new Pod (ensure by using `ScaleWorkload` with replicas=0" }, { "data": "For advanced use cases, it is possible to have concurrent access but the PV needs to have `RWX` access mode and the volume needs to use a clustered file system that supports concurrent access. ::: | Argument | Required | Type | Description | | | :: | - | -- | | namespace | Yes | string | namespace of the application that you want to restore the data in | | image | Yes | string | image to be used for running restore job (should contain kopia binary) | | backupIdentifier | Yes | string | unique snapshot id generated during backup | | restorePath | Yes | string | path where data to be restored | | pod | No | string | pod to which the volumes are attached | | volumes | No | map[string]string | mapping of [pvcName] to [mountPath] under which the volume will be available | | podOverride | No | map[string]interface{} | specs to override default pod specs with | | repositoryServerUserHostname| No | string | user's hostname to access the kopia repository server. Hostname would be available in the user access credential secret | ::: tip NOTE The `image` argument requires the use of `ghcr.io/kanisterio/kanister-tools` image since it includes the required tools to restore data from the object store. Either `pod` or the `volumes` arguments must be specified to this function based on the function that was used to backup the data. If [BackupDataUsingKopiaServer] is used to backup the data we should specify pod and for [CopyVolumeDataUsingKopiaServer], volumes should be specified. Additionally, in order to use this function, a RepositoryServer CR is required. ::: Example: Consider a scenario where you wish to restore the data backed up by the function. We will first scale down the application, restore the data and then scale it back up. For this phase, we will use the `backupIdentifier` Artifact provided by backup function. 
``` yaml func: ScaleWorkload name: shutdownPod args: namespace: \"{{ .Deployment.Namespace }}\" name: \"{{ .Deployment.Name }}\" kind: Deployment replicas: 0 func: RestoreDataUsingKopiaServer name: restoreFromS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" backupIdentifier: \"{{ .ArtifactsIn.backupIdentifier.KeyValue.id }}\" restorePath: /mnt/data func: ScaleWorkload name: bringupPod args: namespace: \"{{ .Deployment.Namespace }}\" name: \"{{ .Deployment.Name }}\" kind: Deployment replicas: 1 ``` This function deletes the snapshot data backed up by the `BackupDataUsingKopiaServer` function. It creates a new Pod that runs `delete snapshot` command. ::: tip NOTE The `image` argument requires the use of `ghcr.io/kanisterio/kanister-tools` image since it includes the required tools to delete snapshot from the object store. Additionally, in order to use this function, a RepositoryServer CR is required. ::: | Argument | Required | Type | Description | | | :: | | -- | | namespace | Yes | string | namespace in which to execute the delete job | | backupID | Yes | string | unique snapshot id generated during backup | | image | Yes | string | image to be used for running delete job (should contain kopia binary) | | repositoryServerUserHostname| No | string | user's hostname to access the kopia repository server. Hostname would be available in the user access credential secret | Example: Consider a scenario where you wish to delete the data backed up by the function. For this phase, we will use the `backupIdentifier` Artifact provided by backup function. ``` yaml func: DeleteDataUsingKopiaServer name: DeleteFromObjectStore args: namespace: \"{{ .Deployment.Namespace }}\" backupID: \"{{ .ArtifactsIn.backupIdentifier.KeyValue.id }}\" image: ghcr.io/kanisterio/kanister-tools:0.89.0 ``` Kanister can be extended by registering new Kanister Functions. Kanister Functions are registered using a similar mechanism to drivers. To register new Kanister Functions," } ]
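The three Kopia-server functions above are documented individually; the following is a minimal sketch of a Blueprint that wires them into backup, restore, and delete actions. The function and argument names and the artifact wiring are taken from the tables and examples above, while the Blueprint name, the namespace, and the image tag are assumptions to adjust for your environment. In practice the restore action would normally also scale the workload down and back up with `ScaleWorkload`, as in the example above.

```yaml
# Minimal Blueprint sketch (assumed names; argument names come from the tables above).
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: kopia-server-example        # assumed name
  namespace: kanister               # assumed namespace
actions:
  backup:
    outputArtifacts:
      backupIdentifier:
        keyValue:
          id: "{{ .Phases.backupToStore.Output.backupID }}"
    phases:
      - func: BackupDataUsingKopiaServer
        name: backupToStore
        args:
          namespace: "{{ .Deployment.Namespace }}"
          pod: "{{ index .Deployment.Pods 0 }}"
          container: kanister-tools
          includePath: /mnt/data
  restore:
    inputArtifactNames:
      - backupIdentifier
    phases:
      - func: RestoreDataUsingKopiaServer
        name: restoreFromStore
        args:
          namespace: "{{ .Deployment.Namespace }}"
          image: ghcr.io/kanisterio/kanister-tools:0.89.0   # tag copied from the example above
          pod: "{{ index .Deployment.Pods 0 }}"
          backupIdentifier: "{{ .ArtifactsIn.backupIdentifier.KeyValue.id }}"
          restorePath: /mnt/data
  delete:
    inputArtifactNames:
      - backupIdentifier
    phases:
      - func: DeleteDataUsingKopiaServer
        name: deleteFromStore
        args:
          namespace: "{{ .Deployment.Namespace }}"
          backupID: "{{ .ArtifactsIn.backupIdentifier.KeyValue.id }}"
          image: ghcr.io/kanisterio/kanister-tools:0.89.0   # tag copied from the example above
```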
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at teamgingonic@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at" } ]
{ "category": "Runtime", "file_name": "ceph-object-store-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: CephObjectStore CRD Rook allows creation and customization of object stores through the custom resource definitions (CRDs). The following settings are available for Ceph object stores. Erasure coded pools can only be used with `dataPools`. The `metadataPool` must use a replicated pool. !!! note This sample requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the is set to `host` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStore metadata: name: my-store namespace: rook-ceph spec: metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: host erasureCoded: dataChunks: 2 codingChunks: 1 preservePoolsOnDelete: true gateway: port: 80 instances: 1 annotations: placement: resources: ``` `name`: The name of the object store to create, which will be reflected in the pool and other resource names. `namespace`: The namespace of the Rook cluster where the object store is created. The pools allow all of the settings defined in the Block Pool CRD spec. For more details, see the settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster. When the `zone` section is set pools with the object stores name will not be created since the object-store will the using the pools created by the ceph-object-zone. `metadataPool`: The settings used to create all of the object store metadata pools. Must use replication. `dataPool`: The settings to create the object store data pool. Can use replication or erasure coding. `preservePoolsOnDelete`: If it is set to 'true' the pools used to support the object store will remain when the object store will be deleted. This is a security measure to avoid accidental loss of data. It is set to 'false' by default. If not specified is also deemed as 'false'. `allowUsersInNamespaces`: If a CephObjectStoreUser is created in a namespace other than the Rook cluster namespace, the namespace must be added to this list of allowed namespaces, or specify \"*\" to allow all namespaces. This is useful for applications that need object store credentials to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty. The gateway settings correspond to the RGW daemon settings. `type`: `S3` is supported `sslCertificateRef`: If specified, this is the name of the Kubernetes secret(`opaque` or `tls` type) that contains the TLS certificate to be used for secure connections to the object store. If it is an opaque Kubernetes Secret, Rook will look in the secret provided at the `cert` key name. The value of the `cert` key must be in the format expected by the [RGW service](https://docs.ceph.com/docs/master/install/ceph-deploy/install-ceph-gateway/#using-ssl-with-civetweb): \"The server key, server certificate, and any other CA or intermediate certificates be supplied in one file. Each of these items must be in PEM form.\" They are scenarios where the certificate DNS is set for a particular domain that does not include the local Kubernetes DNS, namely the object store DNS service endpoint. 
If adding the service DNS name to the certificate is not empty another key can be specified in the secret's data: `insecureSkipVerify: true` to skip the certificate" }, { "data": "It is not recommended to enable this option since TLS is susceptible to machine-in-the-middle attacks unless custom verification is used. `caBundleRef`: If specified, this is the name of the Kubernetes secret (type `opaque`) that contains additional custom ca-bundle to use. The secret must be in the same namespace as the Rook cluster. Rook will look in the secret provided at the `cabundle` key name. `hostNetwork`: Whether host networking is enabled for the rgw daemon. If not set, the network settings from the cluster CR will be applied. `port`: The port on which the Object service will be reachable. If host networking is enabled, the RGW daemons will also listen on that port. If running on SDN, the RGW daemon listening port will be 8080 internally. `securePort`: The secure port on which RGW pods will be listening. A TLS certificate must be specified either via `sslCerticateRef` or `service.annotations` `instances`: The number of pods that will be started to load balance this object store. `externalRgwEndpoints`: A list of IP addresses to connect to external existing Rados Gateways (works with external mode). This setting will be ignored if the `CephCluster` does not have `external` spec enabled. Refer to the for more details. Multiple endpoints can be given, but for stability of ObjectBucketClaims, we highly recommend that users give only a single external RGW endpoint that is a load balancer that sends requests to the multiple RGWs. `annotations`: Key value pair list of annotations to add. `labels`: Key value pair list of labels to add. `placement`: The Kubernetes placement settings to determine where the RGW pods should be started in the cluster. `resources`: Set resource requests/limits for the Gateway Pod(s), see . `priorityClassName`: Set priority class name for the Gateway Pod(s) `service`: The annotations to set on to the Kubernetes Service of RGW. The feature supported in Openshift is enabled by the following example: ```yaml gateway: service: annotations: service.beta.openshift.io/serving-cert-secret-name: <name of TLS secret for automatic generation> ``` Example of external rgw endpoints to connect to: ```yaml gateway: port: 80 externalRgwEndpoints: ip: 192.168.39.182 ``` The settings allow the object store to join custom created . `name`: the name of the ceph-object-zone the object store will be in. The hosting settings allow you to host buckets in the object store on a custom DNS name, enabling virtual-hosted-style access to buckets similar to AWS S3 (https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). `dnsNames`: a list of DNS names to host buckets on. These names need to valid according RFC-1123. Otherwise it will fail. Each endpoint requires wildcard support like . Do not include the wildcard itself in the list of hostnames (e.g., use \"mystore.example.com\" instead of \".mystore.example.com\"). Add all the hostnames like openshift routes otherwise access will be denied, but if the hostname does not support wild card then virtual host style won't work those hostname. By default cephobjectstore service endpoint and custom endpoints from cephobjectzone is included. The feature is supported only for Ceph v18 and later versions. Rook provides a default `mime.types` file for each Ceph object store. 
This file is stored in a Kubernetes ConfigMap with the name `rook-ceph-rgw-<STORE-NAME>-mime-types`. For most users, the default file should suffice, however, the option is available to users to edit the `mime.types` file in the ConfigMap as they desire. Users may have their own special file types, and particularly security conscious users may wish to pare down the file to reduce the possibility of a file type execution" }, { "data": "Rook will not overwrite an existing `mime.types` ConfigMap so that user modifications will not be destroyed. If the object store is destroyed and recreated, the ConfigMap will also be destroyed and created anew. Rook will be default monitor the state of the object store endpoints. The following CRD settings are available: `healthCheck`: main object store health monitoring section `startupProbe`: Disable, or override timing and threshold values of the object gateway startup probe. `readinessProbe`: Disable, or override timing and threshold values of the object gateway readiness probe. Here is a complete example: ```yaml healthCheck: startupProbe: disabled: false readinessProbe: disabled: false periodSeconds: 5 failureThreshold: 2 ``` You can monitor the health of a CephObjectStore by monitoring the gateway deployments it creates. The primary deployment created is named `rook-ceph-rgw-<store-name>-a` where `store-name` is the name of the CephObjectStore (don't forget the `-a` at the end). Ceph RGW supports Server Side Encryption as defined in with three different modes: AWS-SSE:C, AWS-SSE:KMS and AWS-SSE:S3. The last two modes require a Key Management System (KMS) like HashiCorp Vault. Currently, Vault is the only supported KMS backend for CephObjectStore. Refer to the for details about Vault. If these settings are defined, then RGW will establish a connection between Vault and whenever S3 client sends request with Server Side Encryption. has more details. The `security` section contains settings related to KMS encryption of the RGW. ```yaml security: kms: connectionDetails: KMS_PROVIDER: vault VAULT_ADDR: http://vault.default.svc.cluster.local:8200 VAULTBACKENDPATH: rgw VAULTSECRETENGINE: kv VAULT_BACKEND: v2 tokenSecretName: rgw-vault-kms-token s3: connectionDetails: KMS_PROVIDER: vault VAULT_ADDR: http://vault.default.svc.cluster.local:8200 VAULTBACKENDPATH: rgw VAULTSECRETENGINE: transit tokenSecretName: rgw-vault-s3-token ``` For RGW, please note the following: `VAULTSECRETENGINE`: the secret engine which Vault should use. Currently supports and . AWS-SSE:KMS supports `transit` engine and `kv` engine version 2. AWS-SSE:S3 only supports `transit` engine. The Storage administrator needs to create a secret in the Vault server so that S3 clients use that key for encryption for AWS-SSE:KMS ```console vault kv put rook/<mybucketkey> key=$(openssl rand -base64 32) # kv engine vault write -f transit/keys/<mybucketkey> exportable=true # transit engine ``` TLS authentication with custom certificates between Vault and CephObjectStore RGWs are supported from ceph v16.2.6 onwards `tokenSecretName` can be (and often will be) the same for both kms and s3 configurations. `AWS-SSE:S3` requires Ceph Quincy v17.2.3 or later. During deletion of a CephObjectStore resource, Rook protects against accidental or premature destruction of user data by blocking deletion if there are any object buckets in the object store being deleted. Buckets may have been created by users or by ObjectBucketClaims. For deletion to be successful, all buckets in the object store must be removed. 
This may require manual deletion or removal of all ObjectBucketClaims. Alternatively, the `cephobjectstore.ceph.rook.io` finalizer on the CephObjectStore can be removed to remove the Kubernetes Custom Resource, but the Ceph pools which store the data will not be removed in this case. Rook will warn about which buckets are blocking deletion in three ways: an event will be registered on the CephObjectStore resource, a status condition will be added to the CephObjectStore resource, and an error will be added to the Rook Ceph Operator log. If the CephObjectStore is configured in a multisite setup, the above conditions apply only to stores that belong to a single master zone; otherwise the conditions are ignored. Even if the store is removed, the user can still access the data from a peer object store." } ]
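As a companion to the `sslCertificateRef` setting described above, the following is a hedged sketch of creating the opaque secret in the format the RGW expects: the server key, server certificate, and any CA or intermediate certificates concatenated into one PEM file, stored under the `cert` key. The file names, secret name, and namespace are assumptions.

```bash
# Sketch only: build the combined PEM and store it under the "cert" key of an
# opaque secret (file, secret, and store names are assumptions).
cat server.key server.crt ca-chain.crt > rgw-combined.pem

kubectl -n rook-ceph create secret generic my-store-tls \
  --from-file=cert=rgw-combined.pem

# Then reference it from the CephObjectStore gateway section, for example:
#   gateway:
#     securePort: 443
#     sslCertificateRef: my-store-tls
```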
{ "category": "Runtime", "file_name": "system-requirements.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "English | x86-64, arm64 The system kernel version must be greater than 4.2 when using `ipvlan` as the cluster's CNI We test Spiderpool against the following Kubernetes versions v1.22.7 v1.23.5 v1.24.4 v1.25.3 v1.26.2 v1.27.1 v1.28.0 The feature requires a minimum version of `v1.21`. | ENV Configuration | Port/Protocol | Description | Is Optional | |--||--|| | SPIDERPOOLHEALTHPORT | 5710/tcp | `spiderpool-agent` pod health check port for kubernetes | must | | SPIDERPOOLMETRICHTTP_PORT | 5711/tcp | `spiderpool-agent` metrics port | optional(default disable) | | SPIDERPOOLGOPSLISTEN_PORT | 5712/tcp | `spiderpool-agent` gops port for debug | optional(default enable) | | SPIDERPOOLHEALTHPORT | 5720/tcp | `spiderpool-controller` pod health check port for kubernetes | must | | SPIDERPOOLMETRICHTTP_PORT | 5711/tcp | `spiderpool-controller` metrics port for openTelemetry | optional(default disable) | | SPIDERPOOLWEBHOOKPORT | 5722/tcp | `spiderpool-controller` webhook port for kubernetes | must | | SPIDERPOOLGOPSLISTEN_PORT | 5724/tcp | `spiderpool-controller` gops port for debug | optional(default enable) |" } ]
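Since the ipvlan requirement above depends on the node kernel, a quick convenience check across the cluster is sketched below; it only assumes `kubectl` access and uses the standard node status fields.

```bash
# Print each node's kernel version; it must be newer than 4.2 when ipvlan is the cluster CNI.
kubectl get nodes -o custom-columns=NODE:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion
```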
{ "category": "Runtime", "file_name": "configuration.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Configuration For most any Ceph cluster, the user will want to--and may need to--change some Ceph configurations. These changes often may be warranted in order to alter performance to meet SLAs or to update default data resiliency settings. !!! warning Modify Ceph settings carefully, and review the before making any changes. Changing the settings could result in unhealthy daemons or even data loss if used incorrectly. Rook and Ceph both strive to make configuration as easy as possible, but there are some configuration options which users are well advised to consider for any production cluster. The number of PGs and PGPs can be configured on a per-pool basis, but it is advised to set default values that are appropriate for your Ceph cluster. Appropriate values depend on the number of OSDs the user expects to have backing each pool. These can be configured by declaring pgnum and pgpnum parameters under CephBlockPool resource. For determining the right value for pg_num please refer [placement group sizing](ceph-configuration.md#placement-group-sizing) In this example configuration, 128 PGs are applied to the pool: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: ceph-block-pool-test namespace: rook-ceph spec: deviceClass: hdd replicated: size: 3 spec: parameters: pg_num: '128' # create the pool with a pre-configured placement group number pgpnum: '128' # this should at least match `pgnum` so that all PGs are used ``` Ceph provide detailed information about how to tune these parameters. Nautilus capable of automatically managing PG and PGP values for pools. Please see for more information about this module. The `pg_autoscaler` module is enabled by default. To disable this module, in the : ```yaml spec: mgr: modules: name: pg_autoscaler enabled: false ``` With that setting, the autoscaler will be enabled for all new pools. If you do not desire to have the autoscaler enabled for all new pools, you will need to use the Rook toolbox to enable the module and on individual pools. The most recommended way of configuring Ceph is to set Ceph's configuration directly. The first method for doing so is to use Ceph's CLI from the Rook toolbox pod. Using the toolbox pod is detailed . From the toolbox, the user can change Ceph configurations, enable manager modules, create users and pools, and much more. The Ceph Dashboard, examined in more detail , is another way of setting some of Ceph's configuration directly. Configuration by the Ceph dashboard is recommended with the same priority as configuration via the Ceph CLI (above). Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be set, and setting configs via dashboard requires at least one mgr to be available. Ceph may also have a small number of very advanced settings that aren't able to be modified easily via CLI or dashboard. The least recommended method for configuring Ceph is intended as a last-resort fallback in situations like these. This is covered in detail ." } ]
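To make the toolbox workflow above concrete, here is a hedged sketch of the kind of commands typically run from the Rook toolbox to inspect and tune placement groups and the autoscaler on individual pools; the pool name `replicapool` is an assumption.

```bash
# Run from the rook-ceph toolbox pod. "replicapool" is an assumed pool name.
ceph osd pool autoscale-status                      # current autoscaler view of all pools
ceph osd pool set replicapool pg_autoscale_mode on  # enable the autoscaler for one pool
ceph osd pool get replicapool pg_num                # inspect current pg_num
ceph osd pool get replicapool pgp_num               # inspect current pgp_num
```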
{ "category": "Runtime", "file_name": "kata-api-design.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "To fulfill the , and based on the discussion on , the Kata runtime library features the following APIs: Sandbox based top API Storage and network hotplug API Plugin frameworks for external proprietary Kata runtime extensions |Name|Description| ||| |`CreateSandbox(SandboxConfig, Factory)`| Create a sandbox and its containers, base on `SandboxConfig` and `Factory`. Return the `Sandbox` structure, but do not start them.| |Name|Description| ||| |`sandbox.Delete()`| Shut down the VM in which the sandbox, and destroy the sandbox and remove all persistent metadata.| |`sandbox.Monitor()`| Return a context handler for caller to monitor sandbox callbacks such as error termination.| |`sandbox.Release()`| Release a sandbox data structure, close connections to the agent, and quit any goroutines associated with the Sandbox. Mostly used for daemon restart.| |`sandbox.Start()`| Start a sandbox and the containers making the sandbox.| |`sandbox.Stats()`| Get the stats of a running sandbox, return a `SandboxStats` structure.| |`sandbox.Status()`| Get the status of the sandbox and containers, return a `SandboxStatus` structure.| |`sandbox.Stop(force)`| Stop a sandbox and Destroy the containers in the sandbox. When force is true, ignore guest related stop failures.| |`sandbox.CreateContainer(contConfig)`| Create new container in the sandbox with the `ContainerConfig` parameter. It will add new container config to `sandbox.config.Containers`.| |`sandbox.DeleteContainer(containerID)`| Delete a container from the sandbox by `containerID`, return a `Container` structure.| |`sandbox.EnterContainer(containerID, cmd)`| Run a new process in a container, executing customer's `types.Cmd` command.| |`sandbox.KillContainer(containerID, signal, all)`| Signal a container in the sandbox by the `containerID`.| |`sandbox.PauseContainer(containerID)`| Pause a running container in the sandbox by the `containerID`.| |`sandbox.ProcessListContainer(containerID, options)`| List every process running inside a specific container in the sandbox, return a `ProcessList` structure.| |`sandbox.ResumeContainer(containerID)`| Resume a paused container in the sandbox by the `containerID`.| |`sandbox.StartContainer(containerID)`| Start a container in the sandbox by the `containerID`.| |`sandbox.StatsContainer(containerID)`| Get the stats of a running container, return a `ContainerStats` structure.| |`sandbox.StatusContainer(containerID)`| Get the status of a container in the sandbox, return a `ContainerStatus` structure.| |`sandbox.StopContainer(containerID, force)`| Stop a container in the sandbox by the `containerID`.| |`sandbox.UpdateContainer(containerID, resources)`| Update a running container in the sandbox.| |`sandbox.WaitProcess(containerID, processID)`| Wait on a process to terminate.| |Name|Description| ||| |`sandbox.AddDevice(info)`| Add new storage device `DeviceInfo` to the sandbox, return a `Device` structure.| |`sandbox.AddInterface(inf)`| Add new NIC to the sandbox.| |`sandbox.RemoveInterface(inf)`| Remove a NIC from the sandbox.| |`sandbox.ListInterfaces()`| List all NICs and their configurations in the sandbox, return a `pbTypes.Interface` list.| |`sandbox.UpdateRoutes(routes)`| Update the sandbox route table (e.g. 
for portmapping support), return a `pbTypes.Route` list.| |`sandbox.ListRoutes()`| List the sandbox route table, return a `pbTypes.Route` list.| |Name|Description| ||| |`sandbox.WinsizeProcess(containerID, processID, Height, Width)`| Relay TTY resize request to a process.| |`sandbox.SignalProcess(containerID, processID, signalID, signalALL)`| Relay a signal to a process or all processes in a container.| |`sandbox.IOStream(containerID, processID)`| Relay a process stdio. Return stdin/stdout/stderr pipes to the process stdin/stdout/stderr streams.| |Name|Description| ||| |`sandbox.GetOOMEvent()`| Monitor the OOM events that occur in the sandbox..| |`sandbox.UpdateRuntimeMetrics()`| Update the `shim/hypervisor` metrics of the running sandbox.| |`sandbox.GetAgentMetrics()`| Get metrics of the agent and the guest in the running sandbox.| TBD. The metadata storage plugin controls where sandbox metadata is saved. All metadata storage plugins must implement the following API: |Name|Description| ||| |`storage.Save(key, value)`| Save a record.| |`storage.Load(key)`| Load a record.| |`storage.Delete(key)`| Delete a record.| Built-in implementations include: Filesystem storage LevelDB storage The VM factory plugin controls how a sandbox factory creates new VMs. All VM factory plugins must implement following API: |Name|Description| ||| |`VMFactory.NewVM(HypervisorConfig)`|Create a new VM based on `HypervisorConfig`.| Built-in implementations include: |Name|Description| ||| |`CreateNew()`| Create brand new VM based on `HypervisorConfig`.| |`CreateFromTemplate()`| Create new VM from template.| |`CreateFromCache()`| Create new VM from VM caches.|" } ]
{ "category": "Runtime", "file_name": "20201106-disk-reconnection.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "When disks are reconnected/migrated to other Longhorn nodes, Longhorn should be able to figure out the disk reconnection and update the node ID as well as the data path for the related replicas (including failed replicas). https://github.com/longhorn/longhorn/issues/1269 The goal of this feature is to reuse the existing data of the failed replica when the corresponding disk is back. As for how to reuse the existing data and handle rebuild related feature, it is already implemented in #1304, which is not the intention of this enhancement. Identifying the disk that is previously used in Longhorn is not the the key point. The essential of this feature is that Longhorn should know where to reuse existing data of all related replicas when the disk is reconnected. In other words, the fields that indicating the replica data position should be updated when the disk is reconnected. Before the enhancement, there is no way to reuse the existing data when a disk is reconnected/migrated. After the enhancement, this can be done by: detach the volumes using the disk Reconnect the disk to the another node (both nodes keep running) reattach the related volumes Before the enhancement, there is no chance to reuse the failed replicas on the node. After the enhancement, Longhorn will update the path and node id for all failed replicas using the disks, then Longhorn can reuse the failed replicas during rebuilding. Detach all related volumes using the disk before the disk migration. Directly move the disk to the new node (physically or in cloud vendor) and mount the disk. Add the disk with the new mount point to the corresponding new Longhorn node in Longhorn Node page. Attach the volumes for the workloads. Directly shut down the node when there are replicas on the node. Then the replicas on the node will fail. Move the disks on the down node to other running nodes (physically or in cloud vendor). Add the disk with the new mount point to the corresponding new Longhorn node in Longhorn Node page. Wait then verify the failed replicas using the disk will be reused, and the node ID & path info will be updated. There is no API change. When a disk is ready, Longhorn can list all related replicas via `replica.Spec.DiskID` then sync up node ID and path info for these replicas. If a disk is not ready, the scheduling info will be cleaned" }, { "data": "Longhorn won't be confused of updating replicas if multiple disconnected disks using the same Disk UUID. Need to add a disk related label for replicas. Store DiskUUID rather than the disk name in `replica.Spec.DiskID` Need to update `DiskID` for existing replicas during upgrade. Since the disk path of a replica may get changed but the data directory name is immutable. It's better to split `replica.Spec.DataPath` to `replica.Spec.DiskPath` and `replica.Spec.DataDirectoryName`. Then it's more convenient to sync up the disk path for replicas. Need to update the path fields for existing replicas during upgrade. Disable the node soft anti-affinity. Create a new host disk. Disable the default disk and add the extra disk with scheduling enabled for the current node. Launch a Longhorn volume with 1 replica. Then verify the only replica is scheduled to the new disk. Write random data to the volume then verify the data. Detach the volume. Unmount then remount the disk to another path. (disk migration) Create another Longhorn disk based on the migrated path. Verify the Longhorn disk state. The Longhorn disk added before the migration should become \"unschedulable\". 
The Longhorn disk created after the migration should become \"schedulable\". Verify the replica DiskID and the path are updated. Attach the volume. Then verify the state and the data. Set `ReplicaReplenishmentWaitInterval`. Make sure it's longer than the time needed for node replacement. Launch a Kubernetes cluster with the nodes in an AWS Auto Scaling group. Then deploy Longhorn. Deploy some workloads using Longhorn volumes. Wait for/trigger the ASG instance replacement. Verify new replicas won't be created before reaching `ReplicaReplenishmentWaitInterval`. Verify the failed replicas are reused after the node recovery. Verify that workloads still work fine with the volumes after the recovery. Launch Longhorn v1.0.x. Create and attach a volume, then write data to the volume. Directly remove a Kubernetes node, and shut down a node. Wait for the related replicas to fail. Then record `replica.Spec.DiskID` for the failed replicas. Upgrade to Longhorn master. Verify the Longhorn node related to the removed node is gone. Verify `replica.Spec.DiskID` on the down node is updated and the field of the replica on the gone node is unchanged. `replica.Spec.DataPath` for all replicas becomes empty. Remove all unscheduled replicas. Power on the down node. Wait for the failed replica on the down node to be reused. Wait for a new replica to be replenished and available. Need to update disk ID and data path for existing replicas." } ]
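One hedged way to verify the replica updates described in the test cases above is to read the fields directly from the replica custom resources. The JSON field names used here (`spec.nodeID`, `spec.diskID`, `spec.diskPath`) are assumed lower-camel-case forms of the Go fields mentioned above, so confirm them against your Longhorn version.

```bash
# Sketch: list replicas with the fields that should be updated after a disk is
# reattached to a different node (field names are assumptions, see note above).
kubectl -n longhorn-system get replicas.longhorn.io \
  -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeID,DISKID:.spec.diskID,DISKPATH:.spec.diskPath
```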
{ "category": "Runtime", "file_name": "basic-install.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Basic Install\" layout: docs Use this doc to get a basic installation of Velero. Refer to customize your installation. Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero . `kubectl` installed locally Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of . Velero supports storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the . Velero does not officially support Windows. In testing, the Velero team was able to backup stateless Windows applications only. The restic integration and backups of stateful applications or PersistentVolumes were not supported. If you want to perform your own testing of Velero on Windows, you must deploy Velero as a Windows container. Velero does not provide official Windows images, but its possible for you to build your own Velero Windows container image to use. Note that you must build this image on a Windows node. On macOS, you can use to install the `velero` client: ```bash brew install velero ``` Download the 's tarball for your client platform. Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz ``` Move the extracted `velero` binary to somewhere in your `$PATH` (`/usr/local/bin` for most users). On Windows, you can use to install the client: ```powershell choco install velero ``` There are two supported methods for installing the Velero server components: the `velero install` CLI command the Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider. To find installation instructions for your chosen storage provider, follow the documentation link for your provider at our page Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to . Please refer to ." } ]
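For the `velero install` path mentioned above, the following is a hedged example against an S3-compatible object store. The bucket name, region, credentials file, and plugin image tag are assumptions; take the exact values from your chosen provider's plugin documentation.

```bash
# Example sketch only: install the server components with the AWS object store
# plugin. The plugin tag, bucket, region, and credentials file are assumptions.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.8.0 \
  --bucket velero-backups \
  --secret-file ./credentials-velero \
  --backup-location-config region=us-east-1
```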
{ "category": "Runtime", "file_name": "zone.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "A Rook Ceph cluster. Ideally a ceph-object-realm and a ceph-object-zone-group resource would have been started up already. The resource described in this design document represents the zone in the . When the storage admin is ready to create a multisite zone for object storage, the admin will name the zone in the metadata section on the configuration file. In the config, the admin must configure the zone group the zone is in, and pools for the zone. The first zone created in a zone group is designated as the master zone in the Ceph cluster. If endpoint(s) are not specified the endpoint will be set to the Kubernetes service DNS address and port used for the CephObjectStore. To override this, a user can specify custom endpoint(s). The endpoint(s) specified will be become the sole source of endpoints for the zone, replacing any service endpoints added by CephObjectStores. This example `ceph-object-zone.yaml`, names a zone `my-zone`. ```yaml apiVersion: ceph.rook.io/v1alpha1 kind: CephObjectZone metadata: name: zone-a namespace: rook-ceph spec: zoneGroup: zone-group-b metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: device erasureCoded: dataChunks: 6 codingChunks: 2 customEndpoints: \"http://zone-a.fqdn\" preservePoolsOnDelete: true ``` Now create the ceph-object-zone. ```bash kubectl create -f ceph-object-zone.yaml ``` At this point the Rook operator recognizes that a new ceph-object-zone resource needs to be configured. The operator will start creating the resource to start the ceph-object-zone. After these steps the admin should start up: A referring to the newly started up ceph-object-zone resource. A , with the same name as the `zoneGroup` field, if it has not already been started up already. A , with the same name as the `realm` field in the ceph-object-zone-group config, if it has not already been started up already. The order in which these resources are created is not important. Once the all of the resources in #2 are started up, the operator will create a zone on the Rook Ceph cluster and the ceph-object-zone resource will be running. The zone group named in the `zoneGroup` section must be the same as the ceph-object-zone-group resource the zone is a part of. When resource is deleted, zone are not deleted from the cluster. zone deletion must be done through toolboxes. Any number of ceph-object-stores can be part of a ceph-object-zone. When the storage admin is ready to sync data from another Ceph cluster with multisite set up (primary cluster) to a Rook Ceph cluster (pulling cluster), the pulling cluster will have a newly created in the zone group from the primary cluster. A resource must be created to pull the realm information from the primary cluster to the pulling cluster. Once the ceph-object-pull-realm is configured a ceph-object-zone must be created. After an ceph-object-store is configured to be in this ceph-object-zone, the all Ceph multisite resources will be running and data between the two clusters will start syncing. At the moment creating a CephObjectZone resource does not handle configuration updates for the zone. By default when a CephObjectZone is deleted, the pools supporting the zone are not deleted from the Ceph cluster. But if `preservePoolsOnDelete` is set to false, then pools are deleted from the Ceph cluster. A CephObjectZone will be removed only if all CephObjectStores that reference the zone are deleted first. One of the following scenarios is possible when deleting a CephObjectStore in a multisite configuration. 
Rook's behavior is noted after each scenario. The store belongs to a that has no other" }, { "data": "This case is essentially the same as deleting a CephObjectStore outside of a multisite configuration; Rook should check for dependents before deleting the store The store belongs to a master zone that has other peers Rook will error on this condition with a message instructing the user to manually set another zone as the master zone once that zone has all data backed up to it The store is a non-master peer Rook will not check for dependents in this case, as the data in the master zone is assumed to have a copy of all user data The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. There are two scenarios possible when deleting a zone. The following commands, run via the toolbox, deletes the zone if there is only one zone in the zone group. ```bash ``` In the other scenario, there are more than one zones in a zone group. Care must be taken when changing which zone is the master zone. Please read the following documentation before running the below commands: https://docs.ceph.com/docs/master/radosgw/multisite/#changing-the-metadata-master-zone The following commands, run via toolboxes, remove the zone from the zone group first, then delete the zone. ```bash ``` Similar to deleting zones, the Rook toolbox can also change the master zone in a zone group. ```bash ``` The ceph-object-zone settings are exposed to Rook as a Custom Resource Definition (CRD). The CRD is the Kubernetes-native means by which the Rook operator can watch for new resources. The name of the resource provided in the `metadata` section becomes the name of the zone. The following variables can be configured in the ceph-object-zone resource. `zoneGroup`: The zone group named in the `zoneGroup` section of the ceph-realm resource the zone is a part of. `customEndpoints`: Specify the endpoint(s) that will accept multisite replication traffic for this zone. You may include the port in the definition if necessary. For example: \"https://my-object-store.my-domain.net:443\". `preservePoolsOnDelete`: If it is set to 'true' the pools used to support the zone will remain when the CephObjectZone is deleted. This is a security measure to avoid accidental loss of data. It is set to 'true' by default. If not specified it is also deemed as 'true'. ```yaml apiVersion: ceph.rook.io/v1alpha1 kind: CephObjectZone metadata: name: zone-b namespace: rook-ceph spec: zoneGroup: zone-group-b metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: device erasureCoded: dataChunks: 6 codingChunks: 2 customEndpoints: \"http://rgw-a.fqdn\" preservePoolsOnDelete: true ``` The pools are the backing data store for the object stores in the zone and are created with specific names to be private to a zone. As long as the `zone` config option is specified in the object-store's config, the object-store will use pools defined in the ceph-zone's configuration. Pools can be configured with all of the settings that can be specified in the . The underlying schema for pools defined by a pool CRD is the same as the schema under the `metadataPool` and `dataPool` elements of the object store CRD. All metadata pools are created with the same settings, while the data pool can be created with independent settings. The metadata pools must use replication, while the data pool can use replication or erasure coding. When the ceph-object-zone is deleted the pools used to support the zone will remain just like the zone. 
This is a security measure to avoid accidental loss of data. Just like deleting the zone itself, removing the pools must be done by hand through the toolbox. ```yaml metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: device erasureCoded: dataChunks: 6 codingChunks: 2 ``` The feature is supported for the ceph-object-zone CRD as well. The following example" } ]
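The toolbox code blocks above are empty in this copy of the document. As a hedged reference, commands of the following shape are what the Ceph multisite documentation describes for these operations; the zone and zone group names are assumptions, and the linked documentation on changing the metadata master zone should be reviewed before running them.

```bash
# Sketch only; run via the Rook toolbox. Zone and zone group names are assumptions.

# Zone group contains only this zone: delete the zone and commit the period.
radosgw-admin zone delete --rgw-zone=zone-a
radosgw-admin period update --commit

# Zone group contains other zones: remove the zone from the group first, then delete it.
radosgw-admin zonegroup remove --rgw-zonegroup=zone-group-b --rgw-zone=zone-a
radosgw-admin period update --commit
radosgw-admin zone delete --rgw-zone=zone-a
radosgw-admin period update --commit

# Change which zone is the master zone in a zone group.
radosgw-admin zone modify --rgw-zone=zone-b --master --default
radosgw-admin period update --commit
```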
{ "category": "Runtime", "file_name": "custom-ca-support.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "It is desired that Velero performs SSL verification on the Object Storage endpoint (BackupStorageLocation), but it is not guaranteed that the Velero container has the endpoints' CA bundle in it's system store. Velero needs to support the ability for a user to specify custom CA bundles at installation time and Velero needs to support a mechanism in the BackupStorageLocation Custom Resource to allow a user to specify a custom CA bundle. This mechanism needs to also allow Restic to access and use this custom CA bundle. Enable Velero to be configured with a custom CA bundle at installation Enable Velero support for custom CA bundles with S3 API BackupStorageLocations Enable Restic to use the custom CA bundles whether it is configured at installation time or on the BackupStorageLocation Enable Velero client to take a CA bundle as an argument Support non-S3 providers Currently, in order for Velero to perform SSL verification of the object storage endpoint the user must manually set the `AWSCABUNDLE` environment variable on the Velero deployment. If the user is using Restic, the user has to either: Add the certs to the Restic container's system store Modify Velero to pass in the certs as a CLI parameter to Restic - requiring a custom Velero deployment There are really 2 methods of using Velero with custom certificates: Including a custom certificate at Velero installation Specifying a custom certificate to be used with a `BackupStorageLocation` On the Velero deployment at install time, we can set the AWS environment variable `AWSCABUNDLE` which will allow Velero to communicate over https with the proper certs when communicating with the S3 bucket. This means we will add the ability to specify a custom CA bundle at installation time. For more information, see \"Install Command Changes\". On the Restic daemonset, we will want to also mount this secret at a pre-defined location. In the `restic` pkg, the command to invoke restic will need to be updated to pass the path to the cert file that is mounted if it is specified in the config. This is good, but doesn't allow us to specify different certs when `BackupStorageLocation` resources are created. In order to support custom certs for object storage, Velero will add an additional field to the `BackupStorageLocation`'s provider `Config` resource to provide a secretRef which will contain the coordinates to a secret containing the relevant cert file for object storage. In order for Restic to be able to consume and use this cert, Velero will need the ability to write the CA bundle somewhere in memory for the Restic pod to consume it. To accomplish this, we can look at the code for managing restic repository credentials. The way this works today is that the key is stored in a secret in the Velero namespace, and each time Velero executes a restic command, the contents of the secret are read and written out to a temp" }, { "data": "The path to this file is then passed to restic and removed afterwards. pass the path of the temp file to restic, and then remove the temp file afterwards. See ref #1 and #2. This same approach can be taken for CA bundles. The bundle can be stored in a secret which is referenced on the BSL and written to a temp file prior to invoking Restic. The `AWSCABUNDLE` environment variable works for the Velero deployment because this environment variable is passed into the AWS SDK which is used in the to build up the config object. This means that a user can simply define the CA bundle in the deployment as an env var. 
This can be utilized for the installation of Velero with a custom cert by simply setting this env var to the contents of the CA bundle, or the env var can be mapped to a secret which is controlled at installation time. I recommend using a secret as it makes the Restic integration easier as well. At installation time, if a user has specified a custom cert then the Restic daemonset should be updated to include the secret mounted at a predefined path. We could optionally use the system store for all custom certs added at installation time. Restic supports using the custom certs to the root certs. In the case of the BSL being created with a secret reference, then at runtime the secret will need to be consumed. This secret will be read and applied to the AWS `session` object. The `getSession()` function will need to be updated to take in the custom CA bundle so it can be passed . The Restic controller will need to be updated to write the contents of the CA bundle secret out to a temporary file inside of the restic pod.The restic will need to be updated to include the path to the file as an argument to the restic server using `--cacert`. For the path when a user defines a custom cert on the BSL, Velero will be responsible for updating the daemonset to include the secret mounted as a volume at a predefined path. Where we mount the secret is a fine detail, but I recommend mounting the certs to `/certs` to keep it in line with the other volume mount paths being used. The installation flags should be updated to include the ability to pass in a cert file. Then the install command would do the heavy lifting of creating a secret and updating the proper fields on the deployment and daemonset to mount the secret at a well defined path. Since the Velero client is responsible for gathering logs and information about the Object Storage, this implementation should include a new flag `--cacert` which can be used when communicating with the Object Storage. Additionally, the user should be able to set this in their client configuration. The command would look like: ``` $ velero client config set cacert PATH ```" } ]
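To make the installation-time option above concrete, here is a hedged sketch of patching the Velero Deployment so the CA bundle secret is mounted at a fixed path and exposed through the AWS SDK's CA bundle environment variable (written `AWSCABUNDLE` above). The secret name, mount path, and bundle file name are assumptions.

```yaml
# velero-ca-patch.yaml: strategic merge patch, applied with
#   kubectl -n velero patch deployment velero --patch-file velero-ca-patch.yaml
# Secret name, mount path, and bundle file name are assumptions.
spec:
  template:
    spec:
      containers:
        - name: velero
          env:
            - name: AWS_CA_BUNDLE          # read by the AWS SDK when building the session
              value: /certs/ca-bundle.pem
          volumeMounts:
            - name: custom-ca
              mountPath: /certs            # the pre-defined path discussed above
              readOnly: true
      volumes:
        - name: custom-ca
          secret:
            secretName: velero-custom-ca   # assumed secret created at install time
```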
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Container Network Interface (CNI)", "subcategory": "Cloud Native Network" }
[ { "data": "CNI is and accepts contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted. We gratefully welcome improvements to documentation as well as to code. By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the file for details. The project uses the cni-dev email list, IRC chat, and Slack: Email: IRC: # channel on Slack: #cni on the . NOTE: the previous CNI Slack (containernetworking.slack.com) has been sunsetted. Please avoid emailing maintainers found in the MAINTAINERS file directly. They are very busy and read the mailing lists. Fork the repository on GitHub Read the for build and test instructions Play with the project, submit bugs, submit pull requests! This is a rough outline of how to prepare a contribution: Create a topic branch from where you want to base your work (usually branched from main). Make commits of logical units. Make sure your commit messages are in the proper format (see below). Push your changes to a topic branch in your fork of the repository. If you changed code: add automated tests to cover your changes, using the & style if the package did not previously have any test coverage, add it to the list of `TESTABLE` packages in the `test.sh` script. run the full test script and ensure it passes Make sure any new code files have a license header (this is now enforced by automated tests) Submit a pull request to the original repository. We generally require test coverage of any new features or bug" }, { "data": "Here's how you can run the test suite on any system (even Mac or Windows) using and a hypervisor of your choice: ```bash vagrant up vagrant ssh sudo su cd /go/src/github.com/containernetworking/cni ./test.sh cd libcni go test ``` These things will make a PR more likely to be accepted: a well-described requirement tests for new code tests for old code! new code and tests follow the conventions in old code and tests a good commit message (see below) In general, we will merge a PR once two maintainers have endorsed it. Trivial changes (e.g., corrections to spelling) may get waved through. For substantial changes, more people may become involved, and you might get asked to resubmit the PR or divide the changes into more than one PR. We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. ```md scripts: add the test-cluster command this uses tmux to setup a test cluster that you can easily kill and start for debugging. Fixes #38 ``` The format can be described more formally as follows: ```md <subsystem>: <what changed> <BLANK LINE> <why this change was made> <BLANK LINE> <footer> ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools. So you've built a CNI plugin. Where should it live? Short answer: We'd be happy to link to it from our . But we'd rather you kept the code in your own repo. 
Long answer: An advantage of the CNI model is that independent plugins can be built, distributed and used without any code changes to this repository. While some widely used plugins (and a few less-popular legacy ones) live in this repo, we're reluctant to add more. If you have a good reason why the CNI maintainers should take custody of your plugin, please open an issue or PR." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "This is a copy/fork of the project existing in We moved it here so we can change / update the Kubernetes APIs, and we are really thankful to the original creators. Generates a CA and leaf certificate with a long (100y) expiration, then patches by setting the `caBundle` field with the generated CA. Can optionally patch the hooks `failurePolicy` setting - useful in cases where a single Helm chart needs to provision resources and hooks at the same time as patching. The utility works in two parts, optimized to work better with the Helm provisioning process that leverages pre-install and post-install hooks to execute this as a Kubernetes job. This tool may not be adequate in all security environments. If a more complete solution is required, you may want to seek alternatives such as ``` Use this to create a ca and signed certificates and patch admission webhooks to allow for quick installation and configuration of validating and admission webhooks. Usage: kube-webhook-certgen [flags] kube-webhook-certgen [command] Available Commands: create Generate a ca and server cert+key and store the results in a secret 'secret-name' in 'namespace' help Help about any command patch Patch a validatingwebhookconfiguration and mutatingwebhookconfiguration 'webhook-name' by using the ca from 'secret-name' in 'namespace' version Prints the CLI version information Flags: -h, --help help for kube-webhook-certgen --kubeconfig string Path to kubeconfig file: e.g. ~/.kube/kind-config-kind --log-format string Log format: text|json (default \"text\") --log-level string Log level: panic|fatal|error|warn|info|debug|trace (default \"info\") ``` ``` Generate a ca and server cert+key and store the results in a secret 'secret-name' in 'namespace' Usage: kube-webhook-certgen create [flags] Flags: --cert-name string Name of cert file in the secret (default \"cert\") -h, --help help for create --host string Comma-separated hostnames and IPs to generate a certificate for --key-name string Name of key file in the secret (default \"key\") --namespace string Namespace of the secret where certificate information will be written --secret-name string Name of the secret where certificate information will be written Global Flags: --kubeconfig string Path to kubeconfig file: e.g. ~/.kube/kind-config-kind --log-format string Log format: text|json (default \"json\") --log-level string Log level: panic|fatal|error|warn|info|debug|trace (default \"info\") ``` ``` Patch a validatingwebhookconfiguration and mutatingwebhookconfiguration 'webhook-name' by using the ca from 'secret-name' in 'namespace' Usage: kube-webhook-certgen patch [flags] Flags: -h, --help help for patch --namespace string Namespace of the secret where certificate information will be read from --patch-failure-policy string If set, patch the webhooks with this failure policy. Valid options are Ignore or Fail --patch-mutating If true, patch mutatingwebhookconfiguration (default true) --patch-validating If true, patch validatingwebhookconfiguration (default true) --secret-name string Name of the secret where certificate information will be read from --webhook-name string Name of validatingwebhookconfiguration and mutatingwebhookconfiguration that will be updated Global Flags: --kubeconfig string Path to kubeconfig file: e.g. ~/.kube/kind-config-kind --log-format string Log format: text|json (default \"text\") --log-level string Log level: panic|fatal|error|warn|info|debug|trace (default \"info\") ``` helm chart" } ]
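Putting the help text above together, a typical two-step run (generate the certificate, then patch the webhook configurations) might look like the sketch below. The service, namespace, secret, and webhook names are assumptions; in the Helm use case described above these would usually run as pre-install and post-install job commands.

```bash
# Sketch only: flags are taken from the help text above; names are assumptions.

# Step 1: create a CA plus a serving cert for the webhook service, stored in a secret.
kube-webhook-certgen create \
  --host my-webhook,my-webhook.my-namespace.svc \
  --namespace my-namespace \
  --secret-name my-webhook-certs

# Step 2: patch the validating/mutating webhook configurations with the generated CA bundle.
kube-webhook-certgen patch \
  --namespace my-namespace \
  --secret-name my-webhook-certs \
  --webhook-name my-webhook \
  --patch-failure-policy Fail
```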
{ "category": "Runtime", "file_name": "ipam-proxy.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "title: Automatic IP Allocation and the Weave Proxy menu_order: 30 search_type: Documentation If is enabled in Weave Net (by default IPAM is enabled), then containers started via the proxy are automatically assigned an IP address, *without having to specify any special environment variables or any other options*. host1$ docker run -ti weaveworks/ubuntu To use a specific subnet, you can pass a `WEAVE_CIDR` to the container, for example: host1$ docker run -ti -e WEAVE_CIDR=net:10.32.2.0/24 weaveworks/ubuntu To start a container without connecting it to the Weave network, pass `WEAVE_CIDR=none`, for example: host1$ docker run -ti -e WEAVE_CIDR=none weaveworks/ubuntu If you do not want an IP to be assigned by default, the proxy needs to be passed the `--no-default-ipalloc` flag, for example: host1$ weave launch --no-default-ipalloc In this configuration, containers using no `WEAVE_CIDR` environment variable are not connected to the Weave network. Containers started with a `WEAVE_CIDR` environment variable are handled as before. To automatically assign an address in this mode, start the container with a blank `WEAVE_CIDR`, for example: host1$ docker run -ti -e WEAVE_CIDR=\"\" weaveworks/ubuntu See Also *" } ]
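As a small follow-up to the examples above, one hedged way to confirm which address the proxy allocated to a container is shown below; the `weave ps` output format may vary between releases.

```bash
host1$ docker run -d --name a1 -e WEAVE_CIDR=net:10.32.2.0/24 weaveworks/ubuntu sleep 3600
host1$ weave ps a1    # prints the container's MAC and its address on the Weave network
```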
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_nodeid.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage the node IDs ``` -h, --help help for nodeid ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List node IDs and their IP addresses" } ]
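For reference, a typical invocation of the command family documented above is sketched below. The `list` subcommand corresponds to the related command noted at the end of the page ("List node IDs and their IP addresses"); run it wherever the local agent's BPF maps are reachable, usually inside the Cilium agent pod, and check `--help` for the flags supported by your release.

```bash
# Show the datapath's node ID to IP address mappings (run inside the cilium agent pod).
cilium-dbg bpf nodeid list
```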
{ "category": "Runtime", "file_name": "feature-enhancement-request.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Feature enhancement request about: Suggest an idea for this project Describe the problem/challenge you have <!--A description of the current limitation/problem/challenge that you are experiencing.--> Describe the solution you'd like <!--A clear and concise description of what you want to happen.--> Anything else you would like to add: <!--Miscellaneous information that will assist in solving the issue.--> Environment: Velero version (use `velero version`): Kubernetes version (use `kubectl version`): Kubernetes installer & version: Cloud provider or hardware configuration: OS (e.g. from `/etc/os-release`): Vote on this issue! This is an invitation to the Velero community to vote on issues, you can see the project's . Use the \"reaction smiley face\" up to the right of this comment to vote. :+1: for \"The project would be better with this feature added\" :-1: for \"This feature will not enhance the project in a meaningful way\"" } ]
{ "category": "Runtime", "file_name": "pv_restore_info.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Velero has different ways to handle data in the volumes during restore. The users want to have more clarity in terms of how the volumes are handled in restore process via either Velero CLI or other downstream product which consumes Velero. Create new metadata to store the information of the restored volume, which will have the same life-cycle as the restore CR. Consume the metadata in velero CLI to enable it display more details for volumes in the output of `velero restore describe --details` Provide finer grained control of the volume restore process. The focus of the design is to enable displaying more details. Persist additional metadata like podvolume, datadownloads etc to the restore folder in backup-location. The restore volume info will be stored in a file named like `${restore_name}-vol-info.json`. The content of the file will be a list of volume info objects, each of which will map to a volume that is restored, and will contain the information like name of the restored PV/PVC, restore method and related objects to provide details depending on the way it's restored, it will look like this: ``` [ { \"pvcName\": \"nginx-logs-2\", \"pvcNamespace\": \"nginx-app-restore\", \"pvName\": \"pvc-e320d75b-a788-41a3-b6ba-267a553efa5e\", \"restoreMethod\": \"PodVolumeRestore\", \"snapshotDataMoved\": false, \"pvrInfo\": { \"snapshotHandle\": \"81973157c3a945a5229285c931b02c68\", \"uploaderType\": \"kopia\", \"volumeName\": \"nginx-logs\", \"podName\": \"nginx-deployment-79b56c644b-mjdhp\", \"podNamespace\": \"nginx-app-restore\" } }, { \"pvcName\": \"nginx-logs-1\", \"pvcNamespace\": \"nginx-app-restore\", \"pvName\": \"pvc-98c151f4-df47-4980-ba6d-470842f652cc\", \"restoreMethod\": \"CSISnapshot\", \"snapshotDataMoved\": false, \"csiSnapshotInfo\": { \"snapshotHandle\": \"snap-01a3b21a5e9f85528\", \"size\": 2147483648, \"driver\": \"ebs.csi.aws.com\", \"vscName\": \"velero-velero-nginx-logs-1-jxmbg-hx9x5\" } } ...... ] ``` Each field will have the same meaning as the corresponding field in the backup volume info. It will not have the fields that were introduced to help with the backup process, like `pvInfo`, `dataupload` etc. Two steps are involved in generating the restore volume info, the first is \"collection\", which is to gather the information for restoration of the volumes, the second is \"generation\", which is to iterate through the data collected in the first step and generate the volume info list as is described above. Unlike backup, the CR objects created during the restore process will not be persisted to the backup storage location. Therefore, to gather the information needed to generate volume information, we either need to collect the CRs in the middle of the restore process, or retrieve the objects based on the `resouce-list.json` of the restore via API server. The information to be collected are: PV/PVC mapping relationship: It will be collected via the `restore-resource-list.json`, b/c at the time the json is ready, all PVCs and PVs are already created. Native snapshot information: It will be collected in the restore workflow when each snapshot is restored. podvolumerestore CRs: It will be collected in the restore workflow after each pvr is created. volumesnapshot CRs for CSI snapshot: It will be collected in the step of collecting PVC info, by reading the `dataSource` field in the spec of the PVC. datadownload CRs It will be collected in the phase of collecting PVC info, by querying the API-server to list the datadownload CRs labeled with the restore name. 
After the collection step, the generation step is relatively straight-forward, as we have all the information needed in the data structures. The whole collection and generation steps will be done with the \"best-effort\" manner, i.e. if there are any failures we will only log the error in restore log, rather than failing the whole restore process, we will not put these errors or warnings into the" }, { "data": "b/c it won't impact the restored resources. Depending on the number of the restored PVCs the \"collection\" step may involve many API calls, but it's considered acceptable b/c at that time the resources are already created, so the actual RTO is not impacted. By using the client of controller runtime we can make the collection step more efficient by using the cache of the API server. We may consider to make improvements if we observe performance issues, like using multiple go-routines in the collection. Because the restore volume info shares the same data structures with the backup volume info, we will refactor the code in package `internal/volume` to make the sub-components in backup volume info shared by both backup and restore volume info. We'll introduce a struct called `RestoreVolumeInfoTracker` which encapsulates the logic of collecting and generating the restore volume info: ``` // RestoreVolumeInfoTracker is used to track the volume information during restore. // It is used to generate the RestoreVolumeInfo array. type RestoreVolumeInfoTracker struct { *sync.Mutex restore *velerov1api.Restore log logrus.FieldLogger client kbclient.Client pvPvc *pvcPvMap // map of PV name to the NativeSnapshotInfo from which the PV is restored pvNativeSnapshotMap map[string]NativeSnapshotInfo // map of PV name to the CSISnapshot object from which the PV is restored pvCSISnapshotMap map[string]snapshotv1api.VolumeSnapshot datadownloadList *velerov2alpha1.DataDownloadList pvrs []*velerov1api.PodVolumeRestore } ``` The `RestoreVolumeInfoTracker` will be created when the restore request is initialized, and it will be passed to the `restoreContext` and carried over the whole restore process. The `client` in this struct is to be used to query the resources in the restored namespace, and the current client in restore reconciler only watches the resources in the namespace where velero is installed. Therefore, we need to introduce the `CrClient` which has the same life-cycle of velero server to the restore reconciler, because this is the client that watches all the resources on the cluster. In addition to that, we will make small changes in the restore workflow to collect the information needed. We'll make the changes un-intrusive and make sure not to change the logic of the restore to avoid break change or regression. We'll also introduce routine changes in the package `pkg/persistence` to persist the restore volume info to the backup storage location. Last but not least, the `velero restore describe --details` will be updated to display the volume info in the output. There used to be suggestion that to provide more details about volume, we can query the `backup-vol-info.json` with the resource identifier in `restore-resource-list.json`. This will not work when there're resource modifiers involved in the restore process, which may change the metadata of PVC/PV. In addition, we may add more detailed restore-specific information about the volumes that is not available in the `backup-vol-info.json`. Therefore, the `restore-vol-info.json` is a better approach. 
There should be no security impact introduced by this design. The restore volume info will be consumed by the Velero CLI and downstream products for displaying details, so the functionality of backup and restore will not be impacted for restores created by older versions of Velero that do not have the restore volume info metadata. The client should properly handle the case when the restore volume info does not exist. The data structures referenced by the volume info are shared between restore and backup and are not versioned, so in the future we must make sure there will only be incremental changes to the metadata, such that no breaking change will be introduced to the client. https://github.com/vmware-tanzu/velero/issues/7546 https://github.com/vmware-tanzu/velero/issues/6478" } ]
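Once implemented, the metadata described above is surfaced through the existing CLI. A short sketch of how a user would inspect it; the object-store path in the comment is an assumption, only the file name pattern comes from the design:

```bash
# Show volume restore details collected via the restore volume info
velero restore describe my-restore --details

# The raw metadata is persisted to the backup storage location as
# <restore-name>-vol-info.json (exact prefix/path layout is an assumption), e.g.:
#   restores/my-restore/my-restore-vol-info.json
```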
{ "category": "Runtime", "file_name": "ark_create.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark create\" layout: docs Create ark resources Create ark resources ``` -h, --help help for create ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Create a backup - Create a restore - Create a schedule" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_post-uninstall-cleanup.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Remove system state installed by Cilium at runtime Clean up CNI configurations, CNI binaries, attached BPF programs, bpffs, tc filters, routes, links and named network namespaces. Running this command might be necessary to get the worker node back into working condition after uninstalling the Cilium agent. ``` cilium-dbg post-uninstall-cleanup [flags] ``` ``` --all-state Remove all cilium state --bpf-state Remove BPF state -f, --force Skip confirmation -h, --help help for post-uninstall-cleanup ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI" } ]
{ "category": "Runtime", "file_name": "gfapi-symbol-versions.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "In general, adding new APIs to a shared library does not require that symbol versions be used or the the SO_NAME be \"bumped.\" These actions are usually reserved for when a major change is introduced, e.g. many APIs change or a signficant change in the functionality occurs. Over the normal lifetime of a When a new API is added, the library is recompiled, consumers of the new API are able to do so, and existing, legacy consumers of the original API continue as before. If by some chance an old copy of the library is installed on a system, it's unlikely that most applications will be affected. New applications that use the new API will incur a run-time error terminate. Bumping the SO_NAME, i.e. changing the shared lib's file name, e.g. from libfoo.so.0 to libfoo.so.1, which also changes the ELF SO_NAME attribute inside the file, works a little differently. libfoo.so.0 contains only the old APIs. libfoo.so.1 contains both the old and new APIs. Legacy software that was linked with libfoo.so.0 continues to work as libfoo.so.0 is usually left installed on the system. New software that uses the new APIs is linked with libfoo.so.1, and works as long as long as libfoo.so.1 is installed on the system. Accidentally (re)installing libfoo.so.0 doesn't break new software as long as reinstalling doesn't erase libfoo.so.1. Using symbol versions is somewhere in the middle. The shared library file remains libfoo.so.0 forever. Legacy APIs may or may not have an associated symbol version. New APIs may or may not have an associated symbol version either. In general symbol versions are reserved for APIs that have changed. Either the function's signature has changed, i.e. the return time or the number of paramaters, and/or the parameter types have changed. Another reason for using symbol versions on an API is when the behaviour or functionality of the API changes dramatically. As with a library that doesn't use versioned symbols, old and new applications either find or don't find the versioned symbols they need. If the versioned symbol doesn't exist in the installed library, the application incurs a run-time error and terminates. GlusterFS wanted to keep tight control over the APIs in libgfapi. Originally bumping the SO_NAME was considered, and GlusterFS-3.6.0 was released with libgfapi.so.7. Not only was \"7\" a mistake (it should have been \"6\"), but it was quickly pointed out that many dependent packages that use libgfapi would be forced to be recompiled/relinked. Thus no packages of" }, { "data": "were ever released and 3.6.1 was quickly released with libgfapi.so.0, but with symbol versions. There's no strong technical reason for either; the APIs have not changed, only new APIs have been added. It's merely being done in anticipation that some APIs might change sometime in the future. Enough about that now, let's get into the nitty gritty This is the default, and the easiest thing to do. Public APIs have declarations in either glfs.h, glfs-handles.h, or, at your discretion, in a new header file intended for consumption by other developers. Here's what you need to do to add a new public API: Write the declaration, e.g. in glfs.h: ```C int glfsdtrt (const char *volname, void *stuff) _THROW ``` Write the definition, e.g. in glfs-dtrt.c: ```C int pubglfsdtrt (const char volname, void stuff) { ... return 0; } ``` Add the symbol version magic for ELF, gnu toolchain to the definition. 
following the definition of your new function in glfs-dtrtops.c, add a line like this: ```C GFAPISYMVERPUBLICDEFAULT(glfsdtrt, 3.7.0) ``` The whole thing should look like: ```C int pubglfsdtrt (const char volname, void stuff) { ... } GFAPISYMVERPUBLICDEFAULT(glfsdtrt, 3.7.0); ``` In this example, 3.7.0 refers to the Version the symbol will first appear in. There's nothing magic about it, it's just a string token. The current versions we have are 3.4.0, 3.4.2, 3.5.0, 3.5.1, and 3.6.0. They are to be considered locked or closed. You can not, must not add any new APIs and use these versions. Most new APIs will use 3.7.0. If you add a new API appearing in 3.6.2 (and mainline) then you would use 3.6.2. Add the symbol version magic for OS X to the declaration. following the declaration in glfs.h, add a line like this: ```C GFAPIPUBLIC(glfsdtrt, 3.7.0) ``` The whole thing should look like: ```C int glfsdtrt (const char *volname, void *stuff) _THROW GFAPIPUBLIC(glfsdtrt, 3.7.0); ``` The version here must match the version associated with the definition. Add the new API to the ELF, gnu toolchain link map file, gfapi.map Most new public APIs will probably be added to a new section that looks like this: ``` GFAPI_3.7.0 { global: glfs_dtrt; } GFAPIPRIVATE3.7.0; ``` if you're adding your new API to, e.g. 3.6.2, it'll look like this: ``` GFAPI_3.6.2 { global: glfs_dtrt; } GFAPI_3.6.0; ``` and you must change the ``` GFAPIPRIVATE3.7.0 { ...} GFAPI_3.6.0; ``` section to: ``` GFAPIPRIVATE3.7.0 { ...}" }, { "data": "``` Add the new API to the OS X alias list file, gfapi.aliases. Most new APIs will use a line that looks like this: ```C pubglfsdtrt glfsdtrt$GFAPI3.7.0 ``` if you're adding your new API to, e.g. 3.6.2, it'll look like this: ```C pubglfsdtrt glfsdtrt$GFAPI3.6.2 ``` And that's it. If you're thinking about adding a private API that isn't declared in one of the header files, then you should seriously rethink what you're doing and figure out how to put it in libglusterfs instead. If that hasn't convinced you, follow the instructions above, but use the _PRIVATE versions of macros, symbol versions, and aliases. If you're 1337 enough to ignore this advice, then you're 1337 enough to figure out how to do it. There are two ways an API might change, 1) its signature has changed, or 2) its new functionality or behavior is substantially different than the old. An APIs signature consists of the function return type, and the number and/or type of its parameters. E.g. the original API: ```C int glfs_dtrt (const char volname, void stuff); ``` and the changed API: ```C void glfs_dtrt (const char volname, glfs_t ctx, void stuff); ``` One way to avoid a change like this, and which is preferable in many ways, is to leave the legacy glfs_dtrt() function alone, document it as deprecated, and simply add a new API, e.g. glfs_dtrt2(). Practically speaking, that's effectively what we'll be doing anyway, the difference is only that we'll use a versioned symbol to do it. On the assumption that adding a new API is undesirable for some reason, perhaps the use of glfs_gnu() is just so pervasive that we really don't want to add glfs_gnu2(). change the declaration in glfs.h: ```C glfst *glfsgnu (const char volname, void stuff) THROW GFAPIPUBLIC(glfsgnu, 3.7.0); ```` Note that there is only the single, new declaration. change the old definition of glfs_gnu() in glfs.c: ```C struct glfs * pubglfsgnu340 (const char * volname) { ... 
} GFAPISYMVERPUBLIC(glfsgnu340, glfsgnu, 3.4.0); ``` create the new definition of glfs_gnu in glfs.c: ```C struct glfs * pubglfsgnu (const char volname, void stuff) { ... } GFAPISYMVERPUBLICDEFAULT(glfsgnu, 3.7.0); ``` Add the new API to the ELF, gnu toolchain link map file, gfapi.map ``` GFAPI_3.7.0 { global: glfs_gnu; } GFAPIPRIVATE3.7.0; ``` Update the OS X alias list file, gfapi.aliases, for both versions: Change the old line: ```C pubglfsgnu glfsgnu$GFAPI3.4.0 ``` to: ```C pubglfsgnu340 glfsgnu$GFAPI3.4.0 ``` Add a new line: ```C pubglfsgnu glfsgnu$GFAPI3.7.0 ``` Lastly, change all gfapi internal calls glfs_gnu to the new API." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_ct_flush.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Flush all connection tracking entries ``` cilium-dbg bpf ct flush ( <endpoint identifier> | global ) [flags] ``` ``` -h, --help help for flush ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Connection tracking tables" } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Fix cgroups v2 mountpoint detection. Add support for cgroups v2. Thanks to @emadolsky for their contribution to this release. Support colons in cgroup names. Remove linters from runtime dependencies. Migrate to Go modules. Fixed quota clamping to always round down rather than up; Rather than guaranteeing constant throttling at saturation, instead assume that the fractional CPU was added as a hedge for factors outside of Go's scheduler. Log the new value of `GOMAXPROCS` rather than the current value. Make logs more explicit about whether `GOMAXPROCS` was modified or not. Allow customization of the minimum `GOMAXPROCS`, and modify default from 2 to 1. Initial release." } ]
{ "category": "Runtime", "file_name": "ci-configuration.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: CI Configuration This page contains information regarding the CI configuration used for the Rook project to test, build and release the project. Snyk (Security Scan): `SNYK_TOKEN` - API Token for the (workflow file: `snyk.yaml`). Testing: `IBMINSTANCEID`: Used for KMS (Key Management System) IBM Key Protect access (see ). `IBMSERVICEAPI_KEY`: Used for KMS (Key Management System) IBM Key Protect access (see ). Publishing: `DOCKERUSERNAME` + `DOCKERPASSWORD`: Username and password of registry. `DOCKER_REGISTRY`: Target registry namespace (e.g., `rook`) `AWSUSR` + `AWSPSW`: AWS credentials with access to S3 for Helm chart publishing. `GITAPITOKEN`: GitHub access token, used to push docs changes to the docs repositories `gh-pages` branch." } ]
{ "category": "Runtime", "file_name": "20200511-cstor-backupandrestore.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "oep-number: CStor Backup and Restore REV1 title: Backup and Restore for V1 version of CStorVolumes authors: \"@mynktl\" \"@mittachaitu\" owners: \"@kmova\" \"@vishnuitta\" \"@mynktl\" editor: \"@mittachaitu\" creation-date: 2020-05-11 last-updated: 2020-06-12 status: provisional - - - - - - - - - - - - - This proposal brings out the design details to implement backup and restore solution for V1 version of CStorVolumes. Create a backup of the cStor persistent volume to the desired storage location( this can be AWS, GCP or any storage provider). This backup will be either on-demand or scheduled backup. Restore this backup to the same cluster or another cluster. Solution to create an on-demand backup of cStor persistent volumes. Solution to create an on-demand restore of backuped up cStor persistent volumes. Solution to create a scheduled backup of cStor persistent volumes. Solution to create an incremental/differential backup of cStor persistent volumes. With incremental/differential backup, the user will be able to save backup storage space and it Supporting local backup and restore. As an OpenEBS user, I should be able create a backup for CStorVolumes. As an OpenEBS user, I should be restore a backuped CStorVolumes. As an OpenEBS user, I should be able to create a scheduled backups for CStorVolumes. The volume management api like create, delete, snapshot, clone etc have moved from m-apiserver to CVC-Operator as part of supporting the CSI Driver. REST API of Maya-ApiServer to trigger backup/restore also needs to be implemented in the CVC operator as these APIs are using volume snapshot/clone API. Currently CVC Operator only supports declarative API via CRs, but plugin requires imperative API(REST API). The proposal is to implement a REST server within CVC-Operator to perform velero-plugin operations. The velero-plugin should identify the type of the volume and forward the REST API requests to the CVC Operator. Velero-plugin execute different REST API of CVC-Apiserver based on the type-of-request/API from velero. User can create backup using velero CLI Example: ```sh velero backup create <BACKUPNAME> --include-namespaces=<NAMESPACE> --snapshot-volumes volume-snapshot-locations=<SNAPSHOT_LOCATION> ``` User can resotre a backup using velero CLI Example ``` velero restore create --from-backup <BACKUP_NAME> --restore-volumes=true ``` Velero server sends CreateSnapshot API with volumeID to create the snapshot. Velero Interface will execute `CreateSnapshot` API of the cStor velero-plugin controller. cStor velero-plugin controller is responsible for executing below steps: BackUp the relevant PVC object to the cloud provider. Execute the REST API(POST `/latest/backups`) of CVC-Operator (This object will include the IP Address of snapshot receiver/sender module) to create backup. Create a snapshot using volume name and snapshot name(which will get during the CreateSnapshot request). If the request is to create `local backup` then return from here else continue with other steps. Find the Healthy cStorVolumeReplica if it doesn't exist return error. Create CStorCompletedBackUp resources if it doesnt exist(Intention of creating this resource is used for incremental backup purpose). Create CStorBackUp resource by populating the current snapshot name and previous snapshot name(if exists from CStorCompletedBackUp) with Healthy CStorVolumeReplica pool UID. 
Corresponding backup controllers exist in pool-manager will responsible for sending the snapshot data from pool to velero-plugin and velero-plugin will write this stream to cloud-provider. Call cloud interface API `UploadSnapshot` which will upload the backup to the cloud provider. This API will return the unique snapshotID (volumeID + \"-velero-bkp-\" + backupName) to the velero server. This snapshotID will be used to refer to the backup snapshot. Velero server sends `DeleteSnapshot` API of Velero Interface to delete the backup/snapshot with argument `snapshotID`. This snapshotID is generated during the backup creation of this" }, { "data": "Velero Interface will execute the DeleteSnapshot API of cStor velero-plugin. cStor velero-plugin is responsible for performing below steps: Delete the PVC object for this backup from the cloud provider. Execute REST API(`latest/backups/`) of the CVC-Operator to delete the resources created for this particular backup. Delete the CStorCompletedBackUp resources if the given cstorbackup is the last backup of schedule or cstorcompletedbackup doesn't have any successful backup. Delete the snapshot created for that backup. Delete the CStorBackUp resource. Execute the `RemoveSnapshot` API of the cloud interface to delete the snapshot from the cloud provider. Velero Server will execute the `CreateVolumeFromSnapshot` API of the velero interface to restore the backup with the argument (snapshotID, volumeType). Velero interface will execute `CreateVolumeFromSnapshot` API of velero-plugin. velero-plugin will perform following below steps: Download the PVC object from the cloud provider through a cloud interface and deploy the PVC. If PVC already exists in the cluster then skip the PVC creation. (only for remote restore) Check If PVC status is bounded. (Only for remote restore) Execute the REST API(`latest/restore`) with CVC-Operator restore details(includes the IP address of snapshot receiver/sender module) to initiate the restore of the snapshot. Create CVC with clone configuration(i.e CVC will hold source volume and snapshot information in spec) only if the restore request is local restore and for remote restore velero-plugin creates PVC with annotation `openebs.io/created-through: restore` then CSI-Provisioner will propagate this annotation to CVC and then CVC-controller will create CVR with annotation `isRestoreVol: true` only if `openebs.io/created-through` annotation is set. If CVR contains annotation `isRestoreVol: true` then CVR controller will skip setting targetIP(targetIP helpful for replica to connect to target to serve IOs). Wait till CVC comes to Bound state(blocking call and it will be retried for 10 seconds at interval of 2 seconds in case if it is not Bound). Create a replica count number of restore CRs which will be responsible for dumping backup data into the volume dataset. Call cloud interface API `RestoreSnapshot` to download the snapshot from the cloud provider. One can access the REST endpoint of CVC by fetching the details of CVC-Operator service. Below command will be help to get the CVC-Operator service ```sh kubectl get service -n <openebs_namespace> -l openebs.io/component-name: cvc-operator-svc ``` Restore delete will delete restore resource object only. Note: It will not delete the resources restored in that restore(ex: PVC). When REST API `/latest/backups` is executed it creates CStorBackUp CR with `Pending` status. 
Backup controller which present in pool-manager will get event and perform the following operations: Update the CStorBackUp status to `Init`(which conveys controller instantiate process). In next reconciliation update the Status of CStorBackUp resource as `InProgress`(Which will help to understand the backup process is started). Execute the below command and ZFS will send the data to `sender/receiver module`(blocking call and this command execution will be retried for 50 seconds at interval of 5 seconds in case of errors). CMD: If request is for full backup then command is `zfs send <snapshotdatasetname> | nc -w 3 <ip_address> <port>` If request is for incremental backup then command is `zfs send -i <oldsnapshotdatasetname> <newsnapshotdatasetname> | nc -w 3 <ip_address> <port>` Updates the corresponding CStorCompletedBackups with last two completed backups. For example, if schedule `b` has last two backups b-0 and b-1 (b-0 created first and after that b-1 was created) having snapshots b-0 and b-1 respectively then CStorCompletedBackups for the schedule `b` will have following information : ```go CStorCompletedBackups.Spec.PrevSnapName = b-1 CStorCompletedBackups.Spec.SnapName = b-0 ``` NOTE: ip_address and port are the IP Address and port of snapshot sender/receiver" }, { "data": "When REST API `/latest/restore` is executed it creates CStorRestore CR with `Pending` status. Restore controller which present in pool-manager will get event and perform the following operations: Update the CStorRestore status to `Init`(which conveys controller instantiate process). Update the status of CStorRestore resource as `InProgress`(Which will help to understand the restore process is started). Execute the below command and ZFS will receive the data from `sender/receiver module`(blocking call and this command execution will be retried for 50 seconds at interval of 5 seconds in case of errors). CMD: `nc -w 3 <ipaddress> <port> | zfs recv -F <volumedataset_name>` NOTE: ip_address and port are the IP Address and port of snapshot sender/receiver module. Following is the existing CStorBackup schema in go struct: ```go // CStorBackup describes a cstor backup resource created as a custom resource type CStorBackup struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` Spec CStorBackupSpec `json:\"spec\"` Status CStorBackupStatus `json:\"status\"` } // CStorBackupSpec is the spec for a CStorBackup resource type CStorBackupSpec struct { // BackupName is a name of the backup or scheduled backup BackupName string `json:\"backupName\"` // VolumeName is a name of the volume for which this backup is destined VolumeName string `json:\"volumeName\"` // SnapName is a name of the current backup snapshot SnapName string `json:\"snapName\"` // PrevSnapName is the last completed-backup's snapshot name PrevSnapName string `json:\"prevSnapName\"` // BackupDest is the remote address for backup transfer BackupDest string `json:\"backupDest\"` // LocalSnap is flag to enable local snapshot only LocalSnap bool `json:\"localSnap\"` } // CStorBackupStatus is to hold status of backup type CStorBackupStatus string // Status written onto CStorBackup objects. const ( // BKPCStorStatusDone , backup is completed. BKPCStorStatusDone CStorBackupStatus = \"Done\" // BKPCStorStatusFailed , backup is failed. BKPCStorStatusFailed CStorBackupStatus = \"Failed\" // BKPCStorStatusInit , backup is initialized. BKPCStorStatusInit CStorBackupStatus = \"Init\" // BKPCStorStatusPending , backup is pending. 
BKPCStorStatusPending CStorBackupStatus = \"Pending\" // BKPCStorStatusInProgress , backup is in progress. BKPCStorStatusInProgress CStorBackupStatus = \"InProgress\" // BKPCStorStatusInvalid , backup operation is invalid. BKPCStorStatusInvalid CStorBackupStatus = \"Invalid\" ) ``` Following is the existing CStorRestore schema in go struct: ```go // CStorRestore describes a cstor restore resource created as a custom resource type CStorRestore struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` // set name to restore name + volume name + something like csp tag Spec CStorRestoreSpec `json:\"spec\"` Status CStorRestoreStatus `json:\"status\"` } // CStorRestoreSpec is the spec for a CStorRestore resource type CStorRestoreSpec struct { RestoreName string `json:\"restoreName\"` // set restore name VolumeName string `json:\"volumeName\"` // RestoreSrc can be ip:port in case of restore from remote or volumeName in case of local restore RestoreSrc string `json:\"restoreSrc\"` MaxRetryCount int `json:\"maxretrycount\"` RetryCount int `json:\"retrycount\"` StorageClass string `json:\"storageClass,omitempty\"` Size resource.Quantity `json:\"size,omitempty\"` // Local will be helpful to identify whether restore is from local (or) backup/snapshot Local bool `json:\"localRestore,omitempty\"` } // CStorRestoreStatus is to hold result of action. type CStorRestoreStatus string // Status written onto CStrorRestore object. const ( // RSTCStorStatusEmpty ensures the create operation is to be done, if import fails. RSTCStorStatusEmpty CStorRestoreStatus = \"\" // RSTCStorStatusDone , restore operation is completed. RSTCStorStatusDone CStorRestoreStatus = \"Done\" // RSTCStorStatusFailed , restore operation is failed. RSTCStorStatusFailed CStorRestoreStatus = \"Failed\" // RSTCStorStatusInit , restore operation is initialized. RSTCStorStatusInit CStorRestoreStatus = \"Init\" // RSTCStorStatusPending , restore operation is pending. RSTCStorStatusPending CStorRestoreStatus = \"Pending\" // RSTCStorStatusInProgress , restore operation is in progress. RSTCStorStatusInProgress CStorRestoreStatus = \"InProgress\" // RSTCStorStatusInvalid , restore operation is invalid. RSTCStorStatusInvalid CStorRestoreStatus = \"Invalid\" ) ``` Following is the existing CStorCompletedBackup schema in go struct: ```go // CStorCompletedBackup describes a cstor completed-backup resource created as custom resource type CStorCompletedBackup struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` Spec CStorBackupSpec `json:\"spec\"` } ```" } ]
{ "category": "Runtime", "file_name": "GOVERNANCE.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "This document defines the governance policies of Kanister. Anyone can contribute to Kanister, whether through code, design discussions, documentation, blog posts, talks, or other means. All contributors are expected to follow the Kanister . Contributions to the code base, documentation, or other components in the Kanister GitHub organization must follow the guidelines described in the document. Whether these contributions get merged into the project is the prerogative of the maintainers. Maintainers are responsible for the overall security, quality and integrity of the project. They propose, manage, review, approve/reject major change and enhancement requests. They also have the ability to merge code into the project. See the document for the full list of maintainer responsibilities. Ideally, all project decisions are resolved by maintainer consensus. If this is not possible, maintainers may call for a vote. The voting process is a simple majority in which each maintainer receives one vote. If a maintainer is no longer interested in or cannot perform the duties listed above, they should move themselves to emeritus status. This can also occur by a vote of the maintainers. Anyone can become a Kanister maintainer. Maintainers should be extremely proficient in Go; have relevant domain expertise; have the time and ability to meet the maintainer expectations outlined above, and demonstrate the ability to work with the existing maintainers and project process. To become a maintainer, start by expressing interest to existing maintainers. Existing maintainers will then ask you to demonstrate the qualifications above by contributing PRs, doing code reviews, and other such tasks under their guidance. After several months of working together, maintainers will decide whether to grant maintainer status. This governance is a living document and its policies will need to be updated over time to meet the community's needs. Until the steering committee is set up, the maintainers will have full ownership of this governance. Changes can be proposed at any time, but a super majority is required to approve any updates." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_ipcache_get.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Retrieve identity for an ip ``` cilium-dbg bpf ipcache get [flags] ``` ``` -h, --help help for get ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the IPCache mappings for IP/CIDR <-> Identity" } ]
{ "category": "Runtime", "file_name": "spider-subnet.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "English The SpiderSubnet resource represents a set of IP addresses. When application administrators need to allocate fixed IP addresses for their applications, they usually have to rely on platform administrators to provide the available IPs and routing information. However, this collaboration between different operational teams can lead to complex workflows for creating each application. With the Spiderpool's SpiderSubnet, this process is greatly simplified. SpiderSubnet can automatically allocate IP addresses from the subnet to SpiderIPPool, while also allowing applications to have fixed IP addresses. This automation significantly reduces operational costs and streamlines the workflow. When the Subnet feature is enabled, each instance of IPPool belongs to the same subnet as the Subnet instance. The IP addresses in the IPPool instance must be a subset of those in the Subnet instance, and there should be no overlapping IP addresses among different IPPool instances. By default, the routing configuration of IPPool instances inherits the settings from the corresponding Subnet instance. To allocate fixed IP addresses for applications and decouple the roles of application administrators and their network counterparts, the following two practices can be adopted: Manually create IPPool: application administrators manually create IPPool instances, ensuring that the range of available IP addresses are defined in the corresponding Subnet instance. This allows them to have control over which specific IP addresses are used. Automatically create IPPool: application administrators can specify the name of the Subnet instance in the Pod annotation. Spiderpool automatically creates an IPPool instance with fixed IP addresses coming from the Subnet instance. The IP addresses in the instance are then allocated to Pods. Spiderpool also monitors application scaling and deletion events, automatically adjusting the IP pool size or removing IPs as needed. SpiderSubnet also supports several controllers, including ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob, and k8s extended operator. If you need to use a third-party controller, you can refer to the doc . This feature does not support the bare Pod. Notice: Before v0.7.0 version, you have to create a SpiderSubnet resource before you create a SpiderIPPool resource with SpiderSubnet feature enabled. Since v0.7.0 version, you can create an orphan SpiderIPPool without a SpiderSubnet resource. A ready Kubernetes cluster. has already been installed. Refer to to install Spiderpool. And make sure that the helm installs the option `--ipam.spidersubnet.enable=true --ipam.spidersubnet.autoPool.enable=true`. The `ipam.spidersubnet.autoPool.enable` provide the `Automatically create IPPool` ability. To simplify the creation of JSON-formatted Multus CNI configuration, Spiderpool introduces the SpiderMultusConfig CR, which automates the management of Multus NetworkAttachmentDefinition CRs. Here is an example of creating a Macvlan SpiderMultusConfig: master: the interface `ens192` is used as the spec for master. 
```bash MACVLANMASTERINTERFACE0=\"ens192\" MACVLANMULTUSNAME0=\"macvlan-$MACVLANMASTERINTERFACE0\" MACVLANMASTERINTERFACE1=\"ens224\" MACVLANMULTUSNAME1=\"macvlan-$MACVLANMASTERINTERFACE1\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${MACVLANMULTUSNAME0} namespace: kube-system spec: cniType: macvlan enableCoordinator: true macvlan: master: ${MACVLANMASTERINTERFACE0} apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${MACVLANMULTUSNAME1} namespace: kube-system spec: cniType: macvlan enableCoordinator: true macvlan: master: ${MACVLANMASTERINTERFACE1} EOF ``` With the provided configuration, we create the following two Macvlan SpiderMultusConfigs that will automatically generate a Multus NetworkAttachmentDefinition CR corresponding to the host's `ens192` and `ens224` network interfaces. ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system NAME AGE macvlan-ens192 26m macvlan-ens224 26m ~# kubectl get" }, { "data": "-n kube-system NAME AGE macvlan-ens192 27m macvlan-ens224 27m ``` ```bash ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderSubnet metadata: name: subnet-6 spec: subnet: 10.6.0.0/16 gateway: 10.6.0.1 ips: 10.6.168.101-10.6.168.110 routes: dst: 10.7.0.0/16 gw: 10.6.0.1 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderSubnet metadata: name: subnet-7 spec: subnet: 10.7.0.0/16 gateway: 10.7.0.1 ips: 10.7.168.101-10.7.168.110 routes: dst: 10.6.0.0/16 gw: 10.7.0.1 EOF ``` Apply the above YAML configuration to create two SpiderSubnet instances and configure gateway and routing information for each of them. ```bash ~# kubectl get spidersubnet NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT subnet-6 4 10.6.0.0/16 0 10 subnet-7 4 10.7.0.0/16 0 10 ~# kubectl get spidersubnet subnet-6 -o jsonpath='{.spec}' | jq { \"gateway\": \"10.6.0.1\", \"ipVersion\": 4, \"ips\": [ \"10.6.168.101-10.6.168.110\" ], \"routes\": [ { \"dst\": \"10.7.0.0/16\", \"gw\": \"10.6.0.1\" } ], \"subnet\": \"10.6.0.0/16\", \"vlan\": 0 } ~# kubectl get spidersubnet subnet-7 -o jsonpath='{.spec}' | jq { \"gateway\": \"10.7.0.1\", \"ipVersion\": 4, \"ips\": [ \"10.7.168.101-10.7.168.110\" ], \"routes\": [ { \"dst\": \"10.6.0.0/16\", \"gw\": \"10.7.0.1\" } ], \"subnet\": \"10.7.0.0/16\", \"vlan\": 0 } ``` The following YAML example creates two replicas of a Deployment application: `ipam.spidernet.io/subnet`: specifies the Spiderpool subnet. Spiderpool automatically selects IP addresses from this subnet to create a fixed IP pool associated with the application, ensuring fixed IP assignment. (Notice: this feature don't support wildcard.) `ipam.spidernet.io/ippool-ip-number`: specifies the number of IP addresses in the IP pool. This annotation can be written in two ways: specifying a fixed quantity using a numeric value, such as `ipam.spidernet.io/ippool-ip-number1`, or specifying a relative quantity using a plus and a number, such as `ipam.spidernet.io/ippool-ip-number+1`. The latter means that the IP pool will dynamically maintain an additional IP address based on the number of replicas, ensuring temporary IPs are available during elastic scaling. `ipam.spidernet.io/ippool-reclaim`: indicate whether the automatically created fixed IP pool should be reclaimed upon application deletion. `v1.multus-cni.io/default-network`: create a default network interface for the application. 
```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app-1 spec: replicas: 2 selector: matchLabels: app: test-app-1 template: metadata: annotations: ipam.spidernet.io/subnet: |- { \"ipv4\": [\"subnet-6\"] } ipam.spidernet.io/ippool-ip-number: '+1' v1.multus-cni.io/default-network: kube-system/macvlan-ens192 ipam.spidernet.io/ippool-reclaim: \"false\" labels: app: test-app-1 spec: containers: name: test-app-1 image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP EOF ``` When creating the application, Spiderpool selects random IP addresses from the specified subnet to create a fixed IP pool that is bound to the Pod's network interface. The automatic pool automatically inherits the gateway and routing of the subnet. ```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT auto4-test-app-1-eth0-a5bd3 4 10.6.0.0/16 2 3 false ~# kubectl get po -l app=test-app-1 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-1-74cbbf654-2ndzl 1/1 Running 0 46s 10.6.168.101 controller-node-1 <none> <none> test-app-1-74cbbf654-4f2w2 1/1 Running 0 46s 10.6.168.103 worker-node-1 <none> <none> ~# kubectl get spiderippool auto4-test-app-1-eth0-a5bd3 -ojsonpath={.spec} | jq { \"default\": false, \"disable\": false, \"gateway\": \"10.6.0.1\", \"ipVersion\": 4, \"ips\": [ \"10.6.168.101-10.6.168.103\" ], \"podAffinity\": { \"matchLabels\": { \"ipam.spidernet.io/app-api-group\": \"apps\", \"ipam.spidernet.io/app-api-version\": \"v1\", \"ipam.spidernet.io/app-kind\": \"Deployment\", \"ipam.spidernet.io/app-name\": \"test-app-1\", \"ipam.spidernet.io/app-namespace\": \"default\" } }, \"routes\": [ { \"dst\": \"10.7.0.0/16\", \"gw\": \"10.6.0.1\" } ], \"subnet\":" }, { "data": "\"vlan\": 0 } ``` To achieve the desired fixed IP pool, Spiderpool adds built-in labels and PodAffinity to bind the pool to the specific application. With the annotation of `ipam.spidernet.io/ippool-reclaim: false`, IP addresses are reclaimed upon application deletion, but the automatic pool itself remains intact. If you want the pool to be available for other applications, you need to manually remove these built-in labels and PodAffinity. ```bash Additional Labels: ipam.spidernet.io/owner-application-gv ipam.spidernet.io/owner-application-kind ipam.spidernet.io/owner-application-namespace ipam.spidernet.io/owner-application-name ipam.spidernet.io/owner-application-uid Additional PodAffinity: ipam.spidernet.io/app-api-group ipam.spidernet.io/app-api-version ipam.spidernet.io/app-kind ipam.spidernet.io/app-namespace ipam.spidernet.io/app-name ``` After multiple tests and Pod restarts, the Pod's IP remains fixed within the IP pool range: ```bash ~# kubectl delete po -l app=test-app-1 ~# kubectl get po -l app=test-app-1 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-1-74cbbf654-7v54p 1/1 Running 0 7s 10.6.168.101 worker-node-1 <none> <none> test-app-1-74cbbf654-qzxp7 1/1 Running 0 7s 10.6.168.102 controller-node-1 <none> <none> ``` When creating the application, the annotation `ipam.spidernet.io/ippool-ip-number`: '+1' is specified to allocate one extra fixed IP compared to the number of replicas. This configuration prevents any issues during rolling updates, ensuring that new Pods have available IPs while the old Pods are not deleted yet. Let's consider a scaling scenario where the replica count increases from 2 to 3. 
In this case, the two fixed IP pools associated with the application will automatically scale from 3 IPs to 4 IPs, maintaining one redundant IP address as expected: ```bash ~# kubectl scale deploy test-app-1 --replicas 3 deployment.apps/test-app-1 scaled ~# kubectl get po -l app=test-app-1 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-1-74cbbf654-7v54p 1/1 Running 0 54s 10.6.168.101 worker-node-1 <none> <none> test-app-1-74cbbf654-9w8gd 1/1 Running 0 19s 10.6.168.103 worker-node-1 <none> <none> test-app-1-74cbbf654-qzxp7 1/1 Running 0 54s 10.6.168.102 controller-node-1 <none> <none> ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT auto4-test-app-1-eth0-a5bd3 4 10.6.0.0/16 3 4 false ``` With the information mentioned, scaling the application in Spiderpool is as simple as adjusting the replica count for the application. During application creation, the annotation `ipam.spidernet.io/ippool-reclaim` is specified. Its default value of `true` indicates that when the application is deleted, the corresponding automatic pool is also removed. However, `false` in this case means that upon application deletion, the assigned IPs within the automatically created fixed IP pool will be reclaimed, while retaining the pool itself. Applications created with the same configuration and name will automatically inherited the IP pool. ```bash ~# kubectl delete deploy test-app-1 deployment.apps \"test-app-1\" deleted ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT auto4-test-app-1-eth0-a5bd3 4 10.6.0.0/16 0 4 false ``` With the provided application YAML, creating an application with the same name again will automatically reuse the existing IP pools. Instead of creating new IP pools, the previously created ones will be utilized. This ensures consistency in the replica count and the IP allocation within the pool. ```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT auto4-test-app-1-eth0-a5bd3 4 10.6.0.0/16 2 3 false ``` To assign fixed IPs to the multiple NICs of Pods, follow the instructions in this section. In the example YAML below, a Deployment with two replicas is created, each having multiple network interfaces. The annotations therein include: `ipam.spidernet.io/subnets`: specify the subnet for" }, { "data": "Spiderpool will randomly select IPs from this subnet to create fixed IP pools associated with the application, ensuring persistent IP assignment. In this example, this annotation creates two fixed IP pools belonging to two different underlay subnets for the Pods. `v1.multus-cni.io/default-network`: create a default network interface for the application. `k8s.v1.cni.cncf.io/networks`: create an additional network interface for the application. 
```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app-2 spec: replicas: 2 selector: matchLabels: app: test-app-2 template: metadata: annotations: ipam.spidernet.io/subnets: |- [ { \"interface\": \"eth0\", \"ipv4\": [\"subnet-6\"] },{ \"interface\": \"net1\", \"ipv4\": [\"subnet-7\"] } ] v1.multus-cni.io/default-network: kube-system/macvlan-ens192 k8s.v1.cni.cncf.io/networks: kube-system/macvlan-ens224 labels: app: test-app-2 spec: containers: name: test-app-2 image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP EOF ``` During application creation, Spiderpool randomly selects IPs from the specified two Underlay subnets to create fixed IP pools. These pools are then associated with the two network interfaces of the application's Pods. Each network interface's fixed pool automatically inherits the gateway, routing, and other properties of its respective subnet. ```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT auto4-test-app-2-eth0-44037 4 10.6.0.0/16 2 3 false auto4-test-app-2-net1-44037 4 10.7.0.0/16 2 3 false ~# kubectl get po -l app=test-app-2 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-2-f5d6b8d6c-8hxvw 1/1 Running 0 6m22s 10.6.168.101 controller-node-1 <none> <none> test-app-2-f5d6b8d6c-rvx55 1/1 Running 0 6m22s 10.6.168.105 worker-node-1 <none> <none> ~# kubectl get spiderippool auto4-test-app-2-eth0-44037 -ojsonpath={.spec} | jq { \"default\": false, \"disable\": false, \"gateway\": \"10.6.0.1\", \"ipVersion\": 4, \"ips\": [ \"10.6.168.101\", \"10.6.168.105-10.6.168.106\" ], \"podAffinity\": { \"matchLabels\": { \"ipam.spidernet.io/app-api-group\": \"apps\", \"ipam.spidernet.io/app-api-version\": \"v1\", \"ipam.spidernet.io/app-kind\": \"Deployment\", \"ipam.spidernet.io/app-name\": \"test-app-2\", \"ipam.spidernet.io/app-namespace\": \"default\" } }, \"routes\": [ { \"dst\": \"10.7.0.0/16\", \"gw\": \"10.6.0.1\" } ], \"subnet\": \"10.6.0.0/16\", \"vlan\": 0 } ~# kubectl get spiderippool auto4-test-app-2-net1-44037 -ojsonpath={.spec} | jq { \"default\": false, \"disable\": false, \"gateway\": \"10.7.0.1\", \"ipVersion\": 4, \"ips\": [ \"10.7.168.101-10.7.168.103\" ], \"podAffinity\": { \"matchLabels\": { \"ipam.spidernet.io/app-api-group\": \"apps\", \"ipam.spidernet.io/app-api-version\": \"v1\", \"ipam.spidernet.io/app-kind\": \"Deployment\", \"ipam.spidernet.io/app-name\": \"test-app-2\", \"ipam.spidernet.io/app-namespace\": \"default\" } }, \"routes\": [ { \"dst\": \"10.6.0.0/16\", \"gw\": \"10.7.0.1\" } ], \"subnet\": \"10.7.0.0/16\", \"vlan\": 0 } ``` SpiderSubnet also supports dynamic IP scaling for multiple network interfaces and automatic reclamation of IP pools. Below is an example of creating an IPPool instance that inherits the properties of `subnet-6` with a subnet ID of `10.6.0.0/16`. The available IP range of this IPPool instance must be a subset of `subnet-6.spec.ips`. ```bash ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: ippool-test spec: ips: 10.6.168.108-10.6.168.110 subnet: 10.6.0.0/16 EOF ``` Using the provided YAML, you can manually create an IPPool instance that will inherit the attributes of the subnet having the specified subnet ID, such as gateway, routing, and other properties. 
```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT ippool-test 4 10.6.0.0/16 0 3 false ~# kubectl get spidersubnet NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT subnet-6 4 10.6.0.0/16 3 10 subnet-7 4 10.7.0.0/16 0 10 ~# kubectl get spiderippool ippool-test -o jsonpath='{.spec}' | jq { \"default\": false, \"disable\": false, \"gateway\": \"10.6.0.1\", \"ipVersion\": 4, \"ips\": [ \"10.6.168.108-10.6.168.110\" ], \"routes\": [ { \"dst\": \"10.7.0.0/16\", \"gw\": \"10.6.0.1\" } ], \"subnet\": \"10.6.0.0/16\", \"vlan\": 0 } ``` SpiderSubnet helps to separate the roles of infrastructure administrators and their application counterparts by enabling automatic creation and dynamic scaling of fixed IP pools for applications that require static IPs." } ]
{ "category": "Runtime", "file_name": "metadata_engines_benchmark.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Metadata Engines Benchmark sidebar_position: 6 slug: /metadataenginesbenchmark description: This article describes how to test and evaluate the performance of various metadata engines for JuiceFS using a real-world environment. Conclusion first: For pure metadata operations, MySQL costs about 2~4x times of Redis; TiKV has similar performance to MySQL, and in most cases it costs a bit less; etcd costs about 1.5x times of TiKV. For small I/O (~100 KiB) workloads, total time costs with MySQL are about 1~3x of those with Redis; TiKV and etcd performs similarly to MySQL. For large I/O (~4 MiB) workloads, total time costs with different metadata engines show no significant difference (object storage becomes the bottleneck). :::note By changing `appendfsync` from `always` to `everysec`, Redis gains performance boost but loses a bit of data reliability. More information can be found . Both Redis and MySQL store only one replica locally, while TiKV and etcd stores three replicas on three different hosts using Raft protocol. ::: Details are provided below. Please note all the tests are run with the same object storage (to save data), clients and metadata hosts, only metadata engines differ. 1.1.0-beta1+2023-06-08.5ef17ba0 Amazon S3 Amazon c5.xlarge: 4 vCPUs, 8 GiB Memory, Up to 10 Gigabit Network Ubuntu 20.04.1 LTS Amazon c5d.xlarge: 4 vCPUs, 8 GiB Memory, Up to 10 Gigabit Network, 100 GB SSD (local storage for metadata engines) Ubuntu 20.04.1 LTS SSD is formatted as ext4 and mounted on `/data` Version: Configuration: `appendonly`: `yes` `appendfsync`: `always` or `everysec` `dir`: `/data/redis` Version: 8.0.25 `/var/lib/mysql` is bind mounted on `/data/mysql` Version: 15.3 The data directory was changed to `/data/pgdata` Version: 6.5.3 Configuration: `deploy_dir`: `/data/tikv-deploy` `data_dir`: `/data/tikv-data` Version: 3.3.25 Configuration: `data-dir`: `/data/etcd` Version: 6.3.23 Configuration `data-dir``/data/fdb` All the following tests are run for each metadata engine. Simple benchmarks within the source code: JuiceFS provides a basic benchmark command: ```bash ./juicefs bench /mnt/jfs -p 4 ``` Version: mdtest-3.3.0 Run parallel tests on 3 client nodes: ```bash $ cat myhost client1 slots=4 client2 slots=4 client3 slots=4 ``` Test commands: ```bash mpirun --use-hwthread-cpus --allow-run-as-root -np 12 --hostfile myhost --map-by slot /root/mdtest -b 3 -z 1 -I 100 -u -d /mnt/jfs mpirun --use-hwthread-cpus --allow-run-as-root -np 12 --hostfile myhost --map-by slot /root/mdtest -F -w 102400 -I 1000 -z 0 -u -d /mnt/jfs ``` Version: fio-3.28 ```bash fio --name=big-write --directory=/mnt/jfs --rw=write --refillbuffers --bs=4M --size=4G --numjobs=4 --endfsync=1 --group_reporting ``` Shows time cost (us/op). Smaller is better. Number in parentheses is the multiple of Redis-Always cost (`always` and `everysec` are candidates for Redis configuration `appendfsync`). Because of enabling metadata cache, the results of `read` are all less than 1us, which are not comparable for now. 
| | Redis-Always | Redis-Everysec | MySQL | PostgreSQL | TiKV | etcd | FoundationDB | |--|--|-|--|--||--|--| | mkdir | 558 | 468 (0.8) | 2042 (3.7) | 1076 (1.9) | 1237 (2.2) | 1916 (3.4) | 1842 (3.3) | | mvdir | 693 | 621 (0.9) | 2693 (3.9) | 1459 (2.1) | 1414 (2.0) | 2486 (3.6) | 1895 (2.7) | | rmdir | 717 | 648 (0.9) | 3050 (4.3) | 1697 (2.4) | 1641 (2.3) | 2980 (4.2) | 2088" }, { "data": "| | readdir_10 | 280 | 288 (1.0) | 1350 (4.8) | 1098 (3.9) | 995 (3.6) | 1757 (6.3) | 1744 (6.2) | | readdir_1k | 1490 | 1547 (1.0) | 18779 (12.6) | 18414 (12.4) | 5834 (3.9) | 15809 (10.6) | 15276 (10.3) | | mknod | 562 | 464 (0.8) | 1547 (2.8) | 849 (1.5) | 1211 (2.2) | 1838 (3.3) | 1763 (3.1) | | create | 570 | 455 (0.8) | 1570 (2.8) | 844 (1.5) | 1209 (2.1) | 1849 (3.2) | 1761 (3.1) | | rename | 728 | 627 (0.9) | 2735 (3.8) | 1478 (2.0) | 1419 (1.9) | 2445 (3.4) | 1911 (2.6) | | unlink | 658 | 567 (0.9) | 2365 (3.6) | 1280 (1.9) | 1443 (2.2) | 2461 (3.7) | 1940 (2.9) | | lookup | 173 | 178 (1.0) | 557 (3.2) | 375 (2.2) | 608 (3.5) | 1054 (6.1) | 1029 (5.9) | | getattr | 87 | 86 (1.0) | 530 (6.1) | 350 (4.0) | 306 (3.5) | 536 (6.2) | 504 (5.8) | | setattr | 471 | 345 (0.7) | 1029 (2.2) | 571 (1.2) | 1001 (2.1) | 1279 (2.7) | 1596 (3.4) | | access | 87 | 89 (1.0) | 518 (6.0) | 356 (4.1) | 307 (3.5) | 534 (6.1) | 526 (6.0) | | setxattr | 393 | 262 (0.7) | 992 (2.5) | 534 (1.4) | 800 (2.0) | 717 (1.8) | 1300 (3.3) | | getxattr | 84 | 87 (1.0) | 494 (5.9) | 333 (4.0) | 303 (3.6) | 529 (6.3) | 511 (6.1) | | removexattr | 215 | 96 (0.4) | 697 (3.2) | 385 (1.8) | 1007 (4.7) | 1336 (6.2) | 1597 (7.4) | | listxattr_1 | 85 | 87 (1.0) | 516 (6.1) | 342 (4.0) | 303 (3.6) | 531 (6.2) | 515 (6.1) | | listxattr_10 | 87 | 91 (1.0) | 561 (6.4) | 383 (4.4) | 322 (3.7) | 565 (6.5) | 529 (6.1) | | link | 680 | 545 (0.8) | 2435 (3.6) | 1375 (2.0) | 1732 (2.5) | 3058 (4.5) | 2402 (3.5) | | symlink | 580 | 448 (0.8) | 1785 (3.1) | 954 (1.6) | 1224 (2.1) | 1897 (3.3) | 1764 (3.0) | | newchunk | 0 | 0 (0.0) | 1 (0.0) | 1 (0.0) | 1 (0.0) | 1 (0.0) | 2 (0.0) | | write | 553 | 369 (0.7) | 2352 (4.3) | 1183 (2.1) | 1573 (2.8) | 1788 (3.2) | 1747 (3.2) | | read_1 | 0 | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | | read_10 | 0 | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | | | Redis-Always | Redis-Everysec | MySQL | PostgreSQL | TiKV | etcd | FoundationDB | ||||--|--|--|--|--| | Write big file | 730.84 MiB/s | 731.93 MiB/s | 729.00 MiB/s | 744.47 MiB/s |" }, { "data": "MiB/s | 746.07 MiB/s | 744.70 MiB/s | | Read big file | 923.98 MiB/s | 892.99 MiB/s | 905.93 MiB/s | 895.88 MiB/s | 918.19 MiB/s | 939.63 MiB/s | 948.81 MiB/s | | Write small file | 95.20 files/s | 109.10 files/s | 82.30 files/s | 86.40 files/s | 101.20 files/s | 95.80 files/s | 94.60 files/s | | Read small file | 1242.80 files/s | 937.30 files/s | 752.40 files/s | 1857.90 files/s | 681.50 files/s | 1229.10 files/s | 1301.40 files/s | | Stat file | 12313.80 files/s | 11989.50 files/s | 3583.10 files/s | 7845.80 files/s | 4211.20 files/s | 2836.60 files/s | 3400.00 files/s | | FUSE operation | 0.41 ms/op | 0.40 ms/op | 0.46 ms/op | 0.44 ms/op | 0.41 ms/op | 0.41 ms/op | 0.44 ms/op | | Update meta | 2.45 ms/op | 1.76 ms/op | 2.46 ms/op | 1.78 ms/op | 3.76 ms/op | 3.40 ms/op | 2.87 ms/op | Shows rate (ops/sec). Bigger is better. 
| | Redis-Always | Redis-Everysec | MySQL | PostgreSQL | TiKV | etcd | FoundationDB | |--|--|-|-||--|-|--| | EMPTY FILES | | | | | | | | | Directory creation | 4901.342 | 9990.029 | 1252.421 | 4091.934 | 4041.304 | 1910.768 | 3065.578 | | Directory stat | 289992.466 | 379692.576 | 9359.278 | 69384.097 | 49465.223 | 6500.178 | 17746.670 | | Directory removal | 5131.614 | 10356.293 | 902.077 | 1254.890 | 3210.518 | 1450.842 | 2460.604 | | File creation | 5472.628 | 9984.824 | 1326.613 | 4726.582 | 4053.610 | 1801.956 | 2908.526 | | File stat | 288951.216 | 253218.558 | 9135.571 | 233148.252 | 50432.658 | 6276.787 | 14939.411 | | File read | 64560.148 | 60861.397 | 8445.953 | 20013.027 | 18411.280 | 9094.627 | 11087.931 | | File removal | 6084.791 | 12221.083 | 1073.063 | 3961.855 | 3742.269 | 1648.734 | 2214.311 | | Tree creation | 80.121 | 83.546 | 34.420 | 61.937 | 77.875 | 56.299 | 74.982 | | Tree removal | 218.535 | 95.599 | 42.330 | 44.696 | 114.414 | 76.002 | 64.036 | | SMALL FILES | | | | | | | | File creation | 295.067 | 312.182 | 275.588 | 289.627 | 307.121 | 275.578 | 263.487 | | File stat | 54069.827 | 52800.108 | 8760.709 | 19841.728 | 14076.214 | 8214.318 | 10009.670 | | File read | 62341.568 | 57998.398 | 4639.571 | 19244.678 | 23376.733 | 5477.754 | 6533.787 | | File removal | 5615.018 | 11573.415 | 1061.600 | 3907.740 | 3411.663 | 1024.421 | 1750.613 | | Tree creation | 57.860 | 57.080 | 23.723 | 52.621 | 44.590 | 19.998 | 11.243 | | Tree removal | 96.756 | 65.279 | 23.227 | 19.511 | 27.616 | 17.868 | 10.571 | | | Redis-Always | Redis-Everysec | MySQL | PostgreSQL | TiKV | etcd | FoundationDB | |--|--|-|--||--|--|--| | Write bandwidth | 729 MiB/s | 737 MiB/s | 736 MiB/s | 768 MiB/s | 731 MiB/s | 738 MiB/s | 745 MiB/s |" } ]
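For context on how a volume like the one benchmarked above is typically prepared, the sketch below formats a JuiceFS volume against one of the metadata engines and mounts it before running `juicefs bench`, `mdtest`, or `fio`. This is only an illustration; the bucket URL, Redis address, and volume name are assumptions rather than the exact values used for the numbers above.

```bash
# Format a volume: S3 for data, Redis (appendfsync=always) for metadata.
# Bucket URL, Redis address and volume name below are placeholders.
juicefs format \
    --storage s3 \
    --bucket https://my-bench-bucket.s3.amazonaws.com \
    redis://192.168.1.6:6379/1 \
    bench-vol

# Mount on each client node, then run the benchmarks against /mnt/jfs.
juicefs mount -d redis://192.168.1.6:6379/1 /mnt/jfs
./juicefs bench /mnt/jfs -p 4
```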
{ "category": "Runtime", "file_name": "gke-installation.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "We support running Antrea inside of GKE clusters on Ubuntu Node. Antrea would operate in NetworkPolicy only mode, in which no encapsulation is required for any kind of traffic (Intra Node, Inter Node, etc) and NetworkPolicies are enforced using OVS. Antrea is supported on both VPC-native Enable/Disable modes. Install the Google Cloud SDK (gcloud). Refer to ```bash curl https://sdk.cloud.google.com | bash ``` Make sure you are authenticated to use the Google Cloud API ```bash export ADMIN_USER=user@email.com gcloud auth login ``` Create a project or use an existing one ```bash export GKE_PROJECT=gke-clusters gcloud projects create $GKE_PROJECT ``` You can use any method to create a GKE cluster (gcloud SDK, gcloud Console, etc). The example given here is using the Google Cloud SDK. Note: Antrea is supported on Ubuntu Nodes only for GKE cluster. When creating the cluster, you must use the default network provider and must not enable \"Dataplane V2\". Create a GKE cluster ```bash export GKE_ZONE=\"us-west1\" export GKE_HOST=\"UBUNTU\" gcloud container --project $GKEPROJECT clusters create cluster1 --image-type $GKEHOST \\ --zone $GKE_ZONE --enable-ip-alias ``` Access your cluster ```bash kubectl get nodes NAME STATUS ROLES AGE VERSION gke-cluster1-default-pool-93d7da1c-61z4 Ready <none> 3m11s 1.25.7-gke.1000 gke-cluster1-default-pool-93d7da1c-rkbm Ready <none> 3m9s 1.25.7-gke.1000 ``` Create a cluster-admin ClusterRoleBinding ```bash kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user user@email.com ``` Note: To create clusterRoleBinding, the user must have `container.clusterRoleBindings.create` permission. Use this command to enable it, if the previous command fails due to permission error. Only cluster Admin can assign this permission. ```bash gcloud projects add-iam-policy-binding $GKE_PROJECT --member user:user@email.com --role roles/container.admin ``` Prepare the Cluster Nodes Deploy ``antrea-node-init`` DaemonSet to enable ``kubelet`` to operate in CNI mode. ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-gke-node-init.yml ``` Deploy Antrea To deploy a released version of Antrea, pick a deployment manifest from the . Note that GKE support was added in release 0.5.0, which means you cannot pick a release older than 0.5.0. For any given release `<TAG>` (e.g. `v0.5.0`), you can deploy Antrea as follows: ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-gke.yml ``` To deploy the latest version of Antrea (built from the main branch), use the checked-in : ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-gke.yml ``` The command will deploy a single replica of Antrea controller to the GKE cluster and deploy Antrea agent to every Node. 
After a successful deployment you should be able to see these Pods running in your cluster: ```bash $ kubectl get pods --namespace kube-system -l app=antrea -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES antrea-agent-24vwr 2/2 Running 0 46s 10.138.15.209 gke-cluster1-default-pool-93d7da1c-rkbm <none> <none> antrea-agent-7dlcp 2/2 Running 0 46s 10.138.15.206 gke-cluster1-default-pool-9ba12cea-wjzn <none> <none> antrea-controller-5f9985c59-5crt6 1/1 Running 0 46s 10.138.15.209 gke-cluster1-default-pool-93d7da1c-rkbm <none> <none> ``` Restart remaining Pods Once Antrea is up and running, restart all Pods in all Namespaces (kube-system, gmp-system, etc) so they can be managed by Antrea. ```bash $ for ns in $(kubectl get ns -o=jsonpath=''{.items[*].metadata.name}'' --no-headers=true); do \\ pods=$(kubectl get pods -n $ns -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{ print $1 }'); \\ [ -z \"$pods\" ] || kubectl delete pods -n $ns $pods; done pod \"alertmanager-0\" deleted pod \"collector-4sfvd\" deleted pod \"collector-gtlxf\" deleted pod \"gmp-operator-67c4678f5c-ffktp\" deleted pod \"rule-evaluator-85b8bb96dc-trnqj\" deleted pod \"event-exporter-gke-7bf6c99dcb-4r62c\" deleted pod \"konnectivity-agent-autoscaler-6dfdb49cf7-hfv9g\" deleted pod \"konnectivity-agent-cc655669b-2cjc9\" deleted pod \"konnectivity-agent-cc655669b-d79vf\" deleted pod \"kube-dns-5bfd847c64-ksllw\" deleted pod \"kube-dns-5bfd847c64-qv9tq\" deleted pod \"kube-dns-autoscaler-84b8db4dc7-2pb2b\" deleted pod \"l7-default-backend-64679d9c86-q69lm\" deleted pod \"metrics-server-v0.5.2-6bf74b5d5f-22gqq\" deleted ```" } ]
{ "category": "Runtime", "file_name": "license.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "MIT License Copyright (c) 2019 Josh Bleecher Snyder Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." } ]
{ "category": "Runtime", "file_name": "storage-class-device-set.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Target version: Rook 1.1 The primary motivation for this feature is to take advantage of the mobility of storage in cloud-based environments by defining storage based on sets of devices consumed as block-mode PVCs. In environments like AWS you can request remote storage that can be dynamically provisioned and attached to any node in a given Availability Zone (AZ). The design can also accommodate non-cloud environments through the use of local PVs. ```go struct StorageClassDeviceSet { Name string // A unique identifier for the set Count int // Number of devices in this set Resources v1.ResourceRequirements // Requests/limits for the devices Placement rook.Placement // Placement constraints for the devices Config map[string]string // Provider-specific device configuration volumeClaimTemplates []v1.PersistentVolumeClaim // List of PVC templates for the underlying storage devices } ``` A provider will be able to use the `StorageClassDeviceSet` struct to describe the properties of a particular set of `StorageClassDevices`. In this design, the notion of a \"`StorageClassDevice`\" is an abstract concept, separate from underlying storage devices. There are three main aspects to this abstraction: A `StorageClassDevice` is both storage and the software required to make the storage available and manage it in the cluster. As such, the struct takes into account the resources required to run the associated software, if any. A single `StorageClassDevice` could be comprised of multiple underlying storage devices, specified by having more than one item in the `volumeClaimTemplates` field. Since any storage devices that are part of a `StorageClassDevice` will be represented by block-mode PVCs, they will need to be associated with a Pod so that they can be attached to cluster nodes. A `StorageClassDeviceSet` will have the following fields associated with it: name: A name for the set. [required]* count: The number of devices in the set. [required]* resources*: The CPU and RAM requests/limits for the devices. Default is no resource requests. placement*: The placement criteria for the devices. Default is no placement criteria. config*: Granular device configuration. This is a generic `map[string]string` to allow for provider-specific configuration. volumeClaimTemplates*: A list of PVC templates to use for provisioning the underlying storage devices. An entry in `volumeClaimTemplates` must specify the following fields: resources.requests.storage*: The desired capacity for the underlying storage devices. storageClassName*: The StorageClass to provision PVCs from. Default would be to use the cluster-default StorageClass. The CephCluster CRD could be extended to include a new field: `spec.storage.StorageClassDeviceSets`, which would be a list of one or more `StorageClassDeviceSets`. If elements exist in this list, the CephCluster controller would then create enough PVCs to match each `StorageClassDeviceSet`'s `Count` field, attach them to individual OsdPrepare Jobs, then attach them to OSD Pods once the Jobs are completed. For the initial implementation, only one entry in `volumeClaimTemplates` would be supported, if only to tighten the scope for an MVP. The PVCs would be provisioned against a configured or default" }, { "data": "It is recommended that the admin setup a StorageClass with `volumeBindingMode: WaitForFirstConsumer` set. 
If the admin wishes to control device placement, it will be up to them to make sure the desired nodes are labeled properly to ensure the Kubernetes scheduler will distribute the OSD Pods based on Placement criteria. In keeping with current Rook-Ceph patterns, the resources and placement for the OSDs specified in the `StorageClassDeviceSet` would override any cluster-wide configurations for OSDs. Additionally, other conflicting configurations parameters in the CephCluster CRD,such as `useAllDevices`, will be ignored by device sets. While the current strategy of deploying OSD Pods as individual Kubernetes Deployments, some changes to the deployment logic would need to change. The workflow would look something like this: Get all matching OSD PVCs Create any missing OSD PVCs Get all matching OSD Deployments Check that all OSD Deployments are using valid OSD PVCs If not, probably remove the OSD Deployment? Remove any PVCs used by OSD Deployments from the list of PVCs to be worked on Run an OsdPrepare Job on all unused and uninitialized PVCs This would be one Job per PVC Create an OSD Deployment for each unused but initialized PVC Deploy OSD with `ceph-volume` if available. If PV is not backed by LV, create a LV in this PV. If PV is backed by LV, use this PV as is. This design can also be applied to non-cloud environments. To take advantage of this, the admin should deploy the to create local PVs for the desired local storage devices and then follow the recommended directions for [local PVs](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/). Creating real-world OSDs is complex: Some configurations deploy multiple OSDs on a single drive Some configurations are using more than one drive for a single OSD. Deployments often look similar on multiple hosts. There are some advanced configurations possible, like encrypted drives. All of these setups are valid real-world configurations that need to be supported. The Ceph project defines a data structure that allows defining groups of drives to be provisioned in a specific way by ceph-volume: Drive Groups. Drive Groups were originally designed to be ephemeral, but it turns out that orchestrators like DeepSea store them permanently in order to have a source of truth when (re-)provisioning OSDs. Also, Drive Groups were originally designed to be host specific. But the concept of hosts is not really required for the Drive Group data structure make sense, as they only select a subset of a set of available drives. DeepSea has a documentation of [some example drive groups](https://github.com/SUSE/DeepSea/wiki/Drive-Groups#example-drive-group-files). A complete specification is documented in the" }, { "data": "documentation](http://docs.ceph.com/docs/master/mgr/orchestrator_modules/#orchestrator.DriveGroupSpec). A DeviceSet to provision 8 OSDs on 8 drives could look like: ```yaml name: \"mydeviceset\" count: 8 ``` The Drive Group would look like so: ```yaml host_pattern: \"hostname1\" data_devices: count: 8 ``` A Drive Group with 8 OSDs using a shared fast drive could look similar to this: ```yaml host_pattern: \"hostname1\" data_devices: count: 8 model: MC-55-44-XZ db_devices: model: NVME db_slots: 8 ``` Drive Groups don't yet provide orchestrator specific extensions, like resource requirements or placement specs, but that could be added trivially. Also a name could be added to Drive Groups. Given the complexity of this design, here are a few examples to showcase possible configurations for OSD `StorageClassDeviceSets`. 
```yaml type: CephCluster name: cluster1 ... spec: ... storage: ... storageClassDeviceSets: name: cluster1-set1 count: 3 resources: requests: cpu: 2 memory: 4Gi placement: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: labelSelector: matchExpressions: key: \"rook.io/cluster\" operator: In values: cluster1 topologyKey: \"failure-domain.beta.kubernetes.io/zone\" volumeClaimTemplates: spec: resources: requests: storage: 5Ti storageClassName: gp2-ebs ``` In this example, `podAntiAffinity` is used to spread the OSD Pods across as many AZs as possible. In addition, all Pods would have to be given the label `rook.io/cluster=cluster1` to denote they belong to this cluster, such that the scheduler will know to try and not schedule multiple Pods with that label on the same nodes if possible. The CPU and memory requests would allow the scheduler to know if a given node can support running an OSD process. It should be noted, in the case where the only nodes that can run a new OSD Pod are nodes with OSD Pods already on them, one of those nodes would be selected. In addition, EBS volumes may not cross between AZs once created, so a given Pod is guaranteed to always be limited to the AZ it was created in. ```yaml type: CephCluster name: cluster1 ... spec: ... resources: osd: requests: cpu: 2 memory: 4Gi storage: ... storageClassDeviceSets: name: cluster1-set1 count: 3 placement: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: \"failure-domain.beta.kubernetes.io/zone\" operator: In values: us-west-1a podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: labelSelector: matchExpressions: key: \"rook.io/cluster\" operator: In values: cluster1 volumeClaimTemplates: spec: resources: requests: storage: 5Ti storageClassName: gp2-ebs name: cluster1-set2 count: 3 placement: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: \"failure-domain.beta.kubernetes.io/zone\" operator: In values: us-west-1b podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: labelSelector: matchExpressions: key: \"rook.io/cluster\" operator: In values: cluster1 volumeClaimTemplates: spec: resources: requests: storage: 5Ti storageClassName: gp2-ebs name: cluster1-set3 count: 3 placement: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: \"failure-domain.beta.kubernetes.io/zone\" operator: In values: us-west-1c podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: labelSelector: matchExpressions: key: \"rook.io/cluster\" operator: In values: cluster1 volumeClaimTemplates: spec: resources: requests: storage: 5Ti storageClassName: gp2-ebs ``` In this example, we've added a `nodeAffinity` to the `placement` that restricts all OSD Pods to a specific AZ. This case is only really useful if you specify multiple `StorageClassDeviceSets` for different AZs, so that has been done here. We also specify a top-level `resources` definition, since we want that to be the same for all OSDs in the device sets. ```yaml type: CephCluster name: cluster1 ... spec: ... storage: ... 
storageClassDeviceSets: name: cluster1-set1 count: 3 resources: requests: cpu: 2 memory: 4Gi placement: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: labelSelector: matchExpressions: key: \"rook.io/cluster\" operator: In values: cluster1 topologyKey:" }, { "data": "volumeClaimTemplates: spec: resources: requests: storage: 5Ti storageClassName: gp2-ebs name: cluster1-set2 count: 3 resources: requests: cpu: 2 memory: 8Gi placement: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: labelSelector: matchExpressions: key: \"rook.io/cluster\" operator: In values: cluster1 topologyKey: \"failure-domain.beta.kubernetes.io/zone\" volumeClaimTemplates: spec: resources: requests: storage: 10Ti storageClassName: gp2-ebs ``` In this example, we have two `StorageClassDeviceSets` with different capacities for the devices in each set. The devices with larger capacities would require a greater amount of memory to operate the OSD Pods, so that is reflected in the `resources` field. ```yaml type: CephCluster name: cluster1 ... spec: ... storage: ... storageClassDeviceSets: name: cluster1-set1 count: 3 volumeClaimTemplates: spec: resources: requests: storage: 1 storageClassName: cluster1-local-storage ``` In this example, we expect there to be nodes that are configured with one local storage device each, but they would not be specified in the `nodes` list. Prior to this, the admin would have had to deploy the local-storage-provisioner, created local PVs for each of the devices, and created a StorageClass to allow binding of the PVs to PVCs. At this point, the same workflow would be the same as the cloud use case, where you simply specify a count of devices, and a template with a StorageClass. Two notes here: The count of devices would need to match the number of existing devices to consume them all. The capacity for each device is irrelevant, since we will simply consume the entire storage device and get that capacity regardless of what is set for the PVC. ```yaml type: CephCluster name: cluster1 ... spec: ... storage: ... storageClassDeviceSets: name: cluster1-set1 count: 3 config: metadataDevice: \"/dev/rook/device1\" volumeClaimTemplates: metadata: name: osd-data spec: resources: requests: storage: 1 storageClassName: cluster1-hdd-storage metadata: name: osd-metadata spec: resources: requests: storage: 1 storageClassName: cluster1-nvme-storage ``` In this example, we are using NVMe devices to store OSD metadata while having HDDs store the actual data. We do this by creating two StorageClasses, one for the NVMe devices and one for the HDDs. Then, if we assume our implementation will always provide the block devices in a deterministic manner, we specify the location of the NVMe devices (as seen in the container) as the `metadataDevice` in the OSD config. We can guarantee that a given OSD Pod will always select two devices that are on the same node if we configure `volumeBindingMode: WaitForFirstConsumer` in the StorageClasses, as that allows us to offload that logic to the Kubernetes scheduler. Finally, we also provide a `name` field for each device set, which can be used to identify which set a given PVC belongs to. ```yaml type: CephCluster name: cluster1 ... spec: ... storage: ... 
storageClassDeviceSets: name: cluster1-set1 count: 3 config: osdsPerDevice: \"3\" volumeClaimTemplates: spec: resources: requests: storage: 5Ti storageClassName: cluster1-local-storage ``` In this example, we show how we can provide additional OSD configuration in the `StorageClassDeviceSet`. The `config` field is just a `map[string]string` type, so anything can go in this field." } ]
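For the local-storage examples above, the `cluster1-local-storage` StorageClass would typically be the static class used with the local-storage provisioner. A sketch under that assumption:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cluster1-local-storage
provisioner: kubernetes.io/no-provisioner   # local PVs are created statically
volumeBindingMode: WaitForFirstConsumer
```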
{ "category": "Runtime", "file_name": "p8s_metrics.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: JuiceFS Metrics sidebar_position: 4 If you haven't yet set up monitoring for JuiceFS, read to learn how. | Name | Description | | - | -- | | `vol_name` | Volume name | | `instance` | Client host name in format" }, { "data": "Refer to for more information | | `mp` | Mount point path, if metrics are reported through , for example, , `mp` will be `sdk-<PID>` | | Name | Description | Unit | |-|-|| | `juicefsusedspace` | Total used space | byte | | `juicefsusedinodes` | Total number of inodes | | | Name | Description | Unit | | - | -- | - | | `juicefs_uptime` | Total running time | second | | `juicefscpuusage` | Accumulated CPU usage | second | | `juicefs_memory` | Used memory | byte | | Name | Description | Unit | | - | -- | - | | `juicefstransactiondurationshistogramseconds` | Transactions latency distributions | second | | `juicefstransactionrestart` | Number of times a transaction restarted | | | Name | Description | Unit | | - | -- | - | | `juicefsfusereadsizebytes` | Size distributions of read request | byte | | `juicefsfusewrittensizebytes` | Size distributions of write request | byte | | `juicefsfuseopsdurationshistogram_seconds` | Operations latency distributions | second | | `juicefsfuseopen_handlers` | Number of open files and directories | | | Name | Description | Unit | | - | -- | - | | `juicefssdkreadsizebytes` | Size distributions of read request | byte | | `juicefssdkwrittensizebytes` | Size distributions of write request | byte | | `juicefssdkopsdurationshistogram_seconds` | Operations latency distributions | second | | Name | Description | Unit | |:-||--| | `juicefsblockcacheblocks` | Number of cached blocks | | | `juicefsblockcachebytes` | Size of cached blocks | byte | | `juicefsblockcachehits` | Count of cached block hits | | | `juicefsblockcachemiss` | Count of cached block miss | | | `juicefsblockcachewrites` | Count of cached block writes | | | `juicefsblockcachedrops` | Count of cached block drops | | | `juicefsblockcacheevicts` | Count of cached block evicts | | | `juicefsblockcachehit_bytes` | Size of cached block hits | byte | | `juicefsblockcachemiss_bytes` | Size of cached block miss | byte | | `juicefsblockcachewrite_bytes` | Size of cached block writes | byte | | `juicefsblockcachereadhistseconds` | Latency distributions of read cached block | second | | `juicefsblockcachewritehistseconds` | Latency distributions of write cached block | second | | `juicefsstagingblocks` | Number of blocks in the staging path | | | `juicefsstagingblock_bytes` | Total bytes of blocks in the staging path | byte | | `juicefsstagingblockdelayseconds` | Total seconds of delay for staging blocks | second | | Name | Description | | - | -- | | `method` | Method to request object storage (e.g. 
GET, PUT, HEAD, DELETE) | | Name | Description | Unit | | - | -- | - | | `juicefsobjectrequestdurationshistogram_seconds` | Object storage request latency distributions | second | | `juicefsobjectrequest_errors` | Count of failed requests to object storage | | | `juicefsobjectrequestdatabytes` | Size of requests to object storage | byte | | Name | Description | Unit | |-| -- | - | | `juicefscompactsizehistogrambytes` | Size distributions of compacted data | byte | | `juicefsusedreadbuffersize_bytes` | size of currently used buffer for read | | | Name | Description | Unit | |-|-|-| | `juicefssyncscanned` | Number of all objects scanned from the source | | | `juicefssynchandled` | Number of objects from the source that have been processed | | | `juicefssyncpending` | Number of objects waiting to be synchronized | | | `juicefssynccopied` | Number of objects that have been synchronized | | | `juicefssynccopied_bytes` | Total size of data that has been synchronized | byte | | `juicefssyncskipped` | Number of objects that skipped during synchronization | | | `juicefssyncfailed` | Number of objects" } ]
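To consume the JuiceFS client metrics listed above, a Prometheus scrape job along the following lines can be used. The target assumes the client's default metrics endpoint on port 9567; adjust the address to your deployment.

```yaml
# prometheus.yml fragment (a sketch, not a complete configuration)
scrape_configs:
  - job_name: "juicefs"
    static_configs:
      - targets: ["192.168.1.10:9567"]   # host running the JuiceFS mount
```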
{ "category": "Runtime", "file_name": "cilium-dbg_kvstore_set.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Set a key and value ``` cilium-dbg kvstore set [options] <key> [flags] ``` ``` cilium kvstore set foo=bar ``` ``` -h, --help help for set --key string Key --value string Value ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API --kvstore string Key-Value Store type --kvstore-opt map Key-Value Store options ``` - Direct access to the kvstore" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.3.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.3.2 `velero/velero:v1.3.2` https://velero.io/docs/v1.3.2/ https://velero.io/docs/v1.3.2/upgrade-to-1.3/ Allow `plugins/` as a valid top-level directory within backup storage locations. This directory is a place for plugin authors to store arbitrary data as needed. It is recommended to create an additional subdirectory under `plugins/` specifically for your plugin, e.g. `plugins/my-plugin-data/`. (#2350, @skriss) bug fix: don't panic in `velero restic repo get` when last maintenance time is `nil` (#2315, @skriss) https://github.com/vmware-tanzu/velero/releases/tag/v1.3.1 `velero/velero:v1.3.1` https://velero.io/docs/v1.3.1/ https://velero.io/docs/v1.3.1/upgrade-to-1.3/ Fixed a bug that caused failures when backing up CustomResourceDefinitions with whole numbers in numeric fields. Fix CRD backup failures when fields contained a whole number. (#2322, @nrb) https://github.com/vmware-tanzu/velero/releases/tag/v1.3.0 `velero/velero:v1.3.0` https://velero.io/docs/v1.3.0/ https://velero.io/docs/v1.3.0/upgrade-to-1.3/ This release includes a number of related bug fixes and improvements to how Velero backs up and restores custom resource definitions (CRDs) and instances of those CRDs. We found and fixed three issues around restoring CRDs that were originally created via the `v1beta1` CRD API. The first issue affected CRDs that had the `PreserveUnknownFields` field set to `true`. These CRDs could not be restored into 1.16+ Kubernetes clusters, because the `v1` CRD API does not allow this field to be set to `true`. We added code to the restore process to check for this scenario, to set the `PreserveUnknownFields` field to `false`, and to instead set `x-kubernetes-preserve-unknown-fields` to `true` in the OpenAPIv3 structural schema, per Kubernetes guidance. For more information on this, see the . The second issue affected CRDs without structural schemas. These CRDs need to be backed up/restored through the `v1beta1` API, since all CRDs created through the `v1` API must have structural schemas. We added code to detect these CRDs and always back them up/restore them through the `v1beta1` API. Finally, related to the previous issue, we found that our restore code was unable to handle backups with multiple API versions for a given resource type, and weve remediated this as well. We also improved the CRD restore process to enable users to properly restore CRDs and instances of those CRDs in a single restore operation. Previously, users found that they needed to run two separate restores: one to restore the CRD(s), and another to restore instances of the CRD(s). This was due to two deficiencies in the Velero code. First, Velero did not wait for a CRD to be fully accepted by the Kubernetes API server and ready for serving before moving on; and second, Velero did not refresh its cached list of available APIs in the target cluster after restoring CRDs, so it was not aware that it could restore instances of those CRDs. We fixed both of these issues by (1) adding code to wait for CRDs to be ready after restore before moving on, and (2) refreshing the cached list of APIs after restoring CRDs, so any instances of newly-restored CRDs could subsequently be restored. With all of these fixes and improvements in place, we hope that the CRD backup and restore experience is now seamless across all supported versions of Kubernetes. 
Thanks to community members and , Velero now provides multi-arch container images by using Docker manifest" }, { "data": "We are currently publishing images for `linux/amd64`, `linux/arm64`, `linux/arm`, and `linux/ppc64le` in . Users dont need to change anything other than updating their version tag - the v1.3 image is `velero/velero:v1.3.0`, and Docker will automatically pull the proper architecture for the host. For more information on manifest lists, see . We fixed a large number of bugs and made some smaller usability improvements in this release. Here are a few highlights: Support private registries with custom ports for the restic restore helper image (, ) Use AWS profile from BackupStorageLocation when invoking restic (, ) Allow restores from schedules in other clusters (, ) Fix memory leak & race condition in restore code (, ) Corrected the selfLink for Backup CR in site/docs/main/output-file-format.md (#2292, @RushinthJohn) Back up schema-less CustomResourceDefinitions as v1beta1, even if they are retrieved via the v1 endpoint. (#2264, @nrb) Bug fix: restic backup volume snapshot to the second location failed (#2244, @jenting) Added support of using PV name from volumesnapshotter('SetVolumeID') in case of PV renaming during the restore (#2216, @mynktl) Replaced deprecated helm repo url at all it appearance at docs. (#2209, @markrity) added support for arm and arm64 images (#2227, @shaneutt) when restoring from a schedule, validate by checking for backup(s) labeled with the schedule name rather than existence of the schedule itself, to allow for restoring from deleted schedules and schedules in other clusters (#2218, @cpanato) bug fix: back up server-preferred version of CRDs rather than always the `v1beta1` version (#2230, @skriss) Wait for CustomResourceDefinitions to be ready before restoring CustomResources. Also refresh the resource list from the Kubernetes API server after restoring CRDs in order to properly restore CRs. (#1937, @nrb) When restoring a v1 CRD with PreserveUnknownFields = True, make sure that the preservation behavior is maintained by copying the flag into the Open API V3 schema, but update the flag so as to allow the Kubernetes API server to accept the CRD without error. 
(#2197, @nrb) Enable pruning unknown CRD fields (#2187, @jenting) bump restic to 0.9.6 to fix some issues with non AWS standard regions (#2210, @Sh4d1) bug fix: fix race condition resulting in restores sometimes succeeding despite restic restore failures (#2201, @skriss) Bug fix: Check for nil LastMaintenanceTime in ResticRepository dueForMaintenance (#2200, @sseago) repopulate backuplastsuccessful_timestamp metrics for each schedule after server restart (#2196, @skriss) added support for ppc64le images and manifest lists (#1768, @prajyot) bug fix: only prioritize restoring `replicasets.apps`, not `replicasets.extensions` (#2157, @skriss) bug fix: restore both `replicasets.apps` and* `replicasets.extensions` before `deployments` (#2120, @skriss) bug fix: don't restore cluster-scoped resources when restoring specific namespaces and IncludeClusterResources is nil (#2118, @skriss) Enabling Velero to switch credentials (`AWS_PROFILE`) if multiple s3-compatible backupLocations are present (#2096, @dinesh) bug fix: deep-copy backup's labels when constructing snapshot tags, so the PV name isn't added as a label to the backup (#2075, @skriss) remove the `fsfreeze-pause` image being published from this repo; replace it with `ubuntu:bionic` in the nginx example app (#2068, @skriss) add support for a private registry with a custom port in a restic-helper image (#1999, @cognoz) return better error message to user when cluster config can't be found via `--kubeconfig`, `$KUBECONFIG`, or in-cluster config (#2057, @skriss)" } ]
{ "category": "Runtime", "file_name": "release-process.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "Up-to-date versions of `git`, `go` etc Install GitHub release tool `go get github.com/weaveworks/github-release` Install manifest tool `go get github.com/estesp/manifest-tool` Create a [github token for github-release](https://help.github.com/articles/creating-an-access-token-for-command-line-use/); select the `repo` OAuth scope; set and export `$GITHUB_TOKEN` with this value The release script behaves differently depending on the kind of release you are doing. There are three types: Mainline* - a release (typically from master) with version tag `vX.Y.Z` where Z is zero (e.g. `v2.1.0`) Branch* - a bugfix release (typically from a branch) with version tag `vX.Y.Z` where Z is non-zero (e.g `v2.1.1`) Prerelease* - a release from an arbitrary branch with an arbitrary version tag (e.g. `feature-preview-20150904`) N.B. the release script _infers the release type from the format of the version tag_. Ensure your tag is in the correct format and the desired behaviours for that type of release will be obtained from the script. Checkout the branch from which you wish to release Choose a version tag (see above) henceforth referred to as `$TAG`. Add a changelog entry for the new tag at the top of `CHANGELOG.md`. The first line must be a markdown header of the form `## Release $TAG` for Prerelease builds, `## Release ${TAG#v}` otherwise. Commit the changelog update: git commit -m \"Add release $TAG\" CHANGELOG.md Next you must tag the changelog commit with `$TAG` git tag -a -m \"Release $TAG\" $TAG You are now ready to perform the build. If you have skipped the previous steps (e.g. because you're doing a rebuild), you must ensure that `HEAD` points to the tagged commit. You may then execute bin/release build This has the following effects: `git tag --points-at HEAD` is used to determine `$TAG` (hence the `HEAD` requirement) Your local* repository is cloned into `releases/$TAG` `CHANGELOG.md` is checked to ensure it has an entry for `$TAG` Distributables injected with `$TAG` are built Tests are executed First you must push your branch and version tag upstream, so that an associated GitHub release may be created: git push git@github.com:weaveworks/weave git push git@github.com:weaveworks/weave $TAG N.B. if you're testing the release process, push to your fork instead! You're now ready to draft your release notes: bin/release draft This has the following effects: A is created in GitHub for `$TAG`. This release is in the draft state, so it is only visible to contributors; for Prerelease builds the pre-release attribute will also be set The `weave` script is uploaded as an attachment to the release Navigate to https://github.com/weaveworks/weave/releases, 'Edit' the draft and input the release notes. When you are done make sure you 'Save draft' (and not 'Publish release'!). Once the release notes have passed review, proceed to the publish phase. This step must only be performed for Mainline and Branch releases: git tag -af -m \"Release $TAG\" latest_release $TAG git push -f git@github.com:weaveworks/weave latest_release The `latest_release` tag must point at `$TAG`, not at `HEAD` - the build script will complain otherwise." }, { "data": "if you're testing the release process, push to your fork instead! You can now publish the release and upload the remaining distributables to DockerHub: bin/release publish The effects of this step depend on the inferred release type. 
The following occurs for all types: Docker images are tagged `$TAG` and pushed to DockerHub GitHub release moves from draft to published state Additionally, for Mainline and Branch types: Release named `latest_release` is updated on GitHub Finally, for Mainline releases only: Images tagged `latest` are updated on DockerHub Now, you can validate whether images were published for all platforms: grep '^ML_PLATFORMS=' Makefile for img in $(grep '^PUBLISH=' Makefile); do img=\"weaveworks/$(echo $img | cut -d_ -f2):${TAG#v}\" platforms=$(manifest-tool \\ inspect --raw \"$img\" | \\ jq '.[].Platform | .os + \"/\" +.architecture' | \\ tr '\\n' ' ') echo \"$img: $platforms\" done Weave Net is and should also be released there. Go to https://store.docker.com/ and log in. Go to \"\". Under \"My Products\", select \"owners (weaveworks)\". Weave Net should be listed among our products. Click \"Actions\" > \"Edit Product\". Go to \"Plans & Pricing\" > \"Free Tier\". Under \"Source Repositories & Tags\", click on \"Add Source\", select \"net-plugin\" under \"Repository\" and the version to release under \"Tag\". Under \"Resources\" > \"Installation Instructions\", update the lines containing `export NET_VERSION=<version>`. Click \"Save\" Click \"Submit For Review\". You should see a message like: \"Your product has been submitted for approval. [...] We'll be in touch with next steps soon!\". You should receive an email saying: This email confirms that we received your submission on \\<date\\> of Weave Net to the Docker Store. We're reviewing your submission to ensure that it meets our and complies with our . Don't worry! We'll let you know if there's anything you need to change before we publish your submission. You should hear back from us within the next 14 days. Thanks for submitting your content to the Docker Store! Hope Docker eventually performs the release or contacts us. If nothing happens within 14 days, contact their support team: publisher-support@docker.com. If not on master, merge branch into master and push to GitHub. Close the in GitHub and create the next milestone Update the `#weavenetwork` topic heading on freenode (requires 'chanops' permission) For a mainline release vX.Y.0, create a release branch X.Y from the tag, push to GitHub and set to X.Y via a PR - this will result in X.Y.0 site docs being published to https://www.weave.works Add the new version of `weave-net` to the checkpoint system at https://checkpoint-api.weave.works/admin File a PR to update the version of the daemonset at https://github.com/kubernetes/kops/tree/master/upup/models/cloudup/resources/addons/networking.weave and kops/upup/pkg/fi/cloudup/bootstrapchannelbuilder.go and kops/upup/pkg/fi/cloudup/tests/bootstrapchannelbuilder/weave/manifest.yaml There's a few things that can go wrong. If the build is wonky, e.g., the tests don't pass, you can delete the directory in `./releases/`, fix whatever it is, move the version tag (which should still be only local) and have another go. If the DockerHub pushes fail (which sadly seems to happen a lot), you can just run `./bin/release publish` again. If you need to overwrite a release you can do so by manually deleting the GitHub version release and re-running the process. Please note that the DockerHub `latest` images, GitHub `latest_release` and download links may be in an inconsistent state until the overwrite is completed." } ]
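One extra sanity check that fits the publish phase of the release process above: confirm that `latest_release` and `$TAG` resolve to the same commit before running `bin/release publish`. Plain git is enough; no extra tooling is assumed.

```bash
# Both commands must print the same commit hash;
# if they differ, re-tag latest_release as described above.
git rev-list -n 1 "$TAG"
git rev-list -n 1 latest_release
```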
{ "category": "Runtime", "file_name": "spider-affinity.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "English SpiderIPPool is a representation of a collection of IP addresses. It allows storing different IP addresses from the same subnet in separate IPPool instances, ensuring that there is no overlap between address sets. This design provides flexibility in managing IP resources within the underlay network, especially when faced with limited availability. SpiderIPPool offers the ability to assign different SpiderIPPool instances to various applications and tenants through affinity rules, allowing for both shared subnet usage and micro-isolation. In , we defined lots of properties to use with affinities: `spec.podAffinity` controls whether the pool can be used by the Pod. `spec.namespaceName` and `spec.namespaceAffinity` verify if they match the Namespace of the Pod. If there is no match, the pool cannot be used. (`namespaceName` takes precedence over `namespaceAffinity`). `spec.nodeName` and `spec.nodeAffinity` verify if they match the node where the Pod is located. If there is no match, the pool cannot be used. (`nodeName` takes precedence over `nodeAffinity`). `multusName` determines whether the current network card which using the pool matches the CNI configuration used by the multus net-attach-def resource. If there is no match, the pool cannot be used. These fields not only serve as filters but also have a sorting effect. The more matching fields there are, the higher priority the IP pool has for usage. Firewalls are commonly used in clusters to manage communication between internal and external networks (north-south communication). To enforce secure access control, firewalls inspect and filter communication traffic while restricting outbound communication. In order to align with firewall policies and enable north-south communication within the underlay network, certain Deployments require all Pods to be assigned IP addresses within a specific range. Existing community solutions rely on annotations to handle IP address allocation for such cases. However, this approach has limitations: Manual modification of annotations becomes necessary as the application scales, leading to potential errors. IP management through annotations is far apart from the IPPool CR mechanism, resulting in a lack of visibility into available IP addresses. Conflicting IP addresses can easily be assigned to different applications, causing deployment failures. Spiderpool addresses these challenges by leveraging the flexibility of IPPools, where IP address collections can be adjusted. By combining this with the `podAffinity` setting in the SpiderIPPool CR, Spiderpool enables the binding of specific applications or groups of applications to particular IPPools. This ensures a unified approach to IP management, decouples application scaling from IP address scaling, and provides a fixed IP usage range for each application. SpiderIPPool provides the `podAffinity` field. When an application is created and attempts to allocate an IP address from the SpiderIPPool, it can successfully obtain an IP if the Pods' `selector.matchLabels` match the specified podAffinity. Otherwise, IP allocation from that SpiderIPPool will be denied. Based on the above, using the following Yaml, create the following SpiderIPPool with application affinity, which will provide the IP address for the `app: test-app-3` Pod's eligible `selector.matchLabel`. 
```bash ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-pod-ippool spec: subnet: 10.6.0.0/16 ips: 10.6.168.151-10.6.168.160 podAffinity: matchLabels: app: test-app-3 EOF ``` Creating Applications with Specific matchLabels. In the example YAML provided, a group of Deployment applications is created. The configuration includes: `ipam.spidernet.io/ippool`: specify the IP pool with application affinity. `v1.multus-cni.io/default-network`: create a default network interface for the application. `matchLabels`: set the label for the" }, { "data": "```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app-3 spec: replicas: 1 selector: matchLabels: app: test-app-3 template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"test-pod-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-ens192 labels: app: test-app-3 spec: containers: name: test-app-3 image: nginx imagePullPolicy: IfNotPresent EOF ``` After creating the application, the Pods with `matchLabels` that match the IPPool's application affinity successfully obtain IP addresses from that SpiderIPPool. The assigned IP addresses remain within the IP pool. ```bash NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT test-pod-ippool 4 10.6.0.0/16 1 10 false ~# kubectl get po -l app=test-app-3 -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-3-6994b9d5bb-qpf5p 1/1 Running 0 52s 10.6.168.154 node2 <none> <none> ``` However, when creating another application with different `matchLabels` that do not meet the IPPool's application affinity, Spiderpool will reject IP address allocation. `matchLabels`: set the label of the application to `test-unmatch-labels`, which does not match IPPool affinity. ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-unmatch-labels spec: replicas: 1 selector: matchLabels: app: test-unmatch-labels template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"test-pod-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-ens192 labels: app: test-unmatch-labels spec: containers: name: test-unmatch-labels image: nginx imagePullPolicy: IfNotPresent EOF ``` Getting an IP address assignment fails as expected when the Pod's matchLabels do not match the application affinity for that IPPool. ```bash kubectl get po -l app=test-unmatch-labels -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-unmatch-labels-699755574-9ncp7 0/1 ContainerCreating 0 16s <none> node1 <none> <none> ``` Create an IPPool ```bash kubectl apply -f https://raw.githubusercontent.com/spidernet-io/spiderpool/main/docs/example/ippool-affinity-pod/shared-static-ipv4-ippool.yaml ``` ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: shared-static-ipv4-ippool spec: subnet: 172.18.41.0/24 ips: 172.18.41.44-172.18.41.47 ``` Create two Deployment whose Pods are setting the Pod annotation `ipam.spidernet.io/ippool` to explicitly specify the pool selection rule. It will succeed to get IP address. 
```bash kubectl apply -f https://raw.githubusercontent.com/spidernet-io/spiderpool/main/docs/example/ippool-affinity-pod/shared-static-ippool-deploy.yaml ``` ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: shared-static-ippool-deploy-1 spec: replicas: 2 selector: matchLabels: app: static template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"shared-static-ipv4-ippool\"] } labels: app: static spec: containers: name: shared-static-ippool-deploy-1 image: busybox imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\", \"trap : TERM INT; sleep infinity & wait\"] apiVersion: apps/v1 kind: Deployment metadata: name: shared-static-ippool-deploy-2 spec: replicas: 2 selector: matchLabels: app: static template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"shared-static-ipv4-ippool\"] } labels: app: static spec: containers: name: shared-static-ippool-deploy-2 image: busybox imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\", \"trap : TERM INT; sleep infinity & wait\"] ``` The Pods are running. ```bash kubectl get po -l app=static -o wide NAME READY STATUS RESTARTS AGE IP NODE shared-static-ippool-deploy-1-8588c887cb-gcbjb 1/1 Running 0 62s 172.18.41.45 spider-control-plane shared-static-ippool-deploy-1-8588c887cb-wfdvt 1/1 Running 0 62s 172.18.41.46 spider-control-plane shared-static-ippool-deploy-2-797c8df6cf-6vllv 1/1 Running 0 62s 172.18.41.44 spider-worker shared-static-ippool-deploy-2-797c8df6cf-ftk2d 1/1 Running 0 62s 172.18.41.47 spider-worker ``` Nodes in a cluster might have access to different IP ranges. Some scenarios include: Nodes in the same data center belonging to different subnets. Nodes spanning multiple data centers within a single cluster. In such cases, replicas of an application are scheduled on different nodes require IP addresses from different subnets. Current community solutions are limited to satisfy this needs. To address this problem, Spiderpool support node affinity solution. By setting the `nodeAffinity` and `nodeName` fields in the SpiderIPPool CR, administrators can define a node label selector. This enables the IPAM plugin to allocate IP addresses from the specified IPPool when Pods are scheduled on nodes that match the affinity rules. SpiderIPPool offers the `nodeAffinity` field. When a Pod is scheduled on a node and attempts to allocate an IP address from the SpiderIPPool, it can successfully obtain an IP if the node satisfies the specified nodeAffinity" }, { "data": "Otherwise, it will be unable to allocate an IP address from that SpiderIPPool. To create a SpiderIPPool with node affinity, use the following YAML configuration. This SpiderIPPool will provide IP addresses for Pods running on the designated node. ```bash ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-node1-ippool spec: subnet: 10.6.0.0/16 ips: 10.6.168.101-10.6.168.110 nodeAffinity: matchExpressions: {key: kubernetes.io/hostname, operator: In, values: [node1]} EOF ``` SpiderIPPool provides an additional option for node affinity: `nodeName`. When `nodeName` is specified, a Pod is scheduled on a specific node and attempts to allocate an IP address from the SpiderIPPool. If the node matches the specified `nodeName`, the IP address can be successfully allocated from that SpiderIPPool. If not, it will be unable to allocate an IP address from that SpiderIPPool. When nodeName is left empty, Spiderpool does not impose any allocation restrictions on Pods. 
For example: ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-node1-ippool spec: subnet: 10.6.0.0/16 ips: 10.6.168.101-10.6.168.110 nodeName: node1 ``` create an Application `ipam.spidernet.io/ippool`: specify the IP pool with node affinity. `v1.multus-cni.io/default-network`: Identifies the IP pool used by the application. ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: DaemonSet metadata: name: test-app-1 labels: app: test-app-1 spec: selector: matchLabels: app: test-app-1 template: metadata: labels: app: test-app-1 annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"test-node1-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-ens192 spec: containers: name: test-app image: nginx imagePullPolicy: IfNotPresent EOF ``` After creating an application, it can be observed that IP addresses are only allocated from the corresponding IPPool if the Pod's node matches the IPPool's node affinity. The IP address of the application remains within the assigned IPPool. ```bash NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT test-node1-ippool 4 10.6.0.0/16 1 10 false ~# kubectl get po -l app=test-app-1 -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-1-2cmnz 0/1 ContainerCreating 0 115s <none> node2 <none> <none> test-app-1-br5gw 0/1 ContainerCreating 0 115s <none> master <none> <none> test-app-1-dvhrx 1/1 Running 0 115s 10.6.168.108 node1 <none> <none> ``` Cluster administrators often partition their clusters into multiple namespaces to improve isolation, management, collaboration, security, and resource utilization. When deploying applications under different namespaces, it becomes necessary to assign specific IPPools to each namespace, preventing applications from unrelated namespaces from using them. Spiderpool addresses this requirement by introducing the `namespaceAffinity` or `namespaceName` fields in the SpiderIPPool CR. This allows administrators to define affinity rules between IPPools and one or more namespaces, ensuring that only applications meeting the specified conditions can be allocated IP addresses from the respective IPPools. ```bash ~# kubectl create ns test-ns1 namespace/test-ns1 created ~# kubectl create ns test-ns2 namespace/test-ns2 created ``` To create an IPPool with namaspace affinity, use the following YAML: SpiderIPPool provides the `namespaceAffinity` field. When an application is created and attempts to allocate an IP address from the SpiderIPPool, it will only succeed if the Pod's namespace matches the specified namespaceAffinity. Otherwise, IP allocation from that SpiderIPPool will be denied. ```bash ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-ns1-ippool spec: subnet: 10.6.0.0/16 ips: 10.6.168.111-10.6.168.120 namespaceAffinity: matchLabels: kubernetes.io/metadata.name: test-ns1 EOF ``` SpiderIPPool also offers another option for namespace affinity: `namespaceName`. When `namespaceName` is not empty, a Pod is created and attempts to allocate an IP address from the SpiderIPPool. If the namespace of the Pod matches the specified `namespaceName`, it can successfully obtain an IP from that SpiderIPPool. However, if the namespace does not match the `namespaceName`, it will be unable to allocate an IP address from that" }, { "data": "When `namespaceName` is empty, Spiderpool does not impose any restrictions on IP allocation for Pods. 
For example: ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-ns1-ippool spec: subnet: 10.6.0.0/16 ips: 10.6.168.111-10.6.168.120 namespaceName: test-ns1 ``` Create Applications in a Specified Namespace. In the provided YAML example, a group of Deployment applications is created under the `test-ns1` namespace. The configuration includes: `ipam.spidernet.io/ippool`specify the IP pool with tenant affinity. `v1.multus-cni.io/default-network`: create a default network interface for the application. `namespace`: the namespace where the application resides. ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app-2 namespace: test-ns1 spec: replicas: 1 selector: matchLabels: app: test-app-2 template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"test-ns1-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-ens192 labels: app: test-app-2 spec: containers: name: test-app-2 image: nginx imagePullPolicy: IfNotPresent EOF ``` After creating the application, the Pods within the designated namespace successfully allocate IP addresses from the associated IPPool with namespace affinity. ```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT test-ns1-ippool 4 10.6.0.0/16 1 10 false ~# kubectl get po -l app=test-app-2 -A -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-ns1 test-app-2-975d9f-6bww2 1/1 Running 0 44s 10.6.168.111 node2 <none> <none> ``` However, if an application is created outside the `test-ns1` namespace, Spiderpool will reject IP address allocation, preventing unrelated namespace from using that IPPool. ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-other-ns namespace: test-ns2 spec: replicas: 1 selector: matchLabels: app: test-other-ns template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"test-ns1-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-ens192 labels: app: test-other-ns spec: containers: name: test-other-ns image: nginx imagePullPolicy: IfNotPresent EOF ``` Getting an IP address assignment fails as expected when the Pod belongs to a namesapce that does not match the affinity of that IPPool. ```bash ~# kubectl get po -l app=test-other-ns -A -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-ns2 test-other-ns-56cc9b7d95-hx4b5 0/1 ContainerCreating 0 6m3s <none> node2 <none> <none> ``` When creating multiple network interfaces for an application, we can specify the affinity of multus net-attach-def instance for the cluster-level default pool. This way is simpler compared to explicitly specifying the binding relationship between network interfaces and IPPool resources through the `ipam.spidernet.io/ippools` annotation. First, configure various properties for the IPPool resource, including: Set the `spec.default` field to `true` to simplify the experience by reducing the need to annotate the application with `ipam.spidernet.io/ippool` or `ipam.spidernet.io/ippools`. Configure the `spec.multusName` field to specify the multus net-attach-def instance. (If you do not specify the namespace of the corresponding multus net-attach-def instance, we will default to the namespace where Spiderpool is installed.) 
```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-ippool-eth0 spec: default: true subnet: 10.6.0.0/16 ips: 10.6.168.151-10.6.168.160 multusName: default/macvlan-vlan0-eth0 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-ippool-eth1 spec: default: true subnet: 10.7.0.0/16 ips: 10.7.168.151-10.7.168.160 multusName: kube-system/macvlan-vlan0-eth1 ``` To create an application with multiple network interfaces, you can use the following example YAML: `v1.multus-cni.io/default-network`: Chooses the default network configuration for the created application. (If you don't specify this annotation and directly use the clusterNetwork configuration of the multus, please specify the default network configuration during the installation of Spiderpool via Helm using the parameter `--set multus.multusCNI.defaultCniCRName=default/macvlan-vlan0-eth0`). `k8s.v1.cni.cncf.io/networks`: Selects the additional network configuration for the created application. ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app namespace: default spec: replicas: 1 selector: matchLabels: app: test-app template: metadata: annotations: v1.multus-cni.io/default-network: default/macvlan-vlan0-eth0 k8s.v1.cni.cncf.io/networks: kube-system/macvlan-vlan0-eth1 labels: app: test-app spec: containers: name: test-app image: nginx imagePullPolicy: IfNotPresent EOF ```" } ]
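As a quick sanity check for the multi-NIC example above, you can confirm that each interface of the Pod received an address from the expected pool. The commands below are only a sketch and reuse the names from this example (`test-ippool-eth0`, `test-ippool-eth1`, `test-app` in the `default` namespace); adjust them to your own resources.

```bash
# Check that the allocated-IP count of both pools increases once the Pod is running.
kubectl get spiderippool test-ippool-eth0 test-ippool-eth1

# Pick one Pod of the example Deployment and list its interfaces:
# eth0 should carry an address from 10.6.0.0/16 and net1 one from 10.7.0.0/16.
POD=$(kubectl -n default get po -l app=test-app -o jsonpath='{.items[0].metadata.name}')
kubectl -n default exec "$POD" -- ip -o -4 addr show
```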
{ "category": "Runtime", "file_name": "ceph-block-pool-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: CephBlockPool CRD Rook allows creation and customization of storage pools through the custom resource definitions (CRDs). The following settings are available for pools. For optimal performance, while also adding redundancy, this sample will configure Ceph to make three full copies of the data on multiple nodes. !!! note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because the is set to `host` and the `replicated.size` is set to `3`. ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: replicapool namespace: rook-ceph spec: failureDomain: host replicated: size: 3 deviceClass: hdd ``` Hybrid storage is a combination of two different storage tiers. For example, SSD and HDD. This helps to improve the read performance of cluster by placing, say, 1st copy of data on the higher performance tier (SSD or NVME) and remaining replicated copies on lower cost tier (HDDs). WARNING Hybrid storage pools are likely to suffer from lower availability if a node goes down. The data across the two tiers may actually end up on the same node, instead of being spread across unique nodes (or failure domains) as expected. Instead of using hybrid pools, consider configuring from the toolbox. ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: replicapool namespace: rook-ceph spec: failureDomain: host replicated: size: 3 hybridStorage: primaryDeviceClass: ssd secondaryDeviceClass: hdd ``` !!! important The device classes `primaryDeviceClass` and `secondaryDeviceClass` must have at least one OSD associated with them or else the pool creation will fail. This sample will lower the overall storage capacity requirement, while also adding redundancy by using . !!! note This sample requires at least 3 bluestore OSDs. The OSDs can be located on a single Ceph node or spread across multiple nodes, because the is set to `osd` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: ecpool namespace: rook-ceph spec: failureDomain: osd erasureCoded: dataChunks: 2 codingChunks: 1 deviceClass: hdd ``` High performance applications typically will not use erasure coding due to the performance overhead of creating and distributing the chunks in the cluster. When creating an erasure-coded pool, it is highly recommended to create the pool when you have bluestore OSDs in your cluster (see the . Filestore OSDs have that are unsafe and lower performance. RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph clusters. Mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones and flattening. It is generally useful when planning for Disaster Recovery. Mirroring is for clusters that are geographically distributed and stretching a single cluster is not possible due to high latencies. The following will enable mirroring of the pool at the image level: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: replicapool namespace: rook-ceph spec: replicated: size: 3 mirroring: enabled: true mode: image snapshotSchedules: interval:" }, { "data": "# daily snapshots startTime: 14:00:00-05:00 ``` Once mirroring is enabled, Rook will by default create its own so that it can be used by another cluster. 
The bootstrap peer token can be found in a Kubernetes Secret. The name of the Secret is present in the Status field of the CephBlockPool CR: ```yaml status: info: rbdMirrorBootstrapPeerSecretName: pool-peer-token-replicapool ``` This secret can then be fetched like so: ```console kubectl get secret -n rook-ceph pool-peer-token-replicapool -o jsonpath='{.data.token}'|base64 -d eyJmc2lkIjoiOTFlYWUwZGQtMDZiMS00ZDJjLTkxZjMtMTMxMWM5ZGYzODJiIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFEN1psOWZ3V1VGRHhBQWdmY0gyZi8xeUhYeGZDUTU5L1N0NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjEwLjEwMS4xOC4yMjM6MzMwMCx2MToxMC4xMDEuMTguMjIzOjY3ODldIn0= ``` The secret must be decoded. The result will be another base64 encoded blob that you will import in the destination cluster: ```console external-cluster-console # rbd mirror pool peer bootstrap import <token file path> ``` See the official rbd mirror documentation on . Imagine the following topology with datacenters containing racks and then hosts: ```text . datacenter-1 rack-1 host-1 host-2 rack-2 host-3 host-4 datacenter-2 rack-3 host-5 host-6 rack-4 host-7 host-8 ``` As an administrator I would like to place 4 copies across both datacenter where each copy inside a datacenter is across a rack. This can be achieved by the following: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: replicapool namespace: rook-ceph spec: replicated: size: 4 replicasPerFailureDomain: 2 subFailureDomain: rack ``` `name`: The name of the pool to create. `namespace`: The namespace of the Rook cluster where the pool is created. `replicated`: Settings for a replicated pool. If specified, `erasureCoded` settings must not be specified. `size`: The desired number of copies to make of the data in the pool. `requireSafeReplicaSize`: set to false if you want to create a pool with size 1, setting pool size 1 could lead to data loss without recovery. Make sure you are ABSOLUTELY CERTAIN* that is what you want. `replicasPerFailureDomain`: Sets up the number of replicas to place in a given failure domain. For instance, if the failure domain is a datacenter (cluster is stretched) then you will have 2 replicas per datacenter where each replica ends up on a different host. This gives you a total of 4 replicas and for this, the `size` must be set to 4. The default is 1. `subFailureDomain`: Name of the CRUSH bucket representing a sub-failure domain. In a stretched configuration this option represent the \"last\" bucket where replicas will end up being written. Imagine the cluster is stretched across two datacenters, you can then have 2 copies per datacenter and each copy on a different CRUSH bucket. The default is \"host\". `erasureCoded`: Settings for an erasure-coded pool. If specified, `replicated` settings must not be specified. See below for more details on . `dataChunks`: Number of chunks to divide the original object into `codingChunks`: Number of coding chunks to generate `failureDomain`: The failure domain across which the data will be spread. This can be set to a value of either `osd` or `host`, with `host` being the default setting. A failure domain can also be set to a different type (e.g. `rack`), if the OSDs are created on nodes with the supported . If the `failureDomain` is changed on the pool, the operator will create a new CRUSH rule and update the" }, { "data": "If a `replicated` pool of size `3` is configured and the `failureDomain` is set to `host`, all three copies of the replicated data will be placed on OSDs located on `3` different Ceph hosts. 
This case is guaranteed to tolerate a failure of two hosts without a loss of data. Similarly, a failure domain set to `osd` can tolerate a loss of two OSD devices. If erasure coding is used, the data and coding chunks are spread across the configured failure domain. !!! caution Neither Rook, nor Ceph, prevent the creation of a cluster where the replicated data (or Erasure Coded chunks) cannot be written safely. By design, Ceph will delay checking for suitable OSDs until a write request is made and this write can hang if there are not sufficient OSDs to satisfy the request. `deviceClass`: Sets up the CRUSH rule for the pool to distribute data only on the specified device class. If left empty or unspecified, the pool will use the cluster's default CRUSH root, which usually distributes data over all OSDs, regardless of their class. If `deviceClass` is specified on any pool, ensure that it is added to every pool in the cluster, otherwise Ceph will warn about pools with overlapping roots. `crushRoot`: The root in the crush map to be used by the pool. If left empty or unspecified, the default root will be used. Creating a crush hierarchy for the OSDs currently requires the Rook toolbox to run the Ceph tools described . `enableRBDStats`: Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false. For more info see the . `name`: The name of Ceph pools is based on the `metadata.name` of the CephBlockPool CR. Some built-in Ceph pools require names that are incompatible with K8s resource names. These special pools can be configured by setting this `name` to override the name of the Ceph pool that is created instead of using the `metadata.name` for the pool. Only the following pool names are supported: `.nfs`, `.mgr`, and `.rgw.root`. See the example . `application`: The type of application set on the pool. By default, Ceph pools for CephBlockPools will be `rbd`, CephObjectStore pools will be `rgw`, and CephFilesystem pools will be `cephfs`. `parameters`: Sets any listed to the given pool `targetsizeratio:` gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool, for more info see the `compression_mode`: Sets up the pool for inline compression when using a Bluestore OSD. If left unspecified, it does not set up any compression mode for the pool. Values supported are the same as Bluestore inline compression , such as `none`, `passive`, `aggressive`, and `force`. `mirroring`: Sets up mirroring of the pool `enabled`: whether mirroring is enabled on that pool (default: false) `mode`: mirroring mode to run, possible values are \"pool\" or \"image\" (required). Refer to the for more details. `snapshotSchedules`: schedule(s) snapshot at the pool level. One or more schedules are supported. `interval`: frequency of the" }, { "data": "snapshot. The interval can be specified in days, hours, or minutes using d, h, m suffix respectively. `startTime`: optional, determines at what time the snapshot process starts, specified using the ISO 8601 time format. `peers`: to configure mirroring peers. See the prerequisite first. `secretNames`: a list of peers to connect to. Currently only a single peer is supported where a peer represents a Ceph cluster. `statusCheck`: Sets up pool mirroring status `mirror`: displays the mirroring status `disabled`: whether to enable or disable pool mirroring status `interval`: time interval to refresh the mirroring status (default 60s) `quotas`: Set byte and object quotas. See the for more info.
`maxSize`: quota in bytes as a string with quantity suffixes (e.g. \"10Gi\") `maxObjects`: quota in objects as an integer !!! note A value of 0 disables the quota. With `poolProperties` you can set any pool property: ```yaml spec: parameters: <name of the parameter>: <parameter value> ``` For instance: ```yaml spec: parameters: min_size: 1 ``` Erasure coding allows you to keep your data safe while reducing the storage overhead. Instead of creating multiple replicas of the data, erasure coding divides the original data into chunks of equal size, then generates extra chunks of that same size for redundancy. For example, if you have an object of size 2MB, the simplest erasure coding with two data chunks would divide the object into two chunks of size 1MB each (data chunks). One more chunk (coding chunk) of size 1MB will be generated. In total, 3MB will be stored in the cluster. The object will be able to suffer the loss of any one of the chunks and still be able to reconstruct the original object. The number of data and coding chunks you choose will depend on your resiliency to loss and how much storage overhead is acceptable in your storage cluster. Here are some examples to illustrate how the number of chunks affects the storage and loss toleration. | Data chunks (k) | Coding chunks (m) | Total storage | Losses Tolerated | OSDs required | | | -- | - | - | - | | 2 | 1 | 1.5x | 1 | 3 | | 2 | 2 | 2x | 2 | 4 | | 4 | 2 | 1.5x | 2 | 6 | | 16 | 4 | 1.25x | 4 | 20 | The `failureDomain` must also be taken into account when determining the number of chunks. The failure domain determines the level in the Ceph CRUSH hierarchy where the chunks must be uniquely distributed. This decision will impact whether node losses or disk losses are tolerated. There could also be performance differences when placing the data across nodes or OSDs. `host`: All chunks will be placed on unique hosts `osd`: All chunks will be placed on unique OSDs If you do not have a sufficient number of hosts or OSDs for unique placement, the pool can still be created, but writing to the pool will hang. Rook currently only configures two levels in the CRUSH map. It is also possible to configure other levels such as `rack` by adding the corresponding topology labels to the nodes." } ]
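To tie the `erasureCoded`, `parameters`, and `quotas` settings described above together, here is a minimal sketch of a pool manifest that combines them. It assumes a Rook cluster running in the `rook-ceph` namespace and at least 3 bluestore OSDs; the pool name and the quota values are arbitrary examples.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ec-quota-pool
  namespace: rook-ceph
spec:
  # k=2, m=1: ~1.5x storage overhead, tolerates the loss of one OSD (see the table above).
  failureDomain: osd
  erasureCoded:
    dataChunks: 2
    codingChunks: 1
  # Inline compression is passed straight through as a Ceph pool parameter.
  parameters:
    compression_mode: aggressive
  # Quotas: a value of 0 would disable the respective limit.
  quotas:
    maxSize: "100Gi"
    maxObjects: 1000000
EOF
```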
{ "category": "Runtime", "file_name": "inotify.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "A common pattern in Kubernetes is to watch for changes to files/directories passed in as `ConfigMaps` or `Secrets`. Sidecars normally use `inotify` to watch for changes and then signal the primary container to reload the updated configuration. Kata Containers typically will pass these host files into the guest using `virtiofs`, which does not support `inotify` today. While we work to enable this use case in `virtiofs`, we introduced a workaround in Kata Containers. This document describes how Kata Containers implements this workaround. Kubernetes creates `secrets` and `ConfigMap` mounts at very specific locations on the host filesystem. For container mounts, the `Kata Containers` runtime will check the source of the mount to identify these special cases. For these use cases, only a single file or very few would typically need to be watched. To avoid excessive overheads in making a mount watchable, we enforce a limit of eight files per mount. If a `secret` or `ConfigMap` mount contains more than 8 files, it will not be considered watchable. We similarly enforce a limit of 1 MB per mount to be considered watchable. Non-watchable mounts will continue to propagate changes from the mount on the host to the container workload, but these updates will not trigger an `inotify` event. If at any point a mount grows beyond the eight file or 1MB limit, it will no longer be `watchable`. For mounts that are considered `watchable`, inside the guest, the `kata-agent` will poll the mount presented from the host through `virtiofs` and copy any changed files to a `tmpfs` mount that is presented to the container. In this way, for `watchable` mounts, Kata will do the polling on behalf of the workload and existing workloads needn't change their usage of `inotify`." } ]
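For illustration, the following sketch creates a `ConfigMap` mount that stays well under the eight-file / 1 MB limits, so a Kata Container would treat it as watchable. The `RuntimeClass` name `kata` and the image/command used here are assumptions, not part of the original document.

```bash
# A single small key keeps the mount well below the watchable limits.
kubectl create configmap watched-config --from-literal=app.conf="log_level=info"

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: inotify-demo
spec:
  runtimeClassName: kata   # assumes a Kata RuntimeClass named "kata" is installed
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/demo/app.conf; sleep 10; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo
  volumes:
  - name: cfg
    configMap:
      name: watched-config
EOF

# Update the ConfigMap; after kubelet syncs the volume, the kata-agent copies the new
# content into the guest tmpfs mount, which is what raises the inotify event in-container.
kubectl create configmap watched-config --from-literal=app.conf="log_level=debug" \
  --dry-run=client -o yaml | kubectl apply -f -
```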
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_multicast_subscriber_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List the multicast subscribers for the given group. List the multicast subscribers for the given group. To get the subscribers for all the groups, use 'cilium-dbg bpf multicast subscriber list all'. ``` cilium-dbg bpf multicast subscriber list < group | all > [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the multicast subscribers." } ]
{ "category": "Runtime", "file_name": "initrd.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "You can use the script found to generate an initrd either based on alpine or suse linux. The script extracts the init system from each distribution and creates an initrd. Use this option for creating an initrd if you're building your own init or if you need any specific files / logic in your initrd. ```bash mkdir initrd cp /path/to/your/init initrd/init cd initrd find . -print0 | cpio --null --create --verbose --format=newc > initrd.cpio ``` When setting your boot source, add an `initrd_path` property like so: ```shell curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/boot-source' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d \"{ \\\"kernel_image_path\\\": \\\"/path/to/kernel\\\", \\\"boot_args\\\": \\\"console=ttyS0 reboot=k panic=1 pci=off\\\", \\\"initrd_path\\\": \\\"/path/to/initrd.cpio\\\" }\" ``` You should not use a drive with `is_root_device: true` when using an initrd. Make sure your kernel configuration has `CONFIG_BLK_DEV_INITRD=y`. If you don't want to place your init at the root of your initrd, you can add `rdinit=/path/to/init` to your `boot_args` property. If you intend to `pivot_root` in your init, it won't be possible because the initrd is mounted as a rootfs and cannot be unmounted. You will need to use `switch_root` instead." } ]
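If you are writing your own `init` for the custom initrd flow above, the sketch below shows roughly what a minimal one can look like. It assumes a statically linked busybox binary is also copied into the initrd as `/bin/busybox`; it is only an illustration, not an official Firecracker init.

```bash
#!/bin/sh
# Minimal illustrative init placed at initrd/init (remember to chmod +x it).
/bin/busybox mkdir -p /proc /sys /dev
/bin/busybox mount -t proc proc /proc
/bin/busybox mount -t sysfs sysfs /sys
/bin/busybox mount -t devtmpfs devtmpfs /dev
# Hand control to an interactive shell on the serial console.
exec /bin/busybox sh
```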
{ "category": "Runtime", "file_name": "expose-minio.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Expose Minio outside your cluster\" layout: docs When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can: Change the Minio Service type from `ClusterIP` to `NodePort`. Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`. In Velero 0.10, you can also specify the value of a new `publicUrl` field for the pre-signed URL in your backup storage config. For basic instructions on how to install the Velero server and client, see . The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client. You must also get the Minio URL, which you can then specify as the value of the new `publicUrl` field in your backup storage config. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`. Get the Minio URL: if you're running Minikube: ```shell minikube service minio --namespace=velero --url ``` in any other environment: Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client. Append the value of the NodePort to get a complete URL. You can get this value by running: ```shell kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}' ``` In `examples/minio/05-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide this Minio URL as the value of the `publicUrl` field. You must include the `http://` or `https://` prefix. Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio. In this case: Keep the Service type as `ClusterIP`. In `examples/minio/05-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide the URL and port of your Ingress as the value of the `publicUrl` field." } ]
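If you would rather not edit `00-minio-deployment.yaml`, the same NodePort change can be applied to a running cluster. This is a sketch assuming the example Minio Service in the `velero` namespace; the node address you pick must be reachable from your Velero client.

```bash
# Switch the Minio Service from ClusterIP to NodePort.
kubectl -n velero patch svc minio -p '{"spec": {"type": "NodePort"}}'

# Assemble the value for the publicUrl field from a reachable node address and the NodePort.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl -n velero get svc minio -o jsonpath='{.spec.ports[0].nodePort}')
echo "publicUrl: http://${NODE_IP}:${NODE_PORT}"
```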
{ "category": "Runtime", "file_name": "installing-weave.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "title: Installing Weave Net menu_order: 10 search_type: Documentation Ensure you are running Linux (kernel 3.8 or later) and have Docker (version 1.10.0 or later) installed. Install Weave Net by running the following: sudo curl -L git.io/weave -o /usr/local/bin/weave sudo chmod a+x /usr/local/bin/weave If you are on OSX and you are using Docker Machine ensure that a VM is running and configured before downloading Weave Net. To set up a VM see [the Docker Machine documentation](https://docs.docker.com/installation/mac/#from-your-shell) or refer to . After your VM is setup with Docker Machine, Weave Net can be launched directly from the OSX host. Weave Net respects the environment variable `DOCKER_HOST`, so that you can run and control a Weave Network locally on a remote host. See . With Weave Net downloaded onto your VMs or hosts, you are ready to launch a Weave network and deploy apps onto it. See . <a href=\"https://youtu.be/kihQCCT1ykE\" target=\"_blank\"> <img src=\"hello-screencast.png\" alt=\"Click to watch the screencast\" /> </a> Weave Net [periodically contacts Weaveworks servers for available versions](https://github.com/weaveworks/go-checkpoint). New versions are announced in the log and in [the status summary](/site/troubleshooting.md/#weave-status). The information sent in this check is: Host UUID hash Kernel version Docker version Weave Net version Network mode, e.g. 'awsvpc' To disable this check, run the following before launching Weave Net: export CHECKPOINT_DISABLE=1 Amazon ECS users see for the latest Weave AMIs. If you're on Amazon EC2, the standard installation instructions at the top of this page, provide the simplest setup and the most flexibility. A can optionally be enabled, which allows containers to communicate at the full speed of the underlying network. To make encryption in fast datapath work on Google Cloud Platform, see . See Also * * *" } ]
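A typical two-host bring-up looks roughly like the following sketch; `$HOST1` is a placeholder for an address of the first host that the other hosts can reach, and the nginx container is just an arbitrary example workload.

```bash
# On the first host: start Weave Net.
weave launch

# On every additional host: start Weave Net and point it at the first host.
weave launch $HOST1

# Point the Docker client at the Weave proxy so new containers join the Weave network,
# then check that the peers are connected and start a test container.
eval $(weave env)
weave status
docker run -d --name demo nginx
```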
{ "category": "Runtime", "file_name": "move_instances.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "(move-instances)= To move an instance from one Incus server to another, use the command: incus move [<sourceremote>:]<sourceinstancename> <targetremote>:[<targetinstancename>] ```{note} When moving a container, you must stop it first. See {ref}`live-migration-containers` for more information. When moving a virtual machine, you must either enable {ref}`live-migration-vms` or stop it first. ``` Alternatively, you can use the command if you want to duplicate the instance: incus copy [<sourceremote>:]<sourceinstancename> <targetremote>:[<targetinstancename>] In both cases, you don't need to specify the source remote if it is your default remote, and you can leave out the target instance name if you want to use the same instance name. If you want to move the instance to a specific cluster member, specify it with the `--target` flag. In this case, do not specify the source and target remote. You can add the `--mode` flag to choose a transfer mode, depending on your network setup: `pull` (default) : Instruct the target server to connect to the source server and pull the respective instance. `push` : Instruct the source server to connect to the target server and push the instance. `relay` : Instruct the client to connect to both the source and the target server and transfer the data through the client. If you need to adapt the configuration for the instance to run on the target server, you can either specify the new configuration directly (using `--config`, `--device`, `--storage` or `--target-project`) or through profiles (using `--no-profiles` or `--profile`). See for all available flags. (live-migration)= Live migration means migrating an instance while it is running. This method is supported for virtual machines. For containers, there is limited support. (live-migration-vms)= Virtual machines can be moved to another server while they are running, thus without any downtime. To allow for live migration, you must enable support for stateful migration. To do so, ensure the following configuration: Set {config:option}`instance-migration:migration.stateful` to `true` on the instance. (live-migration-containers)= For containers, there is limited support for live migration using . However, because of extensive kernel dependencies, only very basic containers (non-`systemd` containers without a network device) can be migrated reliably. In most real-world scenarios, you should stop the container, move it over and then start it again. If you want to use live migration for containers, you must first make sure that CRIU is installed on both systems. To optimize the memory transfer for a container, set the {config:option}`instance-migration:migration.incremental.memory` property to `true` to make use of the pre-copy features in CRIU. With this configuration, Incus instructs CRIU to perform a series of memory dumps for the container. After each dump, Incus sends the memory dump to the specified remote. In an ideal scenario, each memory dump will decrease the delta to the previous memory dump, thereby increasing the percentage of memory that is already synced. When the percentage of synced memory is equal to or greater than the threshold specified via {config:option}`instance-migration:migration.incremental.memory.goal`, or the maximum number of allowed iterations specified via {config:option}`instance-migration:migration.incremental.memory.iterations` is reached, Incus instructs CRIU to perform a final memory dump and transfers it." } ]
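Putting the pieces above together, a live move of a virtual machine might look like the sketch below; `vm1`, `c1`, `other-server` and `member2` are placeholder names.

```bash
# Enable stateful migration on the VM (required for live migration), then move it
# to another server while it keeps running.
incus config set vm1 migration.stateful true
incus move vm1 other-server:vm1 --mode=relay

# Moving an instance to a specific cluster member instead (no remotes are specified here).
incus move c1 --target member2
```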
{ "category": "Runtime", "file_name": "code_of_conduct.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior, in compliance with the licensing terms applying to the Project developments. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. However, these actions shall respect the licensing terms of the Project Developments that will always supersede such Code of Conduct. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at dev@min.io. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at This version includes a clarification to ensure that the code of conduct is in compliance with the free software licensing terms of the project." } ]
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "kube-vip", "subcategory": "Cloud Native Network" }
[ { "data": "name: Bug report about: Create a report to help us improve title: '' labels: '' assignees: '' Describe the bug A clear and concise description of what the bug is. To Reproduce Steps to reproduce the behavior: Go to '...' Click on '....' Scroll down to '....' See error Expected behavior A clear and concise description of what you expected to happen. Screenshots If applicable, add screenshots to help explain your problem. Environment (please complete the following information): OS/Distro: [e.g. Ubuntu 1804] Kubernetes Version: [e.g. v.1.18] Kube-vip Version: [e.g. 0.2.3] `Kube-vip.yaml`: If Possible add in your kube-vip manifest (please remove anything that is confidential) Additional context Add any other context about the problem here." } ]
{ "category": "Runtime", "file_name": "Ozone.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Ozone This guide describes how to configure {:target=\"_blank\"} as Alluxio's under storage system. Ozone is a scalable, redundant, and distributed object store for Hadoop. Apart from scaling to billions of objects of varying sizes, Ozone can function effectively in containerized environments such as Kubernetes and YARN. Ozone supports two different schemas. The biggest difference between `o3fs` and `ofs` is that `o3fs` suports operations only at a single bucket, while `ofs` supports operations across all volumes and buckets and provides a full view of all the volume/buckets. For more information, please read its documentation: {:target=\"_blank\"} {:target=\"_blank\"} If you haven't already, please see before you get started. In preparation for using Ozone with Alluxio: {% navtabs Prerequisites %} {% navtab o3fs %} <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<OZONE_VOLUME>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing volume</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OZONE_BUCKET>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing bucket</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OZONE_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use in the bucket, either by creating a new directory or using an existing one</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OMSERVICEIDS>`</td> <td markdown=\"span\">To select between the available HA clusters, a logical named called a serviceID is required for each of the cluseters. Read {:target=\"_blank\"}</td> </tr> </table> {% endnavtab %} {% navtab ofs %} <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<OZONE_MANAGER>`</td> <td markdown=\"span\">The namespace manager for Ozone. See {:target=\"_blank\"}</td> </tr><tr> <td markdown=\"span\" style=\"width:30%\">`<OZONE_VOLUME>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing volume</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OZONE_BUCKET>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing bucket</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OZONE_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use in the bucket, either by creating a new directory or using an existing one</td> </tr> </table> {% endnavtab %} {% endnavtabs %} Follow the {:target=\"_blank\"} to install a Ozone cluster. To use Ozone as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Specify an Ozone bucket and directory as the underfs address by modifying `conf/alluxio-site.properties`. {% navtabs Setup %} {% navtab o3fs %} For example, the under storage address can be" }, { "data": "if you want to mount the whole bucket to Alluxio, or `o3fs://<OZONEBUCKET>.<OZONEVOLUME>/alluxio/data` if only the directory `/alluxio/data` inside the ozone bucket `<OZONEBUCKET>` of `<OZONEVOLUME>` is mapped to Alluxio. ```properties alluxio.dora.client.ufs.root=o3fs://<OZONEBUCKET>.<OZONEVOLUME>/ ``` Set the property `alluxio.underfs.hdfs.configuration` in `conf/alluxio-site.properties` to point to your `ozone-site.xml`. Make sure this configuration is set on all servers running Alluxio. 
```properties alluxio.underfs.hdfs.configuration=/path/to/hdfs/conf/ozone-site.xml ``` {% endnavtab %} {% navtab ofs %} For example, the under storage address can be `ofs://<OZONEMANAGER>/<OZONEVOLUME>/<OZONE_BUCKET>/` if you want to mount the whole bucket to Alluxio, or `ofs://<OZONEMANAGER>/<OZONEVOLUME>/<OZONEBUCKET>/alluxio/data` if only the directory `/alluxio/data` inside the ozone bucket `<OZONEBUCKET>` of `<OZONE_VOLUME>` is mapped to Alluxio. ```properties alluxio.dora.client.ufs.root=ofs://<OZONEMANAGER>/<OZONEVOLUME>/<OZONE_BUCKET>/ ``` {% endnavtab %} {% endnavtabs %} To make Alluxio mount Ozone in HA mode, you should configure Alluxio's server so that it can find the OzoneManager. Please note that once set up, your application using the Alluxio client does not require any special configuration. In HA mode `alluxio.dora.client.ufs.root` needs to specify `<OMSERVICEIDS>` such as: ```properties alluxio.dora.client.ufs.root=o3fs://<OZONEBUCKET>.<OZONEVOLUME>.<OMSERVICEIDS>/ alluxio.underfs.hdfs.configuration=/path/to/hdfs/conf/ozone-site.xml ``` ```properties alluxio.dora.client.ufs.root=ofs://<OZONEMANAGER>/<OZONEVOLUME>/<OZONE_BUCKET>/ alluxio.underfs.hdfs.configuration=/path/to/hdfs/conf/ozone-site.xml ``` `<OMSERVICEIDS>` can be found in `ozone-site.xml`. In the following example `ozone-site.xml` file, `<OMSERVICEIDS>` is `omservice1`: ```xml <property> <name>ozone.om.service.ids</name> <value>omservice1</value> </property> ``` Users can mount an Ozone cluster of a specific version as an under storage into Alluxio namespace. Before mounting Ozone with a specific version, make sure you have built a client with that specific version of Ozone. You can check the existence of this client by going to the `lib` directory under the Alluxio directory. When mounting the under storage at the Alluxio root with a specific Ozone version, one can add the following line to the site properties file (`conf/alluxio-site.properties`). ```properties alluxio.underfs.version=<OZONE_VERSION> ``` Once you have configured Alluxio to Ozone, try to see that everything works. Use the HDFS shell or Ozone shell to visit your Ozone directory `o3fs://<OZONEBUCKET>.<OZONEVOLUME>/<OZONE_DIRECTORY>` to verify the files and directories created by Alluxio exist. For this test, you should see files named like ```shell <OZONEBUCKET>.<OZONEVOLUME>/<OZONEDIRECTORY>/defaulttestsfiles/BasicFileCACHEPROMOTEMUST_CACHE ``` Currently, the only tested Ozone version with Alluxio is `1.0.0`, `1.1.0`, `1.2.1`. Ozone UFS integration is contributed and maintained by the Alluxio community. The source code is located {:target=\"_blank\"}. Feel free submit pull requests to improve the integration and update the documentation {:target=\"_blank\"} if any information is missing or out of date." } ]
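As a rough end-to-end sketch of the steps above: append the Ozone settings to the site properties, restart Alluxio, and run the bundled tests. The script names assume a classic Alluxio 2.x tarball layout, and the placeholders must be replaced with your own Ozone manager, volume and bucket.

```bash
# Configure the Ozone root UFS (ofs schema shown) and the ozone-site.xml location.
cat >> conf/alluxio-site.properties <<EOF
alluxio.dora.client.ufs.root=ofs://<OZONE_MANAGER>/<OZONE_VOLUME>/<OZONE_BUCKET>/
alluxio.underfs.hdfs.configuration=/path/to/hdfs/conf/ozone-site.xml
EOF

# Restart the cluster and run the built-in smoke tests that create the files mentioned above.
./bin/alluxio-stop.sh all
./bin/alluxio-start.sh all
./bin/alluxio runTests
```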
{ "category": "Runtime", "file_name": "feature_template.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "name: Feature Proposal about: Suggest a feature to Cilium title: 'CFP: ' labels: 'kind/feature' assignees: '' Thanks for taking time to make a feature proposal for Cilium! If you have usage questions, please try the and see the first. Is your proposed feature related to a problem? If so, please describe the problem Describe the feature you'd like Include any specific requirements you need (Optional) Describe your proposed solution Please complete this section if you have ideas / suggestions on how to implement the feature. We strongly recommend discussing your approach with Cilium committers before spending lots of time implementing a change. For longer proposals, you are welcome to link to an external doc (e.g. a Google doc). We have a to help you structure your proposal - if you would like to use it, please make a copy and ensure it's publicly visible, and then add the link here. Once the CFP is close to being finalized, please add it as a PR to the repo for final approval." } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Dan Buch at dan@meatballhat.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.12.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.12.0 `velero/velero:v1.12.0` https://velero.io/docs/v1.12/ https://velero.io/docs/v1.12/upgrade-to-1.12/ CSI Snapshot Data Movement refers to back up CSI snapshot data from the volatile and limited production environment into durable, heterogeneous, and scalable backup storage in a consistent manner; and restore the data to volumes in the original or alternative environment. CSI Snapshot Data Movement is useful in below scenarios: For on-premises users, the storage usually doesn't support durable snapshots, so it is impossible/less efficient/cost ineffective to keep volume snapshots by the storage This feature helps to move the snapshot data to a storage with lower cost and larger scale for long time preservation. For public cloud users, this feature helps users to fulfill the multiple cloud strategy. It allows users to back up volume snapshots from one cloud provider and preserve or restore the data to another cloud provider. Then users will be free to flow their business data across cloud providers based on Velero backup and restore CSI Snapshot Data Movement is built according to the Volume Snapshot Data Movement design (). More details can be found in the design. In many use cases, customers often need to substitute specific values in Kubernetes resources during the restoration process like changing the namespace, changing the storage class, etc. To address this need, Resource Modifiers (also known as JSON Substitutions) offer a generic solution in the restore workflow. It allows the user to define filters for specific resources and then specify a JSON patch (operator, path, value) to apply to the resource. This feature simplifies the process of making substitutions without requiring the implementation of a new RestoreItemAction plugin. More details can be found in Volume Snapshot Resource Modifiers design (). Prior to version 1.12, the Velero CSI plugin would choose the VolumeSnapshotClass in the cluster based on matching driver names and the presence of the \"velero.io/csi-volumesnapshot-class\" label. However, this approach proved inadequate for many user scenarios. With the introduction of version 1.12, Velero now offers support for multiple VolumeSnapshotClasses in the CSI Plugin, enabling users to select a specific class for a particular backup. More details can be found in Multiple VolumeSnapshotClasses design (). Before v1.12, the restore controller would only delete restore resources but wouldnt delete restore data from the backup storage location when the command `velero restore delete` was executed. The only chance Velero deletes restores data from the backup storage location is when the associated backup is deleted. In this version, Velero introduces a finalizer that ensures the cleanup of all associated data for restores when running the command `velero restore delete`. To fix CVEs and keep pace with Golang, Velero made changes as follows: Bump Golang runtime to v1.20.7. Bump several dependent libraries to new versions. Bump Kopia to v0.13. Prior to v1.12, the parameter `uploader-type` for Velero installation had a default value of \"restic\". However, starting from this version, the default value has been changed to \"kopia\". This means that Velero will now use Kopia as the default path for file system backup. The ways of setting CSI snapshot time have changed in v1.12. 
First, the sync waiting time for creating a snapshot handle in the CSI plugin is changed from the fixed 10 minutes into" }, { "data": "The second, the async waiting time for VolumeSnapshot and VolumeSnapshotContent's status turning into `ReadyToUse` in operation uses the operation's timeout. The default value is 4 hours. As from , it supports multiple BSL and VSL, and the BSL and VSL have changed from the map into a slice, and is not backward compatible. So it would be best to change the BSL and VSL configuration into slices before the Upgrade. The Azure plugin supports Azure AD Workload identity way, but it only works for Velero native snapshots. It cannot support filesystem backup and snapshot data mover scenarios. Fixes #6498. Get resource client again after restore actions in case resource's gv is changed. This is an improvement of pr #6499, to support group changes. A group change usually happens in a restore plugin which is used for resource conversion: convert a resource from a not supported gv to a supported gv (#6634, @27149chen) Add API support for volMode block, only error for now. (#6608, @shawn-hurley) Fix how the AWS credentials are obtained from configuration (#6598, @aws_creds) Add performance E2E test (#6569, @qiuming-best) Non default s3 credential profiles work on Unified Repository Provider (kopia) (#6558, @kaovilai) Fix issue #6571, fix the problem for restore item operation to set the errors correctly so that they can be recorded by Velero restore and then reflect the correct status for Velero restore. (#6594, @Lyndon-Li) Fix issue 6575, flush the repo after delete the snapshot, otherwise, the changes(deleting repo snapshot) cannot be committed to the repo. (#6587, @Lyndon-Li) Delete moved snapshots when the backup is deleted (#6547, @reasonerjt) check if restore crd exist before operating restores (#6544, @allenxu404) Remove PVC's selector in backup's PVC action. (#6481, @blackpiglet) Delete the expired deletebackuprequests that are stuck in \"InProgress\" (#6476, @reasonerjt) Fix issue #6534, reset PVB CR's StorageLocation to the latest one during backup sync as same as the backup CR. Also fix similar problem with DataUploadResult for data mover restore. (#6533, @Lyndon-Li) Fix issue #6519. Restrict the client manager of node-agent server to include only Velero resources from the server's namespace, otherwise, the controllers will try to reconcile CRs from all the installed Velero namespaces. (#6523, @Lyndon-Li) Track the skipped PVC and print the summary in backup log (#6496, @reasonerjt) Add restore finalizer to clean up external resources (#6479, @allenxu404) fix: Typos and add more spell checking rules to CI (#6415, @mateusoliveira43) Add missing CompletionTimestamp and metrics when restore moved into terminal phase in restoreOperationsReconciler (#6397, @Nutrymaco) Add support for resource Modifications in the restore flow. Also known as JSON Substitutions. 
(#6452, @anshulahuja98) Remove dependency of the legacy client code from pkg/cmd directory part 2 (#6497, @blackpiglet) Add data upload and download metrics (#6493, @allenxu404) Fix issue 6490, If a backup/restore has multiple async operations and one operation fails while others are still in-progress, when all the operations finish, the backup/restore will be set as Completed falsely (#6491, @Lyndon-Li) Velero Plugins no longer need kopia indirect dependency in their go.mod (#6484, @kaovilai) Remove dependency of the legacy client code from pkg/cmd directory (#6469, @blackpiglet) Add support for OpenStack CSI drivers topology keys (#6464, @openstack-csi-topology-keys) Add exit code log and possible memory shortage warning log for Restic command" }, { "data": "(#6459, @blackpiglet) Modify DownloadRequest controller logic (#6433, @blackpiglet) Add data download controller for data mover (#6436, @qiuming-best) Fix hook filter display issue for backup describer (#6434, @allenxu404) Retrieve DataUpload into backup result ConfigMap during volume snapshot restore. (#6410, @blackpiglet) Design to add support for Multiple VolumeSnapshotClasses in CSI Plugin. (#5774, @anshulahuja98) Clarify the deletion frequency for gc controller (#6414, @allenxu404) Add unit tests for pkg/archive (#6396, @allenxu404) Add UT for pkg/discovery (#6394, @qiuming-best) Add UT for pkg/util (#6368, @Lyndon-Li) Add the code for data mover restore expose (#6357, @Lyndon-Li) Restore Endpoints before Services (#6315, @ywk253100) Add warning message for volume snapshotter in data mover case. (#6377, @blackpiglet) Add unit test for pkg/uploader (#6374, @qiuming-best) Change kopia as the default path of PVB (#6370, @Lyndon-Li) Do not persist VolumeSnapshot and VolumeSnapshotContent for snapshot DataMover case. (#6366, @blackpiglet) Add data mover related options in CLI (#6365, @ywk253100) Add dataupload controller (#6337, @qiuming-best) Add UT cases for pkg/podvolume (#6336, @Lyndon-Li) Remove Wait VolumeSnapshot to ReadyToUse logic. (#6327, @blackpiglet) Enhance the code because of #6297, the return value of GetBucketRegion is not recorded, as a result, when it fails, we have no way to get the cause (#6326, @Lyndon-Li) Skip updating status when CRDs are restored (#6325, @reasonerjt) Include namespaces needed by namespaced-scope resources in backup. (#6320, @blackpiglet) Update metrics when backup failed with validation error (#6318, @ywk253100) Add the code for data mover backup expose (#6308, @Lyndon-Li) Fix a PVR issue for generic data path -- the namespace remap was not honored, and enhance the code for better error handling (#6303, @Lyndon-Li) Add default values for defaultItemOperationTimeout and itemOperationSyncFrequency in velero CLI (#6298, @shubham-pampattiwar) Add UT cases for pkg/repository (#6296, @Lyndon-Li) Fix issue #5875. Since Kopia has supported IAM, Velero should not require static credentials all the time (#6283, @Lyndon-Li) Fixed a bug where status.progress is not getting updated for backups. (#6276, @kkothule) Add code change for async generic data path that is used by both PVB/PVR and data mover (#6226, @Lyndon-Li) Add data mover CRD under v2alpha1, include DataUpload CRD and DataDownload CRD (#6176, @Lyndon-Li) Remove any dataSource or dataSourceRef fields from PVCs in PVC BIA for cases of prior PVC restores with CSI (#6111, @eemcmullan) Add the design for Volume Snapshot Data Movement (#5968, @Lyndon-Li) Fix issue #5123, Kopia repository supports self-cert CA for S3 compatible storage. 
(#6268, @Lyndon-Li) Bump up Kopia to v0.13 (#6248, @Lyndon-Li) log volumes to backup to help debug why `IsPodRunning` is called. (#6232, @kaovilai) Enable errcheck linter and resolve found issues (#6208, @blackpiglet) Enable more linters, and remove mal-functioned milestoned issue action. (#6194, @blackpiglet) Enable stylecheck linter and resolve found issues. (#6185, @blackpiglet) Fix issue #6182. If pod is not running, don't treat it as an error, let it go and leave a warning. (#6184, @Lyndon-Li) Enable staticcheck and resolve found issues (#6183, @blackpiglet) Enable linter revive and resolve found errors: part 2 (#6177, @blackpiglet) Enable linter revive and resolve found errors: part 1 (#6173, @blackpiglet) Fix usestdlibvars and whitespace linters issues. (#6162, @blackpiglet) Update Golang to v1.20 for main. (#6158, @blackpiglet) Make GetPluginConfig accessible from other packages. (#6151, @tkaovila) Ignore not found error during patching managedFields (#6136, @ywk253100) Fix the goreleaser issues and add a new goreleaser action (#6109, @blackpiglet)" } ]
{ "category": "Runtime", "file_name": "CHANGES.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "restored behavior as <= v3.9.0 with option to change path strategy using TrimRightSlashEnabled. introduced MergePathStrategy to be able to revert behaviour of path concatenation to 3.9.0 see comment in Readme how to customize this behaviour. fix broken 3.10.0 by using path package for joining paths changed tokenizer to match std route match behavior; do not trimright the path (#511) Add MIME_ZIP (#512) Add MIMEZIP and HEADERContentDisposition (#513) Changed how to get query parameter issue #510 add support for http.Handler implementations to work as FilterFunction, issue #504 (thanks to https://github.com/ggicci) use exact matching of allowed domain entries, issue #489 (#493) this changes fixes [security] Authorization Bypass Through User-Controlled Key by changing the behaviour of the AllowedDomains setting in the CORS filter. To support the previous behaviour, the CORS filter type now has a AllowedDomainFunc callback mechanism which is called when a simple domain match fails. add test and fix for POST without body and Content-type, issue #492 (#496) [Minor] Bad practice to have a mix of Receiver types. (#491) restored FilterChain (#482 by SVilgelm) fix problem with contentEncodingEnabled setting (#479) feat(parameter): adds additional openapi mappings (#478) add support for vendor extensions (#477 thx erraggy) fix removing absent route from webservice (#472) fix handling no match access selected path remove obsolete field add check for wildcard (#463) in CORS add access to Route from Request, issue #459 (#462) Added OPTIONS to WebService Fixed duplicate compression in dispatch. #449 Added check on writer to prevent compression of response twice. #447 Enable content encoding on Handle and ServeHTTP (#446) List available representations in 406 body (#437) Convert to string using rune() (#443) 405 Method Not Allowed must have Allow header (#436) (thx Bracken <abdawson@gmail.com>) add field allowedMethodsWithoutContentType (#424) support describing response headers (#426) fix openapi examples (#425) v3.0.0 fix: use request/response resulting from filter chain add Go module Module consumer should use github.com/emicklei/go-restful/v3 as import path v2.10.0 support for Custom Verbs (thanks Vinci Xu <277040271@qq.com>) fixed static example (thanks Arthur <yang_yapo@126.com>) simplify code (thanks Christian Muehlhaeuser <muesli@gmail.com>) added JWT HMAC with SHA-512 authentication code example (thanks Amim Knabben <amim.knabben@gmail.com>) v2.9.6 small optimization in filter code v2.11.1 fix WriteError return value (#415) v2.11.0 allow prefix and suffix in path variable expression (#414) v2.9.6 support google custome verb (#413) v2.9.5 fix panic in Response.WriteError if err == nil v2.9.4 fix issue #400 , parsing mime type quality Route Builder added option for contentEncodingEnabled (#398) v2.9.3 Avoid return of 415 Unsupported Media Type when request body is empty (#396) v2.9.2 Reduce allocations in per-request methods to improve performance (#395) v2.9.1 Fix issue with default responses and invalid status code 0. 
(#393)" }, { "data": "add per Route content encoding setting (overrides container setting) v2.8.0 add Request.QueryParameters() add json-iterator (via build tag) disable vgo module (until log is moved) v2.7.1 add vgo module v2.6.1 add JSONNewDecoderFunc to allow custom JSON Decoder usage (go 1.10+) v2.6.0 Make JSR 311 routing and path param processing consistent Adding description to RouteBuilder.Reads() Update example for Swagger12 and OpenAPI 2017-09-13 added route condition functions using `.If(func)` in route building. 2017-02-16 solved issue #304, make operation names unique 2017-01-30 [IMPORTANT] For swagger users, change your import statement to: swagger \"github.com/emicklei/go-restful-swagger12\" moved swagger 1.2 code to go-restful-swagger12 created TAG 2.0.0 2017-01-27 remove defer request body close expose Dispatch for testing filters and Routefunctions swagger response model cannot be array created TAG 1.0.0 2016-12-22 (API change) Remove code related to caching request content. Removes SetCacheReadEntity(doCache bool) 2016-11-26 Default change! now use CurlyRouter (was RouterJSR311) Default change! no more caching of request content Default change! do not recover from panics 2016-09-22 fix the DefaultRequestContentType feature 2016-02-14 take the qualify factor of the Accept header mediatype into account when deciding the contentype of the response add constructors for custom entity accessors for xml and json 2015-09-27 rename new WriteStatusAnd... to WriteHeaderAnd... for consistency 2015-09-25 fixed problem with changing Header after WriteHeader (issue 235) 2015-09-14 changed behavior of WriteHeader (immediate write) and WriteEntity (no status write) added support for custom EntityReaderWriters. 2015-08-06 add support for reading entities from compressed request content use sync.Pool for compressors of http response and request body add Description to Parameter for documentation in Swagger UI 2015-03-20 add configurable logging 2015-03-18 if not specified, the Operation is derived from the Route function 2015-03-17 expose Parameter creation functions make trace logger an interface fix OPTIONSFilter customize rendering of ServiceError JSR311 router now handles wildcards add Notes to Route 2014-11-27 (api add) PrettyPrint per response. (as proposed in #167) 2014-11-12 (api add) ApiVersion(.) for documentation in Swagger UI 2014-11-10 (api change) struct fields tagged with \"description\" show up in Swagger UI 2014-10-31 (api change) ReturnsError -> Returns (api add) RouteBuilder.Do(aBuilder) for DRY use of RouteBuilder fix swagger nested structs sort Swagger response messages by code 2014-10-23 (api add) ReturnsError allows you to document Http codes in swagger fixed problem with greedy CurlyRouter (api add) Access-Control-Max-Age in CORS add tracing functionality (injectable) for debugging purposes support JSON parse 64bit int fix empty parameters for swagger WebServicesUrl is now optional for swagger fixed duplicate AccessControlAllowOrigin in CORS (api change) expose ServeMux in container (api add) added AllowedDomains in CORS (api add) ParameterNamed for detailed documentation 2014-04-16 (api add) expose constructor of Request for" }, { "data": "2014-06-27 (api add) ParameterNamed gives access to a Parameter definition and its data (for further specification). (api add) SetCacheReadEntity allow scontrol over whether or not the request body is being cached (default true for compatibility reasons). 
2014-07-03 (api add) CORS can be configured with a list of allowed domains 2014-03-12 (api add) Route path parameters can use wildcard or regular expressions. (requires CurlyRouter) 2014-02-26 (api add) Request now provides information about the matched Route, see method SelectedRoutePath 2014-02-17 (api change) renamed parameter constants (go-lint checks) 2014-01-10 (api add) support for CloseNotify, see http://golang.org/pkg/net/http/#CloseNotifier 2014-01-07 (api change) Write* methods in Response now return the error or nil. added example of serving HTML from a Go template. fixed comparing Allowed headers in CORS (is now case-insensitive) 2013-11-13 (api add) Response knows how many bytes are written to the response body. 2013-10-29 (api add) RecoverHandler(handler RecoverHandleFunction) to change how panic recovery is handled. Default behavior is to log and return a stacktrace. This may be a security issue as it exposes sourcecode information. 2013-10-04 (api add) Response knows what HTTP status has been written (api add) Request can have attributes (map of string->interface, also called request-scoped variables 2013-09-12 (api change) Router interface simplified Implemented CurlyRouter, a Router that does not use|allow regular expressions in paths 2013-08-05 add OPTIONS support add CORS support 2013-08-27 fixed some reported issues (see github) (api change) deprecated use of WriteError; use WriteErrorString instead 2014-04-15 (fix) v1.0.1 tag: fix Issue 111: WriteErrorString 2013-08-08 (api add) Added implementation Container: a WebServices collection with its own http.ServeMux allowing multiple endpoints per program. Existing uses of go-restful will register their services to the DefaultContainer. (api add) the swagger package has be extended to have a UI per container. if panic is detected then a small stack trace is printed (thanks to runner-mei) (api add) WriteErrorString to Response Important API changes: (api remove) package variable DoNotRecover no longer works ; use restful.DefaultContainer.DoNotRecover(true) instead. (api remove) package variable EnableContentEncoding no longer works ; use restful.DefaultContainer.EnableContentEncoding(true) instead. 2013-07-06 (api add) Added support for response encoding (gzip and deflate(zlib)). This feature is disabled on default (for backwards compatibility). Use restful.EnableContentEncoding = true in your initialization to enable this feature. 2013-06-19 (improve) DoNotRecover option, moved request body closer, improved ReadEntity 2013-06-03 (api change) removed Dispatcher interface, hide PathExpression changed receiver names of type functions to be more idiomatic Go 2013-06-02 (optimize) Cache the RegExp compilation of Paths. 2013-05-22 (api add) Added support for request/response filter functions 2013-05-18 (api add) Added feature to change the default Http Request Dispatch function (travis cline) (api change) Moved Swagger Webservice to swagger package (see example restful-user) [2012-11-14 .. 2013-05-18> See https://github.com/emicklei/go-restful/commits 2012-11-14 Initial commit" } ]
{ "category": "Runtime", "file_name": "coordinator.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | |||-|-|--|-| | C00001 | coordinator in tuneMode: underlay works well | p1 | smoke | done | | | C00002 | coordinator in tuneMode: overlay works well | p1 | smoke | done | | | C00003 | coordinator in tuneMode: underlay with two NIC | p1 | smoke | done | | | C00004 | coordinator in tuneMode: overlay with two NIC | p1 | smoke | done | | | C00005 | In overlay mode: specify the NIC (net1) where the default route is located, use 'ip r get 8.8.8.8' to see if default route nic is the specify NIC | p2 | | done | | | C00006 | In underlay mode: specify the NIC (net1) where the default route is located, use 'ip r get 8.8.8.8' to see if default route nic is the specify NIC | p2 | | done | | | C00007 | ip conflict detection (ipv4, ipv6) | p2 | | done | | | C00008 | override pod mac prefix | p2 | | done | | | C00009 | gateway connection detection | p2 | | done | | | C00010 | auto clean up the dirty rules(routing\\neighborhood) while pod starting | p2 | | done | | | C00011 | In the default scenario (Do not specify the NIC where the default route is located in any way) , use 'ip r get 8.8.8.8' to see if default route NIC is `eth0` | p2 | | done | | | C00012 | In multi-nic case , use 'ip r get <service_subnet> and <hostIP>' to see if src is from pod's eth0, note: only for ipv4. | p2 | | done | | | C00013 | Support `spec.externalTrafficPolicy` for service in Local mode, it works well | p2 | | done | | | C00014 | Specify the NIC of the default route, but the NIC does not exist | p3 | | done | | | C00015 | In multi-NIC mode, whether the NIC name is random and pods are created normally | p3 | | done | | | C00016 | The table name can be customized by hostRuleTable | p3 | | done | | | C00017 | TunePodRoutes If false, no routing will be coordinated | p3 | | done | | | C00018 | The conflict IPs for stateless Pod should be released | p3 | | done | | | C00019 | The conflict IPs for stateful Pod should not be released | p3 | | done | |" } ]
{ "category": "Runtime", "file_name": "20230807-backingimage-backup-support.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "This feature enables Longhorn to backup the BackingImage to backup store and restore it. When a Volume with a BackingImage being backed up, the BackingImage will also be backed up. User can manually back up the BackingImage. When restoring a Volume with a BackingImage, the BackingImage will also be restored. User can manually restore the BackingImage. All BackingImages are backed up in blocks. If the block contains the same data, BackingImages will reuse the same block in backup store instead of uploading another identical one. With this feature, there is no need for user to manually handle BackingImage across cluster when backing up and restoring the Volumes with BackingImages. Before this feature: The BackingImage will not be backed up automatically when backing up a Volume with the BackingImage. So the user needs to prepare the BackingImage again in another cluster before restoring the Volume back. After this feature: A BackingImage will be backed up automatically when a Volume with the BackingImage is being backed up. User can also manually back up a BackingImage independently. Then, when the Volume with the BackingImage is being restored from backup store, Longhorn will restore the BackingImage at the same time automatically. User can also manually restore the BackingImage independently. This improve the user experience and reduce the operation overhead. Backup `BackingImage` is not the same as backup `Volume` which consists of a series of `Snapshots`. Instead, a `BackingImage` already has all the blocks we need to backup. Therefore, we don't need to find the delta between two `BackingImages` like what we do for`Snapshots` which delta might exist in other `Snapshots` between the current `Snapshot` and the last backup `Snapshot`. All the `BackingImages` share the same block pools in backup store, so we can reuse the blocks to increase the backup speed and save the space. This can happen when user create v1 `BackingImage`, use the image to add more data and then export another v2 `BackingImage`. For restoration, we still restore fully on one of the ready disk. Different from `Volume` backup, `BackingImage` does not have any size limit. It can be less than 2MB or not a multiple of 2MB. Thus, the last block might not be 2MB. When backing up `BackingImage` `preload()`: the BackingImage to get the all the sectors that have data inside. `createBackupBackingMapping()`: to get all the blocks we need to backup Block: offset + size (2MB for each block, last block might less than 2MB) `backupMappings()`: write the block to the backup store if the block is already in the backup store, skip it. `saveBackupBacking()`: save the metadata of the `BackupBackingImage` including the block mapping to the backup store. Mapping needs to include block size. When restoring `BackingImage` `loadBackupBacking()`: load the metadata of the `BackupBackingImage` from the backup store `populateBlocksForFullRestore() + restoreBlocks()`: based on the mapping, write the block data to the correct offset. We backup the blocks in async way to increase the backup speed. For qcow2 `BackingImage`, the format is not the same as raw file, we can't detect the hole and the data sector. So we back up all the blocks. 
Add a new CRD `backupbackingimage.longhorn.io` ```go type BackupBackingImageSpec struct { SyncRequestedAt" }, { "data": "`json:\"syncRequestedAt\"` UserCreated bool `json:\"userCreated\"` Labels map[string]string `json:\"labels\"` } type BackupBackingImageStatus struct { OwnerID string `json:\"ownerID\"` Checksum string `json:\"checksum\"` URL string `json:\"url\"` Size string `json:\"size\"` Labels map[string]string `json:\"labels\"` State BackupBackingImageState `json:\"state\"` Progress int `json:\"progress\"` Error string `json:\"error,omitempty\"` Messages map[string]string `json:\"messages\"` ManagerAddress string `json:\"managerAddress\"` BackupCreatedAt string `json:\"backupCreatedAt\"` LastSyncedAt metav1.Time `json:\"lastSyncedAt\"` CompressionMethod BackupCompressionMethod `json:\"compressionMethod\"` } ``` ```go type BackupBackingImageState string const ( BackupBackingImageStateNew = BackupBackingImageState(\"\") BackupBackingImageStatePending = BackupBackingImageState(\"Pending\") BackupBackingImageStateInProgress = BackupBackingImageState(\"InProgress\") BackupBackingImageStateCompleted = BackupBackingImageState(\"Completed\") BackupBackingImageStateError = BackupBackingImageState(\"Error\") BackupBackingImageStateUnknown = BackupBackingImageState(\"Unknown\") ) ``` Field `Spec.UserCreated` indicates whether this Backup is created by user to create the backup in backupstore or it is synced from backupstrore. Field `Status.ManagerAddress` indicates the address of the backing-image-manager running BackingImage backup. Field `Status.Checksum` records the checksum of the BackingImage. Users may create a new BackingImage with the same name but different content after deleting an old one or there is another BackingImage with the same name in another cluster. To avoid the confliction, we use checksum to check if they are the same. If cluster already has the `BackingImage` with the same name as in the backup store, we still create the `BackupBackingImage` CR. User can use the checksum to check if they are the same. Therefore we don't use `UUID` across cluster since user might already prepare the same BackingImage with the same name and content in another cluster. Add a new controller `BackupBackingImageController`. Workflow Check and update the ownership. Do cleanup if the deletion timestamp is set. Cleanup the backup `BackingImage` on backup store Stop the monitoring If `Status.LastSyncedAt.IsZero() && Spec.BackingImageName != \"\"` means it is created by the User/API layer, we need to do the backup Start the monitor Pick one `BackingImageManager` Request `BackingImageManager` to backup the `BackingImage` by calling `CreateBackup()` grpc Else it means the `BackupBackingImage` CR is created by `BackupTargetController` and the backup `BackingImage` already exists in the remote backup target before the CR creation. 
Use `backupTargetClient` to get the info of the backup `BackingImage` Sync the status In `BackingImageManager - manager(backing_image.go)` Implement `CreateBackup()` grpc Backup `BackingImage` to backup store in blocks In controller `BackupTargetController` Workflow Implement `syncBackupBackingImage()` function Create the `BackupBackingImage` CRs whose name are in the backup store but not in the cluster Delete the `BackupBackingImage` CRs whose name are in the cluster but not in the backup store Request `BackupBackingImageController` to reconcile those `BackupBackingImage` CRs Add a backup API for `BackingImage` Add new action `backup` to `BackingImage` (`\"/v1/backingimages/{name}\"`) create `BackupBackingImage` CR to init the backup process if `BackupBackingImage` already exists, it means there is already a `BackupBackingImage` in backup store, user can check the checksum to verify if they are the same. API Watch: establish a streaming connection to report BackupBackingImage info. Trigger Back up through `BackingImage` operation manually Back up `BackingImage` when user back up the volume in `SnapshotBackup()` API we get the `BackingImage` of the `Volume` back up `BackingImage` if the `BackupBackingImage` does not exist Add new data source type `restore` for `BackingImageDataSource` ```go type BackingImageDataSourceType string const ( BackingImageDataSourceTypeDownload = BackingImageDataSourceType(\"download\") BackingImageDataSourceTypeUpload = BackingImageDataSourceType(\"upload\") BackingImageDataSourceTypeExportFromVolume = BackingImageDataSourceType(\"export-from-volume\") BackingImageDataSourceTypeRestore = BackingImageDataSourceType(\"restore\") DataSourceTypeRestoreParameterBackupURL = \"backup-url\" ) // BackingImageDataSourceSpec defines the desired state of the Longhorn backing image data source type BackingImageDataSourceSpec struct { NodeID string `json:\"nodeID\"` UUID string `json:\"uuid\"` DiskUUID string `json:\"diskUUID\"` DiskPath string `json:\"diskPath\"` Checksum string `json:\"checksum\"` SourceType BackingImageDataSourceType `json:\"sourceType\"` Parameters map[string]string `json:\"parameters\"` FileTransferred bool `json:\"fileTransferred\"` } ``` Create BackingImage APIs No need to change Create BackingImage CR with `type=restore` and `restore-url=${URL}` If BackingImage already exists in the cluster, user can use checksum to verify if they are the" }, { "data": "In `BackingImageController` No need to change, it will create the `BackingImageDataSource` CR In `BackingImageDataSourceController` No need to change, it will create the `BackingImageDataSourcePod` to do the restore. In `BackingImageManager - data_source` When init the service, if the type is `restore`, then restore from `backup-url` by requesting sync service in the same pod. ```go requestURL := fmt.Sprintf(\"http://%s/v1/files\", client.Remote) req, err := http.NewRequest(\"POST\", requestURL, nil) q := req.URL.Query() q.Add(\"action\", \"restoreFromBackupURL\") q.Add(\"url\", backupURL) q.Add(\"file-path\", filePath) q.Add(\"uuid\", uuid) q.Add(\"disk-uuid\", diskUUID) q.Add(\"expected-checksum\", expectedChecksum) ```` In `sync/service` implement `restoreFromBackupURL()` to restore the `BackingImage` from backup store to the local disk. In `BackingImageDataSourceController` No need to change, it will take over control when `BackingImageDataSource` status is `ReadyForTransfer`. 
If restoring the `BackingImage` fails, the status of the `BackingImage` will be failed, and the `BackingImageDataSourcePod` will be cleaned up and retried with a backoff limit, like `type=download`. The process is the same as any other `BackingImage` creation process. Trigger Restore through a `BackingImage` operation manually Restore when the user restores the `Volume` with a `BackingImage` Restoring a Volume is actually requesting to `Create` a Volume with `fromBackup` in the spec In the `Create()` API we check if the `Volume` has `fromBackup` parameters and has a `BackingImage` Check if the `BackingImage` exists Check and restore the `BackupBackingImage` if the `BackingImage` does not exist Restore the `BackupBackingImage` by creating a `BackingImage` with type `restore` and `backupURL` Then create the `Volume` CR so the admission webhook won't fail because of a missing `BackingImage` () Restore when the user creates a `Volume` through `CSI` In `CreateVolume()` we check if the `Volume` has `fromBackup` parameters and has a `BackingImage` In `checkAndPrepareBackingImage()`, we restore the `BackupBackingImage` by creating a `BackingImage` with type `restore` and `backupURL` `longhorn-ui`: Add a new page for `BackupBackingImage` like `Backup` The columns on the `BackupBackingImage` list page should be: `Name`, `Size`, `State`, `Created At`, `Operation`. `Name` can be clicked and will show the `Checksum` of the `BackupBackingImage` `State`: `BackupBackingImageState` of the `BackupBackingImage` CR `Operation` includes `restore` `delete` Add a new operation `backup` for every `BackingImage` on the `BackingImage` page `API`: Add new action `backup` to `BackingImage` (`\"/v1/backingimages/{name}\"`): create a `BackupBackingImage` CR to init the backup process `BackupBackingImage` `GET \"/v1/backupbackingimages\"`: get all `BackupBackingImage` API Watch: establish a streaming connection to report `BackupBackingImage` info changes. 
Integration tests `BackupBackingImage` Basic Operation Setup Create a `BackingImage` ``` apiVersion: longhorn.io/v1beta2 kind: BackingImage metadata: name: parrot namespace: longhorn-system spec: sourceType: download sourceParameters: url: https://longhorn-backing-image.s3-us-west-1.amazonaws.com/parrot.raw checksum: 304f3ed30ca6878e9056ee6f1b02b328239f0d0c2c1272840998212f9734b196371560b3b939037e4f4c2884ce457c2cbc9f0621f4f5d1ca983983c8cdf8cd9a ``` Setup the backup target Back up `BackingImage` by applying the yaml yaml ```yaml apiVersion: longhorn.io/v1beta2 kind: BackupBackingImage metadata: name: parrot namespace: longhorn-system spec: userCreated: true labels: usecase: test type: raw ``` `BackupBackingImage` CR should be complete You can get the backup URL from `Status.URL` Delete the `BackingImage` in the cluster Restore the `BackupBackingImage` by applying the yaml ```yaml apiVersion: longhorn.io/v1beta2 kind: BackingImage metadata: name: parrot-restore namespace: longhorn-system spec: sourceType: restore sourceParameters: backup-url: s3://backupbucket@us-east-1/?backingImage=parrot concurrent-limit: \"2\" checksum: 304f3ed30ca6878e9056ee6f1b02b328239f0d0c2c1272840998212f9734b196371560b3b939037e4f4c2884ce457c2cbc9f0621f4f5d1ca983983c8cdf8cd9a ``` Checksum should be the same Back up `BackingImage` when backing up and restoring Volume Setup Create a `BackingImage` Setup the backup target Create a Volume with the `BackingImage` Back up the `Volume` `BackupBackingImage` CR should be created and complete Delete the `BackingImage` Restore the Volume with same `BackingImage` `BackingImage` should be restored and the `Volume` should also be restored successfully `Volume` checksum is the same Manual tests `BackupBackingImage` reuse blocks Setup Create a `BackingImage` A Setup the backup target Create a `Volume` with `BackingImage` A, write some data and export to another `BackingImage` B Back up `BackingImage` A Back up `BackingImage` B Check it reuses the blocks when backing up `BackingImage` B (by trace log)" } ]
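To make the verification steps in the tests above easy to run from the command line, the sketch below uses only generic kubectl verbs against the CRDs defined in this proposal; the resource names (`parrot`, `parrot-restore`) and the `longhorn-system` namespace come from the examples above, and the exact printer columns depend on the Longhorn version.

```bash
# Inspect the BackupBackingImage created for "parrot"; its status carries the
# state, progress, URL and checksum fields described in this proposal.
kubectl -n longhorn-system get backupbackingimages.longhorn.io parrot -o yaml

# After restoring, inspect the restored BackingImage and compare its checksum
# with the one recorded on the BackupBackingImage.
kubectl -n longhorn-system get backingimages.longhorn.io parrot-restore -o yaml
```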
{ "category": "Runtime", "file_name": "seccomp.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Seccomp filters are used by default to limit the host system calls Firecracker can use. The default filters only allow the bare minimum set of system calls and parameters that Firecracker needs in order to function correctly. The filters are loaded in the Firecracker process, on a per-thread basis, as follows: VMM (main) - right before executing guest code on the VCPU threads; API - right before launching the HTTP server; VCPUs - right before executing guest code. Note: On experimental GNU targets, there are no default seccomp filters installed, since they are not intended for production use. Firecracker uses JSON files for expressing the filter rules and relies on the tool for all the seccomp functionality. At build time, the default target-specific JSON file is compiled into the serialized binary file, using seccompiler-bin, and gets embedded in the Firecracker binary. This process is performed automatically, when building the executable. To minimise the overhead of succesive builds, the compiled filter file is cached in the build folder and is only recompiled if modified. You can find the default seccomp filters under `resources/seccomp`. For a certain release, the default JSON filters used to build Firecracker are also included in the respective release archive, viewable on the . Note 1: This feature overrides the default filters and can be dangerous. Filter misconfiguration can result in abruptly terminating the process or disabling the seccomp security boundary altogether. We recommend using the default filters instead. Note 2: The user is fully responsible for managing the filter files. We recommend using integrity checks whenever transferring/downloading files, for example checksums, as well as for the Firecracker binary or other artifacts, in order to mitigate potential man-in-the-middle attacks. Firecracker exposes a way for advanced users to override the default filters with fully customisable alternatives, leveraging the same JSON/seccompiler tooling, at startup time. Via Firecracker's optional `--seccomp-filter` parameter, one can supply the path to a custom filter file compiled with seccompiler-bin. Potential use cases: Users of experimentally-supported targets (like GNU libc builds) may be able to use this feature to implement seccomp filters without needing to have a custom build of Firecracker. Faced with a theoretical production issue, due to a syscall that was issued by the Firecracker process, but not allowed by the seccomp policy, one may use a custom filter in order to quickly mitigate the issue. This can speed up the resolution time, by not needing to build and deploy a new Firecracker binary. However, as the note above states, this needs to be thoroughly tested and should not be a long-term solution. Firecracker also has support for a `--no-seccomp` parameter, which disables all seccomp filtering. It can be helpful when quickly prototyping changes in Firecracker that use new system calls. Do not use in production." } ]
{ "category": "Runtime", "file_name": "upgrade-to-1.1.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Upgrading to Velero 1.1\" layout: docs installed. Velero v1.1 only requires user intervention if Velero is running in a namespace other than `velero`. Previous versions of Velero's server detected the namespace in which it was running by inspecting the container's filesystem. With v1.1, this is no longer the case, and the server must be made aware of the namespace it is running in with the `VELERO_NAMESPACE` environment variable. `velero install` automatically writes this for new deployments, but existing installations will need to add the environment variable before upgrading. You can use the following command to patch the deployment: ```bash kubectl patch deployment/velero -n <YOUR_NAMESPACE> \\ --type='json' \\ -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/env/0\",\"value\":{\"name\":\"VELERO_NAMESPACE\", \"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}}]' ```" } ]
{ "category": "Runtime", "file_name": "instances.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "(instances)= ```{toctree} :titlesonly: explanation/instances.md Create instances <howto/instances_create.md> Manage instances <howto/instances_manage.md> Configure instances <howto/instances_configure.md> Back up instances <howto/instances_backup.md> Use profiles <profiles.md> Use cloud-init <cloud-init> Run commands <instance-exec.md> Access the console <howto/instances_console.md> Access files <howto/instancesaccessfiles.md> Add a routed NIC to a VM </howto/instancesroutednic_vm.md> Troubleshoot errors <howto/instances_troubleshoot.md> explanation/instance_config.md Container environment <container-environment> migration ```" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Container Storage Interface (CSI)", "subcategory": "Cloud Native Storage" }
[ { "data": "This document outlines some of the requirements and conventions for contributing to the Container Storage Interface, including development workflow, commit message formatting, contact points, and other resources to make it easier to get your contribution accepted. CSI is under and accepts contributions via GitHub pull requests. Before contributing to the Container Storage Interface, contributors MUST sign the CLA available . The CLA MAY be signed on behalf of a company, possibly covering multiple contributors, or as an individual (put \"Individual\" for \"Corporation name\"). The completed CLA MUST be mailed to the CSI Approvers mailing list: container-storage-interface-approvers@googlegroups.com. To keep consistency throughout the Markdown files in the CSI spec, all files should be formatted one sentence per line. This fixes two things: it makes diffing easier with git and it resolves fights about line wrapping length. For example, this paragraph will span three lines in the Markdown source. This also applies to the code snippets in the markdown files. Please wrap the code at 72 characters. This also applies to the code snippets in the markdown files. End each sentence within a comment with a punctuation mark (please note that we generally prefer periods); this applies to incomplete sentences as well. For trailing comments, leave one space between the end of the code and the beginning of the comment. The \"system of record\" for the specification is the `spec.md` file and all hand-edits of the specification should happen there. DO NOT manually edit the generated protobufs or generated language bindings. Once changes to `spec.md` are complete, please run `make` to update generated files. IMPORTANT: Prior to committing code please run `make` to ensure that your specification changes have landed in all generated files. Each commit should represent a single logical (atomic) change: this makes your changes easier to review. Try to avoid unrelated cleanups (e.g., typo fixes or style nits) in the same commit that makes functional changes. While typo fixes are great, including them in the same commit as functional changes makes the commit history harder to read. Developers often make incremental commits to save their progress when working on a change, and then rewrite history (e.g., using `git rebase -i`) to create a clean set of commits once the change is ready to be reviewed. Simple house-keeping for clean git history. Read more on or the Discussion section of . Separate the subject from body with a blank line. Limit the subject line to 50 characters. Capitalize the subject line. Do not end the subject line with a period. Use the imperative mood in the subject line. Wrap the body at 72 characters. Use the body to explain what and why vs. how. If there was important/useful/essential conversation or information, copy or include a reference. When possible, one keyword to scope the change in the subject (i.e. \"README: ...\", \"tool: ...\")." } ]
{ "category": "Runtime", "file_name": "move-gh-org.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, the Velero repository sits under the Heptio GitHub organization. With the acquisition of Heptio by VMware, it is due time that this repo moves to one of the VMware GitHub organizations. This document outlines a plan to move this repo to the VMware Tanzu (https://github.com/vmware-tanzu) organization. List all steps necessary to have this repo fully functional under the new org Highlight any step necessary around setting up the new organization and its members [ ] PR: Blog post communicating the move. https://github.com/heptio/velero/issues/1841. Who: TBD. [ ] PR: Find/replace in all Go, script, yaml, documentation, and website files: `github.com/heptio/velero -> github.com/vmware-tanzu/velero`. Who: a Velero developer; TBD [ ] PR: Update website with the correct GH links. Who: a Velero developer; TBD [ ] PR: Change deployment and grpc-push scripts with the new location path. Who: a Velero developer; TBD [ ] Delete branches not to be carried over (https://github.com/heptio/velero/branches/all). Who: Any of the current repo owners; TBD [ ] Use GH UI to transfer the repository to the VMW org; must be accepted within a day. Who: new org owner; TBD [ ] Make owners of this repo owners of repo in the new org. Who: new org owner; TBD [ ] Update Travis CI. Who: Any of the new repo owners; TBD [ ] Add DCO for signoff check (https://probot.github.io/apps/dco/). Who: Any of the new repo owners; TBD [ ] Each individual developer should point their origin to the new location: `git remote set-url origin git@github.com:vmware-tanzu/velero.git` [ ] Transfer ZenHub. Who: Any of the new repo owners; TBD [ ] Update Netlify deploy settings. Any of the new repo owners; TBD [ ] GH app: Netlify integration. Who: Any of the new repo owners; TBD [ ] GH app: Slack integration. Who: Any of the new repo owners; TBD [ ] Add webhook: travis CI. Who: Any of the new repo owners; TBD [ ] Add webhook: zenhub. Who: Any of the new repo owners; TBD [ ] Move all 3 native provider plugins into their own individual repo. https://github.com/heptio/velero/issues/1537. Who: @carlisia. [ ] Merge PRs from the \"pre move\" section [ ] Create a team for the Velero core members" }, { "data": "Who: Any of the new repo owners; TBD All action items needed for the repo transfer are listed in the Todo list above. For details about what gets moved and other info, this is the GH documentation: https://help.github.com/en/articles/transferring-a-repository [Pending] We will find out this week who will be the organization owner(s) who will accept this transfer in the new GH org. This organization owner will make all current owners in this repo owners in the new org Velero repo. Someone with owner permission on the new repository needs to go to their Travis CI account and authorize Travis CI on the repo. Here are instructions: https://docs.travis-ci.com/user/tutorial/. After this, webhook notifications can be added following these instructions: https://docs.travis-ci.com/user/notifications/#configuring-webhook-notifications. Pre-requisite: A new Zenhub account must exist for a vmware or vmware-tanzu organization. This page contains a pre-migration checklist for ensuring a repo migration goes well with Zenhub: https://help.zenhub.com/support/solutions/articles/43000010366-moving-a-repo-cross-organization-or-to-a-new-organization. After this, webhooks can be added by following these instructions: https://github.com/ZenHubIO/API#webhooks. 
The settings for Netlify should remain the same, except that it now needs to be installed in the new repo. The instructions on how to install Netlify on the new repo are here: https://www.netlify.com/docs/github-permissions/. [Pending] We will find out this week how this move will be communicated to the community. In particular, the Velero repository move might be tied to the move of our provider plugins into their own repos, also in the new org: https://github.com/heptio/velero/issues/1814. Many items on the todo list must be done by a repository member with owner permission. This doesn't all need to be done by the same person, obviously, but we should specify if @skriss wants to split these tasks with any other owner(s). Might want to exclude updating documentation prior to v1.0.0. GH documentation does not specify if branches on the server are also moved. All links to the original repository location are automatically redirected to the new location. Alternatives such as moving Velero to its own organization, or even not moving at all, were considered. Collectively, however, the open source leadership decided it would be best to move it so it lives alongside other VMware-supported cloud native repositories. Ensure that only the Velero core team has maintainer/owner privileges." } ]
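For the import-path find/replace item in the todo list above, one possible sketch (assuming GNU sed and a clean working tree; review the diff before opening the PR):

```bash
# Rewrite the repository path in Go sources, scripts, YAML, and docs.
git grep -l 'github.com/heptio/velero' -- '*.go' '*.sh' '*.yaml' '*.yml' '*.md' \
  | xargs sed -i 's|github.com/heptio/velero|github.com/vmware-tanzu/velero|g'

# Inspect the result before committing.
git diff --stat
```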
{ "category": "Runtime", "file_name": "Install-Alluxio-Cluster-with-HA.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Install Alluxio Cluster with HA An Alluxio cluster with High Availability (HA) is achieved by running multiple Alluxio master processes on different nodes in the system. One master is elected as the leading master which serves all workers and clients as the primary point of contact. The other masters act as standby masters and maintain the same file system state as the leading master by reading a shared journal. Standby masters do not serve any client or worker requests; however, if the leading master fails, one standby master will automatically be elected as the new leading master. Once the new leading master starts serving, Alluxio clients and workers proceed as usual. During the failover to a standby master, clients may experience brief delays or transient errors. The major challenges to achieving high-availability are maintaining a shared file system state across service restarts and maintaining consensus among masters about the identity of the leading master after failover. : Uses an internal replicated state machine based on the to both store the file system journal and run leader elections. This approach is introduced in Alluxio 2.0 and requires no dependency on external services. To deploy an Alluxio cluster, first the pre-compiled Alluxio binary file, extract the tarball and copy the extracted directory to all nodes (including nodes running masters and workers). Enable SSH login without password from all master nodes to all worker nodes. You can add a public SSH key for the host into `~/.ssh/authorized_keys`. See for more details. TCP traffic across all nodes is allowed. For basic functionality make sure RPC port (default :19998) is open on all nodes. Alluxio admins can create and edit the properties file `conf/alluxio-site.properties` to configure Alluxio masters or workers. If this file does not exist, it can be copied from the template file under `${ALLUXIO_HOME}/conf`: ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Make sure that this file is distributed to `${ALLUXIO_HOME}/conf` on every Alluxio master and worker before starting the cluster. Restarting Alluxio processes is the safest way to ensure any configuration updates are applied. The minimal configuration to set up a HA cluster is to give the embedded journal addresses to all nodes inside the cluster. On each Alluxio node, copy the `conf/alluxio-site.properties` configuration file and add the following properties to the file: ```properties alluxio.master.hostname=<MASTER_HOSTNAME> # Only needed on master node alluxio.master.embedded.journal.addresses=<EMBEDDEDJOURNALADDRESS> ``` Explanation: The first property `alluxio.master.hostname=<MASTER_HOSTNAME>` is required on each master node to be its own externally visible hostname. This is required on each individual component of the master quorum to have its own address set. On worker nodes, this parameter will be ignored. Examples include `alluxio.master.hostname=1.2.3.4`, `alluxio.master.hostname=node1.a.com`. The second property `alluxio.master.embedded.journal.addresses` sets the sets of masters to participate Alluxio's internal leader election and determine the leading master. The default embedded journal port is `19200`. An example: `alluxio.master.embedded.journal.addresses=masterhostname1:19200,masterhostname2:19200,masterhostname3:19200` Note that embedded journal feature relies on which uses leader election based on the Raft protocol and has its own format for storing journal entries. 
Enabling embedded journal enables Alluxio's internal leader election. See for more details and alternative ways to set up HA cluster with internal leader election. Before Alluxio can be started for the first time, the Alluxio master journal and worker storage must be formatted. Formatting the journal will delete all metadata from Alluxio. Formatting the worker storage will delete all data from the configured Alluxio storage. However, the data in under storage will be" }, { "data": "On all the Alluxio master nodes, list all the worker hostnames in the `conf/workers` file, and list all the masters in the `conf/masters` file. This will allow alluxio scripts to run operations on the cluster nodes. `init format` Alluxio cluster with the following command in one of the master nodes: ```shell $ ./bin/alluxio init format ``` In one of the master nodes, start the Alluxio cluster with the following command: ```shell $ ./bin/alluxio process start all ``` This will start Alluxio masters on all the nodes specified in `conf/masters`, and start the workers on all the nodes specified in `conf/workers`. To verify that Alluxio is running, you can visit the web UI of the leading master. To determine the leading master, run: ```shell $ ./bin/alluxio info report ``` Then, visit `http://<LEADER_HOSTNAME>:19999` to see the status page of the Alluxio leading master. Alluxio comes with a simple program that writes and reads sample files in Alluxio. Run the sample program with: ```shell $ ./bin/alluxio exec basicIOTest ``` When an application interacts with Alluxio in HA mode, the client must know about the connection information of Alluxio HA cluster, so that the client knows how to discover the Alluxio leading master. The following sections list three ways to specify the HA Alluxio service address on the client side. Users can pre-configure the service address of an Alluxio HA cluster in environment variables or site properties, and then connect to the service using an Alluxio URI such as `alluxio:///path`. For example, with Alluxio connection information in `core-site.xml` of Hadoop, Hadoop CLI can connect to the Alluxio cluster. ```shell $ hadoop fs -ls alluxio:///directory ``` Depending on the different approaches to achieve HA, different properties are required: If using embedded journal, set `alluxio.master.rpc.addresses`. ```properties alluxio.master.rpc.addresses=masterhostname1:19998,masterhostname2:19998,masterhostname3:19998 ``` Or specify the properties in Java option. For example, for Spark applications, add the following to `spark.executor.extraJavaOptions` and `spark.driver.extraJavaOptions`: ```properties -Dalluxio.master.rpc.addresses=masterhostname1:19998,masterhostname2:19998,masterhostname3:19998 ``` Users can also fully specify the HA cluster information in the URI to connect to an Alluxio HA cluster. Configuration derived from the HA authority takes precedence over all other forms of configuration, e.g. site properties or environment variables. When using embedded journal, use `alluxio://masterhostname1:19998, masterhostname2:19998,masterhostname3:19998/path` For many applications (e.g., Hadoop, Hive and Flink), you can use a comma as the delimiter for multiple addresses in the URI, like `alluxio://masterhostname1:19998,masterhostname2:19998,masterhostname3:19998/path`. 
For some other applications (e.g., Spark) where comma is not accepted inside a URL authority, you need to use semicolons as the delimiter for multiple addresses, like `alluxio://masterhostname1:19998;masterhostname2:19998;masterhostname3:19998`. Some frameworks may not accept either of these ways to connect to a highly available Alluxio HA cluster, so Alluxio also supports connecting to an Alluxio HA cluster via a logical name. In order to use logical names, the following configuration options need to be set in your environment variables or site properties. If you are using embedded journal, you need to configure the following configuration options and connect to the highly available alluxio node via `alluxio://ebj@[logical-name]` , for example `alluxio://ebj@my-alluxio-cluster`. alluxio.master.nameservices.[logical-name] unique identifier for each alluxio master node A comma-separated ID of the alluxio master node that determine all the alluxio master nodes in the cluster. For example, if you previously used `my-alluxio-cluster` as the logical name and wanted to use `master1,master2,master3` as individual IDs for each alluxio master, you configure this as such: ```properties alluxio.master.nameservices.my-alluxio-cluster=master1,master2,master3 ``` alluxio.master.rpc.address.[logical name]. [master node ID] RPC Address for each alluxio master node For each alluxio master node previously configured, set the full address of each alluxio master node, for example: ```properties alluxio.master.rpc.address.my-alluxio-cluster.master1=master1:19998 alluxio.master.rpc.address.my-alluxio-cluster.master2=master2:19998 alluxio.master.rpc.address.my-alluxio-cluster.master3=master3:19998 ``` Below are common operations to perform on an Alluxio cluster. To stop an Alluxio service, run: ```shell $" }, { "data": "process stop all ``` This will stop all the processes on all nodes listed in `conf/workers` and `conf/masters`. You can stop just the masters and just the workers with the following commands: ```shell $ ./bin/alluxio process stop masters # stops all masters in conf/masters $ ./bin/alluxio process stop workers # stops all workers in conf/workers ``` If you do not want to use `ssh` to login to all the nodes and stop all the processes, you can run commands on each node individually to stop each component. For any node, you can stop a master or worker with: ```shell $ ./bin/alluxio process stop master # stops the local master $ ./bin/alluxio process stop worker # stops the local worker ``` Starting Alluxio is similar. If `conf/workers` and `conf/masters` are both populated, you can start the cluster with: ```shell $ ./bin/alluxio process start all ``` You can start just the masters and just the workers with the following commands: ```shell $ ./bin/alluxio process start masters # starts all masters in conf/masters $ ./bin/alluxio process start workers # starts all workers in conf/workers ``` If you do not want to use `ssh` to login to all the nodes and start all the processes, you can run commands on each node individually to start each component. For any node, you can start a master or worker with: ```shell $ ./bin/alluxio process start master # starts the local master $ ./bin/alluxio process start worker # starts the local worker ``` Adding a worker to an Alluxio cluster dynamically is as simple as starting a new Alluxio worker process, with the appropriate configuration. In most cases, the new worker's configuration should be the same as all the other workers' configuration. 
Run the following command on the new worker to add ```shell $ ./bin/alluxio process start worker # starts the local worker ``` Once the worker is started, it will register itself with the Alluxio leading master and become part of the Alluxio cluster. Removing a worker is as simple as stopping the worker process. ```shell $ ./bin/alluxio process stop worker # stops the local worker ``` Once the worker is stopped, and after a timeout on the master (configured by master parameter `alluxio.master.worker.timeout`), the master will consider the worker as \"lost\", and no longer consider it as part of the cluster. In order to add a master, the Alluxio cluster must operate in HA mode. If you are running the cluster as a single master cluster, you must configure it to be an HA cluster before having more than one master. See the for more information about adding and removing masters. In order to update the master-side configuration, you can first , update the `conf/alluxio-site.properties` file on master node, and then . Note that, this approach introduces downtime of the Alluxio service. Alternatively, one benefit of running Alluxio in HA mode is to use rolling restarts to minimize downtime when updating configurations: Update the master configuration on all the master nodes without restarting any master. Restart standby masters one by one (the cluster ). Elect a standby master as the leading master (tutorial ). Restart the old leading master that is now a standby master. Verify the configuration update. If you only need to update some local configuration for a worker (e.g., change the mount of storage capacity allocated to this worker or update the storage directory), the master node does not need to be stopped and restarted. Simply stop the desired worker, update the configuration (e.g., `conf/alluxio-site.properties`) file on that node, and then restart the process." } ]
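Putting the worker-side commands above together, a minimal sketch of applying a worker-local configuration change on a single node (run on the worker itself) looks like this:

```bash
# Stop the local worker, adjust its site properties, then bring it back up.
./bin/alluxio process stop worker
$EDITOR conf/alluxio-site.properties   # e.g. change the storage directory
./bin/alluxio process start worker
```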
{ "category": "Runtime", "file_name": "k8s_csi_interface_en.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, Curve can be connected to Kubernetes through the CSI plugin. This article gives instructions on the development of CSI plugin. For the source code of the Curve CSI plugin, please see . Curve provides a command line management tool curve, which is used to create and delete volumes and other management operations. The specific interface is as follows: create volume: `curve create [-h] --filename FILENAME --length LENGTH --user USER` delete volume: `curve delete [-h] --user USER --filename FILENAME` recover volume: `curve recover [-h] --user USER --filename FILENAME [--id ID]` extend volume: `curve extend [-h] --user USER --filename FILENAME --length LENGTH` get volume info: `curve stat [-h] --user USER --filename FILENAME` rename volume: `curve rename [-h] --user USER --filename FILENAME --newname NEWNAME` create directory: `curve mkdir [-h] --user USER --dirname DIRNAME` delete directory: `curve rmdir [-h] --user USER --dirname DIRNAME` list files in the directory`curve list [-h] --user USER --dirname DIRNAME` Provide curve-nbd tool to map, unmap, list on node: ```bash Usage: curve-nbd [options] map <image> (Map an image to nbd device) unmap <device|image> (Unmap nbd device) list-mapped (List mapped nbd devices) Map options: --device <device path> Specify nbd device path (/dev/nbd{num}) --read-only Map read-only --nbdsmax <limit> Override for module param nbdsmax --maxpart <limit> Override for module param maxpart --timeout <seconds> Set nbd request timeout --try-netlink Use the nbd netlink interface ``` CSI spec: ``` CreateVolume ++ DeleteVolume +->| CREATED +--+ | ++-^+ | | Controller | | Controller v +++ Publish | | Unpublish +++ |X| Volume | | Volume | | +-+ +v-++ +-+ | NODE_READY | ++-^+ Node | | Node Stage | | Unstage Volume | | Volume +v-++ | VOL_READY | ++-^+ Node | | Node Publish | | Unpublish Volume | | Volume +v-++ | PUBLISHED | ++ ``` In CSI plugin: CreateVolume: curve mkdir: DIRNAME defined in `k8s storageClass` curve create: FILENAME is `k8s persistentVolume name` curve stat: wait volume ready Controller Publish Volume: Nothing to do Node Stage Volume: curve-nbd list-mapped: check if it has been mounted curve-nbd map: mount Node Publish Volume: mount the stagePath to the publishPath Node Unpublish Volume: umount publishPath Node Unstage Volume: curve-nbd list-mapped: check if it has been umounted curve-nbd unmap: umount Controller Unpublish Volume: Nothing to do DeleteVolume: curve delete Other optional support: Extend ControllerExpandVolume: curve extend NodeExpandVolume: resize2fs/xfs_growfs Snapshot: not yet supported" } ]
{ "category": "Runtime", "file_name": "linstorsatelliteconfiguration.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "This resource controls the state of one or more LINSTOR satellites. Configures the desired state of satellites. Selects which nodes the LinstorSatelliteConfiguration should apply to. If empty, the configuration applies to all nodes. This example sets the `AutoplaceTarget` property to `no` on all nodes labelled `piraeus.io/autoplace: \"no\"`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: disabled-nodes spec: nodeSelector: piraeus.io/autplace: \"no\" properties: name: AutoplaceTarget value: \"no\" ``` Selects which nodes the LinstorSatelliteConfiguration should apply to. If empty, the configuration applies to all nodes. When this is used together with `.spec.nodeSelector`, both need to match in order for the configuration to apply to a node. This example sets the `AutoplaceTarget` property to `no` on all non-worker nodes: ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: disabled-nodes spec: nodeAffinity: nodeSelectorTerms: matchExpressions: key: node-role.kubernetes.io/control-plane operator: Exists properties: name: AutoplaceTarget value: \"no\" ``` Sets the given properties on the LINSTOR Satellite level. The property value can either be set directly using `value`, or inherited from the Kubernetes Node's metadata using `valueFrom`. Metadata fields are specified using the same syntax as the for Pods. In addition, setting `optional` to true means the property is only applied if the value is not empty. This is useful in case the property value should be inherited from the node's metadata This examples sets three Properties on every satellite: `PrefNic` (the preferred network interface) is always set to `default-ipv6` `Aux/example-property` (an auxiliary property, unused by LINSTOR itself) takes the value from the `piraeus.io/example` label of the Kubernetes Node. If a node has no `piraeus.io/example` label, the property value will be `\"\"`. `AutoplaceTarget` (if set to `no`, will exclude the node from LINSTOR's Autoplacer) takes the value from the `piraeus.io/autoplace` annotation of the Kubernetes Node. If a node has no `piraeus.io/autoplace` annotation, the property will not be set. ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: ipv6-nodes spec: properties: name: PrefNic value: \"default-ipv6\" name: Aux/example-property valueFrom: nodeFieldRef: metadata.labels['piraeus.io/example'] name: AutoplaceTarget valueFrom: nodeFieldRef: metadata.annotations['piraeus.io/autoplace'] optional: yes ``` Configures LINSTOR Storage Pools. Every Storage Pool needs at least a `name`, and a type. Types are specified by setting a (potentially empty) value on the matching key. Available types are: `lvmPool`: Configures a as storage pool. Defaults to using the storage pool name as the VG name. Can be overridden by setting `volumeGroup`. `lvmThinPool`: Configures a as storage pool. Defaults to using the storage pool name as name for the thin pool volume and the storage pool name prefixed by `linstor_` as the VG name. Can be overridden by setting `thinPool` and `volumeGroup`. `filePool`: Configures a file system based storage pool. Configures a host directory as location for the volume files. Defaults to using the `/var/lib/linstor-pools/<storage pool name>` directory. `fileThinPool`: Configures a file system based storage pool. Behaves the same as `filePool`, except the files will be thinly allocated on file systems that support sparse files. `zfsPool`: Configure a as storage pool. 
Defaults to using the storage pool name as name for the zpool. Can be overriden by setting `zPool`. `zfsThinPool`: Configure a as storage pool. Behaves the same as `zfsPool`, except the contained zVol will be created using sparse reservation. Optionally, you can configure LINSTOR to automatically create the backing pools. `source.hostDevices` takes a list of raw block devices, which LINSTOR will prepare as the chosen backing pool. This example configures these LINSTOR Storage Pools on all satellites: A LVM Pool named `vg1`. It will use the VG `vg1`, which needs to exist on the nodes already. A LVM Thin Pool named" }, { "data": "It will use the thin pool `vg1/thin`, which also needs to exist on the nodes. A LVM Pool named `vg2-from-raw-devices`. It will use the VG `vg2`, which will be created on demand from the raw devices `/dev/sdb` and `/dev/sdc` if it does not exist already. A File System Pool named `fs1`. It will use the `/var/lib/linstor-pools/fs1` directory on the host, creating the directory if necessary. A File System Pool named `fs2`, using sparse files. It will use the custom `/mnt/data` directory on the host. A ZFS Pool named `zfs1`. It will use ZPool `zfs1`, which needs to exist on the nodes already. A ZFS Thin Pool named `zfs2`. It will use ZPool `zfs-thin2`, which will be created on demand from the raw device `/dev/sdd`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: storage-satellites spec: storagePools: name: vg1 lvmPool: {} name: vg1-thin lvmThinPool: volumeGroup: vg1 thinPool: thin name: vg2-from-raw-devices lvmPool: volumeGroup: vg2 source: hostDevices: /dev/sdb /dev/sdc name: fs1 filePool: {} name: fs2 fileThinPool: directory: /mnt/data name: zfs1 zfsPool: {} name: zfs2 zfsThinPool: zPool: zfs-thin2 source: hostDevices: /dev/sdd ``` Configures a TLS secret used by the LINSTOR Satellites to: Validate the certificate of the LINSTOR Controller, that is the Controller must have certificates signed by `ca.crt`. Provide a server certificate for authentication by the LINSTOR Controller, that is `tls.key` and `tls.crt` must be accepted by the Controller. To configure TLS communication between Satellite and Controller, must be set accordingly. Setting a `secretName` is optional, it will default to `<node-name>-tls`, where `<node-name>` is replaced with the name of the Kubernetes Node. Optional, a reference to a can be provided to let the operator create the required secret. This example creates a manually provisioned TLS secret and references it in the LinstorSatelliteConfiguration, setting it for all nodes. ```yaml apiVersion: v1 kind: Secret metadata: name: my-node-tls namespace: piraeus-datastore data: ca.crt: LS0tLS1CRUdJT... tls.crt: LS0tLS1CRUdJT... tls.key: LS0tLS1CRUdJT... apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: satellite-tls spec: internalTLS: secretName: my-node-tls ``` This example sets up automatic creation of the LINSTOR Satellite TLS secrets using a cert-manager issuer named `piraeus-root`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: satellite-tls spec: internalTLS: certManager: kind: Issuer name: piraeus-root ``` Configures the Pod used to run the LINSTOR Satellite. The template is applied as a patch (see ) to the default resources, so it can be \"sparse\". This example configures a resource request of `cpu: 100m` on the satellite, and also enables host networking. 
```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: resource-and-host-network spec: podTemplate: spec: hostNetwork: true containers: name: linstor-satellite resources: requests: cpu: 100m ``` The given patches will be applied to all resources controlled by the operator. The patches are forwarded to `kustomize` internally, and take the . The unpatched resources are available in the . No checks are run on the result of user-supplied patches: the resources are applied as-is. Patching some fundamental aspect, such as removing a specific volume from a container may lead to a degraded cluster. This example configures the LINSTOR Satellite to use the \"TRACE\" log level, creating very verbose output. ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: all-satellites spec: patches: target: kind: ConfigMap name: satellite-config patch: |- apiVersion: v1 kind: ConfigMap metadata: name: satellite-config data: linstor_satellite.toml: | [logging] linstor_level = \"TRACE\" ``` Reports the actual state of the cluster. The Operator reports the current state of the Satellite Configuration through a set of conditions. Conditions are identified by their `type`. | `type` | Explanation | |--|--| | `Applied` | The given configuration was applied to all `LinstorSatellite` resources. |" } ]
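Since the operator reports progress through the `Applied` condition listed above, rollout of a configuration can be checked with generic kubectl features only (the resource name `storage-satellites` is taken from the storage pool example):

```bash
# Wait until the operator reports the configuration as applied to all satellites.
kubectl wait linstorsatelliteconfigurations.piraeus.io/storage-satellites \
  --for=condition=Applied --timeout=5m

# On failure, inspect the full status, including condition messages.
kubectl get linstorsatelliteconfigurations.piraeus.io/storage-satellites -o yaml
```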
{ "category": "Runtime", "file_name": "cilium-dbg_endpoint_log.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> View endpoint status log ``` cilium-dbg endpoint log <endpoint id> [flags] ``` ``` cilium endpoint log 5421 ``` ``` -h, --help help for log -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage endpoints" } ]
{ "category": "Runtime", "file_name": "manual-deploy.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Use the following command to build server, client, and related dependencies at the same time: ``` bash $ git clone https://github.com/cubeFS/cubefs.git $ cd cubefs $ make build ``` If the build is successful, the executable files `cfs-server` and `cfs-client` will be generated in the `build/bin` directory. ``` bash ./cfs-server -c master.json ``` Example `master.json`: ::: tip Recommended To ensure high availability of the service, at least 3 instances of the Master service should be started. ::: ``` json { \"role\": \"master\", \"ip\": \"127.0.0.1\", \"listen\": \"17010\", \"prof\":\"17020\", \"id\":\"1\", \"peers\": \"1:127.0.0.1:17010,2:127.0.0.2:17010,3:127.0.0.3:17010\", \"retainLogs\":\"20000\", \"logDir\": \"/cfs/master/log\", \"logLevel\":\"info\", \"walDir\":\"/cfs/master/data/wal\", \"storeDir\":\"/cfs/master/data/store\", \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"clusterName\":\"cubefs01\", \"metaNodeReservedMem\": \"1073741824\" } ``` Configuration parameters can be found in . ``` bash ./cfs-server -c metanode.json ``` Example `meta.json`: ::: tip Recommended To ensure high availability of the service, at least 3 instances of the MetaNode service should be started. ::: ``` json { \"role\": \"metanode\", \"listen\": \"17210\", \"prof\": \"17220\", \"logLevel\": \"info\", \"metadataDir\": \"/cfs/metanode/data/meta\", \"logDir\": \"/cfs/metanode/log\", \"raftDir\": \"/cfs/metanode/data/raft\", \"raftHeartbeatPort\": \"17230\", \"raftReplicaPort\": \"17240\", \"totalMem\": \"8589934592\", \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"exporterPort\": 9501, \"masterAddr\": [ \"127.0.0.1:17010\", \"127.0.0.2:17010\", \"127.0.0.3:17010\" ] } ``` For detailed configuration parameters, please refer to . ::: tip Recommended Using a separate disk as the data directory and configuring multiple disks can achieve higher performance. ::: Prepare the data directory View the machine's disk information and select the disk to be used by CubeFS ``` bash fdisk -l ``` Format the disk, it is recommended to format it as XFS ``` bash mkfs.xfs -f /dev/sdx ``` Create a mount directory ``` bash mkdir /data0 ``` Mount the disk ``` bash mount /dev/sdx /data0 ``` Start the Data Node ``` bash ./cfs-server -c datanode.json ``` Example `datanode.json`: ::: tip Recommended To ensure high availability of the service, at least 3 instances of the DataNode service should be started. ::: ``` json { \"role\": \"datanode\", \"listen\": \"17310\", \"prof\": \"17320\", \"logDir\": \"/cfs/datanode/log\", \"logLevel\": \"info\", \"raftHeartbeat\": \"17330\", \"raftReplica\": \"17340\", \"raftDir\":\"/cfs/datanode/log\", \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"exporterPort\": 9502, \"masterAddr\": [ \"127.0.0.1:17010\", \"127.0.0.1:17010\", \"127.0.0.1:17010\" ], \"disks\": [ \"/data0:10737418240\", \"/data1:10737418240\" ] } ``` For detailed configuration parameters, please refer to . ::: tip Note Optional section. If you need to use the object storage service, you need to deploy the object gateway (ObjectNode). ::: ``` bash ./cfs-server -c objectnode.json ``` Example `objectnode.json`, as follows: ``` json { \"role\": \"objectnode\", \"domains\": [ \"object.cfs.local\" ], \"listen\": \"17410\", \"masterAddr\": [ \"127.0.0.1:17010\", \"127.0.0.2:17010\", \"127.0.0.3:17010\" ], \"logLevel\": \"info\", \"logDir\": \"/cfs/Logs/objectnode\" } ``` For detailed configuration parameters, please refer to . ::: tip Note Optional section. 
If you need to use the erasure coding volume, you need to deploy it. ::: Please refer to for deployment." } ]
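The disk-preparation steps above have to be repeated for every disk listed under `disks` in `datanode.json`. Below is a minimal sketch that automates this for a couple of data disks and then checks that the master answers. The device names, mount points, and master address are assumptions taken from the examples above and must be adapted to the real machine; the final check assumes the master exposes its admin API (`/admin/getCluster`) on the configured `ip:listen` address.

```bash
#!/usr/bin/env bash
# Sketch: prepare several XFS data disks for a DataNode and sanity-check the master.
# Assumptions: /dev/sdb and /dev/sdc are empty disks reserved for CubeFS, and the
# master from master.json above is reachable at 127.0.0.1:17010.
set -euo pipefail

DISKS=(/dev/sdb /dev/sdc)
idx=0
for dev in "${DISKS[@]}"; do
  mnt="/data${idx}"
  mkfs.xfs -f "$dev"                                  # format as XFS, as recommended
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
  echo "$dev $mnt xfs defaults 0 0" >> /etc/fstab     # keep the mount across reboots
  idx=$((idx + 1))
done

# Assumed admin endpoint: query cluster state from the master.
curl -s "http://127.0.0.1:17010/admin/getCluster"
```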
{ "category": "Runtime", "file_name": "ADOPTERS.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "<a href=\"http://glasnostic.com\" border=\"0\" target=\"_blank\"> <img alt=\"glasnostic.com\" src=\"docs/assets/adopters/glasnostic-logo.png\" height=\"50\"></a>&nbsp; &nbsp; &nbsp; <a href=\"https://www.transwarp.io\" border=\"0\" target=\"_blank\"> <img alt=\"https://www.transwarp.io\" src=\"docs/assets/adopters/transwarp-logo.png\" height=\"50\"></a>&nbsp; &nbsp; &nbsp; <a href=\"https://www.terasky.com\" border=\"0\" target=\"_blank\"> <img alt=\"https://www.terasky.com\" src=\"docs/assets/adopters/terasky-logo.png\" height=\"50\"></a>&nbsp; &nbsp; &nbsp; Below is a list of adopters of Antrea that have publicly shared the details of how they use it. Glasnostic makes modern cloud operations resilient. It does this by shaping how systems interact, automatically and in real-time. As a result, DevOps and SRE teams can deploy reliably, prevent failure and assure the customer experience. We use Antrea's Open vSwitch support to tune how services interact in Kubernetes clusters. We are @glasnostic on Twitter. Transwarp is committed to building enterprise-level big data infrastructure software, providing enterprises with infrastructure software and supporting around the whole data lifecycle to build a data world of the future. We use Antrea's AntreaClusterNetworkPolicy and AntreaNetworkPolicy to protect big data software for every tenant of our kubernetes platform. We use Antrea's Open vSwitch to support Pod-To-Pod network between flannel and antrea clusters, and also between antrea clusters We use Antrea's Open vSwitch to support Pod-To-Pod network between flannel and antrea nodes in one cluster for upgrading. We use Antrea's Egress feature to keep the original source ip to ensure Internal Pods can get the real source IP of the request. You can contact us with <mkt@transwarp.io> TeraSky is a Global Advanced Technology Solutions Provider. Antrea is used in our internal Kubernetes clusters as well as by many of our customers. Antrea helps us to apply a very strong and flexible security models in Kubernetes. We are very heavily utilizing Antrea Cluster Network Policies, Antrea Network Policies, and the Egress functionality. We are @TeraSkycom1 on Twitter. It would be great to have your success story and logo on our list of Antrea adopters! To add yourself, you can follow the steps outlined below, alternatively, feel free to reach out via Slack or on Github to have our team add your success story and logo. Prepare your addition and PR as described in the Antrea . Add your name to the success stories, using bold format with a link to your web site like this: `` Below your name, describe your organization or yourself and how you make use of Antrea. Optionally, list the features of Antrea you are using. Please keep the line width at 80 characters maximum, and avoid trailing spaces. If you are willing to share contact details, e.g. your Twitter handle, etc. add a line where people can find you. Example: ```markdown Example.com is a company operating internationally, focusing on creating documentation examples. We are using Antrea in our K8s clusters deployed using Kubeadm. We making use of Antrea's Network Policy capabilities. You can reach us on twitter @vmwopensource. ``` (Optional) To add your logo, simply drop your logo in PNG or SVG format with a maximum size of 50KB to the directory. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme-logo.png). Then add an inline html link directly bellow the . 
Use the following format: ```html <a href=\"http://example.com\" border=\"0\" target=\"_blank\"> <img alt=\"example.com\" src=\"docs/assets/adopters/example-logo.png\" height=\"50\"></a>&nbsp; &nbsp; &nbsp; ``` Send a PR with your addition as described in the Antrea We are working on adding an Adopters section on . Follow the steps above to add your organization to the list of Antrea Adopters. We will follow up and publish it to the website." } ]
{ "category": "Runtime", "file_name": "2020-03-02-Velero-1.3-Voyage-Continues.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Velero 1.3: Improved CRD Backups/Restores, Multi-Arch Docker Images, and More!\" excerpt: Velero 1.3 includes improvements to CRD backups and restores, multi-arch Docker images including support for arm/arm64 and ppc64le, and many other usability and stability enhancements. This release includes significant contributions by community members, and were thrilled to be able to partner with you all in continuing to improve Velero. author_name: Steve Kriss slug: Velero-1.3-Voyage-Continues categories: ['velero','release'] image: /img/posts/post-1.3.jpg tags: ['Velero Team', 'Steve Kriss'] Veleros voyage continues with the release of version 1.3, which includes improvements to CRD backups and restores, multi-arch Docker images including support for arm/arm64 and ppc64le, and many other usability and stability enhancements. This release includes significant contributions by community members, and were thrilled to be able to partner with you all in continuing to improve Velero. Lets take a deeper look at some of this releases highlights. This release includes a number of related bug fixes and improvements to how Velero backs up and restores custom resource definitions (CRDs) and instances of those CRDs. We found and fixed three issues around restoring CRDs that were originally created via the `v1beta1` CRD API. The first issue affected CRDs that had the `PreserveUnknownFields` field set to `true`. These CRDs could not be restored into 1.16+ Kubernetes clusters, because the `v1` CRD API does not allow this field to be set to `true`. We added code to the restore process to check for this scenario, to set the `PreserveUnknownFields` field to `false`, and to instead set `x-kubernetes-preserve-unknown-fields` to `true` in the OpenAPIv3 structural schema, per Kubernetes guidance. For more information on this, see the . The second issue affected CRDs without structural schemas. These CRDs need to be backed up/restored through the `v1beta1` API, since all CRDs created through the `v1` API must have structural schemas. We added code to detect these CRDs and always back them up/restore them through the `v1beta1` API. Finally, related to the previous issue, we found that our restore code was unable to handle backups with multiple API versions for a given resource type, and weve remediated this as well. We also improved the CRD restore process to enable users to properly restore CRDs and instances of those CRDs in a single restore operation. Previously, users found that they needed to run two separate restores: one to restore the CRD(s), and another to restore instances of the CRD(s). This was due to two deficiencies in the Velero" }, { "data": "First, Velero did not wait for a CRD to be fully accepted by the Kubernetes API server and ready for serving before moving on; and second, Velero did not refresh its cached list of available APIs in the target cluster after restoring CRDs, so it was not aware that it could restore instances of those CRDs. We fixed both of these issues by (1) adding code to wait for CRDs to be ready after restore before moving on, and (2) refreshing the cached list of APIs after restoring CRDs, so any instances of newly-restored CRDs could subsequently be restored. With all of these fixes and improvements in place, we hope that the CRD backup and restore experience is now seamless across all supported versions of Kubernetes. Thanks to community members and , Velero now provides multi-arch container images by using Docker manifest lists. 
We are currently publishing images for `linux/amd64`, `linux/arm64`, `linux/arm`, and `linux/ppc64le` in . Users don't need to change anything other than updating their version tag - the v1.3 image is `velero/velero:v1.3.0`, and Docker will automatically pull the proper architecture for the host. For more information on manifest lists, see . We fixed a large number of bugs and made some smaller usability improvements in this release. Here are a few highlights: Support private registries with custom ports for the restic restore helper image (, ) Use AWS profile from BackupStorageLocation when invoking restic (, ) Allow restores from schedules in other clusters (, ) Fix memory leak & race condition in restore code (, ) For a full list of all changes in this release, see . Previously, we planned to release version 1.3 around the end of March, with CSI snapshot support as the headline feature. However, because we had already merged a large number of fixes and improvements since 1.2, and wanted to make them available to users, we decided to release 1.3 early, without the CSI snapshot support. Our priorities have not changed, and we're continuing to work hard on adding CSI snapshot support to Velero 1.4, including helping upstream CSI providers migrate to the `v1beta1` CSI snapshot API where possible. We still anticipate releasing 1.4 with beta support for CSI snapshots in the first half of 2020. Velero is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our and catch up with past meetings on YouTube on the . You can always find the latest project information at . Look for issues on GitHub marked or if you want to roll up your sleeves and write some code with us. You can chat with us on and follow us on Twitter at ." } ]
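For readers who want to verify the multi-arch images locally, a small sketch is shown below. It assumes a Docker CLI recent enough to ship `docker manifest` (older releases require enabling experimental CLI features), and the exact JSON layout of the output is not guaranteed.

```bash
# Inspect the manifest list behind the v1.3 tag; one "platform" entry per
# supported architecture (amd64, arm64, arm, ppc64le) should be listed.
docker manifest inspect velero/velero:v1.3.0 | grep -A 3 '"platform"'
```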
{ "category": "Runtime", "file_name": "testing.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "CRI Plugin Testing Guide ======================== This document assumes you have already setup the development environment (go, git, `github.com/containerd/containerd` repo etc.). Before sending pull requests you should at least make sure your changes have passed code verification, unit, integration and CRI validation tests. Follow the instructions. Run all CRI integration tests: ```bash make cri-integration ``` Run specific CRI integration tests: use the `FOCUS` parameter to specify the test case. ```bash FOCUS=<TEST_NAME> make cri-integration ``` Example: ```bash FOCUS=TestContainerListStats make cri-integration ``` is a test framework for validating that a Container Runtime Interface (CRI) implementation such as containerd with the `cri` plugin meets all the requirements necessary to manage pod sandboxes, containers, images etc. CRI validation test makes it possible to verify CRI conformance of `containerd` without setting up Kubernetes components or running Kubernetes end-to-end tests. . Run the containerd you built above with the `cri` plugin built in: ```bash containerd -l debug ``` Run CRI test. about CRI validation test. is a test framework testing Kubernetes node level functionalities such as managing pods, mounting volumes etc. It starts a local cluster with Kubelet and a few other minimum dependencies, and runs node functionality tests against the local cluster. Currently e2e-node tests are supported from via Pull Request comments on github. Enter \"/test all\" as a comment on a pull request for a list of testing options that have been integrated through prow bot with kubernetes testing services hosted on GCE. Typing `/test pull-containerd-node-e2e` will start a node e2e test run on your pull request commits. about Kubernetes node e2e test." } ]
{ "category": "Runtime", "file_name": "tracing.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "To enable tracing support in CRI-O, either start `crio` with `--enable-tracing` or add the corresponding option to a config overwrite, for example `/etc/crio/crio.conf.d/01-tracing.conf`: ```toml [crio.tracing] enable_tracing = true ``` Traces in CRI-O get exported via the by using an endpoint. This endpoint defaults to `0.0.0.0:4317`, but can be configured by using the `--tracing-endpoint` flag or the corresponding TOML configuration: ```toml [crio.tracing] tracing_endpoint = \"0.0.0.0:4317\" ``` The final configuration aspect of OpenTelemetry tracing in CRI-O is the `--tracing-sampling-rate-per-million` / `tracingsamplingratepermillion` configuration, which refers to the amount of samples collected per million spans. This means if it being set to `0` (the default), then CRI-O will not collect any traces at all. If set to `1000000` (one million), then CRI-O will create traces for all created spans. If the value is below one million, then there is no way right now to select a subset of spans other than the modulo of the set value. has the capability to add additional tracing past the scope of CRI-O. This is automatically enabled when the `pod` runtime type is chosen, like so: ```toml [crio.runtime] default_runtime = \"runc\" [crio.runtime.runtimes.runc] runtime_type = \"pod\" ``` Then conmon-rs will export traces and spans in the same way CRI-O does automatically. Both CRI-O and conmon-rs will correlate their logs to the traces and spans. If the connection to the OTLP instance gets lost, then CRI-O will not block, and all the traces during that time will be lost. The alone cannot be used to visualize traces and spans. For that a frontend like can be used to connect to it. To achieve that, a configuration file for OTLP needs to be created, like this" }, { "data": "```yaml receivers: otlp: protocols: http: grpc: exporters: logging: loglevel: debug jaeger: endpoint: localhost:14250 tls: insecure: true processors: batch: extensions: health_check: pprof: endpoint: localhost:1888 zpages: endpoint: localhost:55679 service: extensions: [pprof, zpages, health_check] pipelines: traces: receivers: [otlp] processors: [batch] exporters: [logging, jaeger] metrics: receivers: [otlp] processors: [batch] exporters: [logging] ``` The `jaeger` `endpoint` has been set to `localhost:14250`, means before starting the collector we have to start the jaeger instance: ```bash podman run -it --rm --network host jaegertracing/all-in-one:1.41.0 ``` After jaeger is up and running we can start the OpenTelemetry collector: ```bash podman run -it --rm --network host \\ -v ./otel-collector-config.yaml:/etc/otel-collector-config.yaml \\ otel/opentelemetry-collector:0.70.0 --config=/etc/otel-collector-config.yaml ``` The collector logs will indicate that the connection to Jaeger was successful: ```text 2023-01-26T13:26:22.015Z info jaegerexporter@v0.70.0/exporter.go:184 \\ State of the connection with the Jaeger Collector backend \\ {\"kind\": \"exporter\", \"data_type\": \"traces\", \"name\": \"jaeger\", \"state\": \"READY\"} ``` The Jaeger UI should be now available on `http://localhost:16686`. 
It's now possible to start CRI-O with enabled tracing: ```bash sudo crio --enable-tracing --tracing-sampling-rate-per-million 1000000 ``` And when now running a CRI API call, for example by using: ```bash sudo crictl ps ``` Then the OpenTelemetry collector will indicate that it has received new traces and spans, where the trace with the ID `1987d3baa753087d60dd1a566c14da31` contains the invocation for listing the containers via `crictl ps`: ```text Span #2 Trace ID : 1987d3baa753087d60dd1a566c14da31 Parent ID : ID : 3b91638c1aa3cf30 Name : /runtime.v1.RuntimeService/ListContainers Kind : Internal Start time : 2023-01-26 13:29:44.409289041 +0000 UTC End time : 2023-01-26 13:29:44.409831126 +0000 UTC Status code : Unset Status message : Events: SpanEvent #0 -> Name: log -> Timestamp: 2023-01-26 13:29:44.409324579 +0000 UTC -> DroppedAttributesCount: 0 -> Attributes:: -> log.severity: Str(DEBUG) -> log.message: Str(Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}) -> id: Str(4e395179-b3c6-4f87-ac77-a70361dd4ebd) -> name: Str(/runtime.v1.RuntimeService/ListContainers) SpanEvent #1 -> Name: log -> Timestamp: 2023-01-26 13:29:44.409813328 +0000 UTC -> DroppedAttributesCount: 0 -> Attributes:: -> log.severity: Str(DEBUG) -> log.message: Str(Response: &ListContainersResponse{Containers:[]*Container{},}) -> name: Str(/runtime.v1.RuntimeService/ListContainers) -> id: Str(4e395179-b3c6-4f87-ac77-a70361dd4ebd) ``` We can see that there are `SpanEvent`s attached to the `Span`, carrying the log messages from CRI-O. The visualization of the trace, its spans and log messages can be found in Jaeger as well, under `http://localhost:16686/trace/1987d3baa753087d60dd1a566c14da31`: If kubelet tracing is enabled, then the spans are nested under the kubelet traces. This is caused by the CRI calls from the kubelet, which propagates the trace ID through the gRPC API." } ]
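To tie the pieces above together without restarting `crio` by hand with flags, the config-overwrite route can be scripted roughly as below; this sketch assumes CRI-O runs under systemd and that the snake_case TOML keys match the flag names quoted earlier.

```bash
# Sketch: enable tracing through a config overwrite and generate a first span.
sudo tee /etc/crio/crio.conf.d/01-tracing.conf >/dev/null <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "0.0.0.0:4317"
tracing_sampling_rate_per_million = 1000000
EOF
sudo systemctl restart crio

# Any CRI call now produces spans that should appear in the collector output.
sudo crictl ps
```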
{ "category": "Runtime", "file_name": "tilt.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Rapid iterative Velero development with Tilt \" layout: docs This document describes how to use with any cluster for a simplified workflow that offers easy deployments and rapid iterative builds. This setup allows for continuing deployment of the Velero server and, if specified, any provider plugin or the restic daemonset. It does this work by: Deploying the necessary Kubernetes resources, such as the Velero CRDs and Velero deployment Building a local binary for Velero and (if specified) provider plugins as a `local_resource` Invoking `docker_build` to live update any binary into the container/init container and trigger a re-start Tilt will look for configuration files under `velero/tilt-resources`. Most of the files in this directory are gitignored so you may configure your setup according to your needs. v19.03 or newer A Kubernetes cluster v1.16 or greater (does not have to be Kind) v0.12.0 or newer Clone the repository locally Access to an S3 object storage Clone any you want to make changes to and deploy (optional, must be configured to be deployed by the Velero Tilt's setup, ) Note: To properly configure any plugin you use, please follow the plugin's documentation. Copy all sample files under `velero/tilt-resources/examples` into `velero/tilt-resources`. Configure the `velerov1backupstoragelocation.yaml` file, and the `cloud` file for the storage credentials/secret. Run `tilt up`. Create a configuration file named `tilt-settings.json` and place it in your local copy of `velero/tilt-resources`. Alternatively, you may copy and paste the sample file found in `velero/tilt-resources/examples`. Here is an example: ```json { \"default_registry\": \"\", \"enable_providers\": [ \"aws\", \"gcp\", \"azure\", \"csi\" ], \"providers\": { \"aws\": \"../velero-plugin-for-aws\", \"gcp\": \"../velero-plugin-for-gcp\", \"azure\": \"../velero-plugin-for-microsoft-azure\", \"csi\": \"../velero-plugin-for-csi\" }, \"allowed_contexts\": [ \"development\" ], \"enable_restic\": false, \"createbackuplocations\": true, \"setup-minio\": true, \"enable_debug\": false, \"debugcontinueon_start\": true } ``` default_registry (String, default=\"\"): The image registry to use if you need to push images. See the [Tilt *documentation](https://docs.tilt.dev/api.html#api.default_registry) for more details. provider_repos (Array[]String, default=[]): A list of paths to all the provider plugins you want to make changes to. Each provider must have a `tilt-provider.json` file describing how to build the provider. enable_providers (Array for more details. Note: when not making changes to a plugin, it is not necessary to load them into Tilt: an existing image and version might be specified in the Velero deployment instead, and Tilt will load that. allowed_contexts (Array, default=[]): A list of kubeconfig contexts Tilt is allowed to use. See the Tilt documentation on for more details. Note: Kind is automatically allowed. enable_restic (Bool, default=false): Indicate whether to deploy the restic Daemonset. If set to `true`, Tilt will look for a `velero/tilt-resources/restic.yaml` file containing the configuration of the Velero restic DaemonSet. create_backup_locations (Bool, default=false): Indicate whether to create one or more backup storage locations. If set to `true`, Tilt will look for a `velero/tilt-resources/velerov1backupstoragelocation.yaml` file containing at least one configuration for a Velero backup storage location. 
setup-minio (Bool, default=false): Configure this to `true` if you want to configure backup storage locations in a Minio instance running inside your cluster. enable_debug (Bool, default=false): Configure this to `true` if you want to debug the velero process using" }, { "data": "debug_continue_on_start (Bool, default=true): Configure this to `true` if you want the velero process to continue on start when in debug mode. See . All needed Kubernetes resource files are provided as ready to use samples in the `velero/tilt-resources/examples` directory. You only have to move them to the `velero/tilt-resources` level. Because the Velero Kubernetes deployment as well as the restic DaemonSet contain the configuration for any plugin to be used, files for these resources are expected to be provided by the user so you may choose which provider plugin to load as a init container. Currently, the sample files provided are configured with all the plugins supported by Velero, feel free to remove any of them as needed. For Velero to operate fully, it also needs at least one backup storage location. A sample file is provided that needs to be modified with the specific configuration for your object storage. See the next sub-section for more details on this. You will have to configure the `velero/tilt-resources/velerov1backupstoragelocation.yaml` with the proper values according to your storage provider. Read the to learn what field/value pairs are required for your particular provider's backup storage location configuration. Below are some ways to configure a backup storage location for Velero. Follow the provider documentation to provision the storage. We have a with corresponding plugins for Velero. Note: to use MinIO as an object storage, you will need to use the , and configure the storage location with the `spec.provider` set to `aws` and the `spec.config.region` set to `minio`. Example: ``` spec: config: region: minio s3ForcePathStyle: \"true\" s3Url: http://minio.velero.svc:9000 objectStorage: bucket: velero provider: aws ``` Here are two ways to use MinIO as the storage: 1) As a MinIO instance running inside your cluster (don't do this for production!) In the `tilt-settings.json` file, set `\"setup-minio\": true`. This will configure a Kubernetes deployment containing a running instance of MinIO inside your cluster. There are necessary to expose MinIO outside the cluster. To access this storage, you will need to expose MinIO outside the cluster by forwarding the MinIO port to the local machine using kubectl port-forward -n <velero-namespace> svc/minio 9000. Update the BSL configuration to use that as its \"public URL\" by adding `publicUrl: http://localhost:9000` to the BSL config. This is necessary to do things like download a backup file. Note: with this setup, when your cluster is terminated so is the storage and any backup/restore in it. 1) As a standalone MinIO instance running locally in a Docker container See to run MinIO locally on your computer, as a standalone as opposed to running it on a Pod. Please see our to learn more how backup locations work. Whatever object storage provider you use, configure the credentials for in the `velero/tilt-resources/cloud` file. Read the to learn what field/value pairs are required for your provider's credentials. The Tilt file will invoke Kustomize to create the secret under the hard-coded key `secret.cloud-credentials.data.cloud` in the Velero namespace. 
There is a sample credentials file properly formatted for a MinIO storage credentials in" }, { "data": "If you would like to debug the Velero process, you can enable debug mode by setting the field `enable_debug` to `true` in your `tilt-resources/tile-settings.json` file. This will enable you to debug the process using . By enabling debug mode, the Velero executable will be built in debug mode (using the flags `-gcflags=\"-N -l\"` which disables optimizations and inlining), and the process will be started in the Velero deployment using . The debug server will accept connections on port 2345 and Tilt is configured to forward this port to the local machine. Once Tilt is and the Velero resource is ready, you can connect to the debug server to begin debugging. To connect to the session, you can use the Delve CLI locally by running `dlv connect 127.0.0.1:2345`. See the for more guidance on how to use Delve. Delve can also be used within a number of . By default, the Velero process will continue on start when in debug mode. This means that the process will run until a breakpoint is set. You can disable this by setting the field `debugcontinueon_start` to `false` in your `tilt-resources/tile-settings.json` file. When this setting is disabled, the Velero process will not continue to run until a `continue` instruction is issued through your Delve session. When exiting your debug session, the CLI and editor integrations will typically ask if the remote process should be stopped. It is important to leave the remote process running and just disconnect from the debugging session. By stopping the remote process, that will cause the Velero container to stop and the pod to restart. If backups are in progress, these will be left in a stale state as they are not resumed when the Velero pod restarts. To launch your development environment, run: ``` bash tilt up ``` This will output the address to a web browser interface where you can monitor Tilt's status and the logs for each Tilt resource. After a brief amount of time, you should have a running development environment, and you should now be able to create backups/restores and fully operate Velero. Note: Running `tilt down` after exiting out of Tilt specified in the Tiltfile. Tip: Create an alias to `velero/_tuiltbuild/local/velero` and you won't have to run `make local` to get a refreshed version of the Velero CLI, just use the alias. Please see the documentation for . A provider must supply a `tilt-provider.json` file describing how to build it. Here is an example: ```json { \"plugin_name\": \"velero-plugin-for-aws\", \"context\": \".\", \"image\": \"velero/velero-plugin-for-aws\", \"livereloaddeps\": [ \"velero-plugin-for-aws\" ], \"go_main\": \"./velero-plugin-for-aws\" } ``` Each provider plugin configured to be deployed by Velero's Tilt setup has a `livereloaddeps` list. This defines the files and/or directories that Tilt should monitor for changes. When a dependency is modified, Tilt rebuilds the provider's binary on your local machine, copies the binary to the init container, and triggers a restart of the Velero container. This is significantly faster than rebuilding the container image for each change. It also helps keep the size of each development image as small as possible (the container images do not need the entire go toolchain, source code, module dependencies, etc.)." } ]
{ "category": "Runtime", "file_name": "general.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "See the following sections for information on how to get started with Incus: ```{toctree} :maxdepth: 1 Containers and VMs </explanation/containersandvms> Install Incus </installing> Initialize Incus </howto/initialize> Get support </support> Frequently asked </faq> Contribute to Incus </contributing> ``` You can find a series of demos and tutorials on YouTube: <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/videoseries?list=PLVhiK8li7a-5aRPwUHHfpfMuwWCava4\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>" } ]