[ { "data": "Kata implements CRI's API and supports and interfaces to expose containers metrics. User can use these interfaces to get basic metrics about containers. Unlike `runc`, Kata is a VM-based runtime and has a different architecture. Kata 1.x has a number of limitations related to observability that may be obstacles to running Kata Containers at scale. In Kata 2.0, the following components will be able to provide more details about the system: containerd shim v2 (effectively `kata-runtime`) Hypervisor statistics Agent process Guest OS statistics Note: In Kata 1.x, the main user-facing component was the runtime (`kata-runtime`). From 1.5, Kata introduced the Kata containerd shim v2 (`containerd-shim-kata-v2`) which is essentially a modified runtime that is loaded by containerd to simplify and improve the way VM-based containers are created and managed. For Kata 2.0, the main component is the Kata containerd shim v2, although the deprecated `kata-runtime` binary will be maintained for a period of time. Any mention of the \"Kata runtime\" in this document should be taken to refer to the Kata containerd shim v2 unless explicitly noted otherwise (for example by referring to it explicitly as the `kata-runtime` binary). Kata 2.0 metrics strongly depend on , a graduated project from CNCF. Kata Containers 2.0 introduces a new Kata component called `kata-monitor` which is used to monitor the Kata components on the host. It's shipped with the Kata runtime to provide an interface to: Get metrics Get events At present, `kata-monitor` supports retrieval of metrics only: this is what will be covered in this document. This is the architecture overview of metrics in Kata Containers 2.0: And the sequence diagram is shown below: For a quick evaluation, you can check out . The `kata-monitor` management agent should be started on each node where the Kata containers runtime is installed. `kata-monitor` will: Note: a node running Kata containers will be either a single host system or a worker node belonging to a K8s cluster capable of running Kata pods. Aggregate sandbox metrics running on the node, adding the `sandbox_id` label to them. Attach the additional `criuid`, `criname` and `cri_namespace` labels to the sandbox metrics, tracking the `uid`, `name` and `namespace` Kubernetes pod metadata. Expose a new Prometheus target, allowing all node metrics coming from the Kata shim to be collected by Prometheus indirectly. This simplifies the targets count in Prometheus and avoids exposing shim's metrics by `ip:port`. Only one `kata-monitor` process runs in each node. `kata-monitor` uses a different communication channel than the one used by the container engine (`containerd`/`CRI-O`) to communicate with the Kata shim. The Kata shim exposes a dedicated socket address reserved to `kata-monitor`. The shim's metrics socket file is created under the virtcontainers sandboxes directory, i.e. `vc/sbs/${PODID}/shim-monitor.sock`. Note: If there is no Prometheus server configured, i.e., there are no scrape operations, `kata-monitor` will not collect any metrics. 
The Kata runtime is responsible for:

- Gathering metrics about the shim process
- Gathering metrics about the hypervisor process
- Gathering metrics about the running sandbox
- Getting metrics from the Kata agent (through `ttrpc`)

The Kata agent is responsible for:

- Gathering agent process metrics
- Gathering guest OS metrics

In Kata 2.0, the agent adds a new interface:

```protobuf
rpc GetMetrics(GetMetricsRequest) returns (Metrics);

message GetMetricsRequest {}

message Metrics {
    string metrics = 1;
}
```

The `metrics` field is Prometheus-encoded content. This avoids defining a fixed structure in protocol buffers. Metrics should not become a bottleneck for the system or downgrade its performance: they should run with minimal overhead.

Requirements:

- Metrics MUST be quick to collect
- Metrics MUST be small
- Metrics MUST be generated only if there are subscribers to the Kata metrics service
- Metrics MUST be stateless

In Kata 2.0, metrics are collected only when needed (pull mode), mainly from the `/proc` filesystem, and consumed by Prometheus. This means that if the Prometheus collector is not running (so no one cares about the metrics) the overhead will be zero. The metrics service also doesn't hold any metrics in memory.

| | No Sandbox | 1 Sandbox | 2 Sandboxes |
|---|---|---|---|
| Metrics count | 39 | 106 | 173 |
| Metrics size (bytes) | 9K | 144K | 283K |
| Metrics size (`gzipped`, bytes) | 2K | 10K | 17K |

Metrics size: response size of one Prometheus scrape request.

It's easy to estimate the size of one metrics fetch request issued by Prometheus. The formula to calculate the expected size (in KB) when no gzip compression is in place is:

9 + (144 - 9) * `number of kata sandboxes`

Prometheus supports gzip compression. When enabled, the response size of each request will be smaller:

2 + (10 - 2) * `number of kata sandboxes`

Example: with 10 sandboxes running on a node, the expected size of one metrics fetch request issued by Prometheus against the `kata-monitor` agent running on that node will be:

9 + (144 - 9) * 10 = 1.35M

If gzip compression is enabled:

2 + (10 - 2) * 10 = 82K

And here is some test data:

- End-to-end (from the Prometheus server to `kata-monitor` and `kata-monitor` writing the response back): 20 ms (avg)
- Agent (RPC call from shim to agent): 3 ms (avg)

Test infrastructure:

- OS: Ubuntu 20.04
- Hardware: Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz, 6 cores, and 16GB memory

Scrape interval: the Prometheus default `scrape_interval` is 1 minute, but it is usually set to 15 seconds. A smaller `scrape_interval` causes more overhead, so users should set it depending on their monitoring needs.

Here are listed all the metrics supported by Kata 2.0. Some metrics are dependent on the VM guest kernel, so the available ones may differ based on the environment. Metrics are categorized by the component from/for which the metrics are collected.

Note: Labels here do not include the `instance` and `job` labels added by Prometheus.

Notes about metric units:

- `Kibibytes`, abbreviated `KiB`: 1 `KiB` equals 1024 B.
- For some metrics (like network device statistics from the file `/proc/net/dev`), the unit depends on the label (for example, `recvbytes` and `recvpackets` have different units).
- Most of these metrics are collected from the `/proc` filesystem, so the unit of each metric matches the unit of the relevant `/proc` entry. See the `proc(5)` manual page for further details.

Prometheus offers four core metric types.

Counter: A counter is a cumulative metric that represents a single monotonically increasing counter whose value can only increase.
Gauge: A gauge metric represents a single numerical value that can go up and down, typically used for measured values like current memory usage. Histogram: A histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. Summary: A summary samples observations like histogram, it can calculate configurable quantiles over a sliding time window. See for detailed explanations about these metric types. Agent's metrics contains metrics about agent process. | Metric name | Type | Units | Labels | Introduced in Kata version | |||||| | `kataagentiostat`: <br> Agent process IO stat. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/io`)<ul><li>`cancelledwritebyte`</li><li>`rchar`</li><li>`readbytes`</li><li>`syscr`</li><li>`syscw`</li><li>`wchar`</li><li>`writebytes`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagentprocstat`: <br> Agent process stat. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/stat`)<ul><li>`cstime`</li><li>`cutime`</li><li>`stime`</li><li>`utime`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagentprocstatus`: <br> Agent process status. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/status`)<ul><li>`hugetlbpages`</li><li>`nonvoluntaryctxtswitches`</li><li>`rssanon`</li><li>`rssfile`</li><li>`rssshmem`</li><li>`vmdata`</li><li>`vmexe`</li><li>`vmhwm`</li><li>`vmlck`</li><li>`vmlib`</li><li>`vmpeak`</li><li>`vmpin`</li><li>`vmpte`</li><li>`vmrss`</li><li>`vmsize`</li><li>`vmstk`</li><li>`vmswap`</li><li>`voluntaryctxtswitches`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagentprocesscpusecondstotal`: <br> Total user and system CPU time spent in" }, { "data": "| `COUNTER` | `seconds` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagentprocessmaxfds`: <br> Maximum number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `kataagentprocessopenfds`: <br> Number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `kataagentprocessresidentmemorybytes`: <br> Resident memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagentprocessstarttimeseconds`: <br> Start time of the process since `unix` epoch in seconds. | `GAUGE` | `seconds` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagentprocessvirtualmemorybytes`: <br> Virtual memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagentscrapecount`: <br> Metrics scrape count | `COUNTER` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagenttotalrss`: <br> Agent process total `rss` size | `GAUGE` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagenttotaltime`: <br> Agent process total time | `GAUGE` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `kataagenttotalvm`: <br> Agent process total `vm` size | `GAUGE` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | Metrics for Firecracker vmm. | Metric name | Type | Units | Labels | Introduced in Kata version | |||||| | `katafirecrackerapiserver`: <br> Metrics related to the internal API server. | `GAUGE` | | <ul><li>`item`<ul><li>`processstartuptimecpuus`</li><li>`processstartuptimeus`</li><li>`syncresponsefails`</li><li>`syncvmmsendtimeoutcount`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackerblock`: <br> Block Device associated metrics. 
| `GAUGE` | | <ul><li>`item`<ul><li>`activatefails`</li><li>`cfgfails`</li><li>`eventfails`</li><li>`executefails`</li><li>`flushcount`</li><li>`invalidreqscount`</li><li>`noavailbuffer`</li><li>`queueeventcount`</li><li>`ratelimitereventcount`</li><li>`ratelimiterthrottledevents`</li><li>`readbytes`</li><li>`readcount`</li><li>`updatecount`</li><li>`updatefails`</li><li>`writebytes`</li><li>`writecount`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katafirecrackergetapirequests`: <br> Metrics specific to GET API Requests for counting user triggered actions and/or failures. | `GAUGE` | | <ul><li>`item`<ul><li>`instanceinfocount`</li><li>`instanceinfofails`</li><li>`machinecfgcount`</li><li>`machinecfgfails`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackeri8042`: <br> Metrics specific to the i8042 device. | `GAUGE` | | <ul><li>`item`<ul><li>`errorcount`</li><li>`missedreadcount`</li><li>`missedwritecount`</li><li>`readcount`</li><li>`resetcount`</li><li>`writecount`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackerlatenciesus`: <br> Performance metrics related for the moment only to snapshots. | `GAUGE` | | <ul><li>`item`<ul><li>`diffcreatesnapshot`</li><li>`fullcreatesnapshot`</li><li>`loadsnapshot`</li><li>`pausevm`</li><li>`resumevm`</li><li>`vmmdiffcreatesnapshot`</li><li>`vmmfullcreatesnapshot`</li><li>`vmmloadsnapshot`</li><li>`vmmpausevm`</li><li>`vmmresumevm`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackerlogger`: <br> Metrics for the logging subsystem. | `GAUGE` | | <ul><li>`item`<ul><li>`logfails`</li><li>`metricsfails`</li><li>`missedlogcount`</li><li>`missedmetricscount`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackermmds`: <br> Metrics for the MMDS functionality. | `GAUGE` | | <ul><li>`item`<ul><li>`connectionscreated`</li><li>`connectionsdestroyed`</li><li>`rxaccepted`</li><li>`rxacceptederr`</li><li>`rxacceptedunusual`</li><li>`rxbadeth`</li><li>`rxcount`</li><li>`txbytes`</li><li>`txcount`</li><li>`txerrors`</li><li>`txframes`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackernet`: <br> Network-related metrics. | `GAUGE` | | <ul><li>`item`<ul><li>`activatefails`</li><li>`cfgfails`</li><li>`eventfails`</li><li>`macaddressupdates`</li><li>`norxavailbuffer`</li><li>`notxavailbuffer`</li><li>`rxbytescount`</li><li>`rxcount`</li><li>`rxeventratelimitercount`</li><li>`rxfails`</li><li>`rxpacketscount`</li><li>`rxpartialwrites`</li><li>`rxqueueeventcount`</li><li>`rxratelimiterthrottled`</li><li>`rxtapeventcount`</li><li>`tapreadfails`</li><li>`tapwritefails`</li><li>`txbytescount`</li><li>`txcount`</li><li>`txfails`</li><li>`txmalformedframes`</li><li>`txpacketscount`</li><li>`txpartialreads`</li><li>`txqueueeventcount`</li><li>`txratelimitereventcount`</li><li>`txratelimiterthrottled`</li><li>`txspoofedmaccount`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katafirecrackerpatchapirequests`: <br> Metrics specific to PATCH API Requests for counting user triggered actions and/or failures. | `GAUGE` | | <ul><li>`item`<ul><li>`drivecount`</li><li>`drivefails`</li><li>`machinecfgcount`</li><li>`machinecfgfails`</li><li>`networkcount`</li><li>`networkfails`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackerputapirequests`: <br> Metrics specific to PUT API Requests for counting user triggered actions and/or failures. 
| `GAUGE` | | <ul><li>`item`<ul><li>`actionscount`</li><li>`actionsfails`</li><li>`bootsourcecount`</li><li>`bootsourcefails`</li><li>`drivecount`</li><li>`drivefails`</li><li>`loggercount`</li><li>`loggerfails`</li><li>`machinecfgcount`</li><li>`machinecfgfails`</li><li>`metricscount`</li><li>`metricsfails`</li><li>`networkcount`</li><li>`networkfails`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackerrtc`: <br> Metrics specific to the RTC device. | `GAUGE` | | <ul><li>`item`<ul><li>`errorcount`</li><li>`missedreadcount`</li><li>`missedwritecount`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katafirecrackerseccomp`: <br> Metrics for the seccomp filtering. | `GAUGE` | | <ul><li>`item`<ul><li>`numfaults`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katafirecrackersignals`: <br> Metrics related to signals. | `GAUGE` | | <ul><li>`item`<ul><li>`sigbus`</li><li>`sigsegv`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackeruart`: <br> Metrics specific to the UART device. | `GAUGE` | | <ul><li>`item`<ul><li>`errorcount`</li><li>`flushcount`</li><li>`missedreadcount`</li><li>`missedwritecount`</li><li>`readcount`</li><li>`writecount`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackervcpu`: <br> Metrics specific to VCPUs' mode of functioning. | `GAUGE` | | <ul><li>`item`<ul><li>`exitioin`</li><li>`exitioout`</li><li>`exitmmioread`</li><li>`exitmmiowrite`</li><li>`failures`</li><li>`filtercpuid`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katafirecrackervmm`: <br> Metrics specific to the machine manager as a whole. | `GAUGE` | | <ul><li>`item`<ul><li>`deviceevents`</li><li>`paniccount`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katafirecrackervsock`: <br> VSOCK-related metrics. | `GAUGE` | | <ul><li>`item`<ul><li>`activatefails`</li><li>`cfgfails`</li><li>`conneventfails`</li><li>`connsadded`</li><li>`connskilled`</li><li>`connsremoved`</li><li>`evqueueeventfails`</li><li>`killqresync`</li><li>`muxereventfails`</li><li>`rxbytescount`</li><li>`rxpacketscount`</li><li>`rxqueueeventcount`</li><li>`rxqueueeventfails`</li><li>`rxreadfails`</li><li>`txbytescount`</li><li>`txflushfails`</li><li>`txpacketscount`</li><li>`txqueueeventcount`</li><li>`txqueueeventfails`</li><li>`txwritefails`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | Guest OS's metrics in hypervisor. | Metric name | Type | Units | Labels | Introduced in Kata version | |||||| | `kataguestcputime`: <br> Guest CPU stat. | `GAUGE` | | <ul><li>`cpu` (CPU no. and total for all CPUs)<ul><li>`0` (CPU 0)</li><li>`1` (CPU 1)</li><li>`total` (for all CPUs)</li></ul></li><li>`item` (Kernel/system statistics, from `/proc/stat`)<ul><li>`guest`</li><li>`guestnice`</li><li>`idle`</li><li>`iowait`</li><li>`irq`</li><li>`nice`</li><li>`softirq`</li><li>`steal`</li><li>`system`</li><li>`user`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `kataguestdiskstat`: <br> Disks stat in system. | `GAUGE` | | <ul><li>`disk` (disk name)</li><li>`item` (see `/proc/diskstats`)<ul><li>`discards`</li><li>`discardsmerged`</li><li>`flushes`</li><li>`inprogress`</li><li>`merged`</li><li>`reads`</li><li>`sectorsdiscarded`</li><li>`sectorsread`</li><li>`sectorswritten`</li><li>`timediscarding`</li><li>`timeflushing`</li><li>`timeinprogress`</li><li>`timereading`</li><li>`timewriting`</li><li>`weightedtimeinprogress`</li><li>`writes`</li><li>`writesmerged`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `kataguestload`: <br> Guest system load. 
| `GAUGE` | | <ul><li>`item`<ul><li>`load1`</li><li>`load15`</li><li>`load5`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `kataguestmeminfo`: <br> Statistics about memory usage on the system. | `GAUGE` | | <ul><li>`item` (see `/proc/meminfo`)<ul><li>`active`</li><li>`activeanon`</li><li>`activefile`</li><li>`anonhugepages`</li><li>`anonpages`</li><li>`bounce`</li><li>`buffers`</li><li>`cached`</li><li>`cmafree`</li><li>`cmatotal`</li><li>`commitlimit`</li><li>`committedas`</li><li>`directmap1G`</li><li>`directmap2M`</li><li>`directmap4M`</li><li>`directmap4k`</li><li>`dirty`</li><li>`hardwarecorrupted`</li><li>`highfree`</li><li>`hightotal`</li><li>`hugepagesfree`</li><li>`hugepagesrsvd`</li><li>`hugepagessurp`</li><li>`hugepagestotal`</li><li>`hugepagesize`</li><li>`hugetlb`</li><li>`inactive`</li><li>`inactiveanon`</li><li>`inactivefile`</li><li>`kreclaimable`</li><li>`kernelstack`</li><li>`lowfree`</li><li>`lowtotal`</li><li>`mapped`</li><li>`memavailable`</li><li>`memfree`</li><li>`memtotal`</li><li>`mlocked`</li><li>`mmapcopy`</li><li>`nfsunstable`</li><li>`pagetables`</li><li>`percpu`</li><li>`quicklists`</li><li>`sreclaimable`</li><li>`sunreclaim`</li><li>`shmem`</li><li>`shmemhugepages`</li><li>`shmempmdmapped`</li><li>`slab`</li><li>`swapcached`</li><li>`swapfree`</li><li>`swaptotal`</li><li>`unevictable`</li><li>`vmallocchunk`</li><li>`vmalloctotal`</li><li>`vmallocused`</li><li>`writeback`</li><li>`writebacktmp`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `kataguestnetdevstat`: <br> Guest net devices" }, { "data": "| `GAUGE` | | <ul><li>`interface` (network device name)</li><li>`item` (see `/proc/net/dev`)<ul><li>`recvbytes`</li><li>`recvcompressed`</li><li>`recvdrop`</li><li>`recverrs`</li><li>`recvfifo`</li><li>`recvframe`</li><li>`recvmulticast`</li><li>`recvpackets`</li><li>`sentbytes`</li><li>`sentcarrier`</li><li>`sentcolls`</li><li>`sentcompressed`</li><li>`sentdrop`</li><li>`senterrs`</li><li>`sentfifo`</li><li>`sentpackets`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `kataguesttasks`: <br> Guest system load. | `GAUGE` | | <ul><li>`item`<ul><li>`cur`</li><li>`max`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `kataguestvmstat`: <br> Guest virtual memory stat. 
| `GAUGE` | | <ul><li>`item` (see `/proc/vmstat`)<ul><li>`allocstalldma`</li><li>`allocstalldma32`</li><li>`allocstallmovable`</li><li>`allocstallnormal`</li><li>`balloondeflate`</li><li>`ballooninflate`</li><li>`compactdaemonfreescanned`</li><li>`compactdaemonmigratescanned`</li><li>`compactdaemonwake`</li><li>`compactfail`</li><li>`compactfreescanned`</li><li>`compactisolated`</li><li>`compactmigratescanned`</li><li>`compactstall`</li><li>`compactsuccess`</li><li>`droppagecache`</li><li>`dropslab`</li><li>`htlbbuddyallocfail`</li><li>`htlbbuddyallocsuccess`</li><li>`kswapdhighwmarkhitquickly`</li><li>`kswapdinodesteal`</li><li>`kswapdlowwmarkhitquickly`</li><li>`nractiveanon`</li><li>`nractivefile`</li><li>`nranonpages`</li><li>`nranontransparenthugepages`</li><li>`nrbounce`</li><li>`nrdirtied`</li><li>`nrdirty`</li><li>`nrdirtybackgroundthreshold`</li><li>`nrdirtythreshold`</li><li>`nrfilepages`</li><li>`nrfreecma`</li><li>`nrfreepages`</li><li>`nrinactiveanon`</li><li>`nrinactivefile`</li><li>`nrisolatedanon`</li><li>`nrisolatedfile`</li><li>`nrkernelstack`</li><li>`nrmapped`</li><li>`nrmlock`</li><li>`nrpagetablepages`</li><li>`nrshmem`</li><li>`nrshmemhugepages`</li><li>`nrshmempmdmapped`</li><li>`nrslabreclaimable`</li><li>`nrslabunreclaimable`</li><li>`nrunevictable`</li><li>`nrunstable`</li><li>`nrvmscanimmediatereclaim`</li><li>`nrvmscanwrite`</li><li>`nrwriteback`</li><li>`nrwritebacktemp`</li><li>`nrwritten`</li><li>`nrzoneactiveanon`</li><li>`nrzoneactivefile`</li><li>`nrzoneinactiveanon`</li><li>`nrzoneinactivefile`</li><li>`nrzoneunevictable`</li><li>`nrzonewritepending`</li><li>`oomkill`</li><li>`pageoutrun`</li><li>`pgactivate`</li><li>`pgallocdma`</li><li>`pgallocdma32`</li><li>`pgallocmovable`</li><li>`pgallocnormal`</li><li>`pgdeactivate`</li><li>`pgfault`</li><li>`pgfree`</li><li>`pginodesteal`</li><li>`pglazyfree`</li><li>`pglazyfreed`</li><li>`pgmajfault`</li><li>`pgmigratefail`</li><li>`pgmigratesuccess`</li><li>`pgpgin`</li><li>`pgpgout`</li><li>`pgrefill`</li><li>`pgrotated`</li><li>`pgscandirect`</li><li>`pgscandirectthrottle`</li><li>`pgscankswapd`</li><li>`pgskipdma`</li><li>`pgskipdma32`</li><li>`pgskipmovable`</li><li>`pgskipnormal`</li><li>`pgstealdirect`</li><li>`pgstealkswapd`</li><li>`pswpin`</li><li>`pswpout`</li><li>`slabsscanned`</li><li>`swapra`</li><li>`swaprahit`</li><li>`unevictablepgscleared`</li><li>`unevictablepgsculled`</li><li>`unevictablepgsmlocked`</li><li>`unevictablepgsmunlocked`</li><li>`unevictablepgsrescued`</li><li>`unevictablepgsscanned`</li><li>`unevictablepgsstranded`</li><li>`workingsetactivate`</li><li>`workingsetnodereclaim`</li><li>`workingsetrefault`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | Hypervisors metrics, collected mainly from `proc` filesystem of hypervisor process. | Metric name | Type | Units | Labels | Introduced in Kata version | |||||| | `katahypervisorfds`: <br> Open FDs for hypervisor. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katahypervisoriostat`: <br> Process IO statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/io`)<ul><li>`cancelledwritebytes`</li><li>`rchar`</li><li>`readbytes`</li><li>`syscr`</li><li>`syscw`</li><li>`wchar`</li><li>`writebytes`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katahypervisornetdev`: <br> Net devices statistics. 
| `GAUGE` | | <ul><li>`interface` (network device name)</li><li>`item` (see `/proc/net/dev`)<ul><li>`recvbytes`</li><li>`recvcompressed`</li><li>`recvdrop`</li><li>`recverrs`</li><li>`recvfifo`</li><li>`recvframe`</li><li>`recvmulticast`</li><li>`recvpackets`</li><li>`sentbytes`</li><li>`sentcarrier`</li><li>`sentcolls`</li><li>`sentcompressed`</li><li>`sentdrop`</li><li>`senterrs`</li><li>`sentfifo`</li><li>`sentpackets`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katahypervisorprocstat`: <br> Hypervisor process statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/stat`)<ul><li>`cstime`</li><li>`cutime`</li><li>`stime`</li><li>`utime`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katahypervisorprocstatus`: <br> Hypervisor process status. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/status`)<ul><li>`hugetlbpages`</li><li>`nonvoluntaryctxtswitches`</li><li>`rssanon`</li><li>`rssfile`</li><li>`rssshmem`</li><li>`vmdata`</li><li>`vmexe`</li><li>`vmhwm`</li><li>`vmlck`</li><li>`vmlib`</li><li>`vmpeak`</li><li>`vmpin`</li><li>`vmpmd`</li><li>`vmpte`</li><li>`vmrss`</li><li>`vmsize`</li><li>`vmstk`</li><li>`vmswap`</li><li>`voluntaryctxtswitches`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katahypervisorthreads`: <br> Hypervisor process threads. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | Metrics about monitor itself. | Metric name | Type | Units | Labels | Introduced in Kata version | |||||| | `katamonitorgogcduration_seconds`: <br> A summary of the pause duration of garbage collection cycles. | `SUMMARY` | `seconds` | | 2.0.0 | | `katamonitorgo_goroutines`: <br> Number of goroutines that currently exist. | `GAUGE` | | | 2.0.0 | | `katamonitorgo_info`: <br> Information about the Go environment. | `GAUGE` | | <ul><li>`version` (golang version)<ul><li>`go1.13.9` (environment dependent variable)</li></ul></li></ul> | 2.0.0 | | `katamonitorgomemstatsalloc_bytes`: <br> Number of bytes allocated and still in use. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsallocbytestotal`: <br> Total number of bytes allocated, even if freed. | `COUNTER` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsbuckhashsys_bytes`: <br> Number of bytes used by the profiling bucket hash table. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsfrees_total`: <br> Total number of frees. | `COUNTER` | | | 2.0.0 | | `katamonitorgomemstatsgccpufraction`: <br> The fraction of this program's available CPU time used by the GC since the program started. | `GAUGE` | | | 2.0.0 | | `katamonitorgomemstatsgcsysbytes`: <br> Number of bytes used for garbage collection system metadata. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsheapallocbytes`: <br> Number of heap bytes allocated and still in use. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsheapidlebytes`: <br> Number of heap bytes waiting to be used. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsheapinusebytes`: <br> Number of heap bytes that are in use. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsheap_objects`: <br> Number of allocated objects. | `GAUGE` | | | 2.0.0 | | `katamonitorgomemstatsheapreleasedbytes`: <br> Number of heap bytes released to OS. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsheapsysbytes`: <br> Number of heap bytes obtained from system. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatslastgctime_seconds`: <br> Number of seconds since 1970 of last garbage collection. 
| `GAUGE` | `seconds` | | 2.0.0 | | `katamonitorgomemstatslookups_total`: <br> Total number of pointer lookups. | `COUNTER` | | | 2.0.0 | | `katamonitorgomemstatsmallocs_total`: <br> Total number of `mallocs`. | `COUNTER` | | | 2.0.0 | | `katamonitorgomemstatsmcacheinusebytes`: <br> Number of bytes in use by `mcache` structures. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsmcachesysbytes`: <br> Number of bytes used for `mcache` structures obtained from system. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsmspaninusebytes`: <br> Number of bytes in use by `mspan` structures. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsmspansysbytes`: <br> Number of bytes used for `mspan` structures obtained from" }, { "data": "| `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsnextgcbytes`: <br> Number of heap bytes when next garbage collection will take place. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsothersysbytes`: <br> Number of bytes used for other system allocations. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsstackinusebytes`: <br> Number of bytes in use by the stack allocator. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatsstacksysbytes`: <br> Number of bytes obtained from system for stack allocator. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgomemstatssys_bytes`: <br> Number of bytes obtained from system. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorgo_threads`: <br> Number of OS threads created. | `GAUGE` | | | 2.0.0 | | `katamonitorprocesscpuseconds_total`: <br> Total user and system CPU time spent in seconds. | `COUNTER` | `seconds` | | 2.0.0 | | `katamonitorprocessmaxfds`: <br> Maximum number of open file descriptors. | `GAUGE` | | | 2.0.0 | | `katamonitorprocessopenfds`: <br> Number of open file descriptors. | `GAUGE` | | | 2.0.0 | | `katamonitorprocessresidentmemory_bytes`: <br> Resident memory size in bytes. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorprocessstarttime_seconds`: <br> Start time of the process since `unix` epoch in seconds. | `GAUGE` | `seconds` | | 2.0.0 | | `katamonitorprocessvirtualmemory_bytes`: <br> Virtual memory size in bytes. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorprocessvirtualmemorymaxbytes`: <br> Maximum amount of virtual memory available in bytes. | `GAUGE` | `bytes` | | 2.0.0 | | `katamonitorrunningshimcount`: <br> Running shim count(running sandboxes). | `GAUGE` | | | 2.0.0 | | `katamonitorscrape_count`: <br> Scape count. | `COUNTER` | | | 2.0.0 | | `katamonitorscrapedurationshistogram_milliseconds`: <br> Time used to scrape from shims | `HISTOGRAM` | `milliseconds` | | 2.0.0 | | `katamonitorscrapefailedcount`: <br> Failed scape count. | `COUNTER` | | | 2.0.0 | Metrics about Kata containerd shim v2 process. | Metric name | Type | Units | Labels | Introduced in Kata version | |||||| | `katashimagentrpcdurationshistogrammilliseconds`: <br> RPC latency distributions. 
| `HISTOGRAM` | `milliseconds` | <ul><li>`action` (RPC actions of Kata agent)<ul><li>`grpc.CheckRequest`</li><li>`grpc.CloseStdinRequest`</li><li>`grpc.CopyFileRequest`</li><li>`grpc.CreateContainerRequest`</li><li>`grpc.CreateSandboxRequest`</li><li>`grpc.DestroySandboxRequest`</li><li>`grpc.ExecProcessRequest`</li><li>`grpc.GetMetricsRequest`</li><li>`grpc.GuestDetailsRequest`</li><li>`grpc.ListInterfacesRequest`</li><li>`grpc.ListProcessesRequest`</li><li>`grpc.ListRoutesRequest`</li><li>`grpc.MemHotplugByProbeRequest`</li><li>`grpc.OnlineCPUMemRequest`</li><li>`grpc.PauseContainerRequest`</li><li>`grpc.RemoveContainerRequest`</li><li>`grpc.ReseedRandomDevRequest`</li><li>`grpc.ResumeContainerRequest`</li><li>`grpc.SetGuestDateTimeRequest`</li><li>`grpc.SignalProcessRequest`</li><li>`grpc.StartContainerRequest`</li><li>`grpc.StatsContainerRequest`</li><li>`grpc.TtyWinResizeRequest`</li><li>`grpc.UpdateContainerRequest`</li><li>`grpc.UpdateInterfaceRequest`</li><li>`grpc.UpdateRoutesRequest`</li><li>`grpc.WaitProcessRequest`</li><li>`grpc.WriteStreamRequest`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimfds`: <br> Kata containerd shim v2 open FDs. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgogcdurationseconds`: <br> A summary of the pause duration of garbage collection cycles. | `SUMMARY` | `seconds` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgogoroutines`: <br> Number of goroutines that currently exist. | `GAUGE` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgoinfo`: <br> Information about the Go environment. | `GAUGE` | | <ul><li>`sandboxid`</li><li>`version` (golang version)<ul><li>`go1.13.9` (environment dependent variable)</li></ul></li></ul> | 2.0.0 | | `katashimgomemstatsallocbytes`: <br> Number of bytes allocated and still in use. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgomemstatsallocbytestotal`: <br> Total number of bytes allocated, even if freed. | `COUNTER` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsbuckhashsysbytes`: <br> Number of bytes used by the profiling bucket hash table. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgomemstatsfreestotal`: <br> Total number of frees. | `COUNTER` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgomemstatsgccpufraction`: <br> The fraction of this program's available CPU time used by the GC since the program started. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsgcsysbytes`: <br> Number of bytes used for garbage collection system metadata. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsheapallocbytes`: <br> Number of heap bytes allocated and still in use. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsheapidlebytes`: <br> Number of heap bytes waiting to be used. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsheapinusebytes`: <br> Number of heap bytes that are in" }, { "data": "| `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsheapobjects`: <br> Number of allocated objects. | `GAUGE` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgomemstatsheapreleasedbytes`: <br> Number of heap bytes released to OS. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsheapsysbytes`: <br> Number of heap bytes obtained from system. 
| `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatslastgctimeseconds`: <br> Number of seconds since 1970 of last garbage collection. | `GAUGE` | `seconds` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgomemstatslookupstotal`: <br> Total number of pointer lookups. | `COUNTER` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgomemstatsmallocstotal`: <br> Total number of `mallocs`. | `COUNTER` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgomemstatsmcacheinusebytes`: <br> Number of bytes in use by `mcache` structures. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsmcachesysbytes`: <br> Number of bytes used for `mcache` structures obtained from system. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsmspaninusebytes`: <br> Number of bytes in use by `mspan` structures. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsmspansysbytes`: <br> Number of bytes used for `mspan` structures obtained from system. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsnextgcbytes`: <br> Number of heap bytes when next garbage collection will take place. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsothersysbytes`: <br> Number of bytes used for other system allocations. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsstackinusebytes`: <br> Number of bytes in use by the stack allocator. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatsstacksysbytes`: <br> Number of bytes obtained from system for stack allocator. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimgomemstatssysbytes`: <br> Number of bytes obtained from system. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimgothreads`: <br> Number of OS threads created. | `GAUGE` | | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimiostat`: <br> Kata containerd shim v2 process IO statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/io`)<ul><li>`cancelledwritebytes`</li><li>`rchar`</li><li>`readbytes`</li><li>`syscr`</li><li>`syscw`</li><li>`wchar`</li><li>`writebytes`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimnetdev`: <br> Kata containerd shim v2 network devices statistics. | `GAUGE` | | <ul><li>`interface` (network device name)</li><li>`item` (see `/proc/net/dev`)<ul><li>`recvbytes`</li><li>`recvcompressed`</li><li>`recvdrop`</li><li>`recverrs`</li><li>`recvfifo`</li><li>`recvframe`</li><li>`recvmulticast`</li><li>`recvpackets`</li><li>`sentbytes`</li><li>`sentcarrier`</li><li>`sentcolls`</li><li>`sentcompressed`</li><li>`sentdrop`</li><li>`senterrs`</li><li>`sentfifo`</li><li>`sentpackets`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimpodoverheadcpu`: <br> Kata Pod overhead for CPU resources(percent). | `GAUGE` | percent | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimpodoverheadmemoryinbytes`: <br> Kata Pod overhead for memory resources(bytes). | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimprocstat`: <br> Kata containerd shim v2 process statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/stat`)<ul><li>`cstime`</li><li>`cutime`</li><li>`stime`</li><li>`utime`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimprocstatus`: <br> Kata containerd shim v2 process status. 
| `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/status`)<ul><li>`hugetlbpages`</li><li>`nonvoluntaryctxtswitches`</li><li>`rssanon`</li><li>`rssfile`</li><li>`rssshmem`</li><li>`vmdata`</li><li>`vmexe`</li><li>`vmhwm`</li><li>`vmlck`</li><li>`vmlib`</li><li>`vmpeak`</li><li>`vmpin`</li><li>`vmpmd`</li><li>`vmpte`</li><li>`vmrss`</li><li>`vmsize`</li><li>`vmstk`</li><li>`vmswap`</li><li>`voluntaryctxtswitches`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimprocesscpusecondstotal`: <br> Total user and system CPU time spent in seconds. | `COUNTER` | `seconds` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimprocessmaxfds`: <br> Maximum number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimprocessopenfds`: <br> Number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimprocessresidentmemorybytes`: <br> Resident memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimprocessstarttimeseconds`: <br> Start time of the process since `unix` epoch in seconds. | `GAUGE` | `seconds` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimprocessvirtualmemorybytes`: <br> Virtual memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimprocessvirtualmemorymaxbytes`: <br> Maximum amount of virtual memory available in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 | | `katashimrpcdurationshistogrammilliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (Kata shim v2 actions)<ul><li>`checkpoint`</li><li>`closeio`</li><li>`connect`</li><li>`create`</li><li>`delete`</li><li>`exec`</li><li>`kill`</li><li>`pause`</li><li>`pids`</li><li>`resizepty`</li><li>`resume`</li><li>`shutdown`</li><li>`start`</li><li>`state`</li><li>`stats`</li><li>`update`</li><li>`wait`</li></ul></li><li>`sandboxid`</li></ul> | 2.0.0 | | `katashimthreads`: <br> Kata containerd shim v2 process threads. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 |" } ]
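Returning to the scrape-size estimate given before the metrics reference above: the formulas translate directly into a quick back-of-the-envelope check for a node. The snippet below is a small sketch using the measured per-sandbox constants (9K/144K uncompressed, 2K/10K gzipped); the results are estimates, not guarantees.

```bash
# Estimate the size of one Prometheus scrape of kata-monitor for N sandboxes,
# using the constants measured above (results in KB).
N=10
plain=$((9 + (144 - 9) * N))    # ~1359 KB, i.e. roughly 1.35M
gzipped=$((2 + (10 - 2) * N))   # ~82 KB
echo "estimated scrape size for ${N} sandboxes: ${plain}K plain, ${gzipped}K gzipped"
```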
{ "category": "Runtime", "file_name": "kata-2-0-metrics.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "(standard-devices)= Incus provides each instance with the basic devices that are required for a standard POSIX system to work. These devices aren't visible in the instance or profile configuration, and they may not be overridden. The standard devices are: | Device | Type of device | |:|:| | `/dev/null` | Character device | | `/dev/zero` | Character device | | `/dev/full` | Character device | | `/dev/console` | Character device | | `/dev/tty` | Character device | | `/dev/random` | Character device | | `/dev/urandom` | Character device | | `/dev/net/tun` | Character device | | `/dev/fuse` | Character device | | `lo` | Network interface | Any other devices must be defined in the instance configuration or in one of the profiles used by the instance. The default profile typically contains a network interface that becomes `eth0` in the instance." } ]
{ "category": "Runtime", "file_name": "standard_devices.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "The purpose of the document is to define a category for various xlators and expectations around what each category means from a perspective of health and maintenance of a xlator. The need to do this is to ensure certain categories are kept in good health, and helps the community and contributors focus their efforts around the same. This document also provides implementation details for xlator developers to declare a category for any xlator. Audience Categories (and expectations of each category) Implementation and usage details This document is intended for the following community participants, New xlator contributors Existing xlator maintainers Packaging and gluster management stack maintainers For a more user facing understanding it is recommended to read section (TBD) in the gluster documentation. Experimental (E) TechPreview (TP) Maintained (M) Deprecated (D) Obsolete (O) Developed in the experimental branch, for exploring new features. These xlators are NEVER packaged as a part of releases, interested users and contributors can build and work with these from sources. In the future, these maybe available as an package based on a weekly build of the same. Compiles or passes smoke tests Does not break nightly experimental regressions NOTE: If a nightly is broken, then all patches that were merged are reverted till the errant patch is found and subsequently fixed Xlators in master or release branches that are not deemed fit to be in production deployments, but are feature complete to invite feedback and host user data. These xlators will be worked upon with priority by maintainers/authors who are involved in making them more stable than xlators in the Experimental/Deprecated/ Obsolete categories. There is no guarantee that these xlators will move to the Maintained state, and may just get Obsoleted based on feedback, or other project goals or technical alternatives. Same as Maintained, minus Performance, Scale, other(?) TBD NOTE Need inputs, Intention is all quality goals as in Maintained, other than the list above (which for now has scale and performance) These xltors are part of the core Gluster functionality and are maintained actively. These are part of master and release branches and are higher in priority of maintainers and other interested contributors. NOTE: A short note on what each of these mean are added here, details to" }, { "data": "NOTE: Out of the gate all of the following are not mandated, consider the following a desirable state to reach as we progress on each Bug backlog: Actively address bug backlog Enhancement backlog: Actively maintain outstanding enhancement backlog (need not be acted on, but should be visible to all) Review backlog: Actively keep this below desired counts and states Static code health: Actively meet near-zero issues in this regard Coverity, spellcheck and other checks Runtime code health: Actively meet defined coverage levels in this regard Coverage, others? 
Per-patch regressions Glusto runs Performance Scalability Technical specifications: Implementation details should be documented and updated at regular cadence (even per patch that change assumptions in here) User documentation: User facing details should be maintained to current status in the documentation Debuggability: Steps, tools, procedures should be documented and maintained each release/patch as applicable Troubleshooting: Steps, tools, procedures should be documented and maintained each release/patch as applicable Steps/guides for self service Knowledge base for problems Other common criteria that will apply: Required metrics/desired states to be defined per criteria Monitoring, usability, statedump, and other such xlator expectations Xlators on master or release branches that would be obsoleted and/or replaced with similar or other functionality in the next major release. Retain status-quo when moved to this state, till it is moved to obsoleted Provide migration steps if feature provided by the xlator is replaced with other xlators Xlator/code still in tree, but not packaged or shipped or maintained in any form. This is noted as a category till the code is removed from the tree. These xlators and their corresponding code and test health will not be executed. None While defining 'xlatorapit' structure for the corresponding xlator, add a flag like below: ``` diff --git a/xlators/performance/nl-cache/src/nl-cache.c b/xlators/performance/nl-cache/src/nl-cache.c index 0f0e53bac2..8267d6897c 100644 a/xlators/performance/nl-cache/src/nl-cache.c +++ b/xlators/performance/nl-cache/src/nl-cache.c @@ -869,4 +869,5 @@ xlatorapit xlator_api = { .cbks = &nlc_cbks, .options = nlc_options, .identifier = \"nl-cache\", .category = GFTECHPREVIEW, }; diff --git a/xlators/performance/quick-read/src/quick-read.c b/xlators/performance/quick-read/src/quick-read.c index 8d39720e7f..235de27c19 100644 a/xlators/performance/quick-read/src/quick-read.c +++ b/xlators/performance/quick-read/src/quick-read.c @@ -1702,4 +1702,5 @@ xlatorapit xlator_api = { .cbks = &qr_cbks, .options = qr_options, .identifier = \"quick-read\", .category = GF_MAINTAINED, }; ``` Similarly, if a particular option is in different state other than the xlator state, one can add the same flag in options structure too. ``` diff --git a/xlators/cluster/afr/src/afr.c b/xlators/cluster/afr/src/afr.c index 0e86e33d03..81996743d1 100644 a/xlators/cluster/afr/src/afr.c +++ b/xlators/cluster/afr/src/afr.c @@ -772,6 +772,7 @@ struct volume_options options[] = { .description = \"Maximum latency for shd halo replication in msec.\" }, { .key = {\"halo-enabled\"}, .category = GFTECHPREVIEW, .type = GFOPTIONTYPE_BOOL, .default_value = \"False\", ``` This section details which category of xlators can be used when and specifics around when each category is enabled. Maintained category xlators can be used by default, this implies, volumes created with these xlators enabled will throw no warnings, or need no user intervention to use the xlator. Tech Preview category xlators needs cluster configuration changes to allow these xlatorss to be used in volumes, further, logs will contain a message stating TP xlators are in use. Without the cluster configured to allow TP xlators, volumes created or edited to use such xlators would result in errors. 
(TBD) Cluster configuration option (TBD) Warning message (TBD) Code mechanics on how this is achieved Deprecated category xlators can be used by default, but will throw a warning in the logs that such are in use and will be deprecated in the future. (TBD) Warning message Obsolete category xlators will not be packaged and hence cannot be used from release builds. Experimental category xlators will not be packaged and hence cannot be used from release builds, if running experimental (weekly or other such)" } ]
{ "category": "Runtime", "file_name": "xlator-classification.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "add batch consume kafka msg for scheduler, to solve handle blob delete msg too slow. remove kafka client manage offset local, let kafka server manage offset. upgrade action 1.Back up old version stop switch check status get current kafka offset ``` shell curl -XPOST --header 'Content-Type: application/json' http://$ip:$port/config/set -d '{\"key\":\"blob_delete\", \"value\":\"false\"}' curl -XPOST --header 'Content-Type: application/json' http://$ip:$port/config/set -d '{\"key\":\"shard_repair\", \"value\":\"false\"}' curl http://$ip:$port/config/get?key=blob_delete curl $ip2:$port2/metrics | grep kafkatopicpartitionoffset | grep consume > metricoffset.log ``` e.g. metric_offset.log: It has 4 kafka partitions kafkatopicpartitionoffset{clusterid=\"102\",modulename=\"SCHEDULER\",partition=\"1\",topic=\"blobdelete_102\",type=\"consume\"} 1.2053985e+07 kafkatopicpartitionoffset{clusterid=\"102\",modulename=\"SCHEDULER\",partition=\"2\",topic=\"blobdelete_102\",type=\"consume\"} 1.2051438e+07 kafkatopicpartitionoffset{clusterid=\"102\",modulename=\"SCHEDULER\",partition=\"3\",topic=\"blobdelete_102\",type=\"consume\"} 1234 kafkatopicpartitionoffset{clusterid=\"102\",modulename=\"SCHEDULER\",partition=\"4\",topic=\"blobdelete_102\",type=\"consume\"} 1025 2.Set kafka offset batch set kafka offset ``` shell DIR=\"/xxx/kafka_2.12-2.3.1-host/bin\" # kafka path EXEC=\"kafka-consumer-groups.sh\" HOST=\"192.168.0.3:9095\" # kafka host TOPIC=\"blobdelete102\" GROUP=\"SCHEDULER-$TOPIC\" arrayoffset=(12053985 12051438 1234 1025) # From the file above(metricoffset.log) echo \"set consumer offset... ${TOPIC} ${GROUP}\" cd $DIR for i in {0..63} # number of kafka partitions do echo \" $i : ${array_offset[$i]}\" ./$EXEC --bootstrap-server $HOST --reset-offsets --topic ${TOPIC}:$i --group ${GROUP} --to-offset ${array_offset[$i]} --execute done echo \"all done...\" cd - ``` 3.update new version config file stop old version, and install new version open switch ``` json { \"blob_delete\": { \"maxbatchsize\": 10, # batch consumption size of kafka messages, default is 10. If the batch is full or the time interval is reached, consume the Kafka messages accumulated during this period \"batchintervals\": 2, # time interval for consuming kafka messages, default is 2s \"messageslowdowntimes\": 3, # slow down when overload, default is 3s } } ``` ``` shell curl -XPOST --header 'Content-Type: application/json' http://$ip:$port/config/set -d '{\"key\":\"blob_delete\", \"value\":\"true\"}' curl -XPOST --header 'Content-Type: application/json' http://$ip:$port/config/set -d '{\"key\":\"shard_repair\", \"value\":\"true\"}' curl http://$ip:$port/config/get?key=blob_delete ```" } ]
{ "category": "Runtime", "file_name": "UPDATE.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide will walk you through signing, distributing, and verifying the hello ACI created in the . ``` hello-0.0.1-linux-amd64.aci ``` By default rkt requires ACIs to be signed using a gpg detached signature. The following steps will walk you through the creation of a gpg keypair suitable for signing an ACI. If you have an existing gpg signing key skip to the step. Create a file named `gpg-batch` with the following contents: ``` %echo Generating a default key Key-Type: RSA Key-Length: 2048 Subkey-Type: RSA Subkey-Length: 2048 Name-Real: Carly Container Name-Comment: ACI signing key Name-Email: carly@example.com Expire-Date: 0 Passphrase: rkt %pubring rkt.pub %secring rkt.sec %commit %echo done ``` ``` $ gpg --batch --gen-key gpg-batch ``` ``` $ gpg --no-default-keyring \\ --secret-keyring ./rkt.sec --keyring ./rkt.pub --list-keys ./rkt.pub pub 2048R/26EF7A14 2015-01-09 uid [ unknown] Carly Container (ACI signing key) <carly@example.com> sub 2048R/B9C074CD 2015-01-09 ``` From the output above the level of trust for the signing key is unknown. This will cause the following warning if we attempt to validate an ACI signed with this key using the gpg cli: ``` gpg: WARNING: This key is not certified with a trusted signature! ``` Since we know exactly where this key came from let's trust it: ``` $ gpg --no-default-keyring \\ --secret-keyring ./rkt.sec \\ --keyring ./rkt.pub \\ --edit-key 26EF7A14 \\ trust gpg (GnuPG/MacGPG2) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Secret key is available. pub 2048R/26EF7A14 created: 2015-01-09 expires: never usage: SC trust: unknown validity: unknown sub 2048R/B9C074CD created: 2015-01-09 expires: never usage: E [ unknown] (1). Carly Container (ACI signing key) <carly@example.com> Please decide how far you trust this user to correctly verify other users' keys (by looking at passports, checking fingerprints from different sources, etc.) 1 = I don't know or won't say 2 = I do NOT trust 3 = I trust marginally 4 = I trust fully 5 = I trust ultimately m = back to the main menu Your decision? 5 Do you really want to set this key to ultimate trust? (y/N) y pub 2048R/26EF7A14 created: 2015-01-09 expires: never usage: SC trust: ultimate validity: unknown sub 2048R/B9C074CD created: 2015-01-09 expires: never usage: E [ unknown] (1). Carly Container (ACI signing key) <carly@example.com> Please note that the shown key validity is not necessarily correct unless you restart the program. 
gpg> quit ``` ``` $ gpg --no-default-keyring --armor \\ --secret-keyring ./rkt.sec --keyring ./rkt.pub \\ --export carly@example.com > pubkeys.gpg ``` ``` $ gpg --no-default-keyring --armor \\ --secret-keyring ./rkt.sec --keyring ./rkt.pub \\ --output hello-0.0.1-linux-amd64.aci.asc \\ --detach-sig hello-0.0.1-linux-amd64.aci ``` ``` $ gpg --no-default-keyring \\ --secret-keyring ./rkt.sec --keyring ./rkt.pub \\ --verify hello-0.0.1-linux-amd64.aci.asc" }, { "data": "gpg: Signature made Fri Jan 9 05:01:49 2015 PST using RSA key ID 26EF7A14 gpg: Good signature from \"Carly Container (ACI signing key) <carly@example.com>\" [ultimate] ``` At this point you should have the following three files: ``` hello-0.0.1-linux-amd64.aci.asc hello-0.0.1-linux-amd64.aci pubkeys.gpg ``` Host `example.com/hello` with the following HTML contents and meta tags: ```html <!DOCTYPE html> <html lang=\"en\"> <head> <meta charset=\"utf-8\"> <meta name=\"ac-discovery\" content=\"example.com/hello https://example.com/images/{name}-{version}-{os}-{arch}.{ext}\"> <meta name=\"ac-discovery-pubkeys\" content=\"example.com/hello https://example.com/pubkeys.gpg\"> </head> </html> ``` Serve the following files at the locations described in the meta tags: ``` https://example.com/images/example.com/hello-0.0.1-linux-amd64.aci.asc https://example.com/images/example.com/hello-0.0.1-linux-amd64.aci https://example.com/pubkeys.gpg ``` Let's walk through the steps rkt takes when fetching images using Meta Discovery. The following rkt command: ``` $ rkt run example.com/hello:0.0.1 ``` results in rkt retrieving the following URIs: ``` https://example.com/hello?ac-discovery=1 https://example.com/images/example.com/hello-0.0.1-linux-amd64.aci https://example.com/images/example.com/hello-0.0.1-linux-amd64.aci.asc ``` The first response contains the template URL used to download the ACI and detached signature file. ``` <meta name=\"ac-discovery\" content=\"example.com/hello https://example.com/images/{name}-{version}-{os}-{arch}.{ext}\"> ``` rkt populates the `{os}` and `{arch}` based on the current running system. The `{version}` will be taken from the tag given on the command line or \"latest\" if not supplied. The `{ext}` will be substituted appropriately depending on artifact being retrieved: .aci will be used for ACI images and .aci.asc will be used for detached signatures. Once the ACI image has been downloaded rkt will extract the image's name from the image metadata. The image's name will be used to locate trusted public keys in the rkt keystore and perform signature validation. By default rkt does not trust any signing keys. Trust is established by storing public keys in the rkt keystore. This can be done using or manually, using the procedures described in the next section. The following directories make up the default rkt keystore layout: ``` /etc/rkt/trustedkeys/root.d /etc/rkt/trustedkeys/prefix.d /usr/lib/rkt/trustedkeys/root.d /usr/lib/rkt/trustedkeys/prefix.d ``` System administrators should store trusted keys under `/etc/rkt` as `/usr/lib/rkt` is designed to be used by the OS distribution. Trusted keys are saved in the desired directory named after the fingerprint of the public key. System administrators can \"disable\" a trusted key by writing an empty file under `/etc/rkt`. 
For example, if your OS distribution shipped with the following trusted key: ``` /usr/lib/rkt/trustedkeys/prefix.d/coreos.com/a175e31de7e3c5b9d2c4603e4dfb22bf75ef7a23 ``` you can disable it by writing the following empty file: ``` /etc/rkt/trustedkeys/prefix.d/coreos.com/a175e31de7e3c5b9d2c4603e4dfb22bf75ef7a23 ``` As an example, let's look at how we can trust a key used to sign images of the prefix `example.com/hello` The easiest way to trust a key is to use the subcommand. In this case, we directly pass it the URI containing the public key we wish to trust: ``` $ rkt trust --prefix=example.com/hello https://example.com/pubkeys.gpg Prefix: \"example.com/hello\" Key: \"https://example.com/aci-pubkeys.gpg\" GPG key fingerprint is: B346 E31D E7E3 C6F9 D1D4 603F 4DFB 61BF 26EF 7A14 Carly Container (ACI signing key)" }, { "data": "Are you sure you want to trust this key (yes/no)? yes Trusting \"https://example.com/aci-pubkeys.gpg\" for prefix \"example.com/hello\". Added key for prefix \"example.com/hello\" at \"/etc/rkt/trustedkeys/prefix.d/example.com/hello/b346e31de7e3c6f9d1d4603f4dfb61bf26ef7a14\" ``` Now the public key with fingerprint `b346e31de7e3c6f9d1d4603f4dfb61bf26ef7a14` will be trusted for all images with a name prefix of `example.com/hello`. An alternative to using is to manually trust keys by adding them to rkt's database. We do this by downloading the key, capturing its fingerprint, and storing it in the database using the fingerprint as filename ``` $ curl -O https://example.com/pubkeys.gpg ``` ``` $ gpg --no-default-keyring --with-fingerprint --keyring ./pubkeys.gpg carly@example.com pub 2048R/26EF7A14 2015-01-09 Key fingerprint = B346 E31D E7E3 C6F9 D1D4 603F 4DFB 61BF 26EF 7A14 uid [ unknown] Carly Container (ACI signing key) <carly@example.com> sub 2048R/B9C074CD 2015-01-09 ``` Remove white spaces and convert to lowercase: ``` $ echo \"B346 E31D E7E3 C6F9 D1D4 603F 4DFB 61BF 26EF 7A14\" | \\ tr -d \"[:space:]\" | tr '[:upper:]' '[:lower:]' ``` ``` b346e31de7e3c6f9d1d4603f4dfb61bf26ef7a14 ``` ``` mkdir -p /etc/rkt/trustedkeys/prefix.d/example.com/hello mv pubkeys.gpg /etc/rkt/trustedkeys/prefix.d/example.com/hello/b346e31de7e3c6f9d1d4603f4dfb61bf26ef7a14 ``` Now the public key with fingerprint `b346e31de7e3c6f9d1d4603f4dfb61bf26ef7a14` will be trusted for all images with a name prefix of `example.com/hello`. If you would like to trust a public key for any image, store the public key in one of the following \"root\" directories: ``` /etc/rkt/trustedkeys/root.d /usr/lib/rkt/trustedkeys/root.d ``` By default rkt will attempt to download the ACI detached signature and verify the image: ``` rkt: starting to discover app img example.com/hello:0.0.1 rkt: starting to fetch img from http://example.com/images/example.com/hello-0.0.1-linux-amd64.aci Downloading aci: [ ] 7.24 KB/1.26 MB rkt: example.com/hello:0.0.1 verified signed by: Carly Container (ACI signing key) <carly@example.com> /etc/localtime is not a symlink, not updating container timezone. ^]^]Container stage1 terminated by signal KILL. ``` Use the `--insecure-options=image` flag to disable image verification for a single run: ``` rkt: starting to discover app img example.com/hello:0.0.1 rkt: starting to fetch img from http://example.com/images/example.com/hello-0.0.1-linux-amd64.aci rkt: warning: image signature verification has been disabled Downloading aci: [= ] 32.8 KB/1.26 MB /etc/localtime is not a symlink, not updating container timezone. ^]^]Container stage1 terminated by signal KILL. 
``` Notice when the `--insecure-options=image` flag is used, rkt will print the following warning: ``` rkt: warning: image signature verification has been disabled ``` Using the subcommand you can download and verify an ACI without immediately running a pod. This can be useful to precache ACIs on a large number of hosts: ``` rkt: starting to discover app img example.com/hello:0.0.1 rkt: starting to fetch img from http://example.com/images/example.com/hello-0.0.1-linux-amd64.aci Downloading aci: [ ] 14.5 KB/1.26 MB rkt: example.com/hello:0.0.1 verified signed by: Carly Container (ACI signing key) <carly@example.com> sha512-b3f138e10482d4b5f334294d69ae5c40 ``` As before, use the `--insecure-options=image` flag to disable image verification: ``` rkt: starting to discover app img example.com/hello:0.0.1 rkt: starting to fetch img from http://example.com/images/example.com/hello-0.0.1-linux-amd64.aci rkt: warning: image signature verification has been disabled Downloading aci: [ ] 4.34 KB/1.26 MB sha512-b3f138e10482d4b5f334294d69ae5c40 ```" } ]
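If you automate the manual trust procedure above, the only non-obvious step is turning a GPG fingerprint into the keystore filename. The Go sketch below mirrors the `tr` pipeline shown earlier (strip whitespace, lowercase) and joins the result onto the per-prefix directory; the `trustedKeyPath` helper is ours for illustration and is not part of rkt.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// trustedKeyPath mirrors the manual steps above: remove whitespace from the
// GPG fingerprint, lowercase it, and use the result as the filename under the
// per-prefix keystore directory.
func trustedKeyPath(prefix, fingerprint string) string {
	normalized := strings.ToLower(strings.Join(strings.Fields(fingerprint), ""))
	return filepath.Join("/etc/rkt/trustedkeys/prefix.d", prefix, normalized)
}

func main() {
	fmt.Println(trustedKeyPath("example.com/hello",
		"B346 E31D E7E3 C6F9 D1D4 603F 4DFB 61BF 26EF 7A14"))
	// /etc/rkt/trustedkeys/prefix.d/example.com/hello/b346e31de7e3c6f9d1d4603f4dfb61bf26ef7a14
}
```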
{ "category": "Runtime", "file_name": "signing-and-verification-guide.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "In Docker v1.13 and later, the default iptables forwarding policy was changed to `DROP`. For more detail on the Docker change, see the Docker . This problems manifests itself as connectivity problems between containers running on different hosts. To resolve it upgrade to the latest version of flannel. Flannel uses the `klog` library but only supports logging to stderr. The severity level can't be changed but the verbosity can be changed with the `-v` option. Flannel does not make extensive use of the verbosity level but increasing the value from `0` (the default) will result in some additional logs. To get the most detailed logs, use `-v=10` ``` -v value log level for V logs -vmodule value comma-separated list of pattern=N settings for file-filtered logging -logbacktraceat value when logging hits line file:N, emit a stack trace ``` When running under systemd (e.g. on CoreOS Container Linux) the logs can be viewed with `journalctl -u flanneld` When flannel is running as a pod on Kubernetes, the logs can be viewed with `kubectl logs --namespace kube-flannel <POD_ID> -c kube-flannel`. You can find the pod IDs with `kubectl get pod --namespace kube-flannel -l app=flannel` Most backends require that each node has a unique \"public IP\" address. This address is chosen when flannel starts. Because leases are tied to the public address, if the address changes, flannel must be restarted. The interface chosen and the public IP in use is logged out during startup, e.g. ``` I0629 14:28:35.866793 5522 main.go:386] Determining IP address of default interface I0629 14:28:35.866987 5522 main.go:399] Using interface with name enp62s0u1u2 and address 172.24.17.174 I0629 14:28:35.867000 5522 main.go:412] Using 10.10.10.10 as external address ``` Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed. This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the `--iface=eth1` flag to flannel so that the second interface is chosen. When the public IP is behind NAT, the UDP checksum fields of the VXLAN packets can be corrupted. In that case, try running the following commands to avoid corrupted checksums: ```bash /usr/sbin/ethtool -K flannel.1 tx-checksum-ip-generic off ``` To automate the command above via udev, create `/etc/udev/rules.d/90-flannel.rules` as follows: ``` SUBSYSTEM==\"net\", ACTION==\"add|change|move\", ENV{INTERFACE}==\"flannel.1\", RUN+=\"/usr/sbin/ethtool -K flannel.1 tx-checksum-ip-generic off\" ``` <!-- ref: https://github.com/flannel-io/flannel/issues/1279 https://github.com/kubernetes/kops/pull/9074 https://github.com/karmab/kcli/commit/b1a8eff658d17cf4e28162f0fa2c8b2b10e5ad00 --> Depending on the backend being used, flannel may need to run with super user permissions. Examples include creating VXLAN devices or programming routes. If you see errors similar to the following, confirm that the user running flannel has the right permissions (or try running with `sudo)`. `Error adding route...` `Add L2 failed` `Failed to set up IP Masquerade` `Error registering network: operation not permitted` Flannel is known to scale to a very large number of" }, { "data": "A delay in contacting pods in a newly created host may indicate control plane problems. 
Flannel doesn't need much CPU or RAM but the first thing to check would be that it has adequate resources available. Flannel is also reliant on the performance of the datastore, either etcd or the Kubernetes API server. Check that they are performing well. Flannel relies on the underlying network so that's the first thing to check if you're seeing poor data plane performance. There are two flannel specific choices that can have a big impact on performance 1) The type of backend. For example, if encapsulation is used, `vxlan` will always perform better than `udp`. For maximum data plane performance, avoid encapsulation. 2) The size of the MTU can have a large impact. To achieve maximum raw bandwidth, a network supporting a large MTU should be used. Flannel writes an MTU setting to the `subnet.env` file. This file is read by either the Docker daemon or the CNI flannel plugin which does the networking for individual containers. To troubleshoot, first ensure that the network interface that flannel is using has the right MTU. Then check that the correct MTU is written to the `subnet.env`. Finally, check that the containers have the correct MTU on their virtual ethernet device. When using `udp` backend, flannel uses UDP port 8285 for sending encapsulated packets. When using `vxlan` backend, kernel uses UDP port 8472 for sending encapsulated packets. Make sure that your firewall rules allow this traffic for all hosts participating in the overlay network. Make sure that your firewall rules allow traffic from pod network cidr visit your kubernetes master node. The flannel kube subnet manager relies on the fact that each node already has a `podCIDR` defined. You can check the podCidr for your nodes with one of the following two commands `kubectl get nodes -o jsonpath='{.items[].spec.podCIDR}'` `kubectl get nodes -o template --template={{.spec.podCIDR}}` If your nodes do not have a podCIDR, then either use the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options. If `kubeadm` is being used then pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init` which will ensure that all nodes are automatically assigned a `podCIDR`. It's possible (but not generally recommended) to manually set the `podCIDR` to a fixed value for each node. The node subnet ranges must not overlap. `kubectl patch node <NODE_NAME> -p '{\"spec\":{\"podCIDR\":\"<SUBNET>\"}}'` `failed to read net conf` - flannel expects to be able to read the net conf from \"/etc/kube-flannel/net-conf.json\". In the provided manifest, this is set up in the `kube-flannel-cfg` ConfigMap. `error parsing subnet config` - The net conf is malformed. Double check that it has the right content and is valid JSON. `node <NODE_NAME> pod cidr not assigned` - The node doesn't have a `podCIDR` defined. See above for more info. `Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-abc123': the server does not allow access to the requested resource` - The kubernetes cluster has RBAC enabled. Run `https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-old-manifests/kube-flannel-rbac.yml`" } ]
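As a quick check for the MTU troubleshooting step above, the following Go sketch prints the MTU value recorded in `subnet.env`. The `/run/flannel/subnet.env` path and the `FLANNEL_MTU` key are assumptions based on a typical flannel deployment; confirm both on your own nodes before relying on this. Compare the printed value with the MTU reported on the containers' veth devices and on the underlying host interface.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Path and key name are assumptions; adjust to wherever flannel writes
	// subnet.env in your environment.
	f, err := os.Open("/run/flannel/subnet.env")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		if line := s.Text(); strings.HasPrefix(line, "FLANNEL_MTU=") {
			fmt.Println("flannel MTU:", strings.TrimPrefix(line, "FLANNEL_MTU="))
		}
	}
	if err := s.Err(); err != nil {
		log.Fatal(err)
	}
}
```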
{ "category": "Runtime", "file_name": "troubleshooting.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting urfave-governance@googlegroups.com, a members-only group that is world-postable. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html" } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "title: Ceph Operator Helm Chart {{ template \"generatedDocsWarning\" . }} Installs to create, configure, and manage Ceph clusters on Kubernetes. This chart bootstraps a deployment on a cluster using the package manager. Kubernetes 1.22+ Helm 3.x See the for more details. The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster. Install the Helm chart . The `helm install` command deploys rook on the Kubernetes cluster in the default configuration. The section lists the parameters that can be configured during installation. It is recommended that the rook operator be installed into the `rook-ceph` namespace (you will install your clusters into separate namespaces). Rook currently publishes builds of the Ceph operator to the `release` and `master` channels. The release channel is the most recent release of Rook that is considered stable for the community. ```console helm repo add rook-release https://charts.rook.io/release helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml ``` For example settings, see the next section or The following table lists the configurable parameters of the rook-operator chart and their default values. {{ template \"chart.valuesTable\" . }} To deploy from a local build from your development environment: Build the Rook docker image: `make` Copy the image to your K8s cluster, such as with the `docker save` then the `docker load` commands Install the helm chart: ```console cd deploy/charts/rook-ceph helm install --create-namespace --namespace rook-ceph rook-ceph . ``` To see the currently installed Rook chart: ```console helm ls --namespace rook-ceph ``` To uninstall/delete the `rook-ceph` deployment: ```console helm delete --namespace rook-ceph rook-ceph ``` The command removes all the Kubernetes components associated with the chart and deletes the release. After uninstalling you may want to clean up the CRDs as described on the ." } ]
{ "category": "Runtime", "file_name": "operator-chart.gotmpl.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 8 sidebar_label: \"Local Volumes\" Running a stateful application with HwameiStor is super easy. As an example, we will deploy a MySQL application by creating a local volume. :::note The yaml file for MySQL is learnt from ::: Make sure the StorageClasses have been created successfully by HwameiStor Operator. And then select one of them to provision the data volume for the application. ```console $ kubectl get sc hwameistor-storage-lvm-hdd -o yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hwameistor-storage-lvm-hdd parameters: convertible: \"false\" csi.storage.k8s.io/fstype: xfs poolClass: HDD poolType: REGULAR replicaNumber: \"1\" striped: \"true\" volumeKind: LVM provisioner: lvm.hwameistor.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true ``` With HwameiStor and its `StorageClass` ready, a MySQL StatefulSet and its volumes can be deployed by a single command: ```Console $ kubectl apply -f sts-mysql_local.yaml ``` Please note the `volumeClaimTemplates` uses `storageClassName: hwameistor-storage-lvm-hdd`: ```yaml spec: volumeClaimTemplates: metadata: name: data labels: app: sts-mysql-local app.kubernetes.io/name: sts-mysql-local spec: storageClassName: hwameistor-storage-lvm-hdd accessModes: [\"ReadWriteOnce\"] resources: requests: storage: 1Gi ``` Please note the minimum PVC size need to be over 4096 blocks, for example, 16MB with 4KB block. In this example, the pod is scheduled on node `k8s-worker-3`. ```console $ kubectl get po -l app=sts-mysql-local -o wide NAME READY STATUS RESTARTS AGE IP NODE sts-mysql-local-0 2/2 Running 0 3m08s 10.1.15.154 k8s-worker-3 $ kubectl get pvc -l app=sts-mysql-local NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE data-sts-mysql-local-0 Bound pvc-accf1ddd-6f47-4275-b520-dc317c90f80b 1Gi RWO hwameistor-storage-lvm-hdd 3m Filesystem ``` By listing `LocalVolume(LV)` objects with the same name as that of the `PV`, we can see that the local volume is also created on node `k8s-worker-3` ```console $ kubectl get lv pvc-accf1ddd-6f47-4275-b520-dc317c90f80b NAME POOL REPLICAS CAPACITY ACCESSIBILITY STATE RESOURCE PUBLISHED AGE pvc-accf1ddd-6f47-4275-b520-dc317c90f80b LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-3 3m ``` HwameiStor supports `StatefulSet` scaleout. Each `pod` of the `StatefulSet` will attach and mount an independent HwameiStor volume. 
```console $ kubectl scale sts/sts-mysql-local --replicas=3 $ kubectl get po -l app=sts-mysql-local -o wide NAME READY STATUS RESTARTS AGE IP NODE sts-mysql-local-0 2/2 Running 0 4h38m 10.1.15.154 k8s-worker-3 sts-mysql-local-1 2/2 Running 0 19m 10.1.57.44 k8s-worker-2 sts-mysql-local-2 0/2 Init:0/2 0 14s 10.1.42.237 k8s-worker-1 $ kubectl get pvc -l app=sts-mysql-local -o wide NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE data-sts-mysql-local-0 Bound pvc-accf1ddd-6f47-4275-b520-dc317c90f80b 1Gi RWO hwameistor-storage-lvm-hdd 3m07s Filesystem data-sts-mysql-local-1 Bound pvc-a4f8b067-9c1d-450f-aff4-5807d61f5d88 1Gi RWO hwameistor-storage-lvm-hdd 2m18s Filesystem data-sts-mysql-local-2 Bound pvc-47ee308d-77da-40ec-b06e-4f51499520c1 1Gi RWO hwameistor-storage-lvm-hdd 2m18s Filesystem $ kubectl get lv NAME POOL REPLICAS CAPACITY ACCESSIBILITY STATE RESOURCE PUBLISHED AGE pvc-47ee308d-77da-40ec-b06e-4f51499520c1 LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-1 2m50s pvc-a4f8b067-9c1d-450f-aff4-5807d61f5d88 LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-2 2m50s pvc-accf1ddd-6f47-4275-b520-dc317c90f80b LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-3 3m40s ```" } ]
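To make the minimum-size note above concrete: with an assumed 4 KiB block size, 4096 blocks works out to 16 MiB, so any request at or above that (such as the 1Gi used in the example PVC) is large enough. A tiny Go sketch of the same arithmetic, purely illustrative:

```go
package main

import "fmt"

func main() {
	const blockSize = 4 * 1024             // assumed 4 KiB filesystem block size
	const minBlocks = 4096                 // minimum block count noted above
	const minBytes = blockSize * minBlocks // 16 MiB

	request := int64(1 << 30) // the 1Gi request used in the PVC example
	fmt.Printf("minimum volume size: %d bytes (%d MiB)\n", minBytes, minBytes/(1024*1024))
	fmt.Println("1Gi request large enough:", request >= minBytes)
}
```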
{ "category": "Runtime", "file_name": "local.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "This user manual is made to assist users in using iSulad. `iSulad` can be integrated with kubernetes through the CRI interface. For integrating with kubernetes, please refer to . Device Mapper is a kernel-based framework that underpins many advanced volume management technologies on Linux. For devicemapper environment preparation, please refer to . If you want to use native network in iSulad, please refer to . If you want to Setup bridge network for Container, please refer to . If you want to run iSulad with a non-root user, please refer to . If you want to use isula search, please refer to ." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "This document should serve as a reference point for WasmEdge users and contributors to understand where the project is heading, and help determine if a contribution could be conflicting with a longer term plan. Discussion on the roadmap can take place in threads under . Please open and comment on an issue if you want to give suggestions and feedback to items in the roadmap. Please review the roadmap to avoid potential duplicated efforts. Please open an issue to track any initiative on the roadmap of WasmEdge (Usually driven by new feature requests). We will work with and rely on our community to focus our efforts to improve WasmEdge. The following table includes the current roadmap for WasmEdge. If you have any questions or would like to contribute to WasmEdge, please create an issue to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. Last Updated: Mar 2023 | Theme | Description | Timeline | | | | | | Proposal | Function reference proposal | Q2 / 2023 | | Proposal | Exception handling proposal | Q2 / 2023 | | Proposal | Typed continuation proposal | Q2 / 2023 | | Proposal | Stack-switch proposal | Q2 / 2023 | | Proposal | GC proposal | Q2 / 2023 | | Feature | WasmEdge unified tool | Q2 / 2023 | | Feature | WASM serialization/deserialization | Q2 / 2023 | | Proposal | WASI signature proposal | Q3 / 2023 | | Feature | Enhance info/debug logging, provide verbose mode (wasmedge a.wasm verbose=3) | Q3 / 2023 | | Feature | Wasm coredump | Q3 / 2023 | | Proposal | WASM C API proposal | Q4 / 2023 | | Proposal | WASM memory64 proposal | Q4 / 2023 | | Feature | WASI-NN training extension | Q4 / 2023 | | Feature | DWARF symbol | Q4 / 2023 | | Proposal | Component model proposal | ?? / 2023 | | Languages Bindings | | ?? / 2023 | | Feature | Enable PaddlePaddle backend for WASI-NN | ?? / 2023 | | Open Source Collaboration | Eventmesh, Krustlet, APISIX, Dapr, MOSN, KubeEdge, OpenYurt, Fedora, SuperEdge, UDF for SaaS and Databases, Substrate, ParaState, Filecoin | long term work |" } ]
{ "category": "Runtime", "file_name": "ROADMAP.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "myst: substitutions: type: \"instance\" (instances-backup)= There are different ways of backing up your instances: {ref}`instances-snapshots` {ref}`instances-backup-export` {ref}`instances-backup-copy` % Include content from ```{include} storagebackupvolume.md :start-after: <!-- Include start backup types --> :end-before: <!-- Include end backup types --> ``` ```{note} Custom storage volumes might be attached to an instance, but they are not part of the instance. Therefore, the content of a custom storage volume is not stored when you back up your instance. You must back up the data of your storage volume separately. See {ref}`howto-storage-backup-volume` for instructions. ``` (instances-snapshots)= You can save your instance at a point in time by creating an instance snapshot, which makes it easy to restore the instance to a previous state. Instance snapshots are stored in the same storage pool as the instance volume itself. % Include content from ```{include} storagebackupvolume.md :start-after: <!-- Include start optimized snapshots --> :end-before: <!-- Include end optimized snapshots --> ``` Use the following command to create a snapshot of an instance: incus snapshot create <instance_name> [<snapshot name>] % Include content from ```{include} storagebackupvolume.md :start-after: <!-- Include start create snapshot options --> :end-before: <!-- Include end create snapshot options --> ``` For virtual machines, you can add the `--stateful` flag to capture not only the data included in the instance volume but also the running state of the instance. Note that this feature is not fully supported for containers because of CRIU limitations. Use the following command to display the snapshots for an instance: incus info <instance_name> You can view or modify snapshots in a similar way to instances, by referring to the snapshot with `<instancename>/<snapshotname>`. To show configuration information about a snapshot, use the following command: incus config show <instancename>/<snapshotname> To change the expiry date of a snapshot, use the following command: incus config edit <instancename>/<snapshotname> ```{note} In general, snapshots cannot be edited, because they preserve the state of the instance. The only exception is the expiry date. Other changes to the configuration are silently ignored. ``` To delete a snapshot, use the following command: incus snapshot delete <instancename> <snapshotname> You can configure an instance to automatically create snapshots at specific times (at most once every minute). To do so, set the {config:option}`instance-snapshots:snapshots.schedule` instance" }, { "data": "For example, to configure daily snapshots, use the following command: incus config set <instance_name> snapshots.schedule @daily To configure taking a snapshot every day at 6 am, use the following command: incus config set <instance_name> snapshots.schedule \"0 6 *\" When scheduling regular snapshots, consider setting an automatic expiry ({config:option}`instance-snapshots:snapshots.expiry`) and a naming pattern for snapshots ({config:option}`instance-snapshots:snapshots.pattern`). You should also configure whether you want to take snapshots of instances that are not running ({config:option}`instance-snapshots:snapshots.schedule.stopped`). You can restore an instance to any of its snapshots. 
To do so, use the following command: incus snapshot restore <instancename> <snapshotname> If the snapshot is stateful (which means that it contains information about the running state of the instance), you can add the `--stateful` flag to restore the state. (instances-backup-export)= You can export the full content of your instance to a standalone file that can be stored at any location. For highest reliability, store the backup file on a different file system to ensure that it does not get lost or corrupted. Use the following command to export an instance to a compressed file (for example, `/path/to/my-instance.tgz`): incus export <instancename> [<filepath>] If you do not specify a file path, the export file is saved as `<instance_name>.<extension>` in the working directory (for example, `my-container.tar.gz`). ```{warning} If the output file (`<instance_name>.<extension>` or the specified file path) already exists, the command overwrites the existing file without warning. ``` % Include content from ```{include} storagebackupvolume.md :start-after: <!-- Include start export info --> :end-before: <!-- Include end export info --> ``` `--instance-only` : By default, the export file contains all snapshots of the instance. Add this flag to export the instance without its snapshots. You can import an export file (for example, `/path/to/my-backup.tgz`) as a new instance. To do so, use the following command: incus import <filepath> [<instancename>] If you do not specify an instance name, the original name of the exported instance is used for the new instance. If an instance with that name already (or still) exists in the specified storage pool, the command returns an error. In that case, either delete the existing instance before importing the backup or specify a different instance name for the import. (instances-backup-copy)= You can copy an instance to a secondary backup server to back it up. See {ref}`move-instances` for instructions." } ]
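If you want to export several instances in one pass, the `incus export` command shown above is easy to wrap in a small program. The sketch below simply shells out to the same CLI; the instance names, the backup directory, and the assumption that `incus` is on `PATH` are placeholders for your environment.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
)

func main() {
	backupDir := "/srv/backups"                 // placeholder destination on a separate file system
	instances := []string{"c1", "my-container"} // placeholder instance names

	for _, name := range instances {
		out := filepath.Join(backupDir, name+".tar.gz")
		// Equivalent to: incus export <instance_name> <file_path>
		cmd := exec.Command("incus", "export", name, out)
		if b, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("export of %s failed: %v\n%s", name, err, b)
		}
		fmt.Println("exported", name, "to", out)
	}
}
```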
{ "category": "Runtime", "file_name": "instances_backup.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "(cluster-manage-instance)= In a cluster setup, each instance lives on one of the cluster members. You can operate each instance from any cluster member, so you do not need to log on to the cluster member on which the instance is located. (cluster-target-instance)= When you launch an instance, you can target it to run on a specific cluster member. You can do this from any cluster member. For example, to launch an instance named `c1` on the cluster member `server2`, use the following command: incus launch images:ubuntu/22.04 c1 --target server2 You can launch instances on specific cluster members or on specific {ref}`cluster groups <howto-cluster-groups>`. If you do not specify a target, the instance is assigned to a cluster member automatically. See {ref}`clustering-instance-placement` for more information. To check on which member an instance is located, list all instances in the cluster: incus list The location column indicates the member on which each instance is running. You can move an existing instance to another cluster member. For example, to move the instance `c1` to the cluster member `server1`, use the following commands: incus stop c1 incus move c1 --target server1 incus start c1 See {ref}`move-instances` for more information. To move an instance to a member of a cluster group, use the group name prefixed with `@` for the `--target` flag. For example: incus move c1 --target @group1" } ]
{ "category": "Runtime", "file_name": "cluster_manage_instance.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Let us assume that you already have a working Go environment. You need to make sure that your `$GOPATH/bin` is included in your PATH. ``` sudo apt-get install vim ``` The following link has detailed instructions on setting up `vim-go` on your system and using the shortcuts. https://github.com/fatih/vim-go-tutorial#quick-setup The following is an extract of the command required to install on Ubuntu. ``` curl -fLo ~/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim git clone https://github.com/fatih/vim-go.git ~/.vim/plugged/vim-go ``` Create `~/.vimrc` with following content: ``` call plug#begin() Plug 'fatih/vim-go', { 'do': ':GoInstallBinaries' } call plug#end() ``` Launch vim. In the vim editor, type the following: ``` :PlugInstall ``` The above should show a window that Plugins are installed. Now, run the following in the same vim editor: ``` :GoInstallBinaries ``` You should see the dependent go binaries downloaded and installed in your `GOPATH/bin`. The `vim-go` repository provides a large number of additional plugins and shortcuts that can be added to the `.vimrc`. However, with the default `.vimrc` use above, you can already get going: Initiate the build `:GoBuild`. Navigate to the definition `ctrl-{`. Navigate back `ctrl-o`." } ]
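To confirm the setup works, it helps to have a small Go file to open in vim and exercise the commands above (`:GoBuild`, jumping to a definition and back). Something like the following is enough; the file and function names are arbitrary.

```go
// main.go — a throwaway file for trying out the vim-go shortcuts.
package main

import "fmt"

// greet exists only so you have a definition to jump to and back from.
func greet(name string) string {
	return "hello, " + name
}

func main() {
	fmt.Println(greet("openebs"))
}
```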
{ "category": "Runtime", "file_name": "quick-start-with-vim-go.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "[TOC] This section explains the steps required to install Falco+gVisor integration depending your environment. First, install and on the machine. Run `runsc --version` and check that `runsc version release-20220704.0` or newer is reported. Run `falco --version` and check that `Falco version` reports `0.33.1` or higher. Once both are installed, you can configure gVisor to connect to Falco whenever a new sandbox is started. The first command below generates a configuration file containing a list of trace points that Falco is interested in receiving. This file is passed to gVisor during container startup so that gVisor connects to Falco before the application starts. The second command installs runsc as a Docker runtime pointing it to the configuration file we just generated: ```shell falco --gvisor-generate-config | sudo tee /etc/falco/pod-init.json sudo runsc install --runtime=runsc-falco -- --pod-init-config=/etc/falco/pod-init.json sudo systemctl restart docker ``` gVisor is now configured. Next, let's start Falco and tell it to enable gVisor monitoring. You should use the same command line that you normally use to start Falco with these additional flags: `--gvisor-config`: path to the gVisor configuration file, in our case `/etc/falco/pod-init.json`. `--gvisor-root`: path to the `--root` flag that docker uses with gVisor, normally: `/var/run/docker/runtime-runc/moby`. For our example, let's just start with the default settings and rules: ```shell sudo falco \\ -c /etc/falco/falco.yaml \\ --gvisor-config /etc/falco/pod-init.json \\ --gvisor-root /var/run/docker/runtime-runc/moby ``` Note: If you get `Error: Cannot find runsc binary`, make sure `runsc` is in the `PATH`. From this point on, every time a gVisor sandbox starts, it connects to Falco to send trace points occurring inside the container. Those are translated into Falco events that are processed by the rules you have defined. If you used the command above, the configuration files are defined in `/etc/falco/facorules.yaml` and `/etc/falco/facorules.local.yaml` (where you can add your own rules). If you are using Kubernetes, the steps above must be done on every node that has gVisor enabled. Luckily, this can be done for you automatically using . You can find more details, like available options, in the section. Here is a quick example using , which already pre-configures gVisor for you. You can use any version that is equal or higher than 1.24.4-gke.1800: ```shell gcloud container clusters create my-cluster --release-channel=rapid --cluster-version=1.25 gcloud container node-pools create gvisor --sandbox=type=gvisor --cluster=my-cluster gcloud container clusters get-credentials my-cluster helm install falco-gvisor falcosecurity/falco \\ -f https://raw.githubusercontent.com/falcosecurity/charts/master/falco/values-gvisor-gke.yaml \\ --namespace falco-gvisor --create-namespace ``` Let's run something interesting inside a container to see a few rules trigger in" }, { "data": "Package managers, like `apt`, don't normally run inside containers in production, and often indicate that an attacker is trying to install tools to expose the container. To detect such cases, the default set of rules trigger an `Error` event when the package manager is invoked. 
Let's see it in action, first start a container and run a simple `apt` command: ```shell sudo docker run --rm --runtime=runsc-falco -ti ubuntu $ apt update ``` In the terminal where falco is running, you should see in the output many `Error Package management process launched` events. Here is one of the events informing that a package manager was invoked inside the container: ```json { \"output\": \"18:39:27.542112944: Error Package management process launched in container (user=root userloginuid=0 command=apt apt update containerid=1473cfd51410 containername=sadwu image=ubuntu:latest) container=1473cfd51410 pid=4 tid=4\", \"priority\": \"Error\", \"rule\": \"Launch Package Management Process in Container\", \"source\": \"syscall\", \"tags\": [ \"mitre_persistence\", \"process\" ], \"time\": \"2022-08-02T18:39:27.542112944Z\", \"output_fields\": { \"container.id\": \"1473cfd51410\", \"container.image.repository\": \"ubuntu\", \"container.image.tag\": \"latest\", \"container.name\": \"sad_wu\", \"evt.time\": 1659465567542113000, \"proc.cmdline\": \"apt apt update\", \"proc.vpid\": 4, \"thread.vtid\": 4, \"user.loginuid\": 0, \"user.name\": \"root\" } } ``` As you can see, it's warning that `apt update` command was ran inside container `sad_wu`, and gives more information about the user, TID, image name, etc. There are also rules that trigger when there is a write under `/` and other system directories that are normally part of the image and shouldn't be changed. If we proceed with installing packages into the container, apart from the event above, there are a few other events that are triggered. Let's execute `apt-get install -y netcat` and look at the output: ```json { \"output\": \"18:40:42.192811725: Warning Sensitive file opened for reading by non-trusted program (user=root userloginuid=0 program=dpkg-preconfigure command=dpkg-preconfigure /usr/sbin/dpkg-preconfigure --apt file=/etc/shadow parent=sh gparent=<NA> ggparent=<NA> gggparent=<NA> containerid=1473cfd51410 image=ubuntu) container=1473cfd51410 pid=213 tid=213\", \"priority\": \"Warning\", \"rule\": \"Read sensitive file untrusted\", \"source\": \"syscall\", \"tags\": [ \"filesystem\", \"mitrecredentialaccess\", \"mitre_discovery\" ], } { \"output\": \"18:40:42.494933664: Error File below / or /root opened for writing (user=root userloginuid=0 command=tar tar -x -f - --warning=no-timestamp parent=dpkg-deb file=md5sums program=tar containerid=1473cfd51410 image=ubuntu) container=1473cfd51410 pid=221 tid=221\", \"priority\": \"Error\", \"rule\": \"Write below root\", \"source\": \"syscall\", \"tags\": [ \"filesystem\", \"mitre_persistence\" ], } ``` The first event is raised as a `Warning` when `/etc/shadow` is open by the package manager to inform that a sensitive file has been open for read. The second one triggers an `Error` when the package manager tries to untar a file under the root directory. None of these actions are expected from a webserver that is operating normally and should raise security alerts. You can also install and for better ways to visualize the events. Now you can configure the rules and run your containers using gVisor and Falco." } ]
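If you post-process Falco's JSON output with your own tooling, the alerts shown above are straightforward to decode. The sketch below models only the fields visible in those samples — it is not a complete Falco event schema — and feeds in a trimmed copy of the package-manager alert.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// falcoEvent covers just the fields seen in the sample alerts above.
type falcoEvent struct {
	Output       string                 `json:"output"`
	Priority     string                 `json:"priority"`
	Rule         string                 `json:"rule"`
	OutputFields map[string]interface{} `json:"output_fields"`
}

func main() {
	raw := []byte(`{"priority":"Error","rule":"Launch Package Management Process in Container","output_fields":{"container.id":"1473cfd51410","proc.cmdline":"apt apt update"}}`)

	var ev falcoEvent
	if err := json.Unmarshal(raw, &ev); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (container %v, cmd %v)\n",
		ev.Priority, ev.Rule, ev.OutputFields["container.id"], ev.OutputFields["proc.cmdline"])
}
```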
{ "category": "Runtime", "file_name": "falco.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "<!-- toc --> - - - - - - - - <!-- /toc --> `ExternalNode` is a CRD API that enables Antrea to manage the network connectivity and security on a Non-Kubernetes Node (like a virtual machine or a bare-metal server). It supports specifying which network interfaces on the external Node are expected to be protected with Antrea NetworkPolicy rules. The virtual machine or bare-metal server represented by an `ExternalNode` resource can be either Linux or Windows. \"External Node\" will be used to designate such a virtual machine or bare-metal server in the rest of this document. Antrea NetworkPolicies are applied to an external Node by leveraging the `ExternalEntity` resource. `antrea-controller` creates an `ExternalEntity` resource for each network interface specified in the `ExternalNode` resource. `antrea-agent` is running on the external Node, and it controls network connectivity and security by attaching the network interface(s) to an OVS bridge. A has been implemented, dedicated to the ExternalNode feature. You may be interested in using this capability for the below scenarios: To apply Antrea NetworkPolicy to an external Node. You want the same security configurations on the external Node for all Operating Systems. This guide demonstrates how to configure `ExternalNode` to achieve the above result. `ExternalNode` is introduced in v1.8 as an alpha feature. The feature gate `ExternalNode` must be enabled in the `antrea-controller` and `antrea-agent` configuration. The configuration for `antrea-controller` is modified in the `antrea-config` ConfigMap as follows for the feature to work: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: antrea-config namespace: kube-system data: antrea-controller.conf: | featureGates: ExternalNode: true ``` The `antrea-controller` implements the `antrea` Service, which accepts connections from each `antrea-agent` and is an important part of the NetworkPolicy implementation. By default, the `antrea` Service has type `ClusterIP`. Because external Nodes run outside of the Kubernetes cluster, they cannot directly access the `ClusterIP` address. Therefore, the `antrea` Service needs to become externally-accessible, by changing its type to `NodePort` or `LoadBalancer`. Since `antrea-agent` is running on an external Node which is not managed by Kubernetes, a configuration file needs to be present on each machine where the `antrea-agent` is running, and the path to this file will be provided to the `antrea-agent` as a command-line argument. Refer to the to learn the`antrea-agent` configuration options when running on an external Node. A further will provide detailed steps for running the `antrea-agent` on a VM. An example `ExternalNode` resource: ```yaml apiVersion: crd.antrea.io/v1alpha1 kind: ExternalNode metadata: name: vm1 namespace: vm-ns labels: role: db spec: interfaces: ips: [ \"172.16.100.3\" ] name: \"\" ``` Note: Only one interface is supported for Antrea v1.8. The `name` field in an `ExternalNode` uniquely identifies an external Node. The `ExternalNode` name is provided to `antrea-agent` via an environment variable `NODE_NAME`, otherwise `antrea-agent` will use the hostname to find the `ExternalNode` resource if `NODE_NAME` is not set. `ExternalNode` resource is `Namespace` scoped. The `Namespace` is provided to `antrea-agent` with option `externalNodeNamespace` in . ```yaml externalNodeNamespace: vm-ns ``` The `interfaces` field specifies the list of the network interfaces expected to be guarded by Antrea NetworkPolicy. 
At least one interface is required. Interface `name` or `ips` is used to identify the target interface. The field `ips` must be provided in the CRD, but `name` is" }, { "data": "Multiple IPs on a single interface is supported. In the case that multiple `interfaces` are configured, `name` must be specified for every `interface`. `antrea-controller` creates an `ExternalEntity` for each interface whenever an `ExternalNode` is created. The created `ExternalEntity` has the following characteristics: It is configured within the same Namespace as the `ExternalNode`. The `name` is generated according to the following principles: Use the `ExternalNode` name directly, if there is only one interface, and interface name is not specified. Use the format `$ExternalNode.name-$hash($interface.name)[:5]` for other cases. The `externalNode` field is set with the `ExternalNode` name. The `owner` is referring to the `ExternalNode` resource. All labels added on `ExternalNode` are copied to the `ExternalEntity`. Each IP address of the interface is added as an endpoint in the `endpoints` list, and the interface name is used as the endpoint name if it is set. The `ExternalEntity` resource created for the above `ExternalNode` interface would look like this: ```yaml apiVersion: crd.antrea.io/v1alpha2 kind: ExternalEntity metadata: labels: role: db name: vm1 namespace: vm-ns ownerReferences: apiVersion: v1alpha1 kind: ExternalNode name: vm1 uid: 99b09671-72da-4c64-be93-17185e9781a5 resourceVersion: \"5513\" uid: 5f360f32-7806-4d2d-9f36-80ce7db8de10 spec: endpoints: ip: 172.16.100.3 externalNode: vm1 ``` Enable `ExternalNode` feature on the `antrea-controller`, and expose the antrea Service externally (e.g., as a NodePort Service). Create a Namespace for `antrea-agent`. This document will use `vm-ns` as an example Namespace for illustration. ```bash kubectl create ns vm-ns ``` Create a ServiceAccount, ClusterRole and ClusterRoleBinding for `antrea-agent` as shown below. If you use a Namespace other than `vm-ns`, you need to update the and change `vm-ns` to the right Namespace. ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/externalnode/vm-agent-rbac.yml ``` Create `antrea-agent.kubeconfig` file for `antrea-agent` to access the K8S API server. ```bash CLUSTER_NAME=\"kubernetes\" SERVICE_ACCOUNT=\"vm-agent\" NAMESPACE=\"vm-ns\" KUBECONFIG=\"antrea-agent.kubeconfig\" APISERVER=$(kubectl config view -o jsonpath=\"{.clusters[?(@.name==\\\"$CLUSTER_NAME\\\")].cluster.server}\") TOKEN=$(kubectl -n $NAMESPACE get secrets -o jsonpath=\"{.items[?(@.metadata.name=='${SERVICE_ACCOUNT}-service-account-token')].data.token}\"|base64 --decode) kubectl config --kubeconfig=$KUBECONFIG set-cluster $CLUSTER_NAME --server=$APISERVER --insecure-skip-tls-verify=true kubectl config --kubeconfig=$KUBECONFIG set-credentials antrea-agent --token=$TOKEN kubectl config --kubeconfig=$KUBECONFIG set-context antrea-agent@$CLUSTERNAME --cluster=$CLUSTERNAME --user=antrea-agent kubectl config --kubeconfig=$KUBECONFIG use-context antrea-agent@$CLUSTER_NAME ``` Create `antrea-agent.antrea.kubeconfig` file for `antrea-agent` to access the `antrea-controller` API server. 
```bash ANTREAAPISERVER=\"https://172.18.0.1:443\" ANTREACLUSTERNAME=\"antrea\" NAMESPACE=\"vm-ns\" KUBECONFIG=\"antrea-agent.antrea.kubeconfig\" TOKEN=$(kubectl -n $NAMESPACE get secrets -o jsonpath=\"{.items[?(@.metadata.name=='${SERVICE_ACCOUNT}-service-account-token')].data.token}\"|base64 --decode) kubectl config --kubeconfig=$KUBECONFIG set-cluster $ANTREACLUSTERNAME --server=$ANTREAAPISERVER --insecure-skip-tls-verify=true kubectl config --kubeconfig=$KUBECONFIG set-credentials antrea-agent --token=$TOKEN kubectl config --kubeconfig=$KUBECONFIG set-context antrea-agent@$ANTREACLUSTERNAME --cluster=$ANTREACLUSTERNAME --user=antrea-agent kubectl config --kubeconfig=$KUBECONFIG use-context antrea-agent@$ANTREACLUSTERNAME ``` Create an `ExternalNode` resource for the VM. After preparing the `ExternalNode` configuration yaml for the VM, we can apply it in the cluster. ```bash cat << EOF | kubectl apply -f - apiVersion: crd.antrea.io/v1alpha1 kind: ExternalNode metadata: name: vm1 namespace: vm-ns labels: role: db spec: interfaces: ips: [ \"172.16.100.3\" ] name: \"\" EOF ``` OVS needs to be installed on the VM. For more information about OVS installation please refer to the . `Antrea Agent` can be installed as a native service or can be installed in a container. Build `antrea-agent` binary in the root of the Antrea code tree and copy the `antrea-agent` binary from the `bin` directory to the Linux VM. ```bash make docker-bin ``` Copy configuration files to the VM, including , which specifies agent configuration parameters; `antrea-agent.antrea.kubeconfig` and `antrea-agent.kubeconfig`, which were generated in steps 4 and 5 of . Bootstrap `antrea-agent` using one of these 2 methods: Bootstrap `antrea-agent` using the as shown below (Ubuntu 18.04 and" }, { "data": "and Red Hat Enterprise Linux 8.4). ```bash ./install-vm.sh --ns vm-ns --bin ./antrea-agent --config ./antrea-agent.conf \\ --kubeconfig ./antrea-agent.kubeconfig \\ --antrea-kubeconfig ./antrea-agent.antrea.kubeconfig --nodename vm1 ``` Bootstrap `antrea-agent` manually. First edit the `antrea-agent.conf` file to set `clientConnection`, `antreaClientConnection` and `externalNodeNamespace` to the correct values. ```bash AGENT_NAMESPACE=\"vm-ns\" AGENTCONFPATH=\"/etc/antrea\" mkdir -p $AGENTCONFPATH cp ./antrea-agent.kubeconfig $AGENTCONFPATH cp ./antrea-agent.antrea.kubeconfig $AGENTCONFPATH sed -i \"s|kubeconfig: |kubeconfig: $AGENTCONFPATH/|g\" antrea-agent.conf sed -i \"s|#externalNodeNamespace: default|externalNodeNamespace: $AGENT_NAMESPACE|g\" antrea-agent.conf cp ./antrea-agent.conf $AGENTCONFPATH ``` Then create `antrea-agent` service. Below is a sample snippet to start `antrea-agent` as a service on Ubuntu 18.04 or later: Note: Environment variable `NODE_NAME` needs to be set in the service configuration, if the VM's hostname is different from the name defined in the `ExternalNode` resource. 
```bash AGENTBINPATH=\"/usr/sbin\" AGENTLOGPATH=\"/var/log/antrea\" mkdir -p $AGENTBINPATH mkdir -p $AGENTLOGPATH cat << EOF > /etc/systemd/system/antrea-agent.service Description=\"antrea-agent as a systemd service\" After=network.target [Service] Environment=\"NODE_NAME=vm1\" ExecStart=$AGENTBINPATH/antrea-agent \\ --config=$AGENTCONFPATH/antrea-agent.conf \\ --logtostderr=false \\ --logfile=$AGENTLOG_PATH/antrea-agent.log Restart=on-failure [Install] WantedBy=multi-user.target EOF sudo systemctl daemon-reload sudo systemctl enable antrea-agent sudo systemctl start antrea-agent ``` `Docker` is used as the container runtime for Linux VMs. The Docker image can be built from source code or can be downloaded from the Antrea repository. From Source Build `antrea-agent-ubuntu` Docker image in the root of the Antrea code tree. ```bash make build-agent-ubuntu ``` Note: The image repository name should be `antrea/antrea-agent-ubuntu` and tag should be `latest`. Copy the `antrea/antrea-agent-ubuntu:latest` image to the target VM. Please follow the below steps. ```bash docker save -o <tar file path in source host machine> antrea/antrea-agent-ubuntu:latest docker load -i <path to image tar file> ``` Docker Repository The released version of `antrea-agent-ubuntu` Docker image can be downloaded from Antrea `Dockerhub` repository. Pick a version from the . For any given release `<TAG>` (e.g. `v1.15.0`), download `antrea-agent-ubuntu` Docker image as follows: ```bash docker pull antrea/antrea-agent-ubuntu:<TAG> ``` The automatically downloads the specific released version of `antrea-agent-ubuntu` Docker image on VM by specifying the installation argument `--antrea-version`. Also, the script automatically loads that image into Docker. For any given release `<TAG>` (e.g. `v1.15.0`), specify it in the --antrea-version argument as follows. ```bash --antrea-version <TAG> ``` Copy configuration files to the VM, including , which specifies agent configuration parameters; `antrea-agent.antrea.kubeconfig` and `antrea-agent.kubeconfig`, which were generated in steps 4 and 5 of . Bootstrap `antrea-agent` using the as shown below (Ubuntu 18.04, 20.04, and Rhel 8.4). ```bash ./install-vm.sh --ns vm-ns --config ./antrea-agent.conf \\ --kubeconfig ./antrea-agent.kubeconfig \\ --antrea-kubeconfig ./antrea-agent.antrea.kubeconfig --containerize --antrea-version v1.9.0 ``` Enable the Windows Hyper-V optional feature on Windows VM. ```powershell Install-WindowsFeature Hyper-V-Powershell Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart ``` OVS needs to be installed on the VM. For more information about OVS installation please refer to the . Download which will be used to create the Windows service for `antrea-agent`. Note: Only Windows Server 2019 is supported in the first release at the moment. Build `antrea-agent` binary in the root of the antrea code tree and copy the `antrea-agent` binary from the `bin` directory to the Windows VM. ```bash make docker-windows-bin ``` Copy , `antrea-agent.kubeconfig` and `antrea-agent.antrea.kubeconfig` files to the VM. Please refer to the step 2 of section for more information. 
```powershell $WINAGENTCONF_PATH=\"C:\\antrea-agent\\conf\" New-Item -ItemType Directory -Force -Path $WINAGENTCONF_PATH Copy-Item .\\antrea-agent.kubeconfig $WINAGENTCONF_PATH Copy-Item .\\antrea-agent.antrea.kubeconfig $WINAGENTCONF_PATH Copy-Item" }, { "data": "$WINAGENTCONF_PATH ``` Bootstrap `antrea-agent` using one of these 2 methods: Bootstrap `antrea-agent` using the as shown below (only Windows Server 2019 is tested and supported). ```powershell .\\Install-vm.ps1 -Namespace vm-ns -BinaryPath .\\antrea-agent.exe ` -ConfigPath .\\antrea-agent.conf -KubeConfigPath .\\antrea-agent.kubeconfig ` -AntreaKubeConfigPath .\\antrea-agent.antrea.kubeconfig ` -InstallDir C:\\antrea-agent -NodeName vm1 ``` Bootstrap `antrea-agent` manually. First edit the `antrea-agent.conf` file to set `clientConnection`, `antreaClientConnection` and `externalNodeNamespace` to the correct values. Configure environment variable `NODE_NAME` if the VM's hostname is different from the name defined in the `ExternalNode` resource. ```powershell ``` Then create `antrea-agent` service using nssm. Below is a sample snippet to start `antrea-agent` as a service: ```powershell $WINAGENTBIN_PATH=\"C:\\antrea-agent\" $WINAGENTLOG_PATH=\"C:\\antrea-agent\\logs\" New-Item -ItemType Directory -Force -Path $WINAGENTBIN_PATH New-Item -ItemType Directory -Force -Path $WINAGENTLOG_PATH Copy-Item .\\antrea-agent.exe $WINAGENTBIN_PATH nssm.exe install antrea-agent $WINAGENTBINPATH\\antrea-agent.exe --config $WINAGENTCONFPATH\\antrea-agent.conf --logfile $WINAGENTLOGPATH\\antrea-agent.log --logtostderr=false nssm.exe start antrea-agent ``` `antrea-agent` uses the interface IPs or name to find the network interface on the external Node, and then attaches it to the OVS bridge. The network interface is attached to OVS as uplink, and a new OVS internal Port is created to take over the uplink interface's IP/MAC and routing configurations. On Windows, the DNS configurations are also moved to the OVS internal port from uplink. Before attaching the uplink to OVS, the network interface is renamed with a suffix \"~\", and OVS internal port is configured with the original name of the uplink. As a result, IP/MAC/routing entries are seen on a network interface configuring with the same name on the external Node. The outbound traffic sent from the external Node enters OVS from the internal port, and finally output from the uplink, and the inbound traffic enters OVS from the uplink and output to the internal port. The IP packet is processed by the OpenFlow pipeline, and the non-IP packet is forwarded directly. The following diagram depicts the OVS bridge and traffic forwarding on an external Node: An external Node is regarded as an untrusted entity on the network. To follow the least privilege principle, the RBAC configuration for `antrea-agent` running on an external Node is as follows: Only `get`, `list` and `watch` permissions are given on resource `ExternalNode` Only `update` permission is given on resource `antreaagentinfos`, and `create` permission is moved to `antrea-controller` For more details please refer to `antrea-agent` reports its status by updating the `antreaagentinfo` resource which is created with the same name as the `ExternalNode`. `antrea-controller` creates an `antreaagentinfo` resource for each new `ExternalNode`, and then `antrea-agent` updates it every minute with its latest status. `antreaagentinfo` is deleted by `antrea-controller` when the `ExternalNode` is deleted. 
An Antrea NetworkPolicy is applied to an `ExternalNode` by providing an `externalEntitySelector` in the `appliedTo` field. The `ExternalEntity` resource is automatically created for each interface of an `ExternalNode`. `ExternalEntity` resources are used by `antrea-controller` to process the NetworkPolicies, and each `antrea-agent` (including those running on external Nodes) receives the appropriate internal AntreaNetworkPolicy objects. Following types of (from/to) network peers are supported in an Antrea NetworkPolicy applied to an external Node: ExternalEntities selected by an `externalEntitySelector` An `ipBlock` A FQDN address in an egress rule Following actions are supported in an Antrea NetworkPolicy applied to an external Node: Allow Drop Reject Below is an example of applying an Antrea NetworkPolicy to the external Nodes labeled with `role=db` to reject SSH connections from IP" }, { "data": "or from other external Nodes labeled with `role=front`: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: annp1 namespace: vm-ns spec: priority: 9000.0 appliedTo: externalEntitySelector: matchLabels: role: db ingress: action: Reject ports: protocol: TCP port: 22 from: externalEntitySelector: matchLabels: role: front ipBlock: cidr: 172.16.100.5/32 ``` In some cases, users may want some particular traffic to bypass Antrea NetworkPolicy rules on an external Node, e.g.,the SSH connection from a special host to the external Node. `policyBypassRules` can be added in the agent configuration to define traffic that needs to bypass NetworkPolicy enforcement. Below is a configuration example: ```yaml policyBypassRules: direction: ingress protocol: tcp cidr: 1.1.1.1/32 port: 22 ``` The `direction` can be `ingress` or `egress`. The supported protocols include: `tcp`,`udp`, `icmp` and `ip`. The `cidr` gives the peer address, which is the destination in an `egress` rule, and the source in an `ingress` rule. For `tcp` and `udp` protocols, the `port` is required to specify the destination port. A new OpenFlow pipeline is implemented by `antrea-agent` dedicated for `ExternalNode` feature. `NonIPTable` is a new OpenFlow table introduced only on external Nodes, which is dedicated to all non-IP packets. A non-IP packet is forwarded between the pair ports directly, e.g., a non-IP packet entering OVS from the uplink interface is output to the paired internal port, and a packet from the internal port is output to the uplink. A new OpenFlow pipeline is set up on external Nodes to process IP packets. Antrea NetworkPolicy enforcement is the major function in this new pipeline, and the OpenFlow tables used are similar to the Pod pipeline. No L3 routing is provided on an external Node, and a simple L2 forwarding policy is implemented. OVS connection tracking is used to assist the NetworkPolicy function; as a result only the first packet is validated by the OpenFlow entries, and the subsequent packets in an accepted connection are allowed directly. Egress/Ingress Tables Table `XgressSecurityClassifierTable` is installed in both `stageEgressSecurity` and `stageIngressSecurity`, which is used to install the OpenFlow entries for the in the agent configuration. This is an example of the OpenFlow entry for the above configuration: ```yaml table=IngressSecurityClassifier, priority=200,ctstate=+new+trk,tcp,nwsrc=1.1.1.1,tp_dst=22 actions=resubmit(,IngressMetric) ``` Other OpenFlow tables in `stageEgressSecurity` and `stageIngressSecurity` are the same as those installed on a Kubernetes worker Node. 
For more details about these tables, please refer to the general of Antrea OVS pipeline. L2 Forwarding Tables `L2ForwardingCalcTable` is used to calculate the expected output port of an IP packet. As the pair ports with the internal port and uplink always exist on the OVS bridge, and both interfaces are configured with the same MAC address, the match condition of an OpenFlow entry in `L2ForwardingCalcTable` uses the input port number but not the MAC address of the packet. The flow actions are: 1) set flag `OutputToOFPortRegMark`, and 2) set the peer port as the `TargetOFPortField`, and 3) enforce the packet to go to stageIngressSecurity. Below is an example OpenFlow entry in `L2ForwardingCalcTable` ```yaml table=L2ForwardingCalc, priority=200,ip,inport=ens224 actions=load:0x1->NXMNXREG0[8],load:0x7->NXMNX_REG1[],resubmit(,IngressSecurityClassifier) table=L2ForwardingCalc, priority=200,ip,inport=\"ens224~\" actions=load:0x1->NXMNXREG0[8],load:0x8->NXMNX_REG1[],resubmit(,IngressSecurityClassifier) ``` This feature currently supports only one interface per `ExternalNode` object, and `ips` must be set in the interface. The support for multiple network interfaces will be added in the future. `ExternalNode` name must be unique in the `cluster` scope even though it is itself a Namespaced resource." } ]
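For reference, the `ExternalEntity` naming rule described earlier (`$ExternalNode.name-$hash($interface.name)[:5]`, falling back to the plain `ExternalNode` name for a single unnamed interface) can be illustrated with the sketch below. The hash function actually used by `antrea-controller` is an implementation detail; SHA-256 here is only a stand-in to show the shape of the generated name.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// externalEntityName illustrates the documented naming pattern: the
// ExternalNode name, a dash, and the first five characters of a hash of the
// interface name. A single unnamed interface keeps the ExternalNode name as-is.
func externalEntityName(nodeName, ifaceName string) string {
	if ifaceName == "" {
		return nodeName
	}
	sum := sha256.Sum256([]byte(ifaceName)) // stand-in hash, not Antrea's actual choice
	return nodeName + "-" + hex.EncodeToString(sum[:])[:5]
}

func main() {
	fmt.Println(externalEntityName("vm1", ""))       // vm1
	fmt.Println(externalEntityName("vm1", "ens224")) // vm1-<five hash characters>
}
```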
{ "category": "Runtime", "file_name": "external-node.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Migrating from Heptio Ark to Velero\" layout: docs As of v0.11.0, Heptio Ark has become Velero. This means the following changes have been made: The `ark` CLI client is now `velero`. The default Kubernetes namespace and ServiceAccount are now named `velero` (formerly `heptio-ark`). The container image name is now `gcr.io/heptio-images/velero` (formerly `gcr.io/heptio-images/ark`). CRDs are now under the new `velero.io` API group name (formerly `ark.heptio.com`). The following instructions will help you migrate your existing Ark installation to Velero. Ark v0.10.x installed. See the v0.10.x to upgrade from older versions. `kubectl` installed. `cluster-admin` permissions. At a high level, the migration process involves the following steps: Scale down the `ark` deployment, so it will not process schedules, backups, or restores during the migration period. Create a new namespace (named `velero` by default). Apply the new CRDs. Migrate existing Ark CRD objects, labels, and annotations to the new Velero equivalents. Recreate the existing cloud credentials secret(s) in the velero namespace. Apply the updated Kubernetes deployment and daemonset (for restic support) to use the new container images and namespace. Remove the existing Ark namespace (which includes the deployment), CRDs, and ClusterRoleBinding. These steps are provided in a script here: ```bash kubectl scale --namespace heptio-ark deployment/ark --replicas 0 OS=$(uname | tr '[:upper:]' '[:lower:]') # Determine if the OS is Linux or macOS ARCH=\"amd64\" curl -L https://github.com/vmware-tanzu/velero/releases/download/v0.11.0/velero-v0.11.0-${OS}-${ARCH}.tar.gz --output velero-v0.11.0-${OS}-${ARCH}.tar.gz tar xvf velero-v0.11.0-${OS}-${ARCH}.tar.gz kubectl apply -f config/common/00-prereqs.yaml curl -L https://github.com/vmware/crd-migration-tool/releases/download/v1.0.0/crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz --output crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz tar xvf crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz ./crd-migrator \\ --from ark.heptio.com/v1 \\ --to velero.io/v1 \\ --label-mappings ark.heptio.com:velero.io,ark-schedule:velero.io/schedule-name \\ --annotation-mappings ark.heptio.com:velero.io \\ --namespace-mappings heptio-ark:velero kubectl get secret --namespace heptio-ark cloud-credentials --export -o yaml | kubectl apply --namespace velero -f - ./velero get backup ./velero get restore kubectl delete namespace heptio-ark kubectl delete crds -l component=ark kubectl delete clusterrolebindings -l component=ark ```" } ]
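Conceptually, the `--label-mappings` argument above rewrites label keys: keys under the `ark.heptio.com` prefix move to `velero.io`, and the `ark-schedule` key becomes `velero.io/schedule-name`. The sketch below is not the crd-migration-tool's actual code — just an illustration of that rewriting, with made-up label values, so you can reason about what the migrated objects will look like.

```go
package main

import (
	"fmt"
	"strings"
)

// remapLabels rewrites label keys according to simple old->new mappings,
// treating each mapping as either an exact key or a key prefix.
func remapLabels(labels, mappings map[string]string) map[string]string {
	out := make(map[string]string, len(labels))
	for k, v := range labels {
		newKey := k
		for oldKey, newVal := range mappings {
			if k == oldKey || strings.HasPrefix(k, oldKey+"/") {
				newKey = strings.Replace(k, oldKey, newVal, 1)
				break
			}
		}
		out[newKey] = v
	}
	return out
}

func main() {
	labels := map[string]string{"ark-schedule": "daily", "ark.heptio.com/restore-name": "r1"} // illustrative labels
	mappings := map[string]string{
		"ark.heptio.com": "velero.io",
		"ark-schedule":   "velero.io/schedule-name",
	}
	fmt.Println(remapLabels(labels, mappings))
	// map[velero.io/restore-name:r1 velero.io/schedule-name:daily]
}
```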
{ "category": "Runtime", "file_name": "migrating-to-velero.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Name | Type | Description | Notes | - | - | - Destination | int32 | | Distance | int32 | | `func NewNumaDistance(destination int32, distance int32, ) *NumaDistance` NewNumaDistance instantiates a new NumaDistance object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewNumaDistanceWithDefaults() *NumaDistance` NewNumaDistanceWithDefaults instantiates a new NumaDistance object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *NumaDistance) GetDestination() int32` GetDestination returns the Destination field if non-nil, zero value otherwise. `func (o NumaDistance) GetDestinationOk() (int32, bool)` GetDestinationOk returns a tuple with the Destination field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaDistance) SetDestination(v int32)` SetDestination sets Destination field to given value. `func (o *NumaDistance) GetDistance() int32` GetDistance returns the Distance field if non-nil, zero value otherwise. `func (o NumaDistance) GetDistanceOk() (int32, bool)` GetDistanceOk returns a tuple with the Distance field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaDistance) SetDistance(v int32)` SetDistance sets Distance field to given value." } ]
{ "category": "Runtime", "file_name": "NumaDistance.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.2.0 `velero/velero:v1.2.0` Please note that as of this release we are no longer publishing new container images to `gcr.io/heptio-images`. The existing ones will remain there for the foreseeable future. https://velero.io/docs/v1.2.0/ https://velero.io/docs/v1.2.0/upgrade-to-1.2/ Velero has had built-in support for AWS, Microsoft Azure, and Google Cloud Platform (GCP) since day 1. When Velero moved to a plugin architecture for object store providers and volume snapshotters in version 0.6, the code for these three providers was converted to use the plugin interface provided by this new architecture, but the cloud provider code still remained inside the Velero codebase. This put the AWS, Azure, and GCP plugins in a different position compared with other providers plugins, since they automatically shipped with the Velero binary and could include documentation in-tree. With version 1.2, weve extracted the AWS, Azure, and GCP plugins into their own repositories, one per provider. We now also publish one plugin image per provider. This change brings these providers to parity with other providers plugin implementations, reduces the size of the core Velero binary by not requiring each providers SDK to be included, and opens the door for the plugins to be maintained and released independently of core Velero. Weve continued to work on improving Veleros restic integration. With this release, weve made the following enhancements: Restic backup and restore progress is now captured during execution and visible to the user through the `velero backup/restore describe --details` command. The details are updated every 10 seconds. This provides a new level of visibility into restic operations for users. Restic backups of persistent volume claims (PVCs) now remain incremental across the rescheduling of a pod. Previously, if the pod using a PVC was rescheduled, the next restic backup would require a full rescan of the volumes contents. This improvement potentially makes such backups significantly faster. Read-write-many volumes are no longer backed up once for every pod using the volume, but instead just once per Velero backup. This improvement speeds up backups and prevents potential restore issues due to multiple copies of the backup being processed simultaneously. Before version 1.2, you could clone a Kubernetes namespace by backing it up and then restoring it to a different namespace in the same cluster by using the `--namespace-mappings` flag with the `velero restore create` command. However, in this scenario, Velero was unable to clone persistent volumes used by the namespace, leading to errors for users. In version 1.2, Velero automatically detects when you are trying to clone an existing namespace, and clones the persistent volumes used by the namespace as well. This doesnt require the user to specify any additional flags for the `velero restore create` command. This change lets you fully achieve your goal of cloning namespaces using persistent storage within a cluster. To help you secure your important backup data, weve added support for more forms of server-side encryption of backup data on both AWS and GCP. Specifically: On AWS, Velero now supports Amazon S3-managed encryption keys (SSE-S3), which uses AES256 encryption, by specifying `serverSideEncryption: AES256` in a backup storage locations config. 
On GCP, Velero now supports using a specific Cloud KMS key for server-side encryption by specifying `kmsKeyName: <key name>` in a backup storage locations config. In Kubernetes 1.16, custom resource definitions (CRDs) reached general availability. Structural schemas are required for CRDs created in the `apiextensions.k8s.io/v1` API" }, { "data": "Velero now defines a structural schema for each of its CRDs and automatically applies it the user runs the `velero install` command. The structural schemas enable the user to get quicker feedback when their backup, restore, or schedule request is invalid, so they can immediately remediate their request. Ensure object store plugin processes are cleaned up after restore and after BSL validation during server start up (#2041, @betta1) bug fix: don't try to restore pod volume backups that don't have a snapshot ID (#2031, @skriss) Restore Documentation: Updated Restore Documentation with Clarification implications of removing restore object. (#1957, @nainav) add `--allow-partially-failed` flag to `velero restore create` for use with `--from-schedule` to allow partially-failed backups to be restored (#1994, @skriss) Allow backup storage locations to specify backup sync period or toggle off sync (#1936, @betta1) Remove cloud provider code (#1985, @carlisia) Restore action for cluster/namespace role bindings (#1974, @alexander-demichev) Add `--no-default-backup-location` flag to `velero install` (#1931, @Frank51) If includeClusterResources is nil/auto, pull in necessary CRDs in backupResource (#1831, @sseago) Azure: add support for Azure China/German clouds (#1938, @andyzhangx) Add a new required `--plugins` flag for `velero install` command. `--plugins` takes a list of container images to add as initcontainers. (#1930, @nrb) restic: only backup read-write-many PVCs at most once, even if they're annotated for backup from multiple pods. (#1896, @skriss) Azure: add support for cross-subscription backups (#1895, @boxcee) adds `insecureSkipTLSVerify` server config for AWS storage and `--insecure-skip-tls-verify` flag on client for self-signed certs (#1793, @s12chung) Add check to update resource field during backupItem (#1904, @spiffcs) Add `LDLIBRARYPATH` (=/plugins) to the env variables of velero deployment. (#1893, @lintongj) backup sync controller: stop using `metadata/revision` file, do a full diff of bucket contents vs. cluster contents each sync interval (#1892, @skriss) bug fix: during restore, check item's original namespace, not the remapped one, for inclusion/exclusion (#1909, @skriss) adds structural schema to Velero CRDs created on Velero install, enabling validation of Velero API fields (#1898, @prydonius) GCP: add support for specifying a Cloud KMS key name to use for encrypting backups in a storage location. (#1879, @skriss) AWS: add support for SSE-S3 AES256 encryption via `serverSideEncryption` config field in BackupStorageLocation (#1869, @skriss) change default `restic prune` interval to 7 days, add `velero server/install` flags for specifying an alternate default value. 
(#1864, @skriss) velero install: if `--use-restic` and `--wait` are specified, wait up to a minute for restic daemonset to be ready (#1859, @skriss) report restore progress in PodVolumeRestores and expose progress in the velero restore describe --details command (#1854, @prydonius) Jekyll Site updates - modifies documentation to use a wider layout; adds better markdown table formatting (#1848, @ccbayer) fix excluding additional items with the velero.io/exclude-from-backup=true label (#1843, @prydonius) report backup progress in PodVolumeBackups and expose progress in the velero backup describe --details command. Also upgrades restic to v0.9.5 (#1821, @prydonius) Add `--features` argument to all velero commands to provide feature flags that can control enablement of pre-release features. (#1798, @nrb) when backing up PVCs with restic, specify `--parent` flag to prevent full volume rescans after pod reschedules (#1807, @skriss) remove 'restic check' calls from before/after 'restic prune' since they're redundant (#1794, @skriss) fix error formatting due interpreting % as printf formatted strings (#1781, @s12chung) when using `velero restore create --namespace-mappings ...` to create a second copy of a namespace in a cluster, create copies of the PVs used (#1779, @skriss) adds --from-schedule flag to the `velero create backup` command to create a Backup from an existing Schedule (#1734, @prydonius)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.2.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Velero 1.6: Bring All Your Credentials\" excerpt: In this release, we've grown the team and continue to welcome new members to our community. We're excited to introduce new features such as multiple Backup Storage Location credentials, restore progress reporting, and restoring Kubernetes API groups by priority level, as well as many user and developer experience improvements. We're thrilled to have such significant contributions from the community and we're proud to deliver Velero 1.6. author_name: Bridget McErlean slug: Velero-1.6-Bring-All-Your-Credentials categories: ['velero','release'] image: /img/posts/post-1.6.jpg tags: ['Velero Team', 'Bridget McErlean', 'Velero Release'] A highly requested feature for Velero has been designed and implemented in the latest 1.6 release, and that is the ability for Velero to handle multiple credentials for providers. We are very excited to deliver this feature, as well as other features such as progress reporting for restores and better restoring with the most accurate resource version. The theme of the v1.6 release is most definitely more flexibility, not only with the features added, but also in terms of the Velero contribution, development, and quality assurance processes. We worked very hard this cycle to implement an end-to-end testing framework and were able to automate a significant portion of our usually manual testing process. We also made the day-to-day development process ridiculously easier and faster by integrating our code base with Tilt. Lastly but no less important, the project has added an additional maintainer and a product manager, and in this post we celebrate this news alongside all the external contributions from our incredible community! Before this release, Velero only supported one set of credentials which was used with all Backup Storage Locations. Although it was possible to work around this limitation to use different credentials for different providers, it required users to edit the Velero Deployment and, since Velero was only able to recognize one set of credentials per provider, it was not possible to use different credentials for the same provider. This prevented users from backing up to multiple storage locations with the same provider if different credentials were required. With the release of Velero 1.6, users can now optionally configure credentials for each Backup Storage Location by creating a Kubernetes Secret in the Velero namespace and then associating the relevant key within that Secret with a new or existing Backup Storage Location. Velero will then use those credentials when interacting with storage provider plugins. The" }, { "data": "releases of all the plugins maintained by the Velero team support this new way of handling credentials and we look forward to collaborating with more community plugin providers to grow adoption of this feature. For more information, and detailed instructions on how to configure credentials for your Backup Storage Locations, please see our . It's always been important for our users to have insight into long-running Velero operations. Velero introduced support for reporting Backup progress in v1.4.0 and, with v1.6.0, this is now also available for Restores, thanks to from Red Hat. While a restore operation is in progress, users can now inspect the restore using `velero describe restore` to see how many items will be restored, and how many of them have been restored up to that point in time. 
Many users rely on Velero to not only backup and restore their workloads, but also to migrate across clusters and perform Kubernetes upgrades. When performing a migration or upgrade, it is not uncommon for the Kubernetes API group versions to differ between clusters. This can result in backups where not all resources can be restored as the API group version for some resources may not be available for use in the new cluster because of Kubernetes API deprecations. In Velero 1.4, a new feature was added to backup all versions of each resource, not just the cluster preferred version. For Velero 1.6, from VMware adapted the restore flow to make use of this data, enabling Velero or the user to choose the most appropriate version of the resource to restore on the destination cluster. This allows for much more flexibility when performing migrations into new clusters. Using this new feature requires the `EnableAPIGroupVersions` feature flag to be enabled in your Velero instance. For a detailed description of the feature, as well as more information about how Velero decides which version to restore, please see our . Velero has upgraded the version of restic it is using from v0.9.6 to the latest version, v0.12.0. This is an exciting upgrade as it incorporates many bug fixes and performance improvements that will result in a better experience when using restic. For more information about the improvements provided with this upgrade, please see the . As the Velero project continues to grow, it is important that our community of users can continue to trust and rely on Velero to support them in their disaster recovery and migration needs. During this release cycle, we decided to focus on increased stability by adding a new suite of . Although Velero features have always been thoroughly tested, these new tests execute Velero workflows from the perspective of the user interacting with a Velero installation using the" }, { "data": "This new test framework has enabled us to create tests that can be easily run across multiple Kubernetes providers, ensuring that we prevent regressions quickly and early in the development of new features. We plan to expand on this work in our upcoming releases, including and running these tests as . We're excited to announce that from SUSE has joined the Velero team as a maintainer! Already a maintainer of the Velero Helm chart, JenTing has consistently contributed to the Velero community and wed like to thank him for his continued support to the project and community. Welcome, JenTing! We are also glad to announce that from VMware has joined the project as our product manager. She is working with the team and community to develop the Velero vision and roadmap. Feel free to reach out to her with any product thoughts or feedback. Welcome, Eleanor! The Velero project welcomes contributors of all kinds. If you are interested in contributing more, or finding out what is involved in becoming a maintainer, please see our documentation on . As we continue to grow our community of contributors, we want to lower the barrier to entry for making contributions to the Velero project. Weve made huge improvements to the developer experience during this release cycle by introducing Tilt to the developer workflow. Using Tilt enables developers to make changes to Velero and its plugins, and have those changes automatically built and deployed to your cluster. This removes the need for any manual building or pushing of images, and provides a faster and much simpler workflow. 
Our Tilt configuration also enables contributors to more easily debug the Velero process using Delve, which has integrations with many editors and IDEs. If you would like to try it out, please see our . We have more exciting additions and improvements to Velero earmarked for future releases. For v1.7, we are looking forward to adding the ability to install Velero with , supporting plugin versioning, bringing the CSI snapshotting to GA, having a diagnostics tool, and other improvements. See our for the complete list. Velero is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our online and catch up with past meetings on YouTube on the . You can always find the latest project information at . Look for issues on GitHub marked or if you want to roll up your sleeves and write some code with us. For opportunities to help and be helped, visit our . You can chat with us on and follow us on Twitter at ." } ]
{ "category": "Runtime", "file_name": "2021-04-13-Velero-1.6-Bring-All-Your-Credentials.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "(userns-idmap)= Incus runs safe containers. This is achieved mostly through the use of user namespaces which make it possible to run containers unprivileged, greatly limiting the attack surface. User namespaces work by mapping a set of UIDs and GIDs on the host to a set of UIDs and GIDs in the container. For example, we can define that the host UIDs and GIDs from 100000 to 165535 may be used by Incus and should be mapped to UID/GID 0 through 65535 in the container. As a result a process running as UID 0 in the container will actually be running as UID 100000. Allocations should always be of at least 65536 UIDs and GIDs to cover the POSIX range including root (0) and nobody (65534). User namespaces require a kernel >= 3.12, Incus will start even on older kernels but will refuse to start containers. On most hosts, Incus will check `/etc/subuid` and `/etc/subgid` for allocations for the `incus` user and on first start, set the default profile to use the first 65536 UIDs and GIDs from that range. If the range is shorter than 65536 (which includes no range at all), then Incus will fail to create or start any container until this is corrected. If some but not all of `/etc/subuid`, `/etc/subgid`, `newuidmap` (path lookup) and `newgidmap` (path lookup) can be found on the system, Incus will fail the startup of any container until this is corrected as this shows a broken shadow setup. If none of those files can be found, then Incus will assume a 1000000000 UID/GID range starting at a base UID/GID of" }, { "data": "This is the most common case and is usually the recommended setup when not running on a system which also hosts fully unprivileged containers (where the container runtime itself runs as a user). The source map is sent when moving containers between hosts so that they can be remapped on the receiving host. Incus supports using different idmaps per container, to further isolate containers from each other. This is controlled with two per-container configuration keys, `security.idmap.isolated` and `security.idmap.size`. Containers with `security.idmap.isolated` will have a unique ID range computed for them among the other containers with `security.idmap.isolated` set (if none is available, setting this key will simply fail). Containers with `security.idmap.size` set will have their ID range set to this size. Isolated containers without this property set default to a ID range of size 65536; this allows for POSIX compliance and a `nobody` user inside the container. To select a specific map, the `security.idmap.base` key will let you override the auto-detection mechanism and tell Incus what host UID/GID you want to use as the base for the container. These properties require a container reboot to take effect. Incus also supports customizing bits of the idmap, e.g. to allow users to bind mount parts of the host's file system into a container without the need for any UID-shifting file system. The per-container configuration key for this is `raw.idmap`, and looks like: both 1000 1000 uid 50-60 500-510 gid 100000-110000 10000-20000 The first line configures both the UID and GID 1000 on the host to map to UID 1000 inside the container (this can be used for example to bind mount a user's home directory into a container). The second and third lines map only the UID or GID ranges into the container, respectively. The second entry per line is the source ID, i.e. the ID on the host, and the third entry is the range inside the container. These ranges must be the same size. 
This property requires a container reboot to take effect." } ]
{ "category": "Runtime", "file_name": "userns-idmap.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Environment Replace this with the output of: `printf \"$(rkt version)\\n--\\n$(uname -srm)\\n--\\n$(cat /etc/os-release)\\n--\\n$(systemctl --version)\\n\"` What did you do? What did you expect to see? What did you see instead?" } ]
{ "category": "Runtime", "file_name": "ISSUE_TEMPLATE.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Please refer to for details. For backward compatibility, Cobra still supports its legacy dynamic completion solution (described below). Unlike the `ValidArgsFunction` solution, the legacy solution will only work for Bash shell-completion and not for other shells. This legacy solution can be used along-side `ValidArgsFunction` and `RegisterFlagCompletionFunc()`, as long as both solutions are not used for the same command. This provides a path to gradually migrate from the legacy solution to the new solution. Note: Cobra's default `completion` command uses bash completion V2. If you are currently using Cobra's legacy dynamic completion solution, you should not use the default `completion` command but continue using your own. The legacy solution allows you to inject bash functions into the bash completion script. Those bash functions are responsible for providing the completion choices for your own completions. Some code that works in kubernetes: ```bash const ( bashcompletionfunc = `kubectlparseget() { local kubectl_output out if kubectl_output=$(kubectl get --no-headers \"$1\" 2>/dev/null); then out=($(echo \"${kubectl_output}\" | awk '{print $1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } kubectlgetresource() { if [[ ${#nouns[@]} -eq 0 ]]; then return 1 fi kubectlparseget ${nouns[${#nouns[@]} -1]} if [[ $? -eq 0 ]]; then return 0 fi } kubectlcustomfunc() { case ${last_command} in kubectlget | kubectldescribe | kubectldelete | kubectlstop) kubectlgetresource return ;; *) ;; esac } `) ``` And then I set that in my command definition: ```go cmds := &cobra.Command{ Use: \"kubectl\", Short: \"kubectl controls the Kubernetes cluster manager\", Long: `kubectl controls the Kubernetes cluster manager. Find more information at https://github.com/GoogleCloudPlatform/kubernetes.`, Run: runHelp, BashCompletionFunction: bashcompletionfunc, } ``` The `BashCompletionFunction` option is really only valid/useful on the root command. Doing the above will cause `kubectl_custom_func()` (`<command-use>customfunc()`) to be called when the built in processor was unable to find a solution. In the case of kubernetes a valid command might look something like `kubectl get pod ` the `kubectl_customc_func()` will run because the cobra.Command only understood \"kubectl\" and \"get.\" `kubectlcustomfunc()` will see that the cobra.Command is \"kubectlget\" and will thus call another helper `kubectlgetresource()`. `kubectlgetresource` will look at the 'nouns' collected. In our example the only noun will be `pod`. So it will call `kubectlparseget pod`. `kubectlparse_get` will actually call out to kubernetes and get any pods. It will then set `COMPREPLY` to valid pods! Similarly, for flags: ```go annotation := make(mapstring) annotation[cobra.BashCompCustom] = []string{\"kubectlgetnamespaces\"} flag := &pflag.Flag{ Name: \"namespace\", Usage: usage, Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` In addition add the `kubectlgetnamespaces` implementation in the `BashCompletionFunction` value, e.g.: ```bash kubectlgetnamespaces() { local template template=\"{{ range .items }}{{ .metadata.name }} {{ end }}\" local kubectl_out if kubectl_out=$(kubectl get -o template --template=\"${template}\" namespace 2>/dev/null); then COMPREPLY=( $( compgen -W \"${kubectl_out}[*]\" -- \"$cur\" ) ) fi } ```" } ]
{ "category": "Runtime", "file_name": "bash_completions.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "This document has troubleshooting tips when installing or using Sysbox on Kubernetes clusters. For troubleshooting outside of Kubernetes environments, see . This is likely because either: The RBAC resource for the sysbox-deploy-k8s is not present. The K8s worker node is not labeled with \"sysbox-install=yes\". Make sure to follow the to solve this. If the sysbox-deploy-k8s daemonset fails to install Sysbox, take a look at the logs for the sysbox-deploy-k8s pod (there is one such pod for each K8s worker node where sysbox is installed). The logs should ideally look like this: ```console Adding K8s label \"crio-runtime=installing\" to node node/gke-cluster-3-default-pool-766039d3-68mw labeled Deploying CRI-O installer agent on the host ... Running CRI-O installer agent on the host (may take several seconds) ... Removing CRI-O installer agent from the host ... Configuring CRI-O ... Adding K8s label \"sysbox-runtime=installing\" to node node/gke-cluster-3-default-pool-766039d3-68mw labeled Installing Sysbox dependencies on host Copying shiftfs sources to host Deploying Sysbox installer helper on the host ... Running Sysbox installer helper on the host (may take several seconds) ... Stopping the Sysbox installer helper on the host ... Removing Sysbox installer helper from the host ... Installing Sysbox on host Detected host distro: ubuntu_20.04 Configuring host sysctls kernel.unprivilegedusernsclone = 1 fs.inotify.maxqueuedevents = 1048576 fs.inotify.maxuserwatches = 1048576 fs.inotify.maxuserinstances = 1048576 kernel.keys.maxkeys = 20000 kernel.keys.maxbytes = 400000 Starting Sysbox Adding Sysbox to CRI-O config Restarting CRI-O ... Deploying Kubelet config agent on the host ... Running Kubelet config agent on the host (will restart Kubelet and temporary bring down all pods on this node for ~1 min) ... ``` The sysbox-deploy-k8s pod will restart the kubelet on the K8s node. After the restart, the pod's log will continue as follows: ```console Stopping the Kubelet config agent on the host ... Removing Kubelet config agent from the host ... Kubelet reconfig completed. Adding K8s label \"crio-runtime=running\" to node node/gke-cluster-3-default-pool-766039d3-68mw labeled Adding K8s label \"sysbox-runtime=running\" to node node/gke-cluster-3-default-pool-766039d3-68mw labeled The k8s runtime on this node is now CRI-O. Sysbox installation completed. Done. ``` In addition, the sysbox-deploy-k8s creates 3 ephemeral systemd services on the K8s nodes where Sysbox is installed. These ephemeral systemd services are short-lived and help with the installation of CRI-O and Sysbox, and with the reconfiguration of the kubelet. They are caled `crio-installer`, `sysbox-installer-helper`, and `kubelet-config-helper`. 
Look at the logs generated by these systemd services in the K8s worker nodes to make sure they don't report errors: ```console journalctl -eu crio-installer journalctl -eu sysbox-installer-helper journalctl -eu kubelet-config-helper ``` The sysbox-deploy-k8s daemonset may cause some pods to enter an error state temporarily: ```console $ kubectl get all --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/coredns-74ff55c5b-ff58f 1/1 Running 0 11d kube-system pod/coredns-74ff55c5b-t6t5b 1/1 Error 0 11d kube-system pod/etcd-k8s-node2 1/1 Running 1 11d kube-system pod/kube-apiserver-k8s-node2 1/1 Running 1 11d kube-system pod/kube-controller-manager-k8s-node2 1/1 Running 1 11d kube-system pod/kube-flannel-ds-4pqgp 1/1 Error 4 4d22h kube-system pod/kube-flannel-ds-lkvnp 1/1 Running 0 10d kube-system pod/kube-proxy-4mbfj 1/1 Error 4 4d22h kube-system pod/kube-proxy-5lfz4 1/1 Running 0 11d kube-system pod/kube-scheduler-k8s-node2 1/1 Running 1 11d kube-system pod/sysbox-deploy-k8s-rbl76 1/1 Error 1 136m ``` This is expected because sysbox-deploy-k8s restarts the Kubelet, and this causes all pods on the K8s worker node(s) where Sysbox is being installed to be re-created. This is a condition that should resolve itself within 1->2 minutes after running the sysbox-deploy-k8s daemonset, as Kubernetes will automatically restart the affected" }, { "data": "If for some reason one of the pods remains in an error state forever, check the state and logs associated with that pod: kubectl -n kube-system describe <pod> kubectl -n kube-system logs <pod> Feel free to open an issue in the Sysbox repo so we can investigate. As a work-around, if the pod is part of a deployment or daemonset, try removing the pod as this causes Kubernetes to re-create it and sometimes the problem this fixes the problem. If the CRI-O log (`journalctl -u crio`) shows an error such as: ```console Error validating CNI config file /etc/cni/net.d/10-containerd-net.conflist: [failed to find plugin in opt/cni/bin] ``` it means it can't find the binaries for the CNI. By default CRI-O looks for those in `/opt/cni/bin`. In addition, the sysbox-deploy-k8s daemonset configures CRI-O to look for CNIs in `/home/kubernetes/bin/`, as these is where they are found on GKE nodes. If you see this error, find the directory where the CNIs are located in the worker node, and add that directory to the `/etc/crio/crio.conf` file: ```console [crio.network] plugin_dirs = [\"/opt/cni/bin/\", \"/home/kubernetes/bin\"] ``` Then restart CRI-O and it should pick the CNI binaries correctly. If you are deploying a pod with Sysbox and it gets stuck in the \"Creating\" state for a long time (e.g., > 1 minute), then something is likely wrong. To debug, start by doing a `kubectl describe <pod-name>` to see what Kubernetes reports. Additionally, check the CRI-O and Kubelet logs on the node where the pod was scheduled: ```console $ journalctl -eu kubelet $ journalctl -eu crio ``` These often have information that helps pin-point the problem. For AWS EKS nodes, the kubelet runs in a snap package; check it's health via: ```console $ snap get kubelet-eks $ journalctl -eu snap.kubelet-eks.daemon.service ``` If the kubelet log shows an error such as `failed to find runtime handler sysbox-runc from runtime list`, then this means CRI-O has not recognized Sysbox for some reason. 
In this case, double check that the CRI-O config has the Sysbox runtime directive in it (the sysbox-deploy-k8s daemonset should have set this config): ```console $ cat /etc/crio/crio.conf ... [crio.runtime.runtimes.sysbox-runc] allowed_annotations = [\"io.kubernetes.cri-o.userns-mode\"] runtime_path = \"/usr/bin/sysbox-runc\" runtime_type = \"oci\" ... ``` If the sysbox runtime config is present, then try restarting CRI-O on the worker node: ```console systemctl restart crio ``` Note that restarting CRI-O will cause all pods on the node to be restarted (including the kube-proxy and CNI pods). If the sysbox runtime config is not present, then and the sysbox daemonset. ```console $ systemctl status sysbox $ systemctl status sysbox-mgr $ systemctl status sysbox-fs $ journalctl -eu sysbox $ journalctl -eu sysbox-mgr $ journalctl -eu sysbox-fs ``` ```console $ systemctl status crio $ journalctl -eu crio ``` ```console $ systemctl status kubelet $ journalctl -eu kubelet ``` The `crictl` tool can be used to communicate with CRI implementations directly (e.g., CRI-O or containerd). `crictl` is typically present in K8s nodes. If for some reason it's not, you can install it as shown here: https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md You should install `crictl` on the K8s worker nodes where CRI-O is installed, and configure it as follows: ```console $ cat /etc/crictl.yaml runtime-endpoint: \"unix:///var/run/crio/crio.sock\" image-endpoint: \"unix:///var/run/crio/crio.sock\" timeout: 0 debug: false pull-image-on-create: true disable-pull-on-run: false ``` The key is to set `runtime-endpoint` and `image-endpoint` to CRI-O as shown above. After this, you can use crictl to determine the health of the pods on the node: For example, in a properly initialized K8s worker node you should see the kube-proxy pod running on the node. ```console $ sudo crictl ps CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID 48912b06799d2 43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0 2 days ago Running" } ]
{ "category": "Runtime", "file_name": "troubleshoot-k8s.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "As a rook user, I want to clean up data on the hosts after I intentionally uninstall ceph cluster, so that I can start a new cluster without having to do any manual clean up. If the user deletes a rook-ceph cluster and wants to start a new cluster on the same hosts, then following manual steps should be performed: Delete the dataDirHostPath on each host. Otherwise, stale keys and other configs will remain from the previous cluster and the new mons will fail to start. Clean the OSD disks from the previous cluster before starting a new one. Read more about the manual clean up steps This implementation aims to automate both of these manual steps. Important: User confirmation is mandatory before cleaning up the data on hosts. This is important because user might have accidentally deleted the CR and in that case cleaning up the hostpath wont recover the cluster. Adding these user confirmation on the ceph cluster would cause the operator to refuse running an orchestration If the user really wants to clean up the data on the cluster, then update the ceph cluster CRD with cleanupPolicy configuration like below : ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: true storage: useAllNodes: true useAllDevices: true cleanupPolicy: confirmation: yes-really-destroy-data sanitizeDisks: method: quick dataSource: zero iteration: 1 allowUninstallWithVolumes: false ``` Updating the cluster `cleanupPolicy` with `confirmation: yes-really-destroy-data` would cause the operator to refuse running any further orchestration. Operator starts the clean up flow only when deletionTimeStamp is present on the ceph Cluster. Operator checks for user confirmation (that is, `confirmation: yes-really-destroy-data`) on the ceph cluster before starting the clean up. Identify the nodes where ceph daemons are running. Wait till all the ceph daemons are destroyed on each node. This is important because deleting the data (say dataDirHostPath) before the daemons would cause the daemons to panic. Create a batch job that runs on each of the above nodes. The job performs the following action on each node based on the user confirmation: cleanup the cluster namespace on the dataDirHostPath. For example `/var/lib/rook/rook-ceph` Delete all the ceph monitor directories on the dataDirHostPath. For example `/var/lib/rook/mon-a`, `/var/lib/rook/mon-b`, etc. Sanitize the local disks used by OSDs on each node. Local disk sanitization can be further configured by the admin with following options: `method`: use `complete` to sanitize the entire disk and `quick` (default) to sanitize only ceph's metadata. `dataSource`: indicate where to get random bytes from to write on the disk. Possible choices are `zero` (default) or `random`. Using random sources will consume entropy from the system and will take much more time then the zero source. `iteration`: overwrite N times instead of the default (1). Takes an integer value. If `allowUninstallWithVolumes` is `false` (default), then operator would wait for the PVCs to be deleted before finally deleting the cluster. 
```yaml apiVersion: batch/v1 kind: Job metadata: name: cluster-cleanup-job-<node-name> namespace: <namespace> labels: app: rook-ceph-cleanup rook-ceph-cleanup: \"true\" rook_cluster: <namespace> spec: template: spec: containers: name: rook-ceph-cleanup-<node-name> securityContext: privileged: true image: <rook-image> env: name: ROOKDATADIRHOSTPATH value: <dataDirHostPath> name: ROOKNAMESPACEDIR value: <namespace> name: ROOKMONSECRET value: <dataDirHostPath> name: ROOKCLUSTERFSID value: <dataDirHostPath> name: ROOKLOGLEVEL value: <dataDirHostPath> name: ROOKSANITIZEMETHOD value: <method> name: ROOKSANITIZEDATA_SOURCE value: <dataSource> name: ROOKSANITIZEITERATION value: <iteration> args: []string{\"ceph\", \"clean\"} volumeMounts: name: cleanup-volume mountPath: <dataDirHostPath> name: devices mountPath: /dev volume: name: cleanup-volume hostPath: path: <dataDirHostPath> name: devices hostpath: path: /dev restartPolicy: OnFailure ```" } ]
{ "category": "Runtime", "file_name": "ceph-cluster-cleanup.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.13.0 `velero/velero:v1.13.0` https://velero.io/docs/v1.13/ https://velero.io/docs/v1.13/upgrade-to-1.13/ Velero introduced the Resource Modifiers in v1.12.0. This feature allows users to specify a ConfigMap with a set of rules to modify the resources during restoration. However, only the JSON Patch is supported when creating the rules, and JSON Patch has some limitations, which cannot cover all use cases. In v1.13.0, Velero adds new support for JSON Merge Patch and Strategic Merge Patch, which provide more power and flexibility and allow users to use the same ConfigMap to apply patches on the resources. More design details can be found in design. For instructions on how to use the feature, please refer to the doc. Velero data movement activities from fs-backups and CSI snapshot data movements run in Velero node-agent, so may be hosted by every node in the cluster and consume resources (i.e. CPU, memory, network bandwidth) from there. With v1.13, users are allowed to configure how many data movement activities (a.k.a, loads) run in each node globally or by node, so that users can better leverage the performance of Velero data movement activities and the resource consumption in the cluster. For more information, check the document. Velero now supports configurable options for parallel files upload when using Kopia uploader to do fs-backups or CSI snapshot data movements which makes speed up backup possible. For more information, please check . If using fs-restore or CSI snapshot data movements, its supported to write sparse files during restore. For more information, please check . In v1.13, the Backup Volume section is added to the velero backup describe command output. The backup Volumes section describes information for all the volumes included in the backup of various backup types, i.e. native snapshot, fs-backup, CSI snapshot, and CSI snapshot data movement. Particularly, the velero backup description now supports showing the information of CSI snapshot data movements, which is not supported in v1.12. Additionally, backup describe command will not check EnableCSI feature gate from client side, so if a backup has volumes with CSI snapshot or CSI snapshot data movement, backup describe command always shows the corresponding information in its output. Create a new metadata file in the backup repository's backup name sub-directory to store the backup-including PVC and PV information. The information includes the backing-up method of the PVC and PV data, snapshot information, and status. The VolumeInfo metadata file determines how the PV resource should be restored. The Velero downstream software can also use this metadata file to get a summary of the backup's volume data information. When performing backup and restore operations, enhancements have been implemented for Velero server pods or node agents to ensure that the current backup or restore process is not stuck or interrupted after restart due to certain exceptional circumstances. Hook execution status is now included in the backup/restore CR status and displayed in the backup/restore describe command output. Specifically, it will show the number of hooks which attempted to execute under the HooksAttempted field and the number of hooks which failed to execute under the HooksFailed field. 
Bump up AWS SDK for Go to version 2, which offers significant performance improvements in CPU and memory utilization over version" }, { "data": "Azure AD/Workload Identity is the recommended approach to do the authentication with Azure services/AKS, Velero has introduced support for Azure AD/Workload Identity on the Velero Azure plugin side in previous releases, and in v1.13.0 Velero adds new support for Kopia operations(file system backup/data mover/etc.) with Azure AD/Workload Identity. To fix CVEs and keep pace with Golang, Velero made changes as follows: Bump Golang runtime to v1.21.6. Bump several dependent libraries to new versions. Bump Kopia to v0.15.0. Backup describe command: due to the backup describe output enhancement, some existing information (i.e. the output for native snapshot, CSI snapshot, and fs-backup) has been moved to the Backup Volumes section with some format changes. API type changes: changes the field in DataUploadSpec from `map`` to `map[string]string` Velero install command: due to the issue , v1.13.0 introduces a break change that make the informer cache enabled by default to keep the actual behavior consistent with the helper message(the informer cache is disabled by default before the change). The backup's VolumeInfo metadata doesn't have the information updated in the async operations. This function could be supported in v1.14 release. Velero introduces the informer cache which is enabled by default. The informer cache improves the restore performance but may cause higher memory consumption. Increase the memory limit of the Velero pod or disable the informer cache by specifying the `--disable-informer-cache` option when installing Velero if you get the OOM error. The generated k8s clients, informers, and listers are deprecated in the Velero v1.13 release. They are put in the Velero repository's pkg/generated directory. According to the n+2 supporting policy, the deprecated are kept for two more releases. The pkg/generated directory should be deleted in the v1.15 release. After the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support the backup generated by the older version of Velero, the old logic is also kept. The support for the backup without the VolumeInfo metadata file will be kept for two releases. The support logic will be deleted in the v1.15 release. Make \"disable-informer-cache\" option false(enabled) by default to keep it consistent with the help message (#7294, @ywk253100) Fix issue #6928, remove snapshot deletion timeout for PVB (#7282, @Lyndon-Li) Do not set \"targetNamespace\" to namespace items (#7274, @reasonerjt) Fix issue #7244. By the end of the upload, check the outstanding incomplete snapshots and delete them by calling ApplyRetentionPolicy (#7245, @Lyndon-Li) Adjust the newline output of resource list in restore describer (#7238, @allenxu404) Remove the redundant newline in backup describe output (#7229, @allenxu404) Fix issue #7189, data mover generic restore - don't assume the first volume as the restore volume (#7201, @Lyndon-Li) Update CSIVolumeSnapshotsCompleted in backup's status and the metric during backup finalize stage according to async operations content. 
(#7184, @blackpiglet) Refactor DownloadRequest Stream function (#7175, @blackpiglet) Add `--skip-immediately` flag to schedule commands; `--schedule-skip-immediately` server and install (#7169, @kaovilai) Add node-agent concurrency doc and change the config name from dataPathConcurrency to loadCocurrency (#7161, @Lyndon-Li) Enhance hooks tracker by adding a returned error to record function (#7153, @allenxu404) Track the skipped PV when SnapshotVolumes set as false (#7152, @reasonerjt) Add more linters part" }, { "data": "(#7151, @blackpiglet) Fix issue #7135, check pod status before checking node-agent pod status (#7150, @Lyndon-Li) Treat namespace as a regular restorable item (#7143, @reasonerjt) Allow sparse option for Kopia & Restic restore (#7141, @qiuming-best) Use VolumeInfo to help restore the PV. (#7138, @blackpiglet) Node agent restart enhancement (#7130, @qiuming-best) Fix issue #6695, add describe for data mover backups (#7125, @Lyndon-Li) Add hooks status to backup/restore CR (#7117, @allenxu404) Include plugin name in the error message by operations (#7115, @reasonerjt) Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7102, @Lyndon-Li) Generate VolumeInfo for backup. (#7100, @blackpiglet) Fix issue #7094, fallback to full backup if previous snapshot is not found (#7096, @Lyndon-Li) Fix issue #7068, due to an behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7095, @Lyndon-Li) Skip syncing the backup which doesn't contain backup metadata (#7081, @ywk253100) Fix issue #6693, partially fail restore if CSI snapshot is involved but CSI feature is not ready, i.e., CSI feature gate is not enabled or CSI plugin is not installed. (#7077, @Lyndon-Li) Truncate the credential file to avoid the change of secret content messing it up (#7072, @ywk253100) Add VolumeInfo metadata structures. (#7070, @blackpiglet) improve discoveryHelper.Refresh() in restore (#7069, @27149chen) Add DataUpload Result and CSI VolumeSnapshot check for restore PV. (#7061, @blackpiglet) Add the implementation for design #6950, configurable data path concurrency (#7059, @Lyndon-Li) Make data mover fail early (#7052, @qiuming-best) Remove dependency of generated client part 3. (#7051, @blackpiglet) Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize (#7046, @kaovilai) Remove the Velero generated client. (#7041, @blackpiglet) Fix issue #7027, data mover backup exposer should not assume the first volume as the backup volume in backup pod (#7038, @Lyndon-Li) Read information from the credential specified by BSL (#7034, @ywk253100) Fix #6857. Added check for matching Owner References when synchronizing backups, removing references that are not found/have mismatched uid. (#7032, @deefdragon) Add description markers for dataupload and datadownload CRDs (#7028, @shubham-pampattiwar) Add HealthCheckNodePort deletion logic for Service restore. (#7026, @blackpiglet) Fix inconsistent behavior of Backup and Restore hook execution (#7022, @allenxu404) Fix #6964. 
Don't use csiSnapshotTimeout (10 min) for waiting snapshot to readyToUse for data mover, so as to make the behavior complied with CSI snapshot backup (#7011, @Lyndon-Li) restore: Use warning when Create IsAlreadyExist and Get error (#7004, @kaovilai) Bump kopia to 0.15.0 (#7001, @Lyndon-Li) Make Kopia file parallelism configurable (#7000, @qiuming-best) Fix unified repository (kopia) s3 credentials profile selection (#6995, @kaovilai) Fix #6988, always get region from BSL if it is not empty (#6990, @Lyndon-Li) Limit PVC block mode logic to non-Windows platform. (#6989, @blackpiglet) It is a valid case that the Status.RestoreSize field in VolumeSnapshot is not set, if so, get the volume size from the source PVC to create the backup PVC (#6976, @Lyndon-Li) Check whether the action is a CSI action and whether CSI feature is enabled, before executing the" }, { "data": "(#6968, @blackpiglet) Add the PV backup information design document. (#6962, @blackpiglet) Change controller-runtime List option from MatchingFields to ListOptions (#6958, @blackpiglet) Add the design for node-agent concurrency (#6950, @Lyndon-Li) Import auth provider plugins (#6947, @0x113) Fix #6668, add a limitation for file system restore parallelism with other types of restores (CSI snapshot restore, CSI snapshot movement restore) (#6946, @Lyndon-Li) Add MSI Support for Azure plugin. (#6938, @yanggangtony) Partially fix #6734, guide Kubernetes' scheduler to spread backup pods evenly across nodes as much as possible, so that data mover backup could achieve better parallelism (#6926, @Lyndon-Li) Bump up aws sdk to aws-sdk-go-v2 (#6923, @reasonerjt) Optional check if targeted container is ready before executing a hook (#6918, @Ripolin) Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6917, @27149chen) Fix issue 6913: Velero Built-in Datamover: Backup stucks in phase WaitingForPluginOperations when Node Agent pod gets restarted (#6914, @shubham-pampattiwar) Set ParallelUploadAboveSize as MaxInt64 and flush repo after setting up policy so that policy is retrieved correctly by TreeForSource (#6885, @Lyndon-Li) Replace the base image with paketobuildpacks image (#6883, @ywk253100) Fix issue #6859, move plugin depending podvolume functions to util pkg, so as to remove the dependencies to unnecessary repository packages like kopia, azure, etc. (#6875, @Lyndon-Li) Fix #6861. Only Restic path requires repoIdentifier, so for non-restic path, set the repoIdentifier fields as empty in PVB and PVR and also remove the RepoIdentifier column in the get output of PVBs and PVRs (#6872, @Lyndon-Li) Add volume types filter in resource policies (#6863, @qiuming-best) change the metrics backupattempttotal default value to 1. (#6838, @yanggangtony) Bump kopia to v0.14 (#6833, @Lyndon-Li) Retry failed create when using generateName (#6830, @sseago) Fix issue #6786, always delete VSC regardless of the deletion policy (#6827, @Lyndon-Li) Proposal to support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#6797, @27149chen) Fix the node-agent missing metrics-address defines. (#6784, @yanggangtony) Fix default BSL setting not work (#6771, @qiuming-best) Update restore controller logic for restore deletion (#6770, @ywk253100) Fix #6752: add namespace exclude check. 
(#6760, @blackpiglet) Fix issue #6753, remove the check for read-only BSL in restore async operation controller since Velero cannot fully support read-only mode BSL in restore at present (#6757, @Lyndon-Li) Fix issue #6647, add the --default-snapshot-move-data parameter to Velero install, so that users don't need to specify --snapshot-move-data per backup when they want to move snapshot data for all backups (#6751, @Lyndon-Li) Use old(origin) namespace in resource modifier conditions in case namespace may change during restore (#6724, @27149chen) Perf improvements for existing resource restore (#6723, @sseago) Remove schedule-related metrics on schedule delete (#6715, @nilesh-akhade) Kubernetes 1.27 new job label batch.kubernetes.io/controller-uid are deleted during restore per https://github.com/kubernetes/kubernetes/pull/114930 (#6712, @kaovilai) This pr made some improvements in Resource Modifiers: 1. add label selector 2. change the field name from groupKind to groupResource (#6704, @27149chen) Make Kopia support Azure AD (#6686, @ywk253100) Add support for block volumes with Kopia (#6680, @dzaninovic) Delete PartiallyFailed orphaned backups as well as Completed ones (#6649, @sseago) Add CSI snapshot data movement doc (#6637, @Lyndon-Li) Fixes #6636, skip subresource in resource discovery (#6635, @27149chen) Add `orLabelSelectors` for backup, restore commands (#6475, @nilesh-akhade) fix run preHook and postHook on completed pods (#5211, @cleverhu)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.13.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "% runc-state \"8\" runc-state - show the state of a container runc state container-id The state command outputs current state information for the specified container-id in a JSON format. runc(8)." } ]
{ "category": "Runtime", "file_name": "runc-state.8.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "Containers typically live in their own, possibly shared, networking namespace. At some point in a container lifecycle, container engines will set up that namespace to add the container to a network which is isolated from the host network. In order to setup the network for a container, container engines call into a networking plugin. The network plugin will usually create a virtual ethernet (`veth`) pair adding one end of the `veth` pair into the container networking namespace, while the other end of the `veth` pair is added to the host networking namespace. This is a very namespace-centric approach as many hypervisors or VM Managers (VMMs) such as `virt-manager` cannot handle `veth` interfaces. Typically, interfaces are created for VM connectivity. To overcome incompatibility between typical container engines expectations and virtual machines, Kata Containers networking transparently connects `veth` interfaces with `TAP` ones using : With a TC filter rules in place, a redirection is created between the container network and the virtual machine. As an example, the network plugin may place a device, `eth0`, in the container's network namespace, which is one end of a VETH device. Kata Containers will create a tap device for the VM, `tap0_kata`, and setup a TC redirection filter to redirect traffic from `eth0`'s ingress to `tap0_kata`'s egress, and a second TC filter to redirect traffic from `tap0_kata`'s ingress to `eth0`'s egress. Kata Containers maintains support for MACVTAP, which was an earlier implementation used in Kata. With this method, Kata created a MACVTAP device to connect directly to the `eth0` device. TC-filter is the default because it allows for simpler configuration, better CNI plugin compatibility, and performance on par with MACVTAP. Kata Containers has deprecated support for bridge due to lacking performance relative to TC-filter and MACVTAP. Kata Containers supports both and for networking management. Kata Containers has developed a set of network sub-commands and APIs to add, list and remove a guest network endpoint and to manipulate the guest route table. The following diagram illustrates the Kata Containers network hotplug workflow." } ]
{ "category": "Runtime", "file_name": "networking.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "A SpiderIPPool resource represents a collection of IP addresses from which Spiderpool expects endpoint IPs to be assigned. ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: master-172 spec: ipVersion: 4 subnet: 172.31.192.0/20 ips: 172.31.199.180-172.31.199.189 172.31.199.205-172.31.199.209 excludeIPs: 172.31.199.186-172.31.199.188 172.31.199.207 gateway: 172.31.207.253 default: true disable: false ``` | Field | Description | Schema | Validation | |-|-|--|| | name | the name of this SpiderIPPool resource | string | required | This is the IPPool spec for users to configure. | Field | Description | Schema | Validation | Values | Default | |-||-|||| | ipVersion | IP version of this pool | int | optional | 4,6 | | | subnet | subnet of this pool | string | required | IPv4 or IPv6 CIDR.<br/>Must not overlap | | | ips | IP ranges for this pool to use | list of strings | optional | array of IP ranges and single IP address | | | excludeIPs | isolated IP ranges for this pool to filter | list of strings | optional | array of IP ranges and single IP address | | | gateway | gateway for this pool | string | optional | an IP address | | | vlan | vlan ID(deprecated) | int | optional | [0,4094] | 0 | | routes | custom routes in this pool (please don't set default route `0.0.0.0/0` if property `gateway` exists) | list of | optional | | | | podAffinity | specify which pods can use this pool | | optional | kubernetes LabelSelector | | | namespaceAffinity | specify which namespaces pods can use this pool | | optional | kubernetes LabelSelector | | | namespaceName | specify which namespaces pods can use this pool (The priority is higher than property `namespaceAffinity`) | list of strings | optional | | | | nodeAffinity | specify which nodes pods can use this pool | | optional | kubernetes LabelSelector | | | nodeName | specify which nodes pods can use this pool (The priority is higher than property `nodeAffinity`) | list of strings | optional | | | | multusName | specify which multus net-attach-def objects can use this pool | list of strings | optional | | | | default | configure this resource as a default pool for pods | boolean | optional | true,false | false | | disable | configure whether the pool is usable | boolean | optional | true,false | false | The IPPool status is a subresource that processed automatically by the system to summarize the current state | Field | Description | Schema | |-|-|--| | allocatedIPs | current IP allocations in this pool | string | | totalIPCount | total IP counts of this pool to use | int | | allocatedIPCount | current allocated IP counts | int | | Field | Description | Schema | Validation | |-||--|-| | dst | destination of this route | string | required | | gw | gateway of this route | string | required | For details on configuring SpiderIPPool podAffinity, please read the . For details on configuring SpiderIPPool namespaceAffinity or namespaceName, please read the . Notice: `namespaceName` has higher priority than `namespaceAffinity`. For details on configuring SpiderIPPool nodeAffinity or nodeName, please read the and . Notice: `nodeName` has higher priority than `nodeAffinity`. For details on configuring SpiderIPPool multusName, please read the ." } ]
{ "category": "Runtime", "file_name": "crd-spiderippool.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark delete\" layout: docs Delete ark resources Delete ark resources ``` -h, --help help for delete ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Delete a backup - Delete a restore - Delete a schedule" } ]
{ "category": "Runtime", "file_name": "ark_delete.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. --> ontributor-cheatsheet) - Common resources for existing developers You can reach the maintainers of this project via the . - We have a diverse set of mentorship programs available that are always looking for volunteers!" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This brief document is largely for my own notes about how this functionality is added to `kube-vip`. New Flags Startup Secret(s) A `--wireguard` flag or `vip_wireguard` environment variable will determine if the Wireguard mode is enabled, if this is the case then it will start the wireguard manager process. This will require `kube-vip` starting as a daemonset as it will need to read existing data (secrets) from inside the cluster. Create a private key for the cluster: ``` PRIKEY=$(wg genkey) PUBKEY=$(echo $PRIKEY | wg pubkey) PEERKEY=$(sudo wg show wg0 public-key) echo \"kubectl create -n kube-system secret generic wireguard --from-literal=privateKey=$PRIKEY --from-literal=peerPublicKey=$PEERKEY --from-literal=peerEndpoint=192.168.0.179\" sudo wg set wg0 peer $PUBKEY allowed-ips 10.0.0.0/8 ```" } ]
{ "category": "Runtime", "file_name": "architecture.md", "project_name": "kube-vip", "subcategory": "Cloud Native Network" }
[ { "data": "Target version: 0.9 Some tools want to use Rook to run containers, but not manage the logical Ceph resources like Pools. We should make Rook's pool management optional. Currently in Rook 0.8, creating and destroying a Filesystem (or ObjectStore) in a Ceph cluster also creates and destroys the associated Ceph filesystem and pools. The current design works well when the Ceph configuration is within the scope of what Rook can configure itself, and the user does not modify the Ceph configuration of pools out of band. The current model is problematic in some cases: A user wants to use Ceph functionality outside of Rook's subset, and therefore create their pools by hand before asking Rook to run the daemon containers for a filesystem. A user externally modifies the configuration of a pool (such as the number of replicas), they probably want that new configuration, rather than for Rook to change it back to match the Rook Filesystem settings. A risk-averse user wants to ensure that mistaken edits to their Rook config cannot permanently erase Ceph pools (i.e. they want to only delete pools through an imperative interface with confirmation prompts etc). In FilesystemSpec (and ObjectStoreSpec), when the metadata and data pool fields are left empty, Rook will not do any management of logical Ceph resources (Ceph pools and Ceph filesystems) for the filesystem. The pools may be initially non-nil, and later modified to be nil. In this case, while Rook may have created the logical resources for the filesystem, it will not remove them when the Rook filesystem is removed. If either of the metadata/data fields are non-nil, then they both must be non-nil: Rook will not partially manage the pools for a given filesystem or object store. ```yaml apiVersion: ceph.rook.io/v1 kind: Filesystem metadata: name: myfs namespace: rook-ceph spec: metadataPool: replicated: size: 3 dataPools: erasureCoded: dataChunks: 2 codingChunks: 1 metadataServer: activeCount: 1 activeStandby: true ``` In this example, the pools are omitted. Rook will not create any pools or a Ceph filesystem. A filesystem named ``myfs`` should already exist in Ceph, otherwise Rook will not start any MDS pods. ```yaml apiVersion: ceph.rook.io/v1 kind: Filesystem metadata: name: myfs namespace: rook-ceph spec: metadataServer: activeCount: 1 activeStandby: true ``` Rook Operator: add logic to skip logical resource management when pools are omitted in FilesystemSpec or ObjectStoreSpec Migration: none required. Existing filesystems and objectstores always have pools set explicitly, so will continue to have these managed by Rook." } ]
{ "category": "Runtime", "file_name": "external-management.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Code Conventions First off, we thank you for your interest in the Alluxio open source project! We greatly appreciate any contribution; whether it be new features or bug fixes. If you are a first time contributor to the Alluxio open source project, we strongly encourage you to follow the step-by-step instructions within the and finish new contributor tasks before making more advanced changes to the Alluxio codebase. Submitting changes to Alluxio is done via pull requests. Please read our for details on how to submit a pull request to the Alluxio repository. Below are some tips for the pull requests. We encourage you to break your work into small, single-purpose patches if possible. It is more difficult to merge in a large change with a lot of disjoint features. We track issues and features in our . Open an issue detailing the proposed change or the bug description. Submit the patch as a GitHub pull request. If your pull request aims to solve an existing Github issue, please include a link to the Github issue in the last line of the description field of the pull request, such as `Fixes #1234`. Please follow the style of the existing codebase. We mainly follow the , with the following deviations: Maximum line length of 100 characters. Third-party imports are grouped together to make IDE formatting much simpler. Class member variable names should be prefixed with `m` example: `private WorkerClient mWorkerClient;` Static variable names should be prefixed with `s` example: `private static String sUnderFSAddress;` Bash scripts follow the , and must be compatible with Bash 3.x If you use IntelliJ IDEA: You can use the and configure it under Preferences->Tools->Checkstyle. To automatically format the import, configure in Preferences->Code Style->Java->Imports->Import Layout according to . In that same settings pane, you can set your style scheme to the one you imported in the Checkstyle plugin settings . If you use Eclipse: You can download our To organize your imports correctly, configure \"Organize Imports\" to look like To automatically reorder methods alphabetically, try the , open Preferences, search for rearranger, remove the unnecessary comments, then right click, choose \"Rearrange\", codes will be formatted to what you want To verify that the coding standards match, you should run before sending a pull request to verify no new warnings are introduced: ```shell $ mvn checkstyle:checkstyle ``` This codebase follows the with the following refinements: All public classes/interfaces should have a class/interface-level comment that describes the purpose of the class/interface. All public members should have a member-level comment the describes the purpose of the member. ```java // The number of logical bytes used. public final AtomicLong mBytes = new AtomicLong(0); ``` All public methods (including constructors) should use the following format. ```java / Does something. This is a method description that uses 3rd person (does something) as opposed to 2nd person (do something). * @param param_1 description of 1st parameter ... @param param_n description of nth parameter @return description of return argument (if applicable) @throws exception_1 description of 1st exception case" }, { "data": "@throws exception_n description of nth exception case */ ``` An exception to the above rule is that `@throws` doesnt need to be provided for `@Test` methods, or for generic exceptions like IOException when there is nothing interesting to document. 
Only write exception javadoc when you think it will be useful to the developer using the method. There are so many sources of `IOException` that its almost never useful to include javadoc for it. Do not write javadoc for unchecked exceptions like `RuntimeException` unless it is critical for this method. Getters and setters should omit the method description if it is redundant and only use `@param` and `@return` descriptions. ```java / @return the number of pages stored */ long getPages(); ``` Most sentences should start with a capital letter and end with a period. An exception to this style is an isolated sentence; it does not start with a capital letter nor end with a period. GOOD (isolated): this is a short description GOOD (full sentence): This is a short description. GOOD (2 full sentences): This is a slightly longer description. It has two sentences. BAD: this is a short description. BAD: This is a short description BAD: this is a slightly longer description. It has two sentences When writing the description, the first sentence should be a concise summary of the class or method and the description should generally be implementation-independent. It is a good idea to use additional sentences to describe any significant performance implications. ```java / The default implementation of a metadata store for pages stored in cache. */ public class DefaultMetaStore implements MetaStore { ... } ``` When the `@deprecated` annotation is added, it should also at least tell the user when the API was deprecated and what to use as a replacement with `@see` or `@link` tag. ```java / @deprecated as of Alluxio 2.1, replaced by {@link #newMethodName(int,int,int,int)} */ ``` When descriptions of `@param`, `@return`, `@throw` exceed one line, the text should align with the first argument after the tag. ```java @throws FileAlreadyExistsException if there is already a file or directory at the given path in Alluxio Filesystem ``` When reference a class name in javadoc, prefer `<code>ClassName</code>` tags to `{@link ClassName}`. Alluxio uses for logging with typical usage pattern of: ```java import org.slf4j.Logger; import org.slf4j.LoggerFactory; public MyClass { private static final Logger LOG = LoggerFactory.getLogger(MyClass.class); public void someMethod() { LOG.info(\"Hello world\"); } } ``` Note that, each class must use its own logger based on the class name, like `LoggerFactory.getLogger(MyClass.class)` in above example, so its output can easily be searched for. The location of the output of SLF4J loggers can be found for and . When applicable, logging should be parameterized, to provide good performance and consistent style. ```java // Recommended: Parameterized logging LOG.debug(\"Client {} registered with {}\", mClient, mHostname); ``` ```java // Not recommended: Non-parameterized logging, hard to read and expensive due to String // concatenation regardless of if DEBUG is enabled LOG.debug(\"Client \" + mClient + \" registered with \" + mHostname); ``` Error Messages Should Be Descriptive. Error messages should include enough detail and context to determine the issue and potential solution. This is important for quick diagnosis or fixes of errors. ```java // Recommended: error messages with context" }, { "data": "{} failed to register to {}\", mClient, mHostname, exception); ``` ```java // Not recommended: There is no information on which client failed, or what the issue was LOG.error(\"Client failed to register\"); ``` Log messages should be concise and readable. 
Log messages should be written with readability in mind. Here are some tips for writing good log messages. Log levels `INFO` and above should be easily human readable Log files should be concise and easy to read, noise reduces the value of logs Keep the amount of additional words to a minimum Clearly indicate if a variable reference is being printed by formatting the output as `variable: value` Ensure objects being logged have appropriate `toString()` implementations Error level logs should have troubleshooting pointers if applicable There are several levels of logging, see detailed explanation of Here are the guidelines for deciding which level to use. Error level logging (`LOG.error`) indicates system level problems which cannot be recovered from. It should be accompanied by a stack trace of the exception thrown to help debug the issue. ```java // Recommended: a stack trace will be shown in the log LOG.error(\"Failed to do something due to an exception\", e); ``` ```java // Not recommended: wrong logging syntax LOG.error(\"Failed to do something due to an exception {}\", e); // Not recommended: stack trace will not be logged LOG.error(\"Failed to do something due to an exception {}\", e.toString()); ``` When to Use An unrecoverable event has occurred. The thread will be terminated. Example: Failed to flush to the journal An unrecoverable event has occurred. The process will exit. Example: Master failed to bind to the RPC port During Exception Handling Error level logging should include the exception and full stack trace The exception should only be logged once, at the boundary between the process and the external caller. For internal services (ie. periodic threads), the exception should be logged at the top level thread loop. Warn level logging (`LOG.warn`) indicates a logical mismatch between user intended behavior and Alluxio behavior. Warn level logs are accompanied by an exception message using `e.toString()`. The associated stack trace is typically not included to avoid spamming the logs. If needed, print the details in separate debug level logs. ```java // Recommended LOG.warn(\"Failed to do something: {}\", e.toString()); ``` ```java // Not recommended: wrong logging syntax LOG.warn(\"Failed to do something: {}\", e); // Not recommended: this will print out the stack trace, can be noisy LOG.warn(\"Failed to do something\", e); // Not recommended: the exception class name is not included LOG.warn(\"Failed to do something {}\", e.getMessage()); ``` When to Use Unexpected state has been reached, but the thread can continue Example: Failed to delete the temporary file for buffering writes to S3 User intended behavior cannot be achieved, but a default or fallback behavior is in place Example: EPOLL is specified but not available in the Netty library provided, default to NIO A misuse of configuration or API is suspected Example: Zookeeper address is specified but zookeeper is not enabled During Exception Handling Warn level logging should be used for logical errors Only the exception message should be logged, as the stack trace is irrelevant, suppressed stack traces can be revealed at the debug level The warn level error message should be logged once, at the point where the process makes an explicit deviation from expected behavior Info level logging" }, { "data": "records important system state changes. Exception messages and stack traces are never associated with info level logs. 
Note that, this level of logging should not be used on critical path of operations that may happen frequently to prevent negative performance impact. ```java LOG.info(\"Master started with address: {}.\", address); ``` When to Use Initialization of a long lived object within the process Example: Master RPC Server begins serving Important logical events in the process Example: A worker registers with the master End of life of a long lived object within the process Example: The Master RPC Server shuts down During Exception Handling Info level logging is never involved with exceptions Debug level logging (`LOG.debug`) includes detailed information for various aspects of the Alluxio system. Control flow logging (Alluxio system enter and exit calls) is done in debug level logs. Debug level logging of exceptions typically has the detailed information including stack trace. Please avoid the slow strings construction on debug-level logging on critical path. ```java // Recommended LOG.debug(\"Failed to connect to {} due to exception\", mAddress, e); // Recommended: string concatenation is only performed if debug is enabled if (LOG.isDebugEnabled()) { LOG.debug(\"Failed to connect to address {} due to exception\", host + \":\" + port, e); } ``` ```java // Not recommended: string concatenation is always performed LOG.debug(\"Failed to connect to {} due to exception\", host + \":\" + port, e); ``` When to Use Any exit or entry point of the process where execution is handed to an external process Example: A request is made to the under file system During Exception Handling Debug level logs provide the control flow logging, in the case of an error exit, the exception message is logged For warn level messages which may not always be benign, the stack trace can be logged at the debug level. These debug level stack traces should eventually be phased out. When to Use Any degree of detail within an experimental or new feature During Exception Handling Up to the developers discretion These are the guidelines for throwing and handling exceptions throughout the Alluxio codebase. These are the guidelines for how and when to throw exceptions. Examples: Illegal states, invalid API usage ```java public void method(String neverSupposedToBeNull) { Preconditions.checkNotNull(neverSupposedToBeNull, \"neverSupposedToBeNull\"); } ``` Examples: File not found, interrupted exception, timeout exceeded ```java // Recommended: the checked exception should just be propagated public void handleRawUserInput(String date) throws InvalidDateException { parseDate(date); } // Don't do this - it's reasonable for user input to be invalid. public void handleRawUserInput(String date) throws InvalidDateException { try { parseDate(date); } catch (InvalidDateException e) { throw new RuntimeExcepiton(\"date \" + date + \" is invalid\", e); } } ``` Require callers to validate inputs so that invalid arguments can be unchecked exceptions. Try to find an appropriate existing exception before inventing a new one. Only write exception javadoc when you think it will be useful to the developer using the method. There are so many sources of IOException that it's almost never useful to include javadoc for it. On the wire we represent exceptions with one of 14 status codes, e.g. `NOT_FOUND`, `UNAVAILABLE`. Within our server and client code, we represent these exceptions using exception classes corresponding to these statuses," }, { "data": "`NotFoundException` and `UnavailableException`. 
`AlluxioStatusException` is the superclass for these Java exceptions. These are the guidelines for how to handle exceptions. Either log the exception or propagate it. It is usually wrong to both log and propagate. If every method did this, the same exception would be logged dozens of times, polluting the logs. The responsibility for logging lies with whatever method ends up handling the exception without propagating it. See this . An `InterruptedException` means that another thread has signalled that this thread should die. There are a few acceptable ways to handle an `InterruptedException`, listed in order of preference. Actually stop the thread ```java class MyRunnable implements Runnable { public void run() { while (true) { try { doSomethingThatThrowsInterruptedException(); } catch (InterruptedException e) { break; } } } } ``` Directly propagate the exception ```java public void myMethod() throws InterruptedException { doSomethingThatThrowsInterruptedException(); } ``` Reset the interrupted flag and continue This punts the problem forward. Hopefully later code will inspect the interrupted flag and handle it appropriately. ```java public void myMethod() { try { doSomethingThatThrowsInterruptedException(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } ``` Reset the interrupted flag and wrap the InterruptedException ```java public void myMethod() { try { doSomethingThatThrowsInterruptedException(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); throw new RuntimeException(e); } } ``` Assume that any method might throw an unchecked exception and make sure this doesn't cause resource leaks. We do not stop servers when RPC threads throw RuntimeExceptions. `try-finally` and `try-with-resources` blocks can make releasing resources much easier. ```java // with try-finally Resource r = acquireResource(); try { doSomething(r); } finally { r.close(); } // with try-with-resources try (Resource r = acquireResource()) { doSomething(r); } ``` If both the try block and the finally block throw exceptions, the exception from the try block will be lost and the finally block exception will be thrown. This is almost never desirable since the exception in the try block happened first. To avoid this, either make sure your finally blocks can't throw exceptions, or use Guava's Closer. is helpful for reducing boilerplate and making exception handling less error-prone, for releasing resources. ```java Closer closer = new Closer(); closer.register(resource1); closer.register(resource2); closer.close(); ``` If both calls to `close()` throw an exception, the first exception will be thrown and the second exception will be added as a suppressed exception of the first one. From the Closer javadoc: ```java Closer closer = Closer.create(); try { InputStream in = closer.register(openInputStream()); OutputStream out = closer.register(openOutputStream()); // do stuff } catch (Throwable e) { // ensure that any checked exception types other than IOException that could be thrown are // provided here, e.g. throw closer.rethrow(e, CheckedException.class); throw closer.rethrow(e); } finally { closer.close(); } ``` ```java mCloser = new Closer(); mCloser.register(new resource1()); try { doSomething(); } catch (Throwable t) { // We want to close resources and throw the original exception t. Any exceptions thrown while // closing resources should be suppressed on t. 
try { throw mCloser.rethrow(t); } finally { mCloser.close(); } } ``` We have a util method for closing a closer and suppressing its exceptions onto an existing exception ```java mCloser = new Closer(); mCloser.register(new resource1()); try { doSomething(); } catch (Throwable t) { throw" }, { "data": "t); } ``` This will improve static analysis of our code so that we can detect potential `NullPointerException`s before they happen. Use the `javax.annotation.Nullable` import. ```java import javax.annotation.Nullable; @Nullable public String getName() { if (mName == \"\") { return null; } return mName; } ``` When a method is specifically designed to be able to handle null parameters, those parameters should be annotated with `@Nullable`. Use the `javax.annotation.Nullable` import. ```java import javax.annotation.Nullable; public repeat(@Nullable String s) { if (s == null) { System.out.println(\"Hello world\"); } else { System.out.println(s); } } ``` The preconditions check gives a more useful error message when you tell it the name of the variable being checked. ```java Preconditions.checkNotNull(blockInfo, \"blockInfo\") // Do this Preconditions.checkNotNull(blockInfo); // Do NOT do this ``` Tests are easier to read when there is less boilerplate. Use static imports for methods in `org.junit.Assert`, `org.junit.Assume`, `org.mockito.Matchers`, and `org.mockito.Mockito`. ```java // Change Assert.assertFalse(fileInfo.isFolder()); // to assertFalse(fileInfo.isFolder()); // It may be necessary to add the import import static org.junit.Assert.assertFalse; ``` Unit tests act as examples of how to use the code under test. Unit tests detect when an object breaks its specification. Unit tests don't break when an object is refactored but still meets the same specification. If creating an instance of the class takes some work, create a `@Before` method to perform shared setup steps. The `@Before` method gets run automatically before each unit test. Test-specific setup should be done locally in the tests that need it. In this example, we are testing a `BlockMaster`, which depends on a journal, clock, and executor service. The executor service and journal we provide are real implementations, and the `TestClock` is a fake clock which can be controlled by unit tests. ```java @Before public void before() throws Exception { Journal blockJournal = new ReadWriteJournal(mTestFolder.newFolder().getAbsolutePath()); mClock = new TestClock(); mExecutorService = Executors.newFixedThreadPool(2, ThreadFactoryUtils.build(\"TestBlockMaster-%d\", true)); mMaster = new BlockMaster(blockJournal, mClock, mExecutorService); mMaster.start(true); } ``` If anything created in `@Before` creates something which needs to be cleaned up (e.g. a `BlockMaster`), create an `@After` method to do the cleanup. This method is automatically called after each test. ```java @After public void after() throws Exception { mMaster.stop(); } ``` Decide on an element of functionality to test. The functionality you decide to test should be part of the public API and should not care about implementation details. Tests should be focused on testing only one thing. Give your test a name that describes what functionality it's testing. The functionality being tested should ideally be simple enough to fit into a name, e.g. `removeNonexistentBlockThrowsException`, `mkdirCreatesDirectory`, or `cannotMkdirExistingFile`. ```java @Test public void detectLostWorker() throws Exception { ``` Set up the situation you want to test. 
Here we register a worker and then simulate an hour passing. The `HeartbeatScheduler` section enforces that the lost worker heartbeat runs at least once. ```java // Register a worker. long worker1 = mMaster.getWorkerId(NETADDRESS1); mMaster.workerRegister(worker1, ImmutableList.of(\"MEM\"), ImmutableMap.of(\"MEM\", 100L), ImmutableMap.of(\"MEM\", 10L), NOBLOCKSON_TIERS); // Advance the block master's clock by an hour so that the worker appears lost. mClock.setTimeMs(System.currentTimeMillis() + Constants.HOUR_MS); // Run the lost worker detector. HeartbeatScheduler.await(HeartbeatContext.MASTERLOSTWORKER_DETECTION, 1, TimeUnit.SECONDS); HeartbeatScheduler.schedule(HeartbeatContext.MASTERLOSTWORKER_DETECTION); HeartbeatScheduler.await(HeartbeatContext.MASTERLOSTWORKER_DETECTION, 1, TimeUnit.SECONDS); ``` Check that the class behaved correctly: ```java // Make sure the worker is detected as lost. Set<WorkerInfo> info = mMaster.getLostWorkersInfo(); assertEquals(worker1, Iterables.getOnlyElement(info).getId()); } ``` Repeat from" }, { "data": "#3 until the class's entire public API has been tested. The tests for `src/main/java/ClassName.java` should go in `src/test/java/ClassNameTest.java` Tests do not need to handle or document specific checked exceptions. Prefer to simply add `throws Exception` to the test method signature. Aim to keep tests short and simple enough so that they don't require comments to understand. Avoid randomness. Edge cases should be handled explicitly. Avoid waiting for something by calling `Thread.sleep()`. This leads to slower unit tests and can cause flaky failures if the sleep isn't long enough. Avoid using Whitebox to mess with the internal state of objects under test. If you need to mock a dependency, change the object to take the dependency as a parameter in its constructor (see ) Avoid slow tests. Mock expensive dependencies and aim to keep individual test times under 100ms. All tests in a module run in the same JVM, so it's important to properly manage global state so that tests don't interfere with each other. Global state includes system properties, Alluxio configuration, and any static fields. Our solution to managing global state is to use JUnit's support for `@Rules`. Some unit tests want to test Alluxio under different configurations. This requires modifying the global `Configuration` object. When all tests in a suite need configuration parameters set a certain way, use `ConfigurationRule` to set them. ```java @Rule public ConfigurationRule mConfigurationRule = new ConfigurationRule(ImmutableMap.of( PropertyKey.key1, \"value1\", PropertyKey.key2, \"value2\")); ``` For configuration changes needed for an individual test, use `ConfigurationRule#set(key, value)` on the instance created by your test class. The resulting change of this method is only visible for the calling test. ```java @Rule public ConfigurationRule mConfigurationRule = new ConfigurationRule(ImmutableMap.of( PropertyKey.key1, \"value1\", PropertyKey.key2, \"value2\")); @Test public void testSomething() { mConfigurationRule.set(PropertyKey.key1, \"value3\"); // Now PropertyKey.key1 = \"value3\" ... } @Test public void testAnotherThing() { // Now PropertyKey.key1 = \"value1\" } ``` If you need to change a system property for the duration of a test suite, use `SystemPropertyRule`. 
```java @Rule public SystemPropertyRule mSystemPropertyRule = new SystemPropertyRule(\"propertyName\", \"value\"); ``` To set a system property during a specific test, use the `SystemPropertyRule#toResource()` method to get a `Closeable` for a try-catch statement: ```java @Test public void test() { try (Closeable p = new SystemPropertyRule(\"propertyKey\", \"propertyValue\").toResource()) { // Test something with propertyKey set to propertyValue. } } ``` If a test needs to modify other types of global state, create a new `@Rule` for managing the state so that it can be shared across tests. One example of this is . Sometimes you will need to play with a few system settings in order to have the unit tests pass locally. A common setting that may need to be set is `ulimit`. In order to increase the number of files and processes allowed on MacOS, run the following ```shell $ sudo launchctl limit maxfiles 32768 32768 $ sudo launchctl limit maxproc 32768 32768 ``` It is also recommended to exclude your local clone of Alluxio from Spotlight indexing. Otherwise, your Mac may hang constantly trying to re-index the file system during the unit tests. To do this, go to `System Preferences > Spotlight > Privacy`, click the `+` button, browse to the directory containing your local clone of Alluxio, and click `Choose` to add it to the exclusions list." } ]
{ "category": "Runtime", "file_name": "Code-Conventions.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "Copyright (c) 2015-2017, Gregory M. Kurtzer. All rights reserved. Copyright (c) 2016-2017, The Regents of the University of California. All right reserved. Copyright (c) 2017, SingularityWare, LLC. All rights reserved. Copyright (c) 2018-2019, Sylabs, Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE." } ]
{ "category": "Runtime", "file_name": "LICENSE.md", "project_name": "Singularity", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark restic server\" layout: docs Run the ark restic server Run the ark restic server ``` ark restic server [flags] ``` ``` -h, --help help for server --log-level the level at which to log. Valid values are debug, info, warning, error, fatal, panic. (default info) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restic" } ]
{ "category": "Runtime", "file_name": "ark_restic_server.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Copyright (c) 2015, Chris Simpson <chris@victoryonemedia.com>, with Reserved Font Name: \"Metropolis\". This Font Software is licensed under the SIL Open Font License, Version 1.1. This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL Version 2.0 - 18 March 2012 SIL Open Font License ==================================================== Preamble The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others. The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives. Definitions `\"Font Software\"` refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation. `\"Reserved Font Name\"` refers to any names specified as such after the copyright statement(s). `\"Original Version\"` refers to the collection of Font Software components as distributed by the Copyright Holder(s). `\"Modified Version\"` refers to any derivative made by adding to, deleting, or substituting -- in part or in whole -- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment. `\"Author\"` refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font" }, { "data": "Permission & Conditions Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions: Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself. Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user. No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users. The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission. The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. 
The requirement for fonts to remain under this license does not apply to any document created using the Font Software. Termination -- This license becomes null and void if any of the above conditions are not met. DISCLAIMER THE FONT SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE." } ]
{ "category": "Runtime", "file_name": "Open Font License.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This is a collection of email templates to handle various situations the security team encounters. ``` Subject: Upcoming security release of CubeFS $VERSION To: security@cubefs.groups.io Hello CubeFS Community, The CubeFS Product Security Committee and maintainers would like to announce the forthcoming release of CubeFS $VERSION. This release will be made available on the $ORDINALDAY of $MONTH $YEAR at $PDTHOUR PDT ($GMTHOUR GMT). This release will fix $NUMDEFECTS security defect(s). The highest rated security defect is considered $SEVERITY severity. No further details or patches will be made available in advance of the release. Thanks Thanks to $REPORTER, $DEVELOPERS, and the $RELEASELEADS for the coordination is making this release. Thanks, $PERSON on behalf of the CubeFS Product Security Committee and maintainers ``` ``` Subject: Security release of CubeFS $VERSION is now available To: security@cubefs.groups.io Hello CubeFS Community, The Product Security Committee and maintainers would like to announce the availability of CubeFS $VERSION. This addresses the following CVE(s): CVE-YEAR-ABCDEF (CVSS score $CVSS): $CVESUMMARY ... Upgrading to $VERSION is encouraged to fix these issues. Am I vulnerable? Run `cfs-server -v` and if it indicates a base version of $OLDVERSION or older that means it is a vulnerable version. <!-- Provide details on features, extensions, configuration that make it likely that a system is vulnerable in practice. --> How do I mitigate the vulnerability? <!-- [This is an optional section. Remove if there are no mitigations.] --> How do I upgrade? Follow the upgrade instructions at https://cubefs.io/docs/master/overview/introduction.html Vulnerability Details <!-- [For each CVE] --> *CVE-YEAR-ABCDEF* $CVESUMMARY This issue is filed as $CVE. We have rated it as ($CVSS, $SEVERITY) Thanks Thanks to $REPORTER, $DEVELOPERS, and the $RELEASELEADS for the coordination in making this release. Thanks, $PERSON on behalf of the CubeFS Product Security Committee and maintainers ```" } ]
{ "category": "Runtime", "file_name": "email-templates.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Upgrading to Velero 1.0\" layout: docs Velero v0.11 installed. If you're not already on v0.11, see the . Upgrading directly from v0.10.x or earlier to v1.0 is not supported! (Optional, but strongly recommended) Create a full copy of the object storage bucket(s) Velero is using. Part 1 of the upgrade procedure will modify the contents of the bucket, so we recommend creating a backup copy of it prior to upgrading. You need to replace legacy metadata in object storage with updated versions for any backups that were originally taken with a version prior to v0.11 (i.e. when the project was named Ark). While Velero v0.11 is backwards-compatible with these legacy files, Velero v1.0 is not. If you're sure that you do not have any backups that were originally created prior to v0.11 (with Ark), you can proceed directly to Part 2. We've added a CLI command to , `velero migrate-backups`, to help you with this. This command will: Replace `ark-backup.json` files in object storage with equivalent `velero-backup.json` files. Create `<backup-name>-volumesnapshots.json.gz` files in object storage if they don't already exist, containing snapshot metadata populated from the backups' `status.volumeBackups` field*. *backups created prior to v0.10 stored snapshot metadata in the `status.volumeBackups` field, but it has subsequently been replaced with the `<backup-name>-volumesnapshots.json.gz` file. Download the tarball for your client platform. Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` Move the `velero` binary from the Velero directory to somewhere in your PATH. Scale down your existing Velero deployment: ```bash kubectl -n velero scale deployment/velero --replicas 0 ``` Fetch velero's credentials for accessing your object storage bucket and store them locally for use by `velero migrate-backups`: For AWS: ```bash export AWSSHAREDCREDENTIALS_FILE=./velero-migrate-backups-credentials kubectl -n velero get secret cloud-credentials -o jsonpath=\"{.data.cloud}\" | base64 --decode > $AWSSHAREDCREDENTIALS_FILE ```` For Azure: ```bash export AZURESUBSCRIPTIONID=$(kubectl -n velero get secret cloud-credentials -o jsonpath=\"{.data.AZURESUBSCRIPTIONID}\" | base64 --decode) export AZURETENANTID=$(kubectl -n velero get secret cloud-credentials -o jsonpath=\"{.data.AZURETENANTID}\" | base64 --decode) export AZURECLIENTID=$(kubectl -n velero get secret cloud-credentials -o jsonpath=\"{.data.AZURECLIENTID}\" | base64 --decode) export AZURECLIENTSECRET=$(kubectl -n velero get secret cloud-credentials -o jsonpath=\"{.data.AZURECLIENTSECRET}\" | base64 --decode) export AZURERESOURCEGROUP=$(kubectl -n velero get secret cloud-credentials -o jsonpath=\"{.data.AZURERESOURCEGROUP}\" | base64 --decode) ``` For GCP: ```bash export GOOGLEAPPLICATIONCREDENTIALS=./velero-migrate-backups-credentials kubectl -n velero get secret cloud-credentials -o jsonpath=\"{.data.cloud}\" | base64 --decode > $GOOGLEAPPLICATIONCREDENTIALS ``` List all of your backup storage locations: ```bash velero backup-location get ``` For each backup storage location that you want to use with Velero 1.0, replace any legacy pre-v0.11 backup metadata with the equivalent current formats: ``` velero migrate-backups \\ --backup-location <BACKUPLOCATIONNAME> \\ --snapshot-location <SNAPSHOTLOCATIONNAME> ``` Scale up your deployment: ```bash kubectl -n velero scale deployment/velero --replicas 1 ``` Remove the local `velero` credentials: For AWS: ``` rm $AWSSHAREDCREDENTIALS_FILE unset 
AWSSHAREDCREDENTIALS_FILE ``` For Azure: ``` unset AZURESUBSCRIPTIONID unset AZURETENANTID unset AZURECLIENTID unset AZURECLIENTSECRET unset AZURERESOURCEGROUP ``` For GCP: ``` rm $GOOGLEAPPLICATIONCREDENTIALS unset GOOGLEAPPLICATIONCREDENTIALS ``` Download the tarball for your client platform. Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` Move the `velero` binary from the Velero directory to somewhere in your PATH, replacing any existing pre-1.0 `velero` binaries. Update the image for the Velero deployment and daemon set (if applicable): ```bash kubectl -n velero set image deployment/velero velero=gcr.io/heptio-images/velero:v1.0.0 kubectl -n velero set image daemonset/restic restic=gcr.io/heptio-images/velero:v1.0.0 ```" } ]
{ "category": "Runtime", "file_name": "upgrade-to-1.0.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage the VTEP mappings for IP/CIDR <-> VTEP MAC/IP ``` -h, --help help for vtep ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Delete vtep entries - List VTEP CIDR and their corresponding VTEP MAC/IP - Update vtep entries" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_vtep.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "The `yaml` Project is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`" } ]
{ "category": "Runtime", "file_name": "RELEASE.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "Antrea has a few network requirements to get started, ensure that your hosts and firewalls allow the necessary traffic based on your configuration. | Configuration | Host(s) | ports/protocols | Other | ||-|--|| | Antrea with VXLAN enabled | All | UDP 4789 | | | Antrea with Geneve enabled | All | UDP 6081 | | | Antrea with STT enabled | All | TCP 7471 | | | Antrea with GRE enabled | All | IP Protocol ID 47 | No support for IPv6 clusters | | Antrea with IPsec ESP enabled | All | IP protocol ID 50 and 51, UDP 500 and 4500 | | | Antrea with WireGuard enabled | All | UDP 51820 | | | Antrea Multi-cluster with WireGuard encryption | Multi-cluster Gateway Node | UDP 51821 | | | All | kube-apiserver host | TCP 443 or 6443\\* | | | All | All | TCP 10349, 10350, 10351, UDP 10351 | | \\* _The value passed to kube-apiserver using the --secure-port flag. If you cannot locate this, check the targetPort value returned by kubectl get svc kubernetes -o yaml._" } ]
{ "category": "Runtime", "file_name": "network-requirements.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "rkt has a built-in garbage collection command that is designed to be run periodically from a timer or cron job. Stopped pods are moved to the garbage and cleaned up during a subsequent garbage collection pass. Each `gc` pass removes any pods remaining in the garbage past the grace period. . ``` Moving pod \"21b1cb32-c156-4d26-82ae-eda1ab60f595\" to garbage Moving pod \"5dd42e9c-7413-49a9-9113-c2a8327d08ab\" to garbage Moving pod \"f07a4070-79a9-4db0-ae65-a090c9c393a3\" to garbage ``` On the next pass, the pods are removed: ``` Garbage collecting pod \"21b1cb32-c156-4d26-82ae-eda1ab60f595\" Garbage collecting pod \"5dd42e9c-7413-49a9-9113-c2a8327d08ab\" Garbage collecting pod \"f07a4070-79a9-4db0-ae65-a090c9c393a3\" ``` | Flag | Default | Options | Description | | | | | | | `--expire-prepared` | `24h0m0s` | A time | Duration to wait before expiring prepared pods | | `--grace-period` | `30m0s` | A time | Duration to wait before discarding inactive pods from garbage | | `--mark-only` | `false` | `true` or `false` | If set to true, only the \"mark\" phase of the garbage collection process will be formed (i.e., exited/aborted pods will be moved to the garbage, but nothing will be deleted) | See the table with ." } ]
{ "category": "Runtime", "file_name": "gc.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: Resources description: Velero Resources id: resources Here you will find external resources about Velero, such as videos, podcasts, and community articles. Community Meetings Playlist {{< youtube \"videoseries?list=PL7bmigfV0EqQRysvqvqOtRNk4L5S7uqwM\" >}} Velero Demo and Deep Dives Playlist {{< youtube \"videoseries?list=PL7bmigfV0EqT82eQaVCslWWM4k4d63KyK\" >}} Kubecon NA 2019 - How to Backup and Restore Your Kubernetes Cluster - Annette Clewett & Dylan Murray, Red Hat: {{< youtube JyzgS-KKuoo >}} Kubernetes Back Up, Restore and Migration with Velero - A NewStack interview, with guests: Carlisia Thompson, Tom Spoonemore, and Efri Nattel-Shayo: {{< youtube 71NoY5CIcQ8 >}} How to migrate applications between Kubernetes clusters using Velero: {{< youtube IZlwKMoqBqE >}} TGIK 080: Velero 1.0: {{< youtube tj5Ey2bHsfM >}} Watch our recent webinar on backup and migration strategies: {{< youtube csrSPt3HFtg >}} Velero demo by Just me and Opensource - [ Kube 45 ] Velero - Backup & Restore Kubernetes Cluster: {{< youtube C9hzrexaIDA >}}" } ]
{ "category": "Runtime", "file_name": "_index.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Each module of the erasure coding subsystem supports audit logs. If audit logs and monitoring metrics are enabled (you can ), you can view the service request status code information in the audit log file or the monitoring metric `serviceresponsecode`. The following describes the special status codes of each module. ::: tip Note The error status code range of the Access service is [550,599]. ::: | Status Code | Error Message | Description | |-|--|--| | 551 | access client service discovery disconnect | The access client cannot discover available access nodes from Consul. | | 552 | access limited | The service interface has reached the connection limit. | | 553 | access exceed object size | The uploaded file exceeds the maximum size limit. | ::: tip Note The error status code range of the Proxy service is [800,899]. ::: | Status Code | Error Message | Description | |-||--| | 801 | this codemode has no avaliable volume | There is no available volume for the corresponding encoding mode. | | 802 | alloc bid from clustermgr error | Failed to obtain a bid from the CM. | | 803 | clusterId not match | The requested cluster ID does not match the cluster ID where the proxy service is located. | ::: tip Note The error status code range of the Clustermgr service is [900,999]. ::: | Status Code | Error Message | Description | |-|--|| | 900 | cm: unexpected error | Internal error occurred. | | 902 | lock volume not allow | Failed to lock, for example, because it is in the active state. | | 903 | unlock volume not allow | Failed to unlock. | | 904 | volume not exist | The volume does not exist. | | 906 | raft propose error | Raft proposal information error. | | 907 | no leader | No leader is available. | | 908 | raft read index error | Linear consistency read timeout. | | 910 | duplicated member info | Duplicate member information. | | 911 | disk not found | The disk cannot be found. | | 912 | invalid status | The disk status is invalid. | | 913 | not allow to change status back | The disk status cannot be rolled back, for example, rolling back a bad disk to the normal state. | | 914 | alloc volume unit concurrently | Concurrently applying for volume units. Retry is available. | | 916 | alloc volume request params is invalid | Invalid request parameters for volume allocation. | | 917 | no available volume | No available volumes are available for allocation. | | 918 | update volume unit, old vuid not match | The new and old VUIDs do not match when updating the volume" }, { "data": "| | 919 | update volume unit, new vuid not match | The new VUID does not match when updating the volume unit. | | 920 | update volume unit, new diskID not match | The new disk ID does not match when updating the volume unit. | | 921 | config argument marshal error | Configuration serialization error. | | 922 | request params error, invalid clusterID | Invalid request parameters, the cluster ID is invalid. | | 923 | request params error,invalid idc | Invalid request parameters, the IDC is invalid. | | 924 | volume unit not exist | The volume unit does not exist. | | 925 | register service params is invalid | Invalid registration service parameters. | | 926 | disk is abnormal or not readonly, can't add into dropping list | The disk is abnormal or not in the read-only state, and cannot be set to offline. | | 927 | stat blob node chunk failed | Failed to obtain chunk information for the blob node. | | 928 | request alloc volume codeMode not invalid | The requested volume mode does not exist. 
| | 929 | retain volume is not alloc | The retained volume is not allocated. | | 930 | dropped disk still has volume unit remain, migrate them firstly | The disk cannot be marked as offline in advance because there are remaining chunks on the disk. Migration must be completed first. | | 931 | list volume v2 not support idle status | The v2 version does not support the idle status for listing volumes. | | 932 | dropping disk not allow change state or set readonly | The disk in the offline state cannot have its status changed or be set to read-only. | | 933 | reject delete system config | The system configuration cannot be deleted. | ::: tip Note The error status code range of the BlobNode service is [600,699]. ::: | Status Code | Error Message | Description | |-||--| | 600 | blobnode: invalid params | Invalid parameters. | | 601 | blobnode: entry already exist | The VUID already exists. | | 602 | blobnode: out of limit | The request exceeds the concurrency limit. | | 603 | blobnode: internal error | Internal error. | | 604 | blobnode: service is overload | The request is overloaded. | | 605 | blobnode: path is not exist | The registered disk directory does not exist. | | 606 | blobnode: path is not empty | The registered disk directory is not empty. | | 607 | blobnode: path find online disk | The registered path is still in the active state and needs to be offline before re-registration. | | 611 | disk not found | The disk does not" }, { "data": "| | 613 | disk is broken | The disk is bad. | | 614 | disk id is invalid | The disk ID is invalid. | | 615 | disk no space | The disk has no available space. | | 621 | vuid not found | The VUID does not exist. | | 622 | vuid readonly | The VUID is in read-only state. | | 623 | vuid released | The VUID is in the released state. | | 624 | vuid not match | The VUID does not match. | | 625 | chunk must readonly | The chunk must be in read-only state. | | 626 | chunk must normal | The chunk must be in normal state. | | 627 | chunk no space | The chunk has no available write space. | | 628 | chunk is compacting | The chunk is being compressed. | | 630 | chunk id is invalid | The chunk ID is invalid. | | 632 | too many chunks | The number of chunks exceeds the threshold. | | 633 | chunk in use | The chunk has a request being processed. | | 651 | bid not found | The BID does not exist. | | 652 | shard size too large | The shard size exceeds the threshold (1<<32 - 1). | | 653 | shard must mark delete | The shard must be marked for deletion first. | | 654 | shard already mark delete | The shard has already been marked for deletion. | | 655 | shard offset is invalid | The shard offset is invalid. | | 656 | shard list exceed the limit | The number of shards in the shard list exceeds the threshold (65536). | | 657 | shard key bid is invalid | The BID is invalid. | | 670 | dest replica is bad can not repair | The target node to be repaired is in an abnormal state, possibly due to a request timeout or the service not being started. | | 671 | shard is an orphan | The shard is an orphan and cannot be repaired. If this error occurs when the Scheduler initiates a repair, the relevant orphan information will be recorded. | | 672 | illegal task | The background task is illegal. | | 673 | request limited | The repair interface request is overloaded. | ::: tip Note The error status code range of the Scheduler service is [700,799]. 
::: | Status Code | Error Message | Description | |-||--| | 700 | nothing to do | When the BlobNode service requests the Scheduler to pull background tasks, if there are no relevant tasks to be done, the Scheduler will return this status code, which can be ignored. |" } ]
{ "category": "Runtime", "file_name": "code.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC" } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Retrieve a key ``` cilium-dbg kvstore get [options] <key> [flags] ``` ``` cilium kvstore get --recursive foo ``` ``` -h, --help help for get -o, --output string json| yaml| jsonpath='{}' --recursive Recursive lookup ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API --kvstore string Key-Value Store type --kvstore-opt map Key-Value Store options ``` - Direct access to the kvstore" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_kvstore_get.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This document formally describes the process of addressing and managing a reported vulnerability that has been found in the MinIO server code base, any directly connected ecosystem component or a direct / indirect dependency of the code base. The vulnerability management policy described in this document covers the process of investigating, assessing and resolving a vulnerability report opened by a MinIO employee or an external third party. Therefore, it lists pre-conditions and actions that should be performed to resolve and fix a reported vulnerability. The vulnerability management process requires that the vulnerability report contains the following information: The project / component that contains the reported vulnerability. A description of the vulnerability. In particular, the type of the reported vulnerability and how it might be exploited. Alternatively, a well-established vulnerability identifier, e.g. CVE number, can be used instead. Based on the description mentioned above, a MinIO engineer or security team member investigates: Whether the reported vulnerability exists. The conditions that are required such that the vulnerability can be exploited. The steps required to fix the vulnerability. In general, if the vulnerability exists in one of the MinIO code bases itself - not in a code dependency - then MinIO will, if possible, fix the vulnerability or implement reasonable countermeasures such that the vulnerability cannot be exploited anymore." } ]
{ "category": "Runtime", "file_name": "VULNERABILITY_REPORT.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | | - |-|-|-|--| -- | | D00001 | An IPPool fails to add an IP that already exists in an other IPPool | p2 | | done | | | D00002 | Add a route with `routes` and `gateway` fields in the ippool spec, which only takes effect on the new pod and does not on the old pods | p2 | smoke | done | | | D00003 | Failed to add wrong IPPool gateway and route to an IPPool CR | p2 | | done | | | D00004 | Failed to delete an IPPool whose IP is not de-allocated at all | p2 | | done | | | D00005 | A \"true\" value of IPPool/Spec/disabled should forbid IP allocation, but still allow ip de-allocation | p2 | | done | | | D00006 | Successfully create and delete IPPools in batch | p2 | | done | | | D00007 | Add, delete, modify, and query ippools that are created manually | p1 | | done | | | D00008 | Manually ippool inherits subnet attributes (including gateway,routes, etc.) | p3 | | done | | | D00009 | multusName matches, IP can be assigned | p2 | | done | | | D00010 | multusName mismatch, unable to assign IP | p3 | | done | | | D00011 | The node where the pod is located matches the nodeName, and the IP can be assigned | p2 | | done | | | D00012 | The node where the pod resides does not match the nodeName, and the IP cannot be assigned | p3 | | done | | | D00013 | nodeName has higher priority than nodeAffinity | p3 | | done | | | D00014 | The namespace where the pod is located matches the namespaceName, and the IP can be assigned | p2 | | done | | | D00015 | The namespace where the pod resides does not match the namespaceName, and the IP cannot be assigned | p2 | | done | | | D00016 | namespaceName has higher priority than namespaceAffinity | p3 | | done | |" } ]
{ "category": "Runtime", "file_name": "ippoolcr.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Generate the autocompletion script for bash Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. To load completions in your current shell session: source <(cilium-bugtool completion bash) To load completions for every new session, execute once: cilium-bugtool completion bash > /etc/bash_completion.d/cilium-bugtool cilium-bugtool completion bash > $(brew --prefix)/etc/bash_completion.d/cilium-bugtool You will need to start a new shell for this setup to take effect. ``` cilium-bugtool completion bash ``` ``` -h, --help help for bash --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-bugtool_completion_bash.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "To release the Zenko and Zenko-base ISOs: Update the version in the `VERSION` file and merge the changes (e.g., `2.4.15`). Start a new promotion using the Select the branch to release. Specify the tag from the step 1 (e.g., `2.4.15`). Specify the artifacts name to promote. The artifact URL can be found in the commit build you want to promote, under `Annotations`. For example, given `https://artifacts.scality.net/builds/github:scality:Zenko:staging-d13ed9e848.build-iso-and-end2end-test.230` the name will be `github:scality:Zenko:staging-d13ed9e848.build-iso-and-end2end-test.230` The workflow will automatically create a new GitHub release." } ]
{ "category": "Runtime", "file_name": "release.md", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "The design of Kanister follows the operator pattern. This means Kanister defines its own resources and interacts with those resources through a controller. [This blog post](https://www.redhat.com/en/blog/operators-over-easy-introduction-kubernetes-operators) describes the pattern in detail. In particular, Kanister is composed of three main components: the Controller and two Custom Resources - ActionSets and Blueprints. The diagram below illustrates their relationship and how they fit together: As seen in the above diagram and described in detail below, all Kanister operations are declarative and require an ActionSet to be created by the user. Once the ActionSet is detected by the Kanister controller, it examines the environment for Blueprint referenced in the ActionSet (along with other required configuration). If all requirements are satisfied, it will then use the discovered Blueprint to complete the action (e.g., backup) specified in the ActionSet. Finally, the original ActionSet will be updated by the controller with status and other metadata generated by the action execution. Users interact with Kanister through Kubernetes resources known as CustomResources (CRs). When the controller starts, it creates the CR definitions called CustomResourceDefinitions (CRDs). were introduced in Kubernetes 1.7 and replaced TPRs. The lifecycle of these objects can be managed entirely through kubectl. Kanister uses Kubernetes\\' code generation tools to create go client libraries for its CRs. The schemas of the Kanister CRDs can be found in Blueprint CRs are a set of instructions that tell the controller how to perform actions on a specific application. A Blueprint contains a field called `Actions` which is a mapping of Action Name to `BlueprintAction`. The definition of a `BlueprintAction` is: ``` go // BlueprintAction describes the set of phases that constitute an action. type BlueprintAction struct { Name string `json:\"name\"` Kind string `json:\"kind\"` ConfigMapNames []string `json:\"configMapNames\"` SecretNames []string `json:\"secretNames\"` InputArtifactNames []string `json:\"inputArtifactNames\"` OutputArtifacts map[string]Artifact `json:\"outputArtifacts\"` Phases []BlueprintPhase `json:\"phases\"` DeferPhase *BlueprintPhase `json:\"deferPhase,omitempty\"` } ``` `Kind` represents the type of Kubernetes object this BlueprintAction is written for. Specifying this is optional and going forward, if this is specified, Kanister will enforce that it matches the `Object` kind specified in an ActionSet referencing this BlueprintAction `ConfigMapNames`, `SecretNames`, `InputArtifactNames` are optional but, if specified, they list named parameters that must be included by the `ActionSet`. `OutputArtifacts` is an optional map of rendered parameters made available to the `BlueprintAction`. `Phases` is a required list of `BlueprintPhases`. These phases are invoked in order when executing this Action. `DeferPhase` is an optional `BlueprintPhase` invoked after the execution of `Phases` defined above. A `DeferPhase`, when specified, is executed regardless of the statuses of the `Phases`. A `DeferPhase` can be used for cleanup operations at the end of an `Action`. ``` go // BlueprintPhase is a an individual unit of execution. type BlueprintPhase struct { Func string `json:\"func\"` Name string `json:\"name\"` ObjectRefs map[string]ObjectReference `json:\"objects\"` Args map[string]interface{} `json:\"args\"` } ``` `Func` is required as the name of a registered Kanister function. 
See for the list of functions supported by the controller. `Name` is mostly cosmetic. It is useful in quickly identifying which phases the controller has finished executing. `Object` is a map of references to the Kubernetes objects on which the action will be performed. `Args` is a map of named arguments that the controller will pass to the Kanister function. String argument values can be templates that the controller will render using the template parameters. Each argument is rendered" }, { "data": "As a reference, below is an example of a BlueprintAction. ``` yaml actions: example-action: phases: func: KubeExec name: examplePhase args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: kanister-sidecar command: bash -c | echo \"Example Action\" ``` Creating an ActionSet instructs the controller to run an action now. The user specifies the runtime parameters inside the spec of the ActionSet. Based on the parameters, the Controller populates the Status of the object, executes the actions, and updates the ActionSet\\'s status. An ActionSetSpec contains a list of ActionSpecs. An ActionSpec is defined as follows: ``` go // ActionSpec is the specification for a single Action. type ActionSpec struct { Name string `json:\"name\"` Object ObjectReference `json:\"object\"` Blueprint string `json:\"blueprint,omitempty\"` Artifacts map[string]Artifact `json:\"artifacts,omitempty\"` ConfigMaps map[string]ObjectReference `json:\"configMaps\"` Secrets map[string]ObjectReference `json:\"secrets\"` Options map[string]string `json:\"options\"` Profile *ObjectReference `json:\"profile\"` PodOverride map[string]interface{} `json:\"podOverride,omitempty\"` } ``` `Name` is required and specifies the action in the Blueprint. `Object` is a required reference to the Kubernetes object on which the action will be performed. `Blueprint` is a required name of the Blueprint that contains the action to run. `Artifacts` are input Artifacts passed to the Blueprint. This must contain an Artifact for each name listed in the BlueprintAction\\'s InputArtifacts. `ConfigMaps` and `Secrets`, similar to `Artifacts`, are a mappings of names specified in the Blueprint referencing the Kubernetes object to be used. `Profile` is a reference to a `Profile<profiles>`{.interpreted-text role=\"ref\"} Kubernetes CustomResource that will be made available to the Blueprint. `Options` is used to specify additional values to be used in the Blueprint `PodOverride` is used to specify pod specs that will override default specs of the Pod created while executing functions like KubeTask, PrepareData, etc. As a reference, below is an example of a ActionSpec. ``` yaml spec: actions: name: example-action blueprint: example-blueprint object: kind: Deployment name: example-deployment namespace: example-namespace profile: apiVersion: v1alpha1 kind: profile name: example-profile namespace: example-namespace ``` In addition to the Spec, an ActionSet also contains an ActionSetStatus which mirrors the Spec, but contains the phases of execution, their state, and the overall execution progress. ``` go // ActionStatus is updated as we execute phases. type ActionStatus struct { Name string `json:\"name\"` Object ObjectReference `json:\"object\"` Blueprint string `json:\"blueprint\"` Phases []Phase `json:\"phases\"` Artifacts map[string]Artifact `json:\"artifacts\"` } ``` Unlike in the ActionSpec, the Artifacts in the ActionStatus are the rendered output artifacts from the Blueprint. 
These are rendered and populated once the action is complete. Each phase in the ActionStatus phases list contains the phase name of the Blueprint phase along with its state of execution and output. ``` go // Phase is subcomponent of an action. type Phase struct { Name string `json:\"name\"` State State `json:\"state\"` Output map[string]interface{} `json:\"output\"` } ``` Deleting an ActionSet will cause the controller to delete the ActionSet, which will stop the execution of the actions. ``` bash $ kubectl --namespace kanister delete actionset s3backup-j4z6f actionset.cr.kanister.io \"s3backup-j4z6f\" deleted ``` ::: tip NOTE Since ActionSets are `Custom Resources`, Kubernetes allows users to delete them like any other API objects. Currently, deleting an ActionSet to stop execution is an alpha feature. ::: Profile CRs capture information about a location for data operation artifacts and corresponding credentials that will be made available to a" }, { "data": "The definition of a `Profile` is: ``` go // Profile type Profile struct { Location Location `json:\"location\"` Credential Credential `json:\"credential\"` SkipSSLVerify bool `json:\"skipSSLVerify\"` } ``` `SkipSSLVerify` is boolean and specifies whether skipping SkipSSLVerify verification is allowed when operating with the `Location`. If omitted from a CR definition it default to `false` `Location` is required and used to specify the location that the Blueprint can use. Currently, only s3 compliant locations are supported. If any of the sub-components are omitted, they will be treated as \\\"\\\". The definition of `Location` is as follows: ``` go // LocationType type LocationType string const ( LocationTypeGCS LocationType = \"gcs\" LocationTypeS3Compliant LocationType = \"s3Compliant\" LocationTypeAzure LocationType = \"azure\" ) // Location type Location struct { Type LocationType `json:\"type\"` Bucket string `json:\"bucket\"` Endpoint string `json:\"endpoint\"` Prefix string `json:\"prefix\"` Region string `json:\"region\"` } ``` `Credential` is required and used to specify the credentials associated with the `Location`. Currently, only key pair s3, gcs and azure location credentials are supported. The definition of `Credential` is as follows: ``` go // CredentialType type CredentialType string const ( CredentialTypeKeyPair CredentialType = \"keyPair\" ) // Credential type Credential struct { Type CredentialType `json:\"type\"` KeyPair *KeyPair `json:\"keyPair\"` } // KeyPair type KeyPair struct { IDField string `json:\"idField\"` SecretField string `json:\"secretField\"` Secret ObjectReference `json:\"secret\"` } ``` `IDField` and `SecretField` are required and specify the corresponding keys in the secret under which the `KeyPair` credentials are stored. `Secret` is required reference to a Kubernetes Secret object storing the `KeyPair` credentials. As a reference, below is an example of a Profile and the corresponding secret. 
``` yaml apiVersion: cr.kanister.io/v1alpha1 kind: Profile metadata: name: example-profile namespace: example-namespace location: type: s3Compliant bucket: example-bucket endpoint: <endpoint URL>:<port> prefix: \"\" region: \"\" credential: type: keyPair keyPair: idField: examplekeyid secretField: examplesecretaccess_key secret: apiVersion: v1 kind: Secret name: example-secret namespace: example-namespace skipSSLVerify: true apiVersion: v1 kind: Secret type: Opaque metadata: name: example-secret namespace: example-namespace data: examplekeyid: <access key> examplesecretaccess_key: <access secret> ``` The Kanister controller is a Kubernetes Deployment and is installed easily using `kubectl`. See for more information on deploying the controller. The controller watches for new/updated ActionSets in the same namespace in which it is deployed. When it sees an ActionSet with a nil status field, it immediately initializes the ActionSet\\'s status to the Pending State. The status is also prepopulated with the pending phases. Execution begins by resolving all the `templates`{.interpreted-text role=\"ref\"}. If any required object references or artifacts are missing from the ActionSet, the ActionSet status is marked as failed. Otherwise, the template params are used to render the output Artifacts, and then the args in the Blueprint. For each action, all phases are executed in-order. The rendered args are passed to which correspond to a single phase. When a phase completes, the status of the phase is updated. If any single phase fails, the entire ActionSet is marked as failed. Upon failure, the controller ceases execution of the ActionSet. Within an ActionSet, individual Actions are run in parallel. Currently the user is responsible for cleaning up ActionSets once they complete. During execution, Kanister controller emits events to the respective ActionSets. In above example, the execution transitions of ActionSet `s3backup-j4z6f` can be seen by using the following command: ``` bash $ kubectl --namespace kanister describe actionset s3backup-j4z6f Events: Type Reason Age From Message - - - Normal Started Action 23s Kanister Controller Executing action backup Normal Started Phase 23s Kanister Controller Executing phase backupToS3 Normal Update Complete 19s Kanister Controller Updated ActionSet 's3backup-j4z6f' Status->complete Normal Ended Phase 19s Kanister Controller Completed phase backupToS3 ```" } ]
{ "category": "Runtime", "file_name": "architecture.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide will guide you how to use remote attestation based on SGX in skeleton with rune. Currently `rune attest` can only run on the machines with , we will support as soon as possible. Build a skeleton bundle according to from scratch. Build rune according to . Register a `SPID` and `Subscription Key` of to get IAS report(optional). After the registration, Intel will respond with a SPID which is needed to communicate with IAS. Before using `rune attest` command, you must ensure your skeleton container/bundles(such as skeleton-enclave-container) running by setting `\"wait_timeout\",\"100\"` of `process.args` in config.json as following: ```json \"process\": { \"args\": [ \"${YOURPROGRAM}\",\"waittimeout\",\"100\" ], } ``` Only `liberpal-skeleton-v3.so` supports `rune attest` command. So you also need to configure enclave runtime as following: ```json \"annotations\": { \"enclave.type\": \"intelSgx\", \"enclave.runtime.path\": \"/usr/lib/liberpal-skeleton-v3.so\", \"enclave.runtime.args\": \"debug\" } ``` If you want to use `rune attest` command to get IAS report, you also need to `delete` the `network` namespace configuration in your `config.json` to ensure you run skeleton in host network mode. After doing this, your `namespaces` is as following without the `network` type namespace: ```json \"namespaces\": [ { \"type\": \"pid\" }, { \"type\": \"ipc\" }, { \"type\": \"uts\" }, { \"type\": \"mount\" } ], ``` Then you can run your skeleton containers by typing the following commands: ```shell cd \"$HOME/rune_workdir/rune-container\" cp /etc/resolv.conf rootfs/etc/resolv.conf sudo rune --debug run skeleton-enclave-container ``` You can type the following command to use `rune attest` command with skeleton in another shell to get local report: ```shell rune --debug attest --quote-type={SGXQUOTETYPE} skeleton-enclave-container ``` where: @quote-type: specify the quote types of sgx, such as, `epidUnlinkable`: `epidLinkable`: `ecdsa`: . Note `rune attest` currently doesn't support the ecdsa quote type, and we will support it soon. You can type the following command to use `rune attest` command with skeleton in another shell to get IAS report: ```shell rune --debug attest --isRA \\ --quote-type={SGXQUOTETYPE} \\ --spid=${EPID_SPID} \\ --subscription-key=${EPIDSUBSCRIPTIONKEY} \\ skeleton-enclave-container ``` where: @isRA: specify the type of report is local or remote report. @quote-type: specify the quote types of sgx which is the same as the parameters of . @spid: specify the `SPID`. @subscription-key: specify the `Subscription Key`." } ]
{ "category": "Runtime", "file_name": "running_skeleton_with_rune_attest_command.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "](https://goreportcard.com/report/github.com/sodafoundation/api) ](https://travis-ci.org/sodafoundation/api) ](https://coveralls.io/github/sodafoundation/api?branch=master) <img src=\"https://www.opensds.io/wp-content/uploads/sites/18/2016/11/logo_opensds.png\" width=\"100\"> opensds is Apache 2.0 licensed and accepts contributions via GitHub pull requests. This document outlines some of the conventions on commit message formatting, contact points for developers and other resources to make getting your contribution into opensds easier. Email: Slack: # Before you start, NOTICE that ```master``` branch is the relatively stable version provided for customers and users. So all code modifications SHOULD be submitted to ```development``` branch. Fork the repository on GitHub. Read the README.md and INSTALL.md for project information and build instructions. For those who just get in touch with this project recently, here is a proposed contributing . The coding style suggested by the Golang community is used in opensds. See the for more details. Please follow this style to make opensds easy to review, maintain and develop. A great way to contribute to the project is to send a detailed report when you encounter an issue. We always appreciate a well-written, thorough bug report, and will thank you for it! When reporting issues, refer to this format: What version of env (opensds, os, golang etc) are you using? Is this a BUG REPORT or FEATURE REQUEST? What happened? What you expected to happen? How to reproduce it?(as minimally and precisely as possible) Raise your idea as an If it is a new feature that needs lots of design details, a design proposal should also be submitted . After reaching consensus in the issue discussions and design proposal reviews, complete the development on the forked repo and submit a PR. Here are the that are already closed. If a PR is submitted by one of the core members, it has to be merged by a different core member. After PR is sufficiently discussed, it will get merged, abondoned or rejected depending on the outcome of the discussion. Thank you for your contribution !" } ]
{ "category": "Runtime", "file_name": "Community-Contributing.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "Prepare a new aarch64 architecture physical server Optional - Recommend to install openEuler 23.03 version OS in the server openEuler 23.03 version OS image download link: https://repo.openeuler.org/openEuler-23.03/ISO/ Note: All of the following commands need to run with root privilege. If you use openEuler 23.03 OS in your physical server, you can directly install the newest StratoVirt package by yum. ```bash $ yum install stratovirt ``` If you use another Linux distribution OS, you can build the stratovirt from the source and install it: After you build or install the stratovirt package, you can find the following important binary file in your sever: ```bash $ ls /usr/bin/stratovirt /usr/bin/stratovirt $ ls /usr/bin/vhostuserfs /usr/bin/vhostuserfs ``` Kuasar use `docker` or `containerd` container engine to build guest os initrd image, so you need to make sure `docker` or `containerd` is correctly installed and can pull the image from the dockerhub registries. Tips: `make vmm` build command will download the Rust and Golang packages from the internet, so you need to provide the `httpproxy` and `httpsproxy` environments for the `make all` command. If a self-signed certificate is used in the `make all` build command execution environment, you may encounter SSL issues with downloading resources from https URL failed. Therefore, you need to provide a CA-signed certificate and copy it into the root directory of the Kuasar project, then rename it as \"proxy.crt\". In this way, our build script will use the \"proxy.crt\" certificate to access the https URLs of Rust and Golang installation packages. ```bash $ HYPERVISOR=stratovirt make vmm $ HYPERVISOR=stratovirt make install-vmm ``` After installation, you will find the required files in the specified path ```bash /usr/local/bin/vmm-sandboxer /var/lib/kuasar/vmlinux.bin /var/lib/kuasar/kuasar.initrd /var/lib/kuasar/config_stratovirt.toml ``` supports Kuasar with its master branch at the moment. For building iSulad from scratch, please refer to . Here we only emphasize the difference of the building steps. ```bash $ git clone https://gitee.com/openeuler/lcr.git $ cd lcr $ mkdir build $ cd build $ sudo -E cmake .. $ sudo -E make -j $(nproc) $ sudo -E make install ``` ```bash $ git clone https://gitee.com/openeuler/iSulad.git $ cd iSulad $ mkdir build $ cd build $ sudo -E cmake .. -DENABLECRIAPIV1=ON -DENABLESHIMV2=ON -DENABLESANDBOXER=ON $ sudo make -j $ sudo -E make install ``` Add the following configuration in the iSulad configuration file `/etc/isulad/daemon.json` ```json ... \"default-sandboxer\": \"vmm\", \"cri-sandboxers\": { \"vmm\": { \"name\": \"vmm\", \"address\": \"/run/vmm-sandboxer.sock\" } }, \"cri-runtimes\": { \"vmm\": \"io.containerd.vmm.v1\" }, ... ``` Sine some code have not been merged into the upstream containerd community, so you need to manually compile the containerd source code in the . git clone the codes of containerd fork version from kuasar repository. ```bash $ git clone https://github.com/kuasar-io/containerd.git $ cd containerd $ make bin/containerd $ install bin/containerd /usr/bin/containerd ``` Add the following sandboxer config in the containerd config file `/etc/containerd/config.toml` ```toml [proxy_plugins]" }, { "data": "type = \"sandbox\" address = \"/run/vmm-sandboxer.sock\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.vmm] runtime_type = \"io.containerd.kuasar.v1\" sandboxer = \"vmm\" io_type = \"hvsock\" ``` Install which is required by to configure pod network. 
```bash $ wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-arm64-v1.2.0.tgz mkdir -p /opt/cni/bin/ mkdir -p /etc/cni/net.d tar -zxvf cni-plugins-linux-arm64-v1.2.0.tgz -C /opt/cni/bin/ ``` ```bash VERSION=\"v1.15.0\" # check latest version in /releases page wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-arm64.tar.gz sudo tar zxvf crictl-$VERSION-linux-arm64.tar.gz -C /usr/local/bin rm -f crictl-$VERSION-linux-arm64.tar.gz ``` create the crictl config file in the `/etc/crictl.yaml` ```bash cat /etc/crictl.yaml runtime-endpoint: unix:///var/run/isulad.sock image-endpoint: unix:///var/run/isulad.sock timeout: 10 ``` The default config file `/var/lib/kuasar/config_stratovirt.toml` for stratovirt vmm-sandboxer: ```toml [sandbox] log_level = \"info\" [hypervisor] path = \"/usr/bin/stratovirt\" machine_type = \"virt,mem-share=on\" kernel_path = \"/var/lib/kuasar/vmlinux.bin\" image_path = \"\" initrd_path = \"/var/lib/kuasar/kuasar.initrd\" kernelparams = \"task.loglevel=debug task.sharefs_type=virtiofs\" vcpus = 1 memoryinmb = 1024 blockdevicedriver = \"virtio-blk\" debug = true enablememprealloc = false [hypervisor.virtiofsd_conf] path = \"/usr/bin/vhostuserfs\" ``` ```bash $ ENABLECRISANDBOXES=1 ./bin/containerd ``` ```bash $ systemctl start kuasar-vmm ``` ```bash $ cat podsandbox.yaml metadata: attempt: 1 name: busybox-sandbox2 namespace: default uid: hdishd83djaidwnduwk28bcsc log_directory: /tmp linux: namespaces: options: {} $ crictl runp --runtime=vmm podsandbox.yaml 5cbcf744949d8500e7159d6bd1e3894211f475549c0be15d9c60d3c502c7ede3 ``` Tips: `--runtime=vmm` indicates that containerd needs to use vmm-sandboxer runtime to run a pod sandbox List pod sandboxes and check the sandbox is in Ready state: ```bash $ crictl pods POD ID CREATED STATE NAME NAMESPACE ATTEMPT 5cbcf744949d8 About a minute ago Ready busybox-sandbox2 default 1 ``` Create a container in the podsandbox ```bash $ cat pod-container.yaml metadata: name: busybox1 image: image: docker.io/library/busybox:latest command: top log_path: busybox.0.log no pivot: true $ crictl create 5cbcf744949d8500e7159d6bd1e3894211f475549c0be15d9c60d3c502c7ede3 pod-container.yaml podsandbox.yaml c11df540f913e57d1e28372334c028fd6550a2ba73208a3991fbcdb421804a50 ``` List containers and check the container is in Created state: ```bash $ crictl ps -a CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID c11df540f913e docker.io/library/busybox:latest 15 seconds ago Created busybox1 0 5cbcf744949d8 ``` Start container in the podsandbox ```bash $ crictl start c11df540f913e57d1e28372334c028fd6550a2ba73208a3991fbcdb421804a50 $ crictl ps CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID c11df540f913e docker.io/library/busybox:latest 2 minutes ago Running busybox1 0 5cbcf744949d8 ``` Get the `vsock guest-cid` from stratovirt vm process ```bash $ ps -ef | grep stratovirt | grep 5cbcf744949d8 /usr/bin/stratovirt -name sandbox-5cbcf744949d8500e7159d6bd1e3894211f475549c0be15d9c60d3c502c7ede3 ... -device vhost-vsock-pci,id=vsock-395568061,guest-cid=395568061,bus=pcie.0,addr=0x3,vhostfd=3 ... 
``` Enter the guest os debug console shell environment: ```bash $ ncat --vsock 395568061 1025 bash-5.1# busybox ip addr show busybox ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo validlft forever preferredlft forever inet6 ::1/128 scope host validlft forever preferredlft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWERUP> mtu 1500 qdisc pfifofast qlen 1000 link/ether 42:e2:92:d4:39:9f brd ff:ff:ff:ff:ff:ff inet 172.19.0.240/24 brd 172.19.0.255 scope global eth0 validlft forever preferredlft forever inet6 fe80::40e2:92ff:fed4:399f/64 scope link validlft forever preferredlft forever bash-5.1# busybox ping 172.19.0.1 busybox ping 172.19.0.1 PING 172.19.0.1 (172.19.0.1): 56 data bytes 64 bytes from 172.19.0.1: seq=0 ttl=64 time=0.618 ms 64 bytes from 172.19.0.1: seq=1 ttl=64 time=0.116 ms 64 bytes from 172.19.0.1: seq=2 ttl=64 time=0.152 ms ```" } ]
{ "category": "Runtime", "file_name": "how-to-run-kuasar-with-isulad-and-stratovirt.md", "project_name": "Kuasar", "subcategory": "Container Runtime" }
[ { "data": "First, please confirm that you use the command `PORTABLE=1 make static_lib` to compile rocksdb. Then use the `ldd` command to check if the dependent libraries are installed on the machine. After installing the missing libraries, execute the `ldconfig` command. There are two ways to solve this problem: Add the specified library to `CGOLDFLAGS` to compile, for example: `CGOLDFLAGS=\"-L/usr/local/lib -lrocksdb -lzstd\"`. This method requires that the zstd library is also installed on other deployment machines. Delete the script that automatically detects whether the zstd library is installed. The file location is an example: `rockdb-5.9.2/buildtools/builddetect_platform`. Delete the following content: ```bash $CXX $CFLAGS $COMMON_FLAGS -x c++ - -o /dev/null 2>/dev/null <<EOF int main() {} EOF if [ \"$?\" = 0 ]; then COMMONFLAGS=\"$COMMONFLAGS -DZSTD\" PLATFORMLDFLAGS=\"$PLATFORMLDFLAGS -lzstd\" JAVALDFLAGS=\"$JAVALDFLAGS -lzstd\" fi ``` When compiling the erasure coding subsystem, an error message is displayed: `fatal error: rocksdb/c.h: no such file or directory...` First, confirm whether the file pointed to by the error message exists in the `.deps/include/rocksdb` directory. If it exists, try `source env.sh` and try again. If the file does not exist or the error still occurs, you can delete all the rocksdb-related files in the `.deps` directory and then recompile. To resolve the error message \"/usr/bin/ld: cannot find -lbz2\" during compilation, you should check if the \"bzip2-devel\" package is installed with a version of 1.0.6 or higher. To resolve the error message \"/usr/bin/ld: cannot find -lz\" during compilation, you should check if the \"zlib-devel\" package is installed with a version of 1.2.7 or higher. when building `blobstore`, just enter sub-folder `blobstore` and add following content to the end of `blobstore/env.sh` before executing `source env.sh`: ```bash export DISABLEWARNINGAS_ERROR=true ``` when building `cubefs` itself, just add following content at the end of `env.sh` before executing `source env.sh`: NOTE: Options might be different according to different gcc versions. Following option has been tested on `gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0` ```bash export CXXFLAGS=-Wno-error=xxx # optional, developers can change accordingly or comment out or delete it safely ```" } ]
{ "category": "Runtime", "file_name": "build.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, Velero only supports a single credential secret per location provider/plugin. Velero creates and stores the plugin credential secret under the hard-coded key `secret.cloud-credentials.data.cloud`. This makes it so switching from one plugin to another necessitates overriding the existing credential secret with the appropriate one for the new plugin. To allow Velero to create and store multiple secrets for provider credentials, even multiple credentials for the same provider To improve the UX for configuring the velero deployment with multiple plugins/providers. Enable use cases such as AWS volume snapshots w/Minio as the object storage Continue to support use cases where multiple Backup Storage Locations are in use simultaneously `velero backup logs` while backup/restore is running Handle changes in configuration while operations are happening as well as they currently are To make any change except what's necessary to handle multiple credentials To allow multiple credentials for or change the UX for node-based authentication (e.g. AWS IAM, GCP Workload Identity, Azure AAD Pod Identity). Node-based authentication will not allow cases such as a mix of AWS snapshots with Minio object storage. Instead of one credential per Velero deployment, multiple credentials can be added and used with different BSLs. There are two aspects to handling multiple credentials: Modifying how credentials are configured and specified by the user Modifying how credentials are provided to the plugin processes Each of these aspects will be discussed in turn. Currently, Velero creates a secret (`cloud-credentials`) during install with a single entry that contains the contents of the credentials file passed by the user. Instead of adding new CLI options to Velero to create and manage credentials, users will create their own Kubernetes secrets within the Velero namespace and reference these. This approach is being chosen as it allows users to directly manage Kubernetes secrets objects as they wish and it removes the need for wrapper functions to be created within Velero to manage the creation of secrets. Separate credentials rather than combining credentials in a single secret also avoids issues with maximum size of credentials as well as update in place issues. To enable the use of existing Kubernetes secrets, BSLs will be modified to have a new field `Credential`. This field will be a which will enable the user to specify which key within a particular secret the BSL should use. Existing BackupStorageLocationSpec definition: // BackupStorageLocationSpec defines the desired state of a Velero BackupStorageLocation type BackupStorageLocationSpec struct { // Provider is the provider of the backup storage. Provider string `json:\"provider\"` // Config is for provider-specific configuration fields. // +optional Config map[string]string `json:\"config,omitempty\"` StorageType `json:\",inline\"` // Default indicates this location is the default backup storage location. // +optional Default bool `json:\"default,omitempty\"` // AccessMode defines the permissions for the backup storage location. // +optional AccessMode BackupStorageLocationAccessMode `json:\"accessMode,omitempty\"` // BackupSyncPeriod defines how frequently to sync backup API objects from object storage. A value of 0 disables sync. // +optional // +nullable BackupSyncPeriod *metav1.Duration `json:\"backupSyncPeriod,omitempty\"` // ValidationFrequency defines how frequently to validate the corresponding object storage. 
A value of 0 disables validation. // +optional // +nullable ValidationFrequency *metav1.Duration `json:\"validationFrequency,omitempty\"` } The following field will be added: Credential *corev1api.SecretKeySelector `json:\"credential,omitempty\"` The resulting BackupStorageLocationSpec will be this: // BackupStorageLocationSpec defines the desired state of a Velero BackupStorageLocation type BackupStorageLocationSpec struct { // Provider is the provider of the backup storage. Provider string `json:\"provider\"` // Config is for provider-specific configuration" }, { "data": "// +optional Config map[string]string `json:\"config,omitempty\"` // Credential contains the credential information intended to be used with this location // +optional Credential *corev1api.SecretKeySelector `json:\"credential,omitempty\"` StorageType `json:\",inline\"` // Default indicates this location is the default backup storage location. // +optional Default bool `json:\"default,omitempty\"` // AccessMode defines the permissions for the backup storage location. // +optional AccessMode BackupStorageLocationAccessMode `json:\"accessMode,omitempty\"` // BackupSyncPeriod defines how frequently to sync backup API objects from object storage. A value of 0 disables sync. // +optional // +nullable BackupSyncPeriod *metav1.Duration `json:\"backupSyncPeriod,omitempty\"` // ValidationFrequency defines how frequently to validate the corresponding object storage. A value of 0 disables validation. // +optional // +nullable ValidationFrequency *metav1.Duration `json:\"validationFrequency,omitempty\"` } The CLI for managing Backup Storage Locations (BSLs) will be modified to allow the user to set these credentials. Both `velero backup-location (create|set)` will have a new flag (`--credential`) to specify the secret and key within the secret to use. This flag will take a key-value pair in the format `<secret-name>=<key-in-secret>`. The arguments will be validated to ensure that the secret exists in the Velero namespace. If the Credential field is empty in a BSL, the default credentials from `cloud-credentials` will be used as they are currently. The approach we have chosen is to include the path to the credentials file in the `config` map passed to a plugin. Prior to using any secret for a BSL, it will need to be serialized to disk. Using the details in the `Credential` field in the BSL, the contents of the Secret will be read and serialized. To achieve this, we will create a new package, `credentials`, which will introduce new types and functions to manage the fetching of credentials based on a `SecretKeySelector`. This will also be responsible for serializing the fetched credentials to a temporary directory on the Velero pod filesystem. The path where a set of credentials will be written to will be a fixed path based on the namespace, name, and key from the secret rather than a randomly named file as is usual with temporary files. The reason for this is that `BackupStore`s are frequently created within the controllers and the credentials must be serialized before any plugin APIs are called, which would result in a quick accumulation of temporary credentials files. For example, the default validation frequency for BackupStorageLocations is one minute. This means that any time a `BackupStore`, or other type which requires credentials, is created, the credentials will be fetched from the API server and may overwrite any existing use of that credential. 
If we instead wanted to use an unique file each time, we could work around the of multiple files being written by cleaning up the temporary files upon completion of the plugin operations, if this information is known. Once the credentials have been serialized, this path will be made available to the plugins. Instead of setting the necessary environment variable for the plugin process, the `config` map for the BSL will be modified to include an addiitional entry with the path to the credentials file: `credentialsFile`. This will be passed through when and it will be the responsibility of the plugin to use the passed credentials when starting a session. For an example of how this would affect the AWS plugin, see . The restic controllers will also need to be updated to use the correct credentials. The BackupStorageLocation for a given PVB/PVR will be fetched and the `Credential` field from that BSL will be" }, { "data": "The existing setup for the restic commands use the credentials from the environment variables with . Instead of relying on the existing environment variables, if there are credentials for a particular BSL, the environment will be specifically created for each `RepoIdentifier`. This will use a lot of the existing logic with the exception that it will be modified to work with a serialized secret rather than find the secret file from an environment variable. Currently, GCP is the only provider that relies on the existing environment variables with no specific overrides. For GCP, the environment variable will be overwritten with the path of the serialized secret. One of the use cases we wish to satisfy is the ability to specify a different object store than the cloud provider offers, for example, using a Minio S3 object store from within AWS. Currently the VolumeSnapshotter and the ObjectStore plugin share the cloud credentials. Each backup/restore has a BackupStorageLocation associated with it. The BackupStorageLocation can optionally specify the credential used by the ObjectStorePlugin and Restic daemons while the cloud credential will always be used for the VolumeSnapshotter. The vSphere plugin is implemented as a BackupItemAction and shares the credentials of the AWS plugin for S3 access. The backup storage location is passed in Backup.Spec.StorageLocation. Currently the plugin retrieves the S3 bucket and server from the BSL and creates a BackupRespositoryClaim with that and the credentials retrieved from the cloud credential. The plugin will need to be modified to retrieve the credentials field from the BSL and use that credential in the BackupRepositoryClaim. For now, regardless of the approaches used above, we will still support the existing workflow. Users will be able to set credentials during install and a secret will be created for them. This secret will still be mounted into the Velero pods and the appropriate environment variables set. This will allow users to use versions of plugins which haven't yet been updated to use credentials directly, such as with many community created plugins. Multiple credential handling will only be used in the case where a particular BSL has been modified to use an existing secret. Although the handling of secrets will be similar to how credentials are currently managed within Velero, care must be taken to ensure that any new code does not leak the contents of secrets, for example, including them within logs. In order to support parallelism, Velero will need to be able to use multiple credentials simultaneously with the ObjectStore. 
Currently backups are single threaded and a single BSL will be used throughout the entire backup. The only existing points of parallelism are when a user downloads logs for a backup or the BackupStorageLocationReconciler reconciles while a backup or restore is running. In the current code, `downloadrequestcontroller.go` and `backupstoragelocation_controller.go` create a new plugin manager and hence another ObjectStore plugin in parallel with the ObjectStore plugin servicing a backup or restore (if one is running). Three different approaches can be taken to provide credentials to plugin processes: Providing the path to the credentials file as an environment variable per plugin. This is how credentials are currently passed. Include the path to the credentials file in the `config` map passed to a plugin. Include the details of the secret in the `config` map passed to a" }, { "data": "The last two options require changes to the plugin as the plugin will need to instantiate a client using the provided credentials. The client libraries used by the plugins will not be able to rely on the credentials details being available in the environment as they currently do. We have selected option 2 as the approach to take. The approaches that were not selected are detailed below for reference. To continue to provide the credentials via the environment, plugins will need to be invoked differently so that the correct credential is used. Currently, there is a single secret, which is mounted into every pod deployed by Velero (the Velero Deployment and the Restic DaemonSet) at the path `/credentials/cloud`. This path is made known to all plugins through provider specific environment variables and all possible provider environment variables are set to this path. Instead of setting the environment variables for all the pods, we can modify plugin processes are created so that the environment variables are set on a per plugin process basis. Prior to using any secret for a BSL, it will need to be serialized to disk. Using the details in the `Credential` field in the BSL, the contents of the Secret will be read and serialized to a file. Each plugin process would still have the same set of environment variables set, however the value used for each of these variables would instead be the path to the serialized secret. To set the environment variables for a plugin process, the plugin manager must be modified so that when creating an ObjectStore, we pass in the entire BSL object, rather than . The plugin manager currently stores a map of . New restartable processes are created only . This could be modified to also take the necessary environment variables so that when , these environment variables could be provided and would be set on the plugin process. Taking this approach would not require any changes from plugins as the credentials information would be made available to them in the same way. However, it is quite a significant change in how we initialize and invoke plugins. We would also need to ensure that the restic controllers are updated in the same way so that correct credentials are used (when creating a `ResticRepository` or processing `PodVolumeBackup`/`PodVolumeRestore`). This could be achieved by modifying the existing function to . This function already sets environment variables for the restic process depending on which storage provider is being used. 
This approach is like the selected approach of passing the credentials file via the `config` map, however instead of the Velero process being responsible for serializing the file to disk prior to invoking the plugin, the `Credential SecretKeySelector` details will be passed through to the plugin. It will be the responsibility of the plugin to fetch the secret from the Kubernetes API and perform the necessary steps to make it available for use when creating a session, for example, serializing the contents to disk, or evaluating the contents and adding to the process environment. This approach has an additional burden on the plugin author over the previous approach as it requires the author to create a client to communicate with the Kubernetes API to retrieve the secret. Although it would be the responsibility of the plugin to serialize the credential and use it directly, Velero would still be responsible for serializing the secret so that it could be used with the restic controllers as in the selected approach." } ]
{ "category": "Runtime", "file_name": "secrets.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, the Velero project contains in-tree plugins for three cloud providers: AWS, Azure, and GCP. The Velero team has decided to extract each of those plugins into their own separate repository. This document details the steps necessary to create the new repositories, as well as a general design for what each plugin project will look like. Have 3 new repositories for each cloud provider plugin currently supported by the Velero team: AWS, Azure, and GCP Have the currently in-tree cloud provider plugins behave like any other plugin external to Velero Extend the Velero plugin framework capability in any way Create GH repositories for any plugin other then the currently 3 in-tree plugins Extract out any plugin that is not a cloud provider plugin (ex: item action related plugins) With more and more providers wanting to support Velero, it gets more difficult to justify excluding those from being in-tree just as with the three original ones. At the same time, if we were to include any more plugins in-tree, it would ultimately become the responsibility of the Velero team to maintain an increasing number of plugins. This move aims to equalize the field so all plugins are treated equally. We also hope that, with time, developers interested in getting involved in the upkeep of those plugins will become active enough to be promoted to maintainers. Lastly, having the plugins live in their own individual repositories allows for iteration on them separately from the core codebase. [ ] Use GH UI to create each repository in the new VMW org. Who: new org owner; TBD [ ] Make owners of the Velero repo owners of each repo in the new org. Who: new org owner; TBD [ ] Add Travis CI. Who: Any of the new repo owners; TBD [ ] Add webhook: travis CI. Who: Any of the new repo owners; TBD [ ] Add DCO for signoff check (https://probot.github.io/apps/dco/). Who: Any of the new repo owners; TBD [ ] Modify Velero so it can install any of the provider plugins. https://github.com/heptio/velero/issues/1740 - Who: @nrb [ ] Extract each provider plugin into their own repo. https://github.com/heptio/velero/issues/1537 [ ] Create deployment and gcr-push scripts with the new location path. Who: @carlisia [ ] Add documentation for how to use the plugin. Who: @carlisia [ ] Update Helm chart to install Velero using any of the provider plugins. https://github.com/heptio/velero/issues/1819 [ ] Upgrade script. https://github.com/heptio/velero/issues/1889. [Pending] The organization owner will make all current owners in the Velero repo also owners in each of the new org plugin repos. Someone with owner permission on the new repository needs to go to their Travis CI account and authorize Travis CI on the repo. Here are instructions: https://docs.travis-ci.com/user/tutorial/. After this, any webhook notifications can be added following these instructions: https://docs.travis-ci.com/user/notifications/#configuring-webhook-notifications. Each provider plugin will be an independent project, using the Velero library to implement their specific functionalities. The way Velero is installed will be changed to accommodate installing these plugins at deploy time, namely the Velero `install` command, as well as the Helm chart. Each plugin repository will need to have their respective images built and pushed to the same registry as the Velero images. Each provider plugin will be an independent GH repository, named: `velero-plugin-aws`, `velero-plugin-azure`, and `velero-plugin-gcp`. 
Build of the project will be done the same way as with Velero, using" }, { "data": "Images for all the plugins will be pushed to the same repository as the Velero image, also using Travis. Releases of each of these plugins will happen in sync with releases of Velero. This will consist of having a tag in the repo and a tagged image build with the same release version as Velero so it makes it easy to identify what versions are compatible, starting at v1.2. Documentation for how to install and use the plugins will be augmented in the existing Plugins section of the Velero documentation. Documentation for how to use each plugin will reside in their respective repos. The navigation on the Velero documentation will be modified for easy discovery of the docs/images for these plugins. We will keep the major and minor release points in sync, but the plugins can have multiple minor dot something releases as long as it remains compatible with the corresponding major/minor release of Velero. Ex: | Velero | Plugin | Compatible? | |||| | v1.2 | v1.2 | | | v1.2 | v1.2.3 | | | v1.2 | v1.3 | | | v1.3 | v1.2 | | | v1.3 | v1.3.3 | | As per https://github.com/heptio/velero/issues/1740, we will add a `plugins` flag to the Velero install command which will accept an array of URLs pointing to +1 images of plugins to be installed. The `velero plugin add` command should continue working as is, in specific, it should also allow the installation of any of the new 3 provider plugins. @nrb will provide specifics about how this change will be tackled, as well as what will be documented. Part of the work of adding the `plugins` flag will be removing the logic that adds `velero.io` name spacing to plugins that are added without it. The Helm chart that allows the installation of Velero will be modified to accept the array of plugin images with an added `plugins` configuration item. The naming convention to use for name spacing each plugin will be `velero.io`, since they are currently maintained by the Velero team. Install dep Question: are there any places outside the plugins where we depend on the cloud-provider SDKs? can we eliminate those dependencies too? x the `restic` package uses the `aws`. SDK to get the bucket region for the AWS object store (https://github.com/carlisia/velero/blob/32d46871ccbc6b03e415d1e3d4ad9ae2268b977b/pkg/restic/config.go#L41) could not find usage of the cloud provider SDKs anywhere else. Plugins such as the pod -> pvc -> pv backupitemaction ones make sense to stay in the core repo as they provide some important logic that just happens to be implemented in a plugin. The documentation for how to fresh install the out-of-tree plugin with Velero v1.2 will be specified together with the documentation for the install changes on issue https://github.com/heptio/velero/issues/1740. For upgrades, we will provide a script that will: change the tag on the Velero deployment yaml for both the main image and any of th three plugins installed. rename existing aws, azure or gcp plugin names to have the `velero.io/` namespace preceding the name (ex: `velero.io/aws). Alternatively, we could add CLI `velero upgrade` command that would make these changes. Ex: `velero upgrade 1.3` would upgrade from `v1.2` to `v1.3`. For upgrading: Edit the provider field in the backupstoragelocations and volumesnapshotlocations CRDs to include the new namespace. We considered having the plugins all live in the same GH repository. 
The downside of that approach is ending up with a binary and image bigger than necessary, since they would contain the SDKs of all three providers. Ensure that only the Velero core team has maintainer/owner privileges." } ]
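As a rough illustration of the upgrade path described above, the retagging and provider renaming could be scripted along the following lines. This is only a sketch: the namespace, resource names, and image path are placeholders, not the actual values shipped by the upgrade script.

```bash
# Hypothetical upgrade sketch (names and image path are assumptions)
kubectl -n velero set image deployment/velero velero=gcr.io/heptio-images/velero:v1.2.0

# Add the velero.io/ namespace to the provider field of existing locations
kubectl -n velero patch backupstoragelocation default \
  --type merge -p '{"spec":{"provider":"velero.io/aws"}}'
kubectl -n velero patch volumesnapshotlocation default \
  --type merge -p '{"spec":{"provider":"velero.io/aws"}}'
```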
{ "category": "Runtime", "file_name": "move-plugin-repos.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark restic repo get\" layout: docs Get restic repositories Get restic repositories ``` ark restic repo get [flags] ``` ``` -h, --help help for get --label-columns stringArray a comma-separated list of labels to be displayed as columns -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default \"table\") -l, --selector string only show items matching this label selector --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restic repositories" } ]
{ "category": "Runtime", "file_name": "ark_restic_repo_get.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Spiderpool can be configured to serve metrics. And spiderpool metrics provide the insight of Spiderpool Agent and Spiderpool Controller. The metrics of spiderpool controller is set by the following pod environment: | environment | description | default | ||-| - | | SPIDERPOOLENABLEDMETRIC | enable metrics | false | | SPIDERPOOLENABLEDDEBUG_METRIC | enable debug level metrics | false | | SPIDERPOOLMETRICHTTP_PORT | metrics port | 5721 | The metrics of spiderpool agent is set by the following pod environment: | environment | description | default | ||-|| | SPIDERPOOLENABLEDMETRIC | enable metrics | false | | SPIDERPOOLENABLEDDEBUG_METRIC | enable debug level metrics | false | | SPIDERPOOLMETRICHTTP_PORT | metrics port | 5711 | Check the environment variable `SPIDERPOOLENABLEDMETRIC` of the daemonset `spiderpool-agent` for whether it is already set to `true` or not. Check the environment variable `SPIDERPOOLENABLEDMETRIC` of deployment `spiderpool-controller` for whether it is already set to `true` or not. ```shell kubectl -n kube-system get daemonset spiderpool-agent -o yaml kubectl -n kube-system get deployment spiderpool-controller -o yaml ``` You can set one or both of them to `true`. For example, let's enable spiderpool agent metrics by running `helm upgrade --set spiderpoolAgent.prometheus.enabled=true`. Spiderpool agent exports some metrics related with IPAM allocation and" }, { "data": "Currently, those include: | Name | description | |--|--| | spiderpoolipamallocation_counts | Number of IPAM allocation requests that Spiderpool Agent received , prometheus type: counter | | spiderpoolipamallocationfailurecounts | Number of Spiderpool Agent IPAM allocation failures, prometheus type: counter | | spiderpoolipamallocationupdateippoolconflictcounts | Number of Spiderpool Agent IPAM allocation update IPPool conflicts, prometheus type: counter | | spiderpoolipamallocationerrinternal_counts | Number of Spiderpool Agent IPAM allocation internal errors, prometheus type: counter | | spiderpoolipamallocationerrnoavailablepool_counts | Number of Spiderpool Agent IPAM allocation no available IPPool errors, prometheus type: counter | | spiderpoolipamallocationerrretriesexhaustedcounts | Number of Spiderpool Agent IPAM allocation retries exhausted errors, prometheus type: counter | | spiderpoolipamallocationerripusedout_counts | Number of Spiderpool Agent IPAM allocation IP addresses used out errors, prometheus type: counter | | spiderpoolipamallocationaverageduration_seconds | The average duration of all Spiderpool Agent allocation processes, prometheus type: gauge | | spiderpoolipamallocationmaxduration_seconds | The maximum duration of Spiderpool Agent allocation process (per-process), prometheus type: gauge | | spiderpoolipamallocationminduration_seconds | The minimum duration of Spiderpool Agent allocation process (per-process), prometheus type: gauge | | spiderpoolipamallocationlatestduration_seconds | The latest duration of Spiderpool Agent allocation process (per-process), prometheus type: gauge | | spiderpoolipamallocationdurationseconds | Histogram of IPAM allocation duration in seconds, prometheus type: histogram | | spiderpoolipamallocationaveragelimitdurationseconds | The average duration of all Spiderpool Agent allocation queuing, prometheus type: gauge | | spiderpoolipamallocationmaxlimitdurationseconds | The maximum duration of Spiderpool Agent allocation queuing, prometheus type: gauge | | spiderpoolipamallocationminlimitdurationseconds | The minimum duration of 
Spiderpool Agent allocation queuing, prometheus type: gauge | | spiderpoolipamallocationlatestlimitdurationseconds | The latest duration of Spiderpool Agent allocation queuing, prometheus type: gauge | | spiderpoolipamallocationlimitduration_seconds | Histogram of IPAM allocation queuing duration in seconds, prometheus type: histogram | | spiderpoolipamrelease_counts | Count of the number of Spiderpool Agent received the IPAM release requests, prometheus type: counter | | spiderpoolipamreleasefailurecounts | Number of Spiderpool Agent IPAM release failure, prometheus type: counter | | spiderpoolipamreleaseupdateippoolconflictcounts | Number of Spiderpool Agent IPAM release update IPPool conflicts, prometheus type: counter | | spiderpoolipamreleaseerrinternal_counts | Number of Spiderpool Agent IPAM releasing internal error, prometheus type: counter | | spiderpoolipamreleaseerrretriesexhaustedcounts | Number of Spiderpool Agent IPAM releasing retries exhausted error, prometheus type: counter | | spiderpoolipamreleaseaverageduration_seconds | The average duration of all Spiderpool Agent release processes, prometheus type: gauge | | spiderpoolipamreleasemaxduration_seconds | The maximum duration of Spiderpool Agent release process (per-process), prometheus type: gauge | | spiderpoolipamreleaseminduration_seconds | The minimum duration of Spiderpool Agent release process (per-process), prometheus type: gauge | | spiderpoolipamreleaselatestduration_seconds | The latest duration of Spiderpool Agent release process (per-process), prometheus type: gauge | | spiderpoolipamreleasedurationseconds | Histogram of IPAM release duration in seconds, prometheus type: histogram | | spiderpoolipamreleaseaveragelimitdurationseconds | The average duration of all Spiderpool Agent release queuing, prometheus type: gauge | | spiderpoolipamreleasemaxlimitdurationseconds | The maximum duration of Spiderpool Agent release queuing, prometheus type: gauge | | spiderpoolipamreleaseminlimitdurationseconds | The minimum duration of Spiderpool Agent release queuing, prometheus type: gauge | | spiderpoolipamreleaselatestlimitdurationseconds | The latest duration of Spiderpool Agent release queuing, prometheus type: gauge | | spiderpoolipamreleaselimitduration_seconds | Histogram of IPAM release queuing duration in seconds, prometheus type: histogram | | spiderpooldebugautopoolwaitedforavailable_counts | Number of Spiderpool Agent IPAM allocation wait for auto-created IPPool available, prometheus type: counter. (debug level metric) | Spiderpool controller exports some metrics related with SpiderIPPool IP garbage collection. Currently, those include: | Name | description | |--|--| | spiderpoolipgc_counts | Number of Spiderpool Controller IP garbage collection, prometheus type: counter. | | spiderpoolipgcfailurecounts | Number of Spiderpool Controller IP garbage collection failures, prometheus type: counter. | | spiderpooltotalippool_counts | Number of Spiderpool IPPools, prometheus type: gauge. | | spiderpooldebugippooltotalip_counts | Number of Spiderpool IPPool corresponding total" } ]
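Once metrics are enabled, a Prometheus instance can scrape the agent and controller endpoints on the ports listed above (5711 and 5721 by default). A minimal static scrape configuration is sketched below; the target IPs are placeholders, and in a real cluster Kubernetes service discovery or a ServiceMonitor would normally be used instead:

```bash
cat <<'EOF' > spiderpool-scrape.yaml
scrape_configs:
  - job_name: spiderpool-agent        # agent metrics, port 5711 by default
    static_configs:
      - targets: ["10.0.0.10:5711"]   # placeholder node IP
  - job_name: spiderpool-controller   # controller metrics, port 5721 by default
    static_configs:
      - targets: ["10.0.0.11:5721"]   # placeholder pod/node IP
EOF
```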
{ "category": "Runtime", "file_name": "metrics.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Get contents of a policy BPF map ``` cilium-dbg bpf policy get [flags] ``` ``` --all Dump all policy maps -h, --help help for get -n, --numeric Do not resolve IDs -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage policy related BPF maps" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_policy_get.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Datadog, Inc. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC" } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "In this article, we will show you two serverless functions in Rust and WasmEdge deployed on AWS Lambda. One is the image processing function, the other one is the TensorFlow inference function. For the insight on why WasmEdge on AWS Lambda, please refer to the article Since our demo WebAssembly functions are written in Rust, you will need a . Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode. ```bash rustup target add wasm32-wasi ``` The demo application front end is written in , and deployed on AWS Lambda. We will assume that you already have the basic knowledge of how to work with Next.js and Lambda. Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A deployed through GitHub Pages is available. Fork the to get started. To deploy the application on AWS Lambda, follow the guide in the repository . This repo is a standard Next.js application. The backend serverless function is in the `api/functions/image_grayscale` folder. The `src/main.rs` file contains the Rust programs source code. The Rust program reads image data from the `STDIN`, and then outputs the black-white image to the `STDOUT`. ```rust use hex; use std::io::{self, Read}; use image::{ImageOutputFormat, ImageFormat}; fn main() { let mut buf = Vec::new(); io::stdin().readtoend(&mut buf).unwrap(); let imageformatdetected: ImageFormat = image::guess_format(&buf).unwrap(); let img = image::loadfrommemory(&buf).unwrap(); let filtered = img.grayscale(); let mut buf = vec![]; match imageformatdetected { ImageFormat::Gif => { filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap(); }, _ => { filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap(); }, }; io::stdout().write_all(&buf).unwrap(); io::stdout().flush().unwrap(); } ``` You can use Rusts `cargo` tool to build the Rust program into WebAssembly bytecode or native code. ```bash cd api/functions/image-grayscale/ cargo build --release --target wasm32-wasi ``` Copy the build artifacts to the `api` folder. ```bash cp target/wasm32-wasi/release/grayscale.wasm ../../ ``` When we build the docker image, `api/pre.sh` is executed. `pre.sh` installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution. The script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that runs the compiled `grayscale.so` file generated by for better performance. ```javascript const { spawn } = require('child_process'); const path = require('path'); function _runWasm(reqBody) { return new Promise(resolve => { const wasmedge = spawn(path.join(dirname, 'wasmedge'), [path.join(dirname, 'grayscale.so')]); let d = []; wasmedge.stdout.on('data', (data) => { d.push(data); }); wasmedge.on('close', (code) => { let buf = Buffer.concat(d); resolve(buf); }); wasmedge.stdin.write(reqBody); wasmedge.stdin.end(''); }); } ``` The `exports.handler` part of `hello.js` exports an async function handler, used to handle different events every time the serverless function is called. In this example, we simply process the image by calling the function above and return the result, but more complicated event-handling behavior may be defined based on your need. 
We also need to return some `Access-Control-Allow` headers to avoid errors when calling the serverless function from a" }, { "data": "You can read more about CORS errors if you encounter them when replicating our example. ```javascript exports.handler = async function(event, context) { var typedArray = new Uint8Array(event.body.match(/[\\da-f]{2}/gi).map(function (h) { return parseInt(h, 16); })); let buf = await _runWasm(typedArray); return { statusCode: 200, headers: { \"Access-Control-Allow-Headers\" : \"Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token\", \"Access-Control-Allow-Origin\": \"*\", \"Access-Control-Allow-Methods\": \"DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT\" }, body: buf.toString('hex') }; } ``` Now we have the WebAssembly bytecode function and the script to load and connect to the web request. In order to deploy them as a function service on AWS Lambda, you still need to package the whole thing into a Docker image. We are not going to cover in detail about how to build the Docker image and deploy on AWS Lambda, as there are detailed steps in the . However, we will highlight some lines in the for you to avoid some pitfalls. ```dockerfile FROM public.ecr.aws/lambda/nodejs:14 WORKDIR /var/task RUN yum update -y && yum install -y curl tar gzip COPY *.wasm ./ COPY pre.sh ./ RUN chmod +x pre.sh RUN ./pre.sh COPY *.js ./ CMD [ \"hello.handler\" ] ``` First, we are building the image from . The advantage of using AWS Lambda's base image is that it includes the , which we need to implement in our Docker image as it is required by AWS Lambda. The Amazon Linux uses `yum` as the package manager. These base images contain the Amazon Linux Base operating system, the runtime for a given language, dependencies and the Lambda Runtime Interface Client (RIC), which implements the Lambda . The Lambda Runtime Interface Client allows your runtime to receive requests from and send requests to the Lambda service. Second, we need to put our function and all its dependencies in the `/var/task` directory. Files in other folders will not be executed by AWS Lambda. Third, we need to define the default command when we start our container. `CMD [ \"hello.handler\" ]` means that we will call the `handler` function in `hello.js` whenever our serverless function is called. Recall that we have defined and exported the handler function in the previous steps through `exports.handler = ...` in `hello.js`. Docker images built from AWS Lambda's base images can be tested locally following . Local testing requires , which is already installed in all of AWS Lambda's base images. To test your image, first, start the Docker container by running: ```bash docker run -p 9000:8080 myfunction:latest ``` This command sets a function endpoint on your local machine at `http://localhost:9000/2015-03-31/functions/function/invocations`. Then, from a separate terminal window, run: ```bash curl -XPOST \"http://localhost:9000/2015-03-31/functions/function/invocations\" -d '{}' ``` And you should get your expected output in the terminal. If you don't want to use a base image from AWS Lambda, you can also use your own base image and install RIC and/or RIE while building your Docker image. Just follow Create an image from an alternative base image section from" }, { "data": "That's it! After building your Docker image, you can deploy it to AWS Lambda following steps outlined in the repository . Now your serverless function is ready to rock! 
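Because the handler above decodes the request body as hex pairs, a quick way to exercise the locally running image through the Runtime Interface Emulator endpoint mentioned earlier is to hex-encode a test image first. This is only a sketch; the file name is a placeholder:

```bash
# Hex-encode a local test image and post it as the Lambda event body
BODY=$(xxd -p test.png | tr -d '\n')
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
  -d "{\"body\": \"$BODY\"}"
```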
The application allows users to upload an image and then invoke a serverless function to classify the main subject on the image. It is in as the previous example but in the `tensorflow` branch. The backend serverless function for image classification is in the `api/functions/image-classification` folder in the `tensorflow` branch. The `src/main.rs` file contains the Rust programs source code. The Rust program reads image data from the `STDIN`, and then outputs the text output to the `STDOUT`. It utilizes the WasmEdge Tensorflow API to run the AI inference. ```rust pub fn main() { // Step 1: Load the TFLite model let modeldata: &[u8] = includebytes!(\"models/mobilenetv11.0224/mobilenetv11.0224_quant.tflite\"); let labels = includestr!(\"models/mobilenetv11.0224/labelsmobilenetquantv1224.txt\"); // Step 2: Read image from STDIN let mut buf = Vec::new(); io::stdin().readtoend(&mut buf).unwrap(); // Step 3: Resize the input image for the tensorflow model let flatimg = wasmedgetensorflowinterface::loadjpgimageto_rgb8(&buf, 224, 224); // Step 4: AI inference let mut session = wasmedgetensorflowinterface::Session::new(&modeldata, wasmedgetensorflow_interface::ModelType::TensorFlowLite); session.addinput(\"input\", &flatimg, &[1, 224, 224, 3]) .run(); let resvec: Vec<u8> = session.getoutput(\"MobilenetV1/Predictions/Reshape_1\"); // Step 5: Find the food label that responds to the highest probability in res_vec // ... ... let mut label_lines = labels.lines(); for i in 0..maxindex { label_lines.next(); } // Step 6: Generate the output text let classname = labellines.next().unwrap().to_string(); if max_value > 50 { println!(\"It {} a <a href='https://www.google.com/search?q={}'>{}</a> in the picture\", confidence.tostring(), classname, class_name); } else { println!(\"It does not appears to be any food item in the picture.\"); } } ``` You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code. ```bash cd api/functions/image-classification/ cargo build --release --target wasm32-wasi ``` Copy the build artifacts to the `api` folder. ```bash cp target/wasm32-wasi/release/classify.wasm ../../ ``` Again, the `api/pre.sh` script installs WasmEdge runtime and its Tensorflow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment. The script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice runs the compiled `classify.so` file generated by for better performance. The handler function is similar to our previous example, and is omitted here. ```javascript const { spawn } = require('child_process'); const path = require('path'); function _runWasm(reqBody) { return new Promise(resolve => { const wasmedge = spawn( path.join(dirname, 'wasmedge-tensorflow-lite'), [path.join(dirname, 'classify.so')], {env: {'LDLIBRARYPATH': dirname}} ); let d = []; wasmedge.stdout.on('data', (data) => { d.push(data); }); wasmedge.on('close', (code) => { resolve(d.join('')); }); wasmedge.stdin.write(reqBody); wasmedge.stdin.end(''); }); } exports.handler = ... // _runWasm(reqBody) is called in the handler ``` You can build your Docker image and deploy the function in the same way as outlined in the previous example. Now you have created a web app for subject classification! Next, it's your turn to use the as a template to develop Rust serverless function on AWS Lambda. Looking forward to your great work." } ]
{ "category": "Runtime", "file_name": "aws.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "name: Flaking Test about: Report flaky tests or jobs in Submariner CI labels: flake <!-- Please only use this template for submitting reports about flaky tests or jobs (pass or fail with no underlying change in code) in Submariner CI --> Which jobs are flaking: Which test(s) are flaking: Testgrid link: Reason for failure: Anything else we need to know:" } ]
{ "category": "Runtime", "file_name": "flaking-test.md", "project_name": "Submariner", "subcategory": "Cloud Native Network" }
[ { "data": "This document lists all the API resource versions currently supported by Antrea Mulit-cluster. Antrea Multi-cluster is supported since v1.5.0. Most Custom Resource Definitions (CRDs) used by Antrea Multi-cluster are in the API group `multicluster.crd.antrea.io`, and two CRDs from are in group `multicluster.x-k8s.io` which is defined by Kubernetes upstream . | CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal | | | -- | - | -- | | | `ClusterSets` | v1alpha2 | v1.13.0 | N/A | N/A | | `MemberClusterAnnounces` | v1alpha1 | v1.5.0 | N/A | N/A | | `ResourceExports` | v1alpha1 | v1.5.0 | N/A | N/A | | `ResourceImports` | v1alpha1 | v1.5.0 | N/A | N/A | | `Gateway` | v1alpha1 | v1.7.0 | N/A | N/A | | `ClusterInfoImport` | v1alpha1 | v1.7.0 | N/A | N/A | | CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal | | - | -- | - | -- | | | `ServiceExports` | v1alpha1 | v1.5.0 | N/A | N/A | | `ServiceImports` | v1alpha1 | v1.5.0 | N/A | N/A | | CRD | API group | CRD version | Introduced in | Deprecated in | Removed in | | | - | -- | - | - | - | | `ClusterClaims` | `multicluster.crd.antrea.io` | v1alpha1 | v1.5.0 | v1.8.0 | v1.8.0 | | `ClusterClaims` | `multicluster.crd.antrea.io` | v1alpha2 | v1.8.0 | v1.13.0 | v1.13.0 | | `ClusterSets` | `multicluster.crd.antrea.io` | v1alpha1 | v1.5.0 | v1.13.0 | N/A |" } ]
{ "category": "Runtime", "file_name": "api.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "(network-ovn-setup)= See the following sections for how to set up a basic OVN network, either as a standalone network or to host a small Incus cluster. Complete the following steps to create a standalone OVN network that is connected to a managed Incus parent bridge network (for example, `incusbr0`) for outbound connectivity. Install the OVN tools on the local server: sudo apt install ovn-host ovn-central Configure the OVN integration bridge: sudo ovs-vsctl set open_vswitch . \\ externalids:ovn-remote=unix:/var/run/ovn/ovnsbdb.sock \\ external_ids:ovn-encap-type=geneve \\ external_ids:ovn-encap-ip=127.0.0.1 Create an OVN network: incus network set <parentnetwork> ipv4.dhcp.ranges=<IPrange> ipv4.ovn.ranges=<IP_range> incus network create ovntest --type=ovn network=<parent_network> Create an instance that uses the `ovntest` network: incus init images:ubuntu/22.04 c1 incus config device override c1 eth0 network=ovntest incus start c1 Run to show the instance information: ```{terminal} :input: incus list :scroll: ++++-+--+--+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++++-+--+--+ | c1 | RUNNING | 192.0.2.2 (eth0) | 2001:db8:cff3:5089:216:3eff:fef0:549f (eth0) | CONTAINER | 0 | ++++-+--+--+ ``` Complete the following steps to set up an Incus cluster that uses an OVN network. Just like Incus, the distributed database for OVN must be run on a cluster that consists of an odd number of members. The following instructions use the minimum of three servers, which run both the distributed database for OVN and the OVN controller. In addition, you can add any number of servers to the Incus cluster that run only the OVN controller. Complete the following steps on the three machines that you want to run the distributed database for OVN: Install the OVN tools: sudo apt install ovn-central ovn-host Mark the OVN services as enabled to ensure that they are started when the machine boots: systemctl enable ovn-central systemctl enable ovn-host Stop OVN for now: systemctl stop ovn-central Note down the IP address of the machine: ip -4 a Open `/etc/default/ovn-central` for editing. Paste in one of the following configurations (replace `<server1>`, `<server2>` and `<server_3>` with the IP addresses of the respective machines, and `<local>` with the IP address of the machine that you are on). 
For the first machine: ``` OVNCTLOPTS=\" \\ --db-nb-addr=<local> \\ --db-nb-create-insecure-remote=yes \\ --db-sb-addr=<local> \\ --db-sb-create-insecure-remote=yes \\ --db-nb-cluster-local-addr=<local> \\ --db-sb-cluster-local-addr=<local> \\ --ovn-northd-nb-db=tcp:<server1>:6641,tcp:<server2>:6641,tcp:<server_3>:6641 \\ --ovn-northd-sb-db=tcp:<server1>:6642,tcp:<server2>:6642,tcp:<server_3>:6642\" ``` For the second and third machine: ``` OVNCTLOPTS=\" \\ --db-nb-addr=<local> \\ --db-nb-cluster-remote-addr=<server_1> \\ --db-nb-create-insecure-remote=yes \\ --db-sb-addr=<local> \\ --db-sb-cluster-remote-addr=<server_1> \\ --db-sb-create-insecure-remote=yes \\ --db-nb-cluster-local-addr=<local> \\ --db-sb-cluster-local-addr=<local> \\ --ovn-northd-nb-db=tcp:<server1>:6641,tcp:<server2>:6641,tcp:<server_3>:6641 \\ --ovn-northd-sb-db=tcp:<server1>:6642,tcp:<server2>:6642,tcp:<server_3>:6642\" ``` Start OVN: systemctl start ovn-central On the remaining machines, install only `ovn-host` and make sure it is enabled: sudo apt install ovn-host systemctl enable ovn-host On all machines, configure Open vSwitch (replace the variables as described above): sudo ovs-vsctl set open_vswitch" }, { "data": "\\ externalids:ovn-remote=tcp:<server1>:6642,tcp:<server2>:6642,tcp:<server3>:6642 \\ external_ids:ovn-encap-type=geneve \\ external_ids:ovn-encap-ip=<local> Create an Incus cluster by running `incus admin init` on all machines. On the first machine, create the cluster. Then join the other machines with tokens by running on the first machine and specifying the token when initializing Incus on the other machine. On the first machine, create and configure the uplink network: incus network create UPLINK --type=physical parent=<uplinkinterface> --target=<machinename_1> incus network create UPLINK --type=physical parent=<uplinkinterface> --target=<machinename_2> incus network create UPLINK --type=physical parent=<uplinkinterface> --target=<machinename_3> incus network create UPLINK --type=physical parent=<uplinkinterface> --target=<machinename_4> incus network create UPLINK --type=physical \\ ipv4.ovn.ranges=<IP_range> \\ ipv6.ovn.ranges=<IP_range> \\ ipv4.gateway=<gateway> \\ ipv6.gateway=<gateway> \\ dns.nameservers=<name_server> To determine the required values: Uplink interface : A high availability OVN cluster requires a shared layer 2 network, so that the active OVN chassis can move between cluster members (which effectively allows the OVN router's external IP to be reachable from a different host). Therefore, you must specify either an unmanaged bridge interface or an unused physical interface as the parent for the physical network that is used for OVN uplink. The instructions assume that you are using a manually created unmanaged bridge. See for instructions on how to set up this bridge. Gateway : Run `ip -4 route show default` and `ip -6 route show default`. Name server : Run `resolvectl`. IP ranges : Use suitable IP ranges based on the assigned IPs. Still on the first machine, configure Incus to be able to communicate with the OVN DB cluster. 
To do so, find the value for `ovn-northd-nb-db` in `/etc/default/ovn-central` and provide it to Incus with the following command: incus config set network.ovn.northbound_connection <ovn-northd-nb-db> Finally, create the actual OVN network (on the first machine): incus network create my-ovn --type=ovn To test the OVN network, create some instances and check the network connectivity: incus launch images:ubuntu/22.04 c1 --network my-ovn incus launch images:ubuntu/22.04 c2 --network my-ovn incus launch images:ubuntu/22.04 c3 --network my-ovn incus launch images:ubuntu/22.04 c4 --network my-ovn incus list incus exec c4 -- bash ping <IP of c1> ping <nameserver> ping6 -n www.example.com Complete the following steps to have the OVN controller send its logs to Incus. Enable the syslog socket: incus config set core.syslog_socket=true Open `/etc/default/ovn-host` for editing. Paste the following configuration: OVNCTLOPTS=\" \\ --ovn-controller-log='-vsyslog:info --syslog-method=unix:/var/lib/incus/syslog.socket'\" Restart the OVN controller: systemctl restart ovn-controller.service You can now use to see logged network ACL traffic from the OVN controller: incus monitor --type=network-acls You can also send the logs to Loki. To do so, add the `network-acl` value to the {config:option}`server-loki:loki.types` configuration key, for example: incus config set loki.types=network-acl ```{tip} You can include logs for OVN `northd`, OVN north-bound `ovsdb-server`, and OVN south-bound `ovsdb-server` as well. To do so, edit `/etc/default/ovn-central`: OVNCTLOPTS=\" \\ --ovn-northd-log='-vsyslog:info --syslog-method=unix:/var/lib/incus/syslog.socket' \\ --ovn-nb-log='-vsyslog:info --syslog-method=unix:/var/lib/incus/syslog.socket' \\ --ovn-sb-log='-vsyslog:info --syslog-method=unix:/var/lib/incus/syslog.socket'\" sudo systemctl restart ovn-central.service ```" } ]
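After the three database members are up, it can be useful to confirm that the OVN northbound and southbound clusters have actually formed before creating networks. One way to check (control socket paths may vary slightly between distributions) is:

```bash
# Show Raft cluster membership and leadership for both OVN databases
sudo ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
sudo ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
```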
{ "category": "Runtime", "file_name": "network_ovn_setup.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This is for pull requesting new features, improvements and changes! For fixing bugs use https://github.com/openebs/openebs/compare/?template=bugs.md --> <!-- Don't forget to follow code style, and update documentation and tests if needed --> <!-- If you can't answer some sections, please delete them --> <!-- Describe your changes in detail --> <!-- Why is this change required? How can it benefit other users? --> <!-- If it is from an open issue, please link to the issue here --> <!-- Please describe in detail how you tested your changes --> <!-- Include details of your testing environment, and the tests you ran to see how your change affects other areas of the code, etc <!-- Will your changes brake backward compatibility or not? --> <!-- Add screenshots of your changes -->" } ]
{ "category": "Runtime", "file_name": "features.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "For debugging or inspection you may want to extract an ACI manifest to stdout. ``` { \"acVersion\": \"0.8.11\", \"acKind\": \"ImageManifest\", ... ``` | Flag | Default | Options | Description | | | | | | | `--pretty-print` | `true` | `true` or `false` | Apply indent to format the output | There are cases where you might want to export the ACI from the store to copy to another machine, file server, etc. ``` $ tar xvf etcd.aci ``` NOTES: A matching image must be fetched before doing this operation, rkt will not attempt to download an image first, this subcommand will incur no-network I/O. The exported ACI file might be different than the original one because rkt image export always returns uncompressed ACIs. | Flag | Default | Options | Description | | | | | | | `--overwrite` | `false` | `true` or `false` | Overwrite output ACI | For debugging or inspection you may want to extract an ACI to a directory on disk. There are a few different options depending on your use case but the basic command looks like this: ``` etcd-extracted etcd-extracted/manifest etcd-extracted/rootfs etcd-extracted/rootfs/etcd etcd-extracted/rootfs/etcdctl ... ``` NOTE: Like with rkt image export, a matching image must be fetched before doing this operation. Now there are some flags that can be added to this: To get just the rootfs use: ``` etcd-extracted etcd-extracted/etcd etcd-extracted/etcdctl ... ``` If you want the image rendered as it would look ready-to-run inside of the rkt stage2 then use `rkt image render`. NOTE: this will not use overlayfs or any other mechanism. This is to simplify the cleanup: to remove the extracted files you can run a simple `rm -Rf`. | Flag | Default | Options | Description | | | | | | | `--overwrite` | `false` | `true` or `false` | Overwrite output directory | | `--rootfs-only` | `false` | `true` or `false` | Extract rootfs only | You can garbage collect the rkt store to clean up unused internal data and remove old images. By default, images not used in the last 24h will be removed. This can be configured with the `--grace-period` flag. ``` rkt: removed treestore \"deps-sha512-219204dd54481154aec8f6eafc0f2064d973c8a2c0537eab827b7414f0a36248\" rkt: removed treestore \"deps-sha512-3f2a1ad0e9739d977278f0019b6d7d9024a10a2b1166f6c9fdc98f77a357856d\" rkt: successfully removed aci for image: \"sha512-e39d4089a224718c41e6bef4c1ac692a6c1832c8c69cf28123e1f205a9355444\" (\"coreos.com/rkt/stage1\") rkt: successfully removed aci for image: \"sha512-0648aa44a37a8200147d41d1a9eff0757d0ac113a22411f27e4e03cbd1e84d0d\"" }, { "data": "rkt: 2 image(s) successfully removed ``` | Flag | Default | Options | Description | | | | | | | `--grace-period` | `24h0m0s` | A time | Duration to wait since an image was last used before removing it | You can get a list of images in the local store with their keys, names and import times. ``` ID NAME IMPORT TIME LAST USED SIZE LATEST sha512-91e98d7f1679 coreos.com/etcd:v2.0.9 6 days ago 2 minutes ago 12MiB false sha512-a03f6bad952b coreos.com/rkt/stage1:0.7.0 55 minutes ago 2 minutes ago 143MiB false ``` A more detailed output can be had by adding the `--full` flag: ``` ID NAME IMPORT TIME LAST USED SIZE LATEST sha512-96323da393621d846c632e71551b77089ac0b004ceb5c2362be4f5ced2212db9 registry-1.docker.io/library/redis:latest 2015-12-14 12:30:33.652 +0100 CET 2015-12-14 12:33:40.812 +0100 CET 113309184 true ``` In rkt, dependencies between ACI images form a directed acyclic graph. An image is pre-rendered in what is called the tree store. 
A rendered image in the tree store contains a ready-to-execute filesystem tree with all the files of the image plus the relevant files of its dependencies as specified by the . The size shown in `rkt image list` is the uncompressed size of the image itself plus the size of its related tree stores. Note: There's currently a bug in the size calculation logic. rkt assumes there can only be one tree store per image but, if a dependency changes, there will be multiple different tree stores. This bug is tracked by | Flag | Default | Options | Description | | | | | | | `--fields` | `id,name,importtime,lastused,size,latest` | A comma-separated list with one or more of `id`, `name`, `importtime`, `lastused`, `size`, `latest` | Comma-separated list of fields to display | | `--full` | `false` | `true` or `false` | Use long output format | | `--no-legend` | `false` | `true` or `false` | Suppress a legend with the list | | `--order` | `asc` | `asc` or `desc` | Choose the sorting order if at least one sort field is provided (`--sort`) | | `--sort` | `importtime` | A comma-separated list with one or more of `id`, `name`, `importtime`, `lastused`, `size`, `latest` | Sort the output according to the provided comma-separated list of fields | Given multiple image IDs or image names you can remove them from the local store. ``` rkt: successfully removed aci for image: \"sha512-a03f6bad952bd548c2a57a5d2fbb46679aff697ccdacd6c62e1e1068d848a9d4\" (\"coreos.com/rkt/stage1\") rkt: successfully removed aci for image: \"sha512-91e98d7f167905b69cce91b163963ccd6a8e1c4bd34eeb44415f0462e4647e27\" (\"coreos.com/etcd\") rkt: 2 image(s) successfully removed ``` Given one or more image IDs or image names, verify will verify that their ondisk checksum matches the value previously calculated on render. ``` successfully verified checksum for image: \"quay.io/coreos/etcd:v3.1.0\" (\"sha512-e70ec975ce5327ea52c4a30cc4a951ecea55217a290e866e70888517964ba700\") successfully verified checksum for image: \"sha512-887890e697d9\" (\"sha512-887890e697d9a0229eff22436def3c436cb4b18f72ac274c8c05427b39539307\") ``` See the table with ." } ]
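Combining the listing flags documented above, for example, a sorted report of the largest images in the local store could be produced with:

```bash
# Show image IDs, names and sizes, largest first, without the legend line
rkt image list --fields=id,name,size --sort=size --order=desc --no-legend
```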
{ "category": "Runtime", "file_name": "image.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Supporting Longhorn snapshot CRD allows users to query/create/delete volume snapshots using kubectl. This is one step closer to making kubectl as Longhorn CLI. Also, this will be a building block for the future auto-attachment/auto-detachment refactoring for snapshot creation, deletion, volume cloning. https://github.com/longhorn/longhorn/issues/3144 Support Longhorn snapshot CRD to allow users to query/create/delete volume snapshots using kubectl. A building block for the future auto-attachment/auto-detachment refactoring for snapshot creation, deletion, volume cloning. Pay attention to scalability problem. A cluster with 1k volumes might have 30k snapshots. We should make sure not to overload the controller work-queue as well as making too many grpc calls to engine processes. Introduce a new CRD, snapshot CRD and the snapshot controller. The life cycle of a snapshot CR is as below: Create (by engine monitor/kubectl) When user create a new snapshot CR, Longhorn try to create a new snapshot When there is a snapshot in the volume that isn't corresponding to any snapshot CR, Longhorn will generate snapshot CR for that snapshot Update (by snapshot controller) Snapshot controller will reconcile the snapshot CR status with the snapshot info inside the volume engine Delete (by engine monitor/kubectl) When a snapshot CR is deleted (by user or by Longhorn), snapshot controller will make sure that the snapshot are removed from the engine before remove the finalizer and allow the deletion Deleting volume should be blocked until all of its snapshot are removed When there is a system generated snapshot CR that isn't corresponding to any snapshot info inside engine status, Longhorn will delete the snapshot CR Before this enhancement, users have to use Longhorn UI to query/create/delete volume snapshot. For user with only access to CLI, another option is to use our . However, the Python client are not as intuitive and easy as using kubectl. After this enhancement, users will be able to use kubectl to query/create/delete Longhorn snapshots just like what they can do with Longhorn backups. There is no additional requirement for users to use this feature. The experience details should be in the `User Experience In Detail` later. User wants to limit the snapshot count to save space. Snapshot RecurringJobs set to Retain X number of snapshots do not touch unrelated snapshots, so if one ever changes the name of the RecurringJob, the old snapshots will stick around forever. These then have to be manually deleted in the UI. There might be some kind of browser automation framework might also work for pruning large numbers of snapshots, but this feels janky. Having a CRD for snapshots would greatly simplify this, as one could prune snapshots using kubectl, much like how one can currently manage backups using kubectl due to the existence of the `backups.longhorn.io` CRD. There is no additional requirement for users to use this" }, { "data": "We don't want to have disruptive changes in this initial version of snapshot CR (e.g., snapshot API create/delete shouldn't change. Snapshot status is still inside the engine status). We can wait for the snapshot CRD to be a bit more mature (no issue with scalability) and make the disruptive changes in the next version of snapshot CR (e.g., snapshot API create/delete changes to create/delete snapshot CRs. Snapshot status is removed from inside the engine status) Introduce a new CRD, snapshot CRD and the snapshot controller. 
The snapshot CRD is: ```yaml // SnapshotSpec defines the desired state of Longhorn Snapshot type SnapshotSpec struct { // the volume that this snapshot belongs to. // This field is immutable after creation. // Required Volume string `json:\"volume\"` // require creating a new snapshot // +optional CreateSnapshot bool `json:\"createSnapshot\"` // The labels of snapshot // +optional // +nullable Labels map[string]string `json:\"labels\"` } // SnapshotStatus defines the observed state of Longhorn Snapshot type SnapshotStatus struct { // +optional Parent string `json:\"parent\"` // +optional // +nullable Children map[string]bool `json:\"children\"` // +optional MarkRemoved bool `json:\"markRemoved\"` // +optional UserCreated bool `json:\"userCreated\"` // +optional CreationTime string `json:\"creationTime\"` // +optional Size int64 `json:\"size\"` // +optional // +nullable Labels map[string]string `json:\"labels\"` // +optional OwnerID string `json:\"ownerID\"` // +optional Error string `json:\"error,omitempty\"` // +optional RestoreSize int64 `json:\"restoreSize\"` // +optional ReadyToUse bool `json:\"readyToUse\"` } ``` The life cycle of a snapshot CR is as below: Create When a snapshot CR is created, Longhorn mutation webhook will: Add a volume label `longhornvolume: <VOLUME-NAME>` to the snapshot CR. This allow us to efficiently find snapshots corresponding to a volume without having listing potentially thoundsands of snapshots. Add `longhornFinalizerKey` to snapshot CR to prevent it from being removed before Longhorn has change to clean up the corresponding snapshot Populate the value for `snapshot.OwnerReferences` to uniquely identify the volume of this snapshot. This field contains the volume UID to uniquely identify the volume in case the old volume was deleted and a new volume was created with the same name. For user created snapshot CR, the field `Spec.CreateSnapshot` should be set to `true` indicating that Longhorn should provision a new snapshot for this CR. Longhorn snapshot controller will pick up this CR, check to see if there already is a snapshot inside the `engine.Status.Snapshots`. If there is there already a snapshot inside engine.Status.Snapshots, update the snapshot.Status with the snapshot info inside `engine.Status.Snapshots` If there isn't a snapshot inside `engine.Status.Snapshots` then: making a call to engine process to check if there already a snapshot with the same name. This is to make sure we don't accidentally create 2 snapshots with the same name. This logic can be remove after is resolved If the snapshot doesn't inside the engine process, make another call to create the snapshot For the snapshots that are already exist inside `engine.Status.Snapshots` but doesn't have corresponding snapshot CRs" }, { "data": "system generated snapshots), the engine monitoring will generate snapshot CRs for them. The snapshot CR generated by engine monitoring with have `Spec.CreateSnapshot` set to `false`, Longhorn snapshot controller will not create a snapshot for those CRs. The snapshot controller only sync status for those snapshot CRs Update Snapshot CR spec and label are immutable after creation. It will be protected by the admission webhook Sync the snapshot info from `engine.Status.Snapshots` to the `snapshot.Status`. 
If there is any error or if the snapshot is marked as removed, set `snapshot.Status.ReadyToUse` to `false` If there there is no snapshot info inside `engine.Status.Snapshots`, mark the `snapshot.Status.ReadyToUse` to `false`and populate the `snapshot.Status.Error` with the lost message. This snapshot will eventually be updated again when engine monitoring update `engine.Status.Snapshots` or it may be cleanup as the section below Delete Engine monitor will responsible for removing all snapshot CRs that don't have a matching snapshot info and are in one of the following cases: The snapshot CRs with `Spec.CreateSnapshot: false` (snapshot CR that is auto generated by the engine monitoring) The snapshot CRs with `Spec.CreateSnapshot: true` and `snapCR.Status.CreationTime != nil` (snapshot CR that has requested a new snapshot and the snapshot has already provisioned before but no longer exist now) When a snapshot CR has deletion timestamp set, snapshot controller will: Check to see if the actual snapshot inside engine process exist. If it exist do: if has not been marked as removed, issue grpc call to engine process to remove the snapshot Check if the engine is in the purging state, if not issue a snapshot purge call to engine process If it doesn't exist, remove the `longhornFinalizerKey` to allow the deletion of the snapshot CR Integration test plan. For engine enhancement, also requires engine integration test plan. Anything that requires if user want to upgrade to this enhancement How do we address scalability issue? Controller workqueue Disable resync period for snapshot informer Enqueue snapshot only when: There is a change in snapshot CR There is a change in `engine.Status.CurrentState` (volume attach/detach event), `engine.Status.PurgeStatus` (for snapshot deletion event), `engine.Status.Snapshots` (for snapshot creation/update event) This enhancement proposal doesn't make additional call to engine process comparing to the existing design. For the special snapshot `volume-head`, we don't create a snapshot CR for this special snapshot because: From the usecase perspective, user cannot delete this snapshot anyway so there is no need to generate this snapshot The name `volume-head` is not globally uniquely, we might have to include volume name if we want to generate this snapshot CR We would have to implement special logic to prevent user from deleting this special CR On the flip side, if we generate this special CR, user will have a complete picture of the snapshot chain The VolumeHead CR may suddenly point to another actual file during the snapshot creation." } ]
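Based on the SnapshotSpec above, a user-created snapshot request would look roughly like the manifest below. Names and labels are placeholders, and the exact apiVersion string is an assumption — it should be taken from the CRD actually shipped with the release:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: longhorn.io/v1beta2     # assumed API version
kind: Snapshot
metadata:
  name: snap-demo
  namespace: longhorn-system
spec:
  volume: test-vol                  # the Longhorn volume this snapshot belongs to
  createSnapshot: true              # ask Longhorn to provision a new snapshot
  labels:
    created-by: kubectl
EOF
```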
{ "category": "Runtime", "file_name": "20220420-longhorn-snapshot-crd.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "This command prints the rkt version, the appc version rkt is built against, and the Go version and architecture rkt was built with. ``` $ rkt version rkt Version: 1.30.0 appc Version: 0.8.11 Go Version: go1.5.3 Go OS/Arch: linux/amd64" } ]
{ "category": "Runtime", "file_name": "version.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Antrea uses , and to generate CNI gRPC service code. If you make any change to , you can re-generate the code by invoking `make codegen`. Antrea extends Kubernetes API with an extension APIServer and Custom Resource Definitions, and uses [k8s.io/code-generator (release-1.18)](https://github.com/kubernetes/code-generator/tree/release-1.18) to generate clients, informers, conversions, protobuf codecs and other helpers. The resource definitions and their generated codes are located in the conventional paths: `pkg/apis/<resource group>` for internal types and `pkg/apis/<resource group>/<version>` for versioned types and `pkg/client/clientset` for clients. If you make any change to any `types.go`, you can re-generate the code by invoking `make codegen`. Antrea uses the framework for its unit tests. If you add or modify interfaces that need to be mocked, please add or update `MOCKGEN_TARGETS` in accordingly. All the mocks for a given package will typically be generated in a sub-package called `testing`. For example, the mock code for the interface `Baz` defined in the package `pkg/foo/bar` will be generated to `pkg/foo/bar/testing/mock_bar.go`, and you can import it via `pkg/foo/bar/testing`. Same as above, you can re-generate the mock source code (with `mockgen`) by invoking `make codegen`. contains a list of supported metrics, which could be affected by third party component changes. The collection of metrics is done from a running Kind deployment, in order to reflect the current list of metrics which is exposed by Antrea Controller and Agents. To regenerate the metrics list within the document, use with document location as a parameter." } ]
{ "category": "Runtime", "file_name": "code-generation.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "% runc-update \"8\" runc-update - update running container resource constraints runc update [option ...] container-id runc update -r resources.json|- container-id The update command change the resource constraints of a running container instance. The resources can be set using options, or, if -r is used, parsed from JSON provided as a file or from stdin. In case -r is used, the JSON format is like this: { \"memory\": { \"limit\": 0, \"reservation\": 0, \"swap\": 0, \"kernel\": 0, \"kernelTCP\": 0 }, \"cpu\": { \"shares\": 0, \"quota\": 0, \"burst\": 0, \"period\": 0, \"realtimeRuntime\": 0, \"realtimePeriod\": 0, \"cpus\": \"\", \"mems\": \"\" }, \"blockIO\": { \"blkioWeight\": 0 } } --resources|-r resources.json : Read the new resource limits from resources.json. Use - to read from stdin. If this option is used, all other options are ignored. --blkio-weight weight : Set a new io weight. --cpu-period num : Set CPU CFS period to be used for hardcapping (in microseconds) --cpu-quota num : Set CPU usage limit within a given period (in microseconds). --cpu-burst num : Set CPU burst limit within a given period (in microseconds). --cpu-rt-period num : Set CPU realtime period to be used for hardcapping (in microseconds). --cpu-rt-runtime num : Set CPU realtime hardcap limit (in usecs). Allowed cpu time in a given period. --cpu-share num : Set CPU shares (relative weight vs. other containers). --cpuset-cpus list : Set CPU(s) to use. The list can contain commas and ranges. For example: 0-3,7. --cpuset-mems list : Set memory node(s) to use. The list format is the same as for --cpuset-cpus. --memory num : Set memory limit to num bytes. --memory-reservation num : Set memory reservation, or soft limit, to num bytes. --memory-swap num : Set total memory + swap usage to num bytes. Use -1 to unset the limit (i.e. use unlimited swap). --pids-limit num : Set the maximum number of processes allowed in the container. --l3-cache-schema value : Set the value for Intel RDT/CAT L3 cache schema. --mem-bw-schema value : Set the Intel RDT/MBA memory bandwidth schema. runc(8)." } ]
{ "category": "Runtime", "file_name": "runc-update.8.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "Some Longhorn components should be available to correctly handle cleanup/detach Longhorn volumes during the draining process. They are: `csi-attacher`, `csi-provisioner`, `longhorn-admission-webhook`, `longhorn-conversion-webhook`, `share-manager`, `instance-manager`, and daemonset pods in `longhorn-system` namespace. This LEP outlines our existing solutions to protect these components, the issues of these solutions, and the proposal for improvement. https://github.com/longhorn/longhorn/issues/3304 Have better ways to protect Longhorn components (`csi-attacher`, `csi-provisioner`, `longhorn-admission-webhook`, `longhorn-conversion-webhook`) without demanding the users to specify the draining flags to skip these pods. Our existing solutions to protect these components are: For `instance-manager`: dynamically create/delete instance manager PDB For Daemonset pods in `longhorn-system` namespace: we advise the users to specify `--ignore-daemonsets` to ignore them in the `kubectl drain` command. This actually follows the For `csi-attacher`, `csi-provisioner`, `longhorn-admission-webhook`, and `longhorn-conversion-webhook`: we advise the user to specify `--pod-selector` to ignore these pods Proposal for `csi-attacher`, `csi-provisioner`, `longhorn-admission-webhook`, and `longhorn-conversion-webhook`: <br> The problem with the existing solution is that sometime, users could not specify `--pod-selector` for the `kubectl drain` command. For example, for the users that are using the project , they don't have option to specify `--pod-selector`. Also, we would like to have a more automatic way instead of relying on the user to set kubectl drain options. Therefore, we propose the following design: Longhorn manager automatically create PDBs for `csi-attacher`, `csi-provisioner`, `longhorn-admission-webhook`, and `longhorn-conversion-webhook` with `minAvailable` set to 1. This will make sure that each of these deployment has at least 1 running pod during the draining process. Longhorn manager continuously watches the volumes and removes the PDBs once there is no attached volume. This should work for both single-node and multi-node cluster. Before the enhancement, users would need to specify the drain options for drain command to exclude Longhorn pods. Sometimes, this is not possible when users use third-party solution to drain and upgrade kubernetes, such as System Upgrade Controller. After the enhancement, the user can doesn't need to specify the drain options for the drain command to exclude Longhorn pods. None Create a new controller inside Longhorn manager called `longhorn-pdb-controller`, the controller listens for the changes for `csi-attacher`, `csi-provisioner`, `longhorn-admission-webhook`, `longhorn-conversion-webhook`, and Longhorn volumes to adjust the PDB correspondingly. https://github.com/longhorn/longhorn/issues/3304#issuecomment-1467174481 No Upgrade is needed In the original Github ticket, we mentioned that we need to add PDB to protect share manager pod from being drained before its workload pods because if share manager pod doesn't exist then its volume cannot be unmounted in the CSI flow. However, with the fix https://github.com/longhorn/longhorn/issues/5296, we can always umounted the volume even if the share manager is not running. Therefore, we don't need to protect share manager pod." } ]
{ "category": "Runtime", "file_name": "20230307-pdb-for-longhon-csi-and-webhook.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "CURVE monitoring system includes three aspects: indicators collection, indicators storage, and indicators display. For indicators collection, use built-in ; the open source monitoring system is used for indicators storage; is used for indicators display. is a counter library in a multi-threaded environment, which is convenient for recording and viewing various types of user numerical value in program. bvar data can be exported and queried via web portal on the port of the brpc server service, also can view historical trends, statistics and quantile values; bvar also has a built-in prometheus conversion module to convert collected indicators into a format supported by prometheus. The bvar data models used in CURVE are: `bvar::Adder<T>` : counterdefault 0varname << N is equivalent to varname += N. `bvar::Maxer<T>` : maximum valuedefault std::numeric_limits::min()varname << N is equivalent to varname = max(varname, N). `bvar::Miner<T>` : minimum valuedefault std::numeric_limits::max()varname << N is equivalent to varname = min(varname, N). `bvar::IntRecorder` : the average value since use. Note that the attributive here is not \"in a period of time\". Generally, the average value in the time window is derived through Window. `bvar::Window<VAR>`: the accumulated value of a bvar in a period of time. Window is derived from the existing bvar and will be updated automatically. `bvar::PerSecond<VAR>`: the average accumulated value of a bvar per second in a period of time. PerSecond is also a derivative variable that is automatically updated. `bvar::LatencyRecorder`: to recording latency and qps. Input delay, average delay/maximum delay/qps/total times are all available. The specific usage of bvar in CURVE can be viewed: CURVE cluster monitoring uses Prometheus to collect data and Grafana to display. Monitoring content includes: Client, Mds, Chunkserver, Etcd, and machine nodes. The configuration of the monitoring target uses the prometheus file-based service automatic discovery function. The monitoring component is deployed in docker and docker-compose is used for orchestration. The deployment related scripts are in the . <img src=\"../images/monitor.png\" alt=\"monitor\" width=\"800\" /> ```Promethethus``` regularly pulls corresponding data from Brpc Server in MDS, ETCD, Snapshotcloneserver, ChunkServer, and Client. ```docker compose``` is used to orchestrate the configuration of docker components, including Prometheus, Grafana and Repoter. ```python``` scripts. is used to generate the monitoring target configuration that prometheus service discovery depends on. The generated file is in json format and the script depends on to obtain mds, etcd information from the configuration. is used to export the data information required by the daily reporter from Grafana. <img src=\"../images/grafana-example-1.png\" alt=\"monitor\" width=\"1000\" /> <img src=\"../images/grafana-example-3.png\" alt=\"monitor\" width=\"1000\" /> <img src=\"../images/grafana-example-2.png\" alt=\"monitor\" width=\"1000\" /> <img src=\"../images/grafana-reporter.png\" alt=\"monitor\" width=\"800\" />" } ]
{ "category": "Runtime", "file_name": "monitor_en.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark client config\" layout: docs Get and set client configuration file values Get and set client configuration file values ``` -h, --help help for config ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Ark client related commands - Get client configuration file values - Set client configuration file values" } ]
{ "category": "Runtime", "file_name": "ark_client_config.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "The admission controller has been removed in favor of the declarative validation admission policies This proposal is to add support for admission controllers in Rook. An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized There are two special controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. Mutating controllers may modify the objects they admit but validation controllers are only allowed to validate requests. Currently, user can manipulate Custom Resource specs with any values which may result in Rook not functioning as expected. The present validation method in Kubernetes is the OpenAPI schema validation which can be used for basic validations like checking type of data, providing a range for the values etc but anything more complex (checking resource availability, network status, error handling) would not be possible under this scenario. Webhook server which will validate the requests TLS certificates for the server, ValidatingWebhookConfig/MutatingWebhookConfig which will intercept requests and send a HTTPS request to the webhook server. RBAC Components As shown in the above diagram, the admission control process proceeds in two phases. In the first phase, mutating admission controllers are run. In the second phase, validating admission controllers are run. Note again that some of the controllers are both. The admission controllers intercept requests based on the values given in the configuration. In this config, we have to provide the details on What resources should it be looking for ? (Pods, Service) What api version and group does it belong to ? Example : ApiVersion = (v1, v1beta) Group version = (rook.ceph.io, admissionregistration.k8s.io)* What kind of operation should it intercept ? (Create, Update, Delete) A valid base64 encoded CA bundle. What path do we want to send with HTTPs request (/validate, /mutate) A webhook server should be in place (with valid TLS certificates) to intercept any HTTPs request that comes with the above path value. Once the request is intercepted by the server, an object is sent through with the resource specifications. When the webhook server receives Admission Request, it will perform predefined validations on the provided resource values and send back an with the indication whether request is accepted or rejected. If any of the controllers in either phase reject the request, the entire request is rejected immediately and an error is returned to the" }, { "data": "We can use self-signed certificates approved by the Kubernetes Certificate Authority for development purposes.This can be done by following the steps given below. Creating the private key and certs using openssl. 
Sample : Generating the public and private keys ``` openssl genrsa -out ${PRIVATEKEYNAME}.pem 2048 openssl req -new -key ${PRIVATEKEYNAME}.pem -subj \"/CN=${service}.${namespace}.svc\" openssl base64 -d -A -out ${PUBLICKEYNAME}.pem ``` Creating and sending a CSR to kubernetes for approval Sample : Certificate Signing Request (CSR) in Kubernetes ``` apiVersion: certificates.k8s.io/v1 kind: CertificateSigningRequest metadata: name: ${csrName} spec: request: $(cat server.csr | base64 | tr -d '\\n') usages: digital signature key encipherment server auth ``` Verify with `kubectl get csr ${csrName}` Sample : Approval of Signed Certificate ``` kubectl certificate approve ${csrName} ``` If it is approved, then we can check the certificate with following command ``` $(kubectl get csr ${csrName} -o jsonpath='{.status.certificate}') ``` Once approved, a generic secret will be created with the given public and private key which will be later mounted onto the server pod for use. Sample : Creating a Secret in Kubernetes ``` kubectl create secret generic ${secret} \\ --from-file=key.pem=${PRIVATEKEYNAME}.pem \\ --from-file=cert.pem=${PUBLICKEYNAME}.pem ``` Modifying the webhook config to inject CA bundle onto the ValidatingWebhookConfig All the above resources will be created in rook-ceph namespace Using the above approach, the dev/admins will have the responsibility of rotating the certificates when they expire. Below is an excerpt of what a ValidatingWebhookConfig looks like ``` apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: name: demo-webhook webhooks: name: webhook-server.webhook-demo.svc clientConfig: service: name: webhook-server namespace: rook-ceph path: \"/validate\" caBundle: ${CAPEMB64} rules: operations: [ \"CREATE\" ] apiGroups: [\"\"] apiVersions: [\"v1\"] resources: [\"pods\"] ``` We can make changes to the above values according to intercept based on whether a resource is being updated/deleted/created or change the type of resource or the request path which will be sent to the server. Based on the whether the secrets are present, the rook operator will deploy the relevant configuration files onto the cluster and start the server. The secrets will be volume mounted on the rook operator pod dynamically when they are detected. After the volumes are mounted, an HTTPs server would be started. Once the server starts, it will look for the appropriate tls key and crt files in the mounted volumes and start intercepting requests based on the path set in ValidatingWebhookConfig. If the server is unable to find valid certificates, It will not deploy any admission controller components onto the cluster and hence rook will continue to function normally as before. <https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/> <https://github.com/kubernetes/api/blob/master/admission/v1beta1/types.go> <https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/> <https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/>" } ]
{ "category": "Runtime", "file_name": "admission-controller.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Install Longhorn. Install Longhorn stack. ```bash ./install.sh ``` Sample output: ```shell secret/influxdb-creds created persistentvolumeclaim/influxdb created deployment.apps/influxdb created service/influxdb created Deployment influxdb is running. Cloning into 'upgrade-responder'... remote: Enumerating objects: 1077, done. remote: Counting objects: 100% (1076/1076), done. remote: Compressing objects: 100% (454/454), done. remote: Total 1077 (delta 573), reused 1049 (delta 565), pack-reused 1 Receiving objects: 100% (1077/1077), 55.01 MiB | 18.10 MiB/s, done. Resolving deltas: 100% (573/573), done. Release \"longhorn-upgrade-responder\" does not exist. Installing it now. NAME: longhorn-upgrade-responder LAST DEPLOYED: Thu May 11 00:42:44 2023 NAMESPACE: default STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Get the Upgrade Responder server URL by running these commands: export POD_NAME=$(kubectl get pods --namespace default -l \"app.kubernetes.io/name=upgrade-responder,app.kubernetes.io/instance=longhorn-upgrade-responder\" -o jsonpath=\"{.items[0].metadata.name}\") kubectl port-forward $POD_NAME 8080:8314 --namespace default echo \"Upgrade Responder server URL is http://127.0.0.1:8080\" Deployment longhorn-upgrade-responder is running. persistentvolumeclaim/grafana-pvc created deployment.apps/grafana created service/grafana created Deployment grafana is running. [Upgrade Checker] URL : http://longhorn-upgrade-responder.default.svc.cluster.local:8314/v1/checkupgrade [InfluxDB] URL : http://influxdb.default.svc.cluster.local:8086 Database : longhornupgraderesponder Username : root Password : root [Grafana] Dashboard : http://1.2.3.4:30864 Username : admin Password : admin ```" } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "For information on debugging instance issues, see {ref}`instances-troubleshoot`. Here are different ways to help troubleshooting `incus` and `incusd` code. Adding `--debug` flag to any client command will give extra information about internals. If there is no useful info, it can be added with the logging call: logger.Debugf(\"Hello: %s\", \"Debug\") This command will monitor messages as they appear on remote server. On server side the most easy way is to communicate with Incus through local socket. This command accesses `GET /1.0` and formats JSON into human readable form using utility: ```bash curl --unix-socket /var/lib/incus/unix.socket incus/1.0 | jq . ``` See the for available API. {ref}`HTTPS connection to Incus <security>` requires valid client certificate that is generated on first . This certificate should be passed to connection tools for authentication and encryption. If desired, `openssl` can be used to examine the certificate (`~/.config/incus/client.crt`): ```bash openssl x509 -text -noout -in client.crt ``` Among the lines you should see: Certificate purposes: SSL client : Yes ```bash wget --no-check-certificate --certificate=$HOME/.config/incus/client.crt --private-key=$HOME/.config/incus/client.key -qO - https://127.0.0.1:8443/1.0 ``` Some browser plugins provide convenient interface to create, modify and replay web requests. To authenticate against Incus server, convert `incus` client certificate into importable format and import it into browser. For example this produces `client.pfx` in Windows-compatible format: ```bash openssl pkcs12 -clcerts -inkey client.key -in client.crt -export -out client.pfx ``` After that, opening should work as expected. The files of the global {ref}`database <database>` are stored under the `./database/global` sub-directory of your Incus data directory (`/var/lib/incus/database/global`). Since each member of the cluster also needs to keep some data which is specific to that member, Incus also uses a plain SQLite database (the \"local\" database), which you can find in `./database/local.db`. Backups of the global database directory and of the local database file are made before upgrades, and are tagged with the `.bak` suffix. You can use those if you need to revert the state as it was before the upgrade. If you want to get a SQL text dump of the content or the schema of the databases, use the `incus admin sql <local|global> [.dump|.schema]` command, which produces the equivalent output of the `.dump` or `.schema` directives of the `sqlite3` command line tool. If you need to perform SQL queries (e.g. `SELECT`, `INSERT`, `UPDATE`) against the local or global database, you can use the `incus admin sql` command (run `incus admin sql --help` for details). You should only need to do that in order to recover from broken updates or bugs. Please consult the Incus team first (creating a [GitHub issue](https://github.com/lxc/incus/issues/new) or post). In case the Incus daemon fails to start after an upgrade because of SQL data migration bugs or similar problems, it's possible to recover the situation by creating `.sql` files containing queries that repair the broken update. To perform repairs against the local database, write a `./database/patch.local.sql` file containing the relevant queries, and similarly a `./database/patch.global.sql` for global database repairs. 
Those files will be loaded very early in the daemon startup sequence and deleted if the queries were successful (if they fail, no state will change, as they are run in a SQL transaction). As above, please consult the Incus team first. If you want to flush the content of the cluster database to disk, use the `incus admin sql global .sync` command; it will write a plain SQLite database file to `./database/global/db.bin`, which you can then inspect with the `sqlite3` command line tool." } ]
{ "category": "Runtime", "file_name": "debugging.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Provider (e.g. ACI, AWS Fargate) Version (e.g. 0.1, 0.2-beta) K8s Master Info (e.g. AKS, ACS, Bare Metal, EKS) Install Method (e.g. Helm Chart, )" } ]
{ "category": "Runtime", "file_name": "ISSUE_TEMPLATE.md", "project_name": "Virtual Kubelet", "subcategory": "Container Runtime" }
[ { "data": "Note: This guide only applies to QEMU, since the vhost-user storage device is only available for QEMU now. The enablement work on other hypervisors is still ongoing. The Storage Performance Development Kit (SPDK) provides a set of tools and libraries for writing high performance, scalable, user-mode storage applications. virtio, vhost and vhost-user: virtio is an efficient way to transport data for virtual environments and guests. It is most commonly used in QEMU VMs, where the VM itself exposes a virtual PCI device and the guest OS communicates with it using a specific virtio PCI driver. Its diagram is: ``` +++--+-+--+ | ++-+ | | | +-+ | | | user | | | | | | space | | guest | | | | | | | | | | +-+ qemu | +-++ | | | | | | | virtio | | | | | | | | driver | | | | | | +-++++ | | | | ++-+ | | | ^ | | | | | | | | v | v | +-++++--+-+--+ | |block | ++ kvm.ko | | | |device| | | | | ++ +--+-+ | | host kernel | ++ ``` vhost is a protocol for devices accessible via inter-process communication. It uses the same virtio queue layout as virtio to allow vhost devices to be mapped directly to virtio devices. The initial vhost implementation is a part of the Linux kernel and uses an ioctl interface to communicate with userspace applications. Its diagram is: ``` +++--+-+--+ | ++-+ | | | +-+ | | | user | | | | | | space | | guest | | | | | | | | | | | qemu | +-++ | | | | | | virtio | | | | | | | driver | | | | | +-+--++-+ | | | ++-+ | | | | | | | +-++--+-+--+--v-+ | |block | |vhost-scsi.ko| | kvm.ko | | |device| | | | | | +^--+ +-v^-+ +--v-+ | | | host | | | | +-+ kernel +-+ | ++ ``` vhost-user implements the control plane through Unix domain socket to establish virtio queue sharing with a user space process on the same host. SPDK exposes vhost devices via the vhost-user" }, { "data": "Its diagram is: ``` +-++--+-+-+ | ++-+ | | user | +-+ | | | space | | | | | | | | guest | | | | +-+-+ | qemu | +-++ | | | | vhost | | | | virtio | | | | | backend | | | | driver | | | | +-^-^^-+ | +-+--+--+ | | | | | | | | | | | | | | +--++-V+-+ | | | | | | | | | | | | ++--+--+ | | | | | | |unix sockets| | | | | | | ++ | | | | | | | | | | | | +-+ | | | | | +--|shared memory|<+ | | +-+-+-++--+-++ | | | | | +-+ kvm.ko | | +--+--+ | host kernel | ++ ``` SPDK vhost is a vhost-user slave server. It exposes Unix domain sockets and allows external applications to connect. It is capable of exposing virtualized storage devices to QEMU instances or other arbitrary processes. Currently, the SPDK vhost-user target can exposes these types of virtualized devices: `vhost-user-blk` `vhost-user-scsi` `vhost-user-nvme` (deprecated from SPDK 21.07 release) For more information, visit and . Following the SPDK . First, run the SPDK `setup.sh` script to setup some hugepages for the SPDK vhost target application. We recommend you use a minimum of 4GiB, enough for the SPDK vhost target and the virtual machine. This will allocate 4096MiB (4GiB) of hugepages, and avoid binding PCI devices: ```bash $ sudo HUGEMEM=4096 PCI_WHITELIST=\"none\" scripts/setup.sh ``` Then, take directory `/var/run/kata-containers/vhost-user` as Kata's vhost-user device directory. Make subdirectories for vhost-user sockets and device nodes: ```bash $ sudo mkdir -p /var/run/kata-containers/vhost-user/ $ sudo mkdir -p /var/run/kata-containers/vhost-user/block/ $ sudo mkdir -p /var/run/kata-containers/vhost-user/block/sockets/ $ sudo mkdir -p /var/run/kata-containers/vhost-user/block/devices/ ``` For more details, see section . 
Next, start the SPDK vhost target application. The following command will start vhost on the first CPU core with all future socket files placed in `/var/run/kata-containers/vhost-user/block/sockets/`: ```bash $ sudo app/spdktgt/spdktgt -S /var/run/kata-containers/vhost-user/block/sockets/ & ``` To list all available vhost options run the following command: ```bash $ app/spdktgt/spdktgt -h ``` Create an experimental `vhost-user-blk` device based on memory directly: The following RPC will create a 64MB memory block device named `Malloc0` with 4096-byte block size: ```bash $ sudo scripts/rpc.py bdevmalloccreate 64 4096 -b Malloc0 ``` The following RPC will create a `vhost-user-blk` device exposing `Malloc0` block device. The device will be accessible via `/var/run/kata-containers/vhost-user/block/sockets/vhostblk0`: ```bash $ sudo" }, { "data": "vhostcreateblk_controller vhostblk0 Malloc0 ``` Considering the OCI specification and characteristics of vhost-user device, Kata has chosen to use Linux reserved the block major range `240-254` to map each vhost-user block type to a major. Also a specific directory is used for vhost-user devices. The base directory for vhost-user device is a configurable value, with the default being `/var/run/kata-containers/vhost-user`. It can be configured by parameter `vhostuserstore_path` in . Currently, the vhost-user storage device is not enabled by default, so the user should enable it explicitly inside the Kata TOML configuration file by setting `enablevhostuser_store = true`. Since SPDK vhost-user target requires hugepages, hugepages should also be enabled inside the Kata TOML configuration file by setting `enable_hugepages = true`. Here is the conclusion of parameter setting for vhost-user storage device: ```toml enable_hugepages = true enablevhostuser_store = true vhostuserstore_path = \"<Path of the base directory for vhost-user device>\" ``` Note: These parameters are under `[hypervisor.qemu]` section in Kata TOML configuration file. If they are absent, users should still add them under `[hypervisor.qemu]` section. For the subdirectories of `vhostuserstore_path`: `block` is used for block device; `block/sockets` is where we expect UNIX domain sockets for vhost-user block devices to live; `block/devices` is where simulated block device nodes for vhost-user block devices are created. For example, if using the default directory `/var/run/kata-containers/vhost-user`, UNIX domain sockets for vhost-user block device are under `/var/run/kata-containers/vhost-user/block/sockets/`. Device nodes for vhost-user block device are under `/var/run/kata-containers/vhost-user/block/devices/`. Currently, Kata has chosen major number 241 to map to `vhost-user-blk` devices. For `vhost-user-blk` device named `vhostblk0`, a UNIX domain socket is already created by SPDK vhost target, and a block device node with major `241` and minor `0` should be created for it, in order to be recognized by Kata runtime: ```bash $ sudo mknod /var/run/kata-containers/vhost-user/block/devices/vhostblk0 b 241 0 ``` To use `vhost-user-blk` device, use `ctr` to pass a host `vhost-user-blk` device to the container. In your `config.json`, you should use `devices` to pass a host device to the container. 
For example (only `vhost-user-blk` listed): ```json { \"linux\": { \"devices\": [ { \"path\": \"/dev/vda\", \"type\": \"b\", \"major\": 241, \"minor\": 0, \"fileMode\": 420, \"uid\": 0, \"gid\": 0 } ] } } ``` With `rootfs` provisioned under `bundle` directory, you can run your SPDK container: ```bash $ sudo ctr run -d --runtime io.containerd.run.kata.v2 --config bundle/config.json spdk_container ``` Example of performing I/O operations on the `vhost-user-blk` device inside container: ``` $ sudo ctr t exec --exec-id 1 -t spdk_container sh / # ls -l /dev/vda brw-r--r-- 1 root root 254, 0 Jan 20 03:54 /dev/vda / # dd if=/dev/vda of=/tmp/ddtest bs=4k count=20 20+0 records in 20+0 records out 81920 bytes (80.0KB) copied, 0.002996 seconds, 26.1MB/s ```" } ]
{ "category": "Runtime", "file_name": "using-SPDK-vhostuser-and-kata.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Validate Cilium Network Policies deployed in the cluster Before upgrading Cilium it is recommended to run this validation checker to make sure the policies deployed are valid. The validator will verify if all policies deployed in the cluster are valid, in case they are not, an error is printed and the has an exit code 1 is returned. ``` cilium-dbg preflight validate-cnp [flags] ``` ``` --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API -h, --help help for validate-cnp --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Cilium upgrade helper" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_preflight_validate-cnp.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This is the standard configuration for version 1 containers. It includes namespaces, standard filesystem setup, a default Linux capability set, and information about resource reservations. It also has information about any populated environment settings for the processes running inside a container. Along with the configuration of how a container is created the standard also discusses actions that can be performed on a container to manage and inspect information about the processes running inside. The v1 profile is meant to be able to accommodate the majority of applications with a strong security configuration. Minimum requirements: Kernel version - 3.10 recommended 2.6.2x minimum(with backported patches) Mounted cgroups with each subsystem in its own hierarchy | Flag | Enabled | | | - | | CLONE_NEWPID | 1 | | CLONE_NEWUTS | 1 | | CLONE_NEWIPC | 1 | | CLONE_NEWNET | 1 | | CLONE_NEWNS | 1 | | CLONE_NEWUSER | 1 | | CLONE_NEWCGROUP | 1 | Namespaces are created for the container via the `unshare` syscall. A root filesystem must be provided to a container for execution. The container will use this root filesystem (rootfs) to jail and spawn processes inside where the binaries and system libraries are local to that directory. Any binaries to be executed must be contained within this rootfs. Mounts that happen inside the container are automatically cleaned up when the container exits as the mount namespace is destroyed and the kernel will unmount all the mounts that were setup within that namespace. For a container to execute properly there are certain filesystems that are required to be mounted within the rootfs that the runtime will setup. | Path | Type | Flags | Data | | -- | | -- | - | | /proc | proc | MSNOEXEC,MSNOSUID,MS_NODEV | | | /dev | tmpfs | MSNOEXEC,MSSTRICTATIME | mode=755 | | /dev/shm | tmpfs | MSNOEXEC,MSNOSUID,MS_NODEV | mode=1777,size=65536k | | /dev/mqueue | mqueue | MSNOEXEC,MSNOSUID,MS_NODEV | | | /dev/pts | devpts | MSNOEXEC,MSNOSUID | newinstance,ptmxmode=0666,mode=620,gid=5 | | /sys | sysfs | MSNOEXEC,MSNOSUID,MSNODEV,MSRDONLY | | After a container's filesystems are mounted within the newly created mount namespace `/dev` will need to be populated with a set of device nodes. It is expected that a rootfs does not need to have any device nodes specified for `/dev` within the rootfs as the container will setup the correct devices that are required for executing a container's process. | Path | Mode | Access | | | - | - | | /dev/null | 0666 | rwm | | /dev/zero | 0666 | rwm | | /dev/full | 0666 | rwm | | /dev/tty | 0666 | rwm | | /dev/random | 0666 | rwm | | /dev/urandom | 0666 | rwm | ptmx `/dev/ptmx` will need to be a symlink to the host's `/dev/ptmx` within the container. The use of a pseudo TTY is optional within a container and it should support both. If a pseudo is provided to the container `/dev/console` will need to be setup by binding the console in `/dev/` after it has been populated and mounted in" }, { "data": "| Source | Destination | UID GID | Mode | Type | | | | - | - | - | | pty host path | /dev/console | 0 0 | 0600 | bind | After `/dev/null` has been setup we check for any external links between the container's io, STDIN, STDOUT, STDERR. If the container's io is pointing to `/dev/null` outside the container we close and `dup2` the `/dev/null` that is local to the container's rootfs. After the container has `/proc` mounted a few standard symlinks are setup within `/dev/` for the io. 
| Source | Destination | | | -- | | /proc/self/fd | /dev/fd | | /proc/self/fd/0 | /dev/stdin | | /proc/self/fd/1 | /dev/stdout | | /proc/self/fd/2 | /dev/stderr | A `pivot_root` is used to change the root for the process, effectively jailing the process inside the rootfs. ```c put_old = mkdir(...); pivotroot(rootfs, putold); chdir(\"/\"); unmount(putold, MSDETACH); rmdir(put_old); ``` For container's running with a rootfs inside `ramfs` a `MS_MOVE` combined with a `chroot` is required as `pivot_root` is not supported in `ramfs`. ```c mount(rootfs, \"/\", NULL, MS_MOVE, NULL); chroot(\".\"); chdir(\"/\"); ``` The `umask` is set back to `0022` after the filesystem setup has been completed. Cgroups are used to handle resource allocation for containers. This includes system resources like cpu, memory, and device access. | Subsystem | Enabled | | - | - | | devices | 1 | | memory | 1 | | cpu | 1 | | cpuacct | 1 | | cpuset | 1 | | blkio | 1 | | perf_event | 1 | | freezer | 1 | | hugetlb | 1 | | pids | 1 | All cgroup subsystem are joined so that statistics can be collected from each of the subsystems. Freezer does not expose any stats but is joined so that containers can be paused and resumed. The parent process of the container's init must place the init pid inside the correct cgroups before the initialization begins. This is done so that no processes or threads escape the cgroups. This sync is done via a pipe ( specified in the runtime section below ) that the container's init process will block waiting for the parent to finish setup. Intel platforms with new Xeon CPU support Resource Director Technology (RDT). Cache Allocation Technology (CAT) and Memory Bandwidth Allocation (MBA) are two sub-features of RDT. Cache Allocation Technology (CAT) provides a way for the software to restrict cache allocation to a defined 'subset' of L3 cache which may be overlapping with other 'subsets'. The different subsets are identified by class of service (CLOS) and each CLOS has a capacity bitmask (CBM). Memory Bandwidth Allocation (MBA) provides indirect and approximate throttle over memory bandwidth for the software. A user controls the resource by indicating the percentage of maximum memory bandwidth or memory bandwidth limit in MBps unit if MBA Software Controller is enabled. It can be used to handle L3 cache and memory bandwidth resources allocation for containers if hardware and kernel support Intel RDT CAT and MBA features. In Linux 4.10 kernel or newer, the interface is defined and exposed via \"resource control\" filesystem, which is a \"cgroup-like\" interface. Comparing with cgroups, it has similar process management lifecycle and interfaces in a container. But unlike cgroups' hierarchy, it has single level filesystem layout. CAT and MBA features are introduced in Linux" }, { "data": "and 4.12 kernel via \"resource control\" filesystem. Intel RDT \"resource control\" filesystem hierarchy: ``` mount -t resctrl resctrl /sys/fs/resctrl tree /sys/fs/resctrl /sys/fs/resctrl/ |-- info | |-- L3 | | |-- cbm_mask | | |-- mincbmbits | | |-- num_closids | |-- MB | |-- bandwidth_gran | |-- delay_linear | |-- min_bandwidth | |-- num_closids |-- ... |-- schemata |-- tasks |-- <container_id> |-- ... |-- schemata |-- tasks ``` For runc, we can make use of `tasks` and `schemata` configuration for L3 cache and memory bandwidth resources constraints. The file `tasks` has a list of tasks that belongs to this group (e.g., <container_id>\" group). 
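To make the layout above concrete, here is a minimal hypothetical sketch of driving the "resource control" filesystem from a shell; the group name, schemata values and PID are placeholders, and the files involved are described in more detail in the paragraphs that follow:

```bash
# Hypothetical example; assumes resctrl is mounted at /sys/fs/resctrl as shown above.
cd /sys/fs/resctrl
sudo mkdir my_container                                  # create a new CLOS group (placeholder name)
cat my_container/schemata                                # default schemata inherited from the root group
echo "L3:0=7f0;1=1f" | sudo tee my_container/schemata    # restrict L3 cache per socket
echo 12345 | sudo tee my_container/tasks                 # move a task (placeholder PID) into the group
cat my_container/tasks                                   # list tasks currently in this group
```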
Tasks can be added to a group by writing the task ID to the \"tasks\" file (which will automatically remove them from the previous group to which they belonged). New tasks created by fork(2) and clone(2) are added to the same group as their parent. The file `schemata` has a list of all the resources available to this group. Each resource (L3 cache, memory bandwidth) has its own line and format. L3 cache schema: It has allocation bitmasks/values for L3 cache on each socket, which contains L3 cache id and capacity bitmask (CBM). ``` Format: \"L3:<cacheid0>=<cbm0>;<cacheid1>=<cbm1>;...\" ``` For example, on a two-socket machine, the schema line could be \"L3:0=ff;1=c0\" which means L3 cache id 0's CBM is 0xff, and L3 cache id 1's CBM is 0xc0. The valid L3 cache CBM is a contiguous bits set and number of bits that can be set is less than the max bit. The max bits in the CBM is varied among supported Intel CPU models. Kernel will check if it is valid when writing. e.g., default value 0xfffff in root indicates the max bits of CBM is 20 bits, which mapping to entire L3 cache capacity. Some valid CBM values to set in a group: 0xf, 0xf0, 0x3ff, 0x1f00 and etc. Memory bandwidth schema: It has allocation values for memory bandwidth on each socket, which contains L3 cache id and memory bandwidth. ``` Format: \"MB:<cacheid0>=bandwidth0;<cacheid1>=bandwidth1;...\" ``` For example, on a two-socket machine, the schema line could be \"MB:0=20;1=70\" The minimum bandwidth percentage value for each CPU model is predefined and can be looked up through \"info/MB/min_bandwidth\". The bandwidth granularity that is allocated is also dependent on the CPU model and can be looked up at \"info/MB/bandwidth_gran\". The available bandwidth control steps are: minbw + N * bwgran. Intermediate values are rounded to the next control step available on the hardware. If MBA Software Controller is enabled through mount option \"-o mba_MBps\" mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl We could specify memory bandwidth in \"MBps\" (Mega Bytes per second) unit instead of \"percentages\". The kernel underneath would use a software feedback mechanism or a \"Software Controller\" which reads the actual bandwidth using MBM counters and adjust the memory bandwidth percentages to ensure: \"actual memory bandwidth < user specified memory bandwidth\". For example, on a two-socket machine, the schema line could be \"MB:0=5000;1=7000\" which means 5000 MBps memory bandwidth limit on socket 0 and 7000 MBps memory bandwidth limit on socket 1. For more information about Intel RDT kernel interface:" }, { "data": "``` An example for runc: Consider a two-socket machine with two L3 caches where the default CBM is 0x7ff and the max CBM length is 11 bits, and minimum memory bandwidth of 10% with a memory bandwidth granularity of 10%. Tasks inside the container only have access to the \"upper\" 7/11 of L3 cache on socket 0 and the \"lower\" 5/11 L3 cache on socket 1, and may use a maximum memory bandwidth of 20% on socket 0 and 70% on socket 1. \"linux\": { \"intelRdt\": { \"closID\": \"guaranteed_group\", \"l3CacheSchema\": \"L3:0=7f0;1=1f\", \"memBwSchema\": \"MB:0=20;1=70\" } } ``` The standard set of Linux capabilities that are set in a container provide a good default for security and flexibility for the applications. 
| Capability | Enabled | | -- | - | | CAPNETRAW | 1 | | CAPNETBIND_SERVICE | 1 | | CAPAUDITREAD | 1 | | CAPAUDITWRITE | 1 | | CAPDACOVERRIDE | 1 | | CAP_SETFCAP | 1 | | CAP_SETPCAP | 1 | | CAP_SETGID | 1 | | CAP_SETUID | 1 | | CAP_MKNOD | 1 | | CAP_CHOWN | 1 | | CAP_FOWNER | 1 | | CAP_FSETID | 1 | | CAP_KILL | 1 | | CAPSYSCHROOT | 1 | | CAPNETBROADCAST | 0 | | CAPSYSMODULE | 0 | | CAPSYSRAWIO | 0 | | CAPSYSPACCT | 0 | | CAPSYSADMIN | 0 | | CAPSYSNICE | 0 | | CAPSYSRESOURCE | 0 | | CAPSYSTIME | 0 | | CAPSYSTTY_CONFIG | 0 | | CAPAUDITCONTROL | 0 | | CAPMACOVERRIDE | 0 | | CAPMACADMIN | 0 | | CAPNETADMIN | 0 | | CAP_SYSLOG | 0 | | CAPDACREAD_SEARCH | 0 | | CAPLINUXIMMUTABLE | 0 | | CAPIPCLOCK | 0 | | CAPIPCOWNER | 0 | | CAPSYSPTRACE | 0 | | CAPSYSBOOT | 0 | | CAP_LEASE | 0 | | CAPWAKEALARM | 0 | | CAPBLOCKSUSPEND | 0 | Additional security layers like and can be used with the containers. A container should support setting an apparmor profile or selinux process and mount labels if provided in the configuration. Standard apparmor profile: ```c profile <profilename> flags=(attachdisconnected,mediate_deleted) { network, capability, file, umount, deny @{PROC}/sys/fs/ wklx, deny @{PROC}/sysrq-trigger rwklx, deny @{PROC}/mem rwklx, deny @{PROC}/kmem rwklx, deny @{PROC}/sys/kernel/[^m]* wklx, deny @{PROC}/sys/kernel//* wklx, deny mount, deny /sys/[^f]/* wklx, deny /sys/f[^s]/* wklx, deny /sys/fs/[^c]/* wklx, deny /sys/fs/c[^g]/* wklx, deny /sys/fs/cg[^r]/* wklx, deny /sys/firmware/efi/efivars/ rwklx, deny /sys/kernel/security/ rwklx, } ``` TODO: seccomp work is being done to find a good default config During container creation the parent process needs to talk to the container's init process and have a form of synchronization. This is accomplished by creating a pipe that is passed to the container's init. When the init process first spawns it will block on its side of the pipe until the parent closes its side. This allows the parent to have time to set the new process inside a cgroup hierarchy and/or write any uid/gid mappings required for user namespaces. The pipe is passed to the init process via FD 3. The application consuming libcontainer should be compiled statically. libcontainer does not define any init process and the arguments provided are used to `exec` the process inside the application. There should be no long running init within the container" }, { "data": "If a pseudo tty is provided to a container it will open and `dup2` the console as the container's STDIN, STDOUT, STDERR as well as mounting the console as `/dev/console`. An extra set of mounts are provided to a container and setup for use. A container's rootfs can contain some non portable files inside that can cause side effects during execution of a process. These files are usually created and populated with the container specific information via the runtime. Extra runtime files: /etc/hosts /etc/resolv.conf /etc/hostname /etc/localtime There are a few defaults that can be overridden by users, but in their omission these apply to processes within a container. | Type | Value | | - | | | Parent Death Signal | SIGKILL | | UID | 0 | | GID | 0 | | GROUPS | 0, NULL | | CWD | \"/\" | | $HOME | Current user's home dir or \"/\" | | Readonly rootfs | false | | Pseudo TTY | false | After a container is created there is a standard set of actions that can be done to the container. These actions are part of the public API for a container. 
| Action | Description | | -- | | | Get processes | Return all the pids for processes running inside a container | | Get Stats | Return resource statistics for the container as a whole | | Wait | Waits on the container's init process ( pid 1 ) | | Wait Process | Wait on any of the container's processes returning the exit status | | Destroy | Kill the container's init process and remove any filesystem state | | Signal | Send a signal to the container's init process | | Signal Process | Send a signal to any of the container's processes | | Pause | Pause all processes inside the container | | Resume | Resume all processes inside the container if paused | | Exec | Execute a new process inside of the container ( requires setns ) | | Set | Setup configs of the container after it's created | User can execute a new process inside of a running container. Any binaries to be executed must be accessible within the container's rootfs. The started process will run inside the container's rootfs. Any changes made by the process to the container's filesystem will persist after the process finished executing. The started process will join all the container's existing namespaces. When the container is paused, the process will also be paused and will resume when the container is unpaused. The started process will only run when the container's primary process (PID 1) is running, and will not be restarted when the container is restarted. The started process will have its own cgroups nested inside the container's cgroups. This is used for process tracking and optionally resource allocation handling for the new process. Freezer cgroup is required, the rest of the cgroups are optional. The process executor must place its pid inside the correct cgroups before starting the process. This is done so that no child processes or threads can escape the cgroups. When the process is stopped, the process executor will try (in a best-effort way) to stop all its children and remove the sub-cgroups." } ]
{ "category": "Runtime", "file_name": "spec.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 8 sidebar_label: \"GUI\" HwameiStor has a module for Graph User Interface. It will provide users with an easy way to manage the HwameiStor system. The GUI can be deployed by the Operator." } ]
{ "category": "Runtime", "file_name": "gui.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at oss-conduct@uber.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at ." } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: Feature Overview menu_order: 20 search_type: Documentation * * * * * * * * * * For step-by-step instructions on how to use Weave Net, see . Weave Net creates a virtual network that connects Docker containers deployed across multiple hosts. To application containers, the network established by Weave resembles a giant Ethernet switch, where all containers are connected and can easily access services from one another. Because Weave Net uses standard protocols, your favorite network tools and applications, developed over decades, can still be used to configure, secure, monitor, and troubleshoot a container network. Broadcast and Multicast protocols can also be used over Weave Net. To start using Weave Net, see and . Weave Net automatically chooses the fastest available method to transport data between peers. The best performing of these (the 'fast datapath') offers near-native throughput and latency. See and . Weave Net includes a , which can be used to start containers using the Docker or the , and attach them to the Weave network before they begin execution. To use the proxy, run: host1$ eval $(weave env) and then start and manage containers with standard Docker commands. Containers started in this way that subsequently restart, either by an explicit `docker restart` command or by Docker restart policy, are re-attached to the Weave network by the `Weave Docker API Proxy`. See . Weave Net can also be used as a . A Docker network named `weave` is created by `weave launch`, which is used as follows: $ docker run --net=weave -ti weaveworks/ubuntu Using the Weave plugin enables you to take advantage of . There are two plugin implementations for Weave Net: the plugin which doesn't require an external cluster store, and the plugin which supports Docker swarm mode. Weave can be used as a plugin to systems that support the , such as Kubernetes and Mesosphere. See for more details. Containers are automatically allocated a unique IP address. To view the addresses allocated by Weave, run `weave ps`. Instead of allowing Weave to automatically allocate addresses, an IP address and a network can be explicitly specified. See for instructions. For a discussion on how Weave Net uses IPAM, see . And also review the for an explanation of addressing and private networks. Named containers are automatically registered in , and are discoverable by using standard, simple name lookups: host1$ docker run -dti --name=service weaveworks/ubuntu host1$ docker run -ti weaveworks/ubuntu root@7b21498fb103:/# ping service WeaveDNS also supports , and . See . A single Weave network can host multiple, isolated applications, with each application's containers being able to communicate with each other but not with the containers of other applications. To isolate applications, Weave Net can make use of the isolation-through-subnets technique. This common strategy is an example of how with Weave many \"on metal\" techniques can be used to deploy your applications to containers. See for information on how to use the isolation-through-subnets technique with Weave Net. The Weave includes a network policy controller that implements [Kubernetes Network Policies](http://kubernetes.io/docs/user-guide/networkpolicies/). At times, you may not know the application network for a given container in advance. In these cases, you can take advantage of Weave's ability to attach and detach running containers to and from any network. 
See for" }, { "data": "In keeping with our ease-of-use philosophy, the cryptography in Weave Net is intended to satisfy a particular user requirement: strong, out-of-the-box security without a complex setup or the need to wade your way through the configuration of cipher suite negotiation, certificate generation or any of the other things needed to properly secure an IPsec or TLS installation. Weave Net communicates via TCP and UDP on a well-known port, so you can adapt whatever is appropriate to your requirements - for example an IPsec VPN for inter-DC traffic, or VPC/private network inside a data-center. For cases when this is not convenient, Weave Net provides a secure, mechanism which you can use in conjunction with or as an alternative to any other security technologies you have running alongside Weave. Weave Net implements encryption and security using the Go version of , and, additionally in the case of encrypted fast datapath using ). For information on how to secure your Docker network connections, see and for a more technical discussion on how Weave implements encryption see, and . Weave Net application networks can be integrated with a host's network, and establish connectivity between the host and application containers anywhere. See . Exporting Services* - Services running in containers on a Weave network can be made accessible to the outside world or to other networks. Importing Services* - Applications can run anywhere, and yet still be made accessible by specific application containers or services. Binding Services* - A container can be bound to a particular IP and port without having to change your application code, while at the same time will maintain its original endpoint. Routing Services* - By combining the importing and exporting features, you can connect to disjointed networks, even when separated by firewalls and where there may be overlapping IP addresses. See for instructions on how to manage services on a Weave container network. Weave can network containers hosted in different cloud providers or data centers. For example, you can run an application consisting of containers that run on (GCE), (EC2) and in a local data centre all at the same time. See . A network of containers across more than two hosts can be established even when there is only partial connectivity between the hosts. Weave Net routes traffic between containers as long as there is at least one path of connected hosts between them. See . Hosts can be added to or removed from a Weave network without stopping or reconfiguring the remaining hosts. See [Adding and Removing Hosts Dynamically.](/site/tasks/manage/finding-adding-hosts-dynamically.md) Containers can be moved between hosts without requiring any reconfiguration or, in many cases, restarts of other containers. All that is required is for the migrated container to be started with the same IP address as it was given originally. See , in particular, Routing Services for more information on container mobility. Weave Net peers continually exchange topology information, and monitor and (re)establish network connections to other peers. So if hosts or networks fail, Weave can \"route around\" the problem. This includes network partitions, where containers on either side of a partition can continue to communicate, with full connectivity being restored when the partition heals. The Weave Net Router container is very lightweight, fast and and disposable. 
For example, should Weave Net ever run into difficulty, one can simply stop it (with `weave stop`) and restart it. Application containers do not have to be restarted in that event, and if the Weave Net container is restarted quickly enough, they may not experience even a temporary connectivity failure." } ]
{ "category": "Runtime", "file_name": "features.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "layout: global title: Running Apache Hadoop MapReduce on Alluxio This guide describes how to configure Alluxio with , so that your MapReduce programs can read+write data stored in Alluxio. Alluxio has been set up and is running. Make sure that the Alluxio client jar is available on each machine. This Alluxio client jar file can be found at `{{site.ALLUXIOCLIENTJAR_PATH}}` in the tarball downloaded from the Alluxio . In order to run map-reduce examples, we also recommend downloading the based on your Hadoop version. For example, if you are using Hadoop 2.7 download this . This step is only required for Hadoop 1.x and can be skipped by users of Hadoop 2.x or later. Add the following property to the `core-site.xml` of your Hadoop installation: ```xml <property> <name>fs.alluxio.impl</name> <value>alluxio.hadoop.FileSystem</value> <description>The Alluxio FileSystem</description> </property> ``` This will allow MapReduce jobs to recognize URIs with Alluxio scheme `alluxio://` in their input and output files. In order for the MapReduce applications to read and write files in Alluxio, the Alluxio client jar must be on the JVM classpath of all nodes of the application. The Alluxio client jar should also be added to the `HADOOP_CLASSPATH` environment variable. This makes the Alluxio client available to JVMs which are created when running `hadoop jar` command: ```shell $ export HADOOPCLASSPATH={{site.ALLUXIOCLIENTJARPATH}}:${HADOOP_CLASSPATH} ``` You can use the `-libjars` command line option when using `hadoop jar ...`, specifying `{{site.ALLUXIOCLIENTJAR_PATH}}` as the argument of `-libjars`. Hadoop will place the jar in the Hadoop DistributedCache, making it available to all the nodes. For example, the following command adds the Alluxio client jar to the `-libjars` option: ```shell $ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount \\ -libjars {{site.ALLUXIOCLIENTJAR_PATH}} <INPUT FILES> <OUTPUT DIRECTORY> ``` Alternative configurations are described in the section. For this example, we will use a pseudo-distributed Hadoop cluster, started by running: ```shell $ cd $HADOOP_HOME $ ./bin/stop-all.sh $ ./bin/start-all.sh ``` Depending on the Hadoop version, you may need to replace `./bin` with `./sbin`. Start Alluxio locally: ```shell $ ./bin/alluxio process start local ``` You can add a sample file to Alluxio to run MapReduce wordcount on. From your Alluxio directory: ```shell $ ./bin/alluxio fs mkdir /wordcount $ ./bin/alluxio fs cp file://LICENSE /wordcount/input.txt ``` This command will copy the `LICENSE` file into the Alluxio namespace with the path `/wordcount/input.txt`. Now we can run a MapReduce job (using Hadoop 2.7.3 as example) for wordcount. ```shell $ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount \\ -libjars {{site.ALLUXIOCLIENTJAR_PATH}} \\ alluxio://localhost:19998/wordcount/input.txt \\ alluxio://localhost:19998/wordcount/output ``` After this job completes, the result of the wordcount will be in the `/wordcount/output` directory in Alluxio. You can see the resulting files by running: ```shell $ ./bin/alluxio fs ls /wordcount/output $ ./bin/alluxio fs cat /wordcount/output/part-r-00000 ``` TipThe previous wordcount example is also applicable to Alluxio in HA mode. See the instructions on . 
This guide on describes several ways to distribute the jars to nodes in the" }, { "data": "From that guide, the recommended way to distribute the Alluxio client jar is to use the distributed cache, via the `-libjars` command line option. Another way to distribute the client jar is to manually distribute it to all the Hadoop nodes. You could place the client jar `{{site.ALLUXIOCLIENTJARPATH}}` in the `$HADOOPHOME/lib` (it may be `$HADOOP_HOME/share/hadoop/common/lib` for different versions of Hadoop) directory of every MapReduce node, and then restart Hadoop. Alternatively, add this jar to `mapreduce.application.classpath` system property for your Hadoop deployment to the Alluxio client jar is on the classpath. Note that the jars must be installed again for each new release. On the other hand, when the jar is already on every node, then the `-libjars` command line option is not needed. Alluxio configuration parameters can be added to the Hadoop `core-site.xml` file to affect all MapReduce jobs. For example, when Alluxio is running in HA mode, all MapReduce jobs will need to have the Alluxio client configured to communicate to the masters in HA mode. To configure clients to communicate with an Alluxio cluster in HA mode using internal leader election, the following section would need to be added to your Hadoop installation's `core-site.xml` ```xml <configuration> <property> <name>alluxio.master.rpc.addresses</name> <value>masterhostname1:19998,masterhostname2:19998,masterhostname3:19998</value> </property> </configuration> ``` See for more details. Hadoop MapReduce users can add `\"-Dproperty=value\"` after the `hadoop jar` or `yarn jar` command and the properties will be propagated to all the tasks of this job. For example, the following MapReduce wordcount job sets write type to `CACHE_THROUGH` when writing to Alluxio: ```shell $ ./bin/hadoop jar libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount \\ -Dalluxio.user.file.writetype.default=CACHE_THROUGH \\ -libjars {{site.ALLUXIOCLIENTJAR_PATH}} \\ <INPUT FILES> <OUTPUT DIRECTORY> ``` Logs with Hadoop can be modified in many different ways. If you wish to directly modify the `log4j.properties` file for Hadoop, then you can add or modify appenders within `${HADOOP_HOME}/conf/log4j.properties` on each of the nodes in your cluster. You may also modify the configuration values in in your installation. If you simply wish to modify log levels then your can change `mapreduce.map.log.level` or `mapreduce.reduce.log.level`. If you are using YARN then you may also wish to modify some of the `yarn.log.*` properties which can be found in A: This error message is seen when your MapReduce application tries to access Alluxio as an HDFS-compatible file system, but the `alluxio://` scheme is not recognized by the application. Please make sure your Hadoop configuration file `core-site.xml` has the following property: ```xml <configuration> <property> <name>fs.alluxio.impl</name> <value>alluxio.hadoop.FileSystem</value> </property> </configuration> ``` A: This error message is seen when your MapReduce application tries to access Alluxio as an HDFS-compatible file system, the `alluxio://` scheme has been configured correctly but the Alluxio client jar is not found on the classpath of your application. 
You can append the client jar to `$HADOOP_CLASSPATH`: ```shell $ export HADOOPCLASSPATH={{site.ALLUXIOCLIENTJARPATH}}:${HADOOP_CLASSPATH} ``` If the corresponding classpath has been set but exceptions still exist, users can check whether the path is valid by: ```shell $ ls {{site.ALLUXIOCLIENTJAR_PATH}} ```" } ]
{ "category": "Runtime", "file_name": "Hadoop-MapReduce.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "New versions of the [OpenTelemetry Semantic Conventions] mean new versions of the `semconv` package need to be generated. The `semconv-generate` make target is used for this. Checkout a local copy of the [OpenTelemetry Semantic Conventions] to the desired release tag. Pull the latest `otel/semconvgen` image: `docker pull otel/semconvgen:latest` Run the `make semconv-generate ...` target from this repository. For example, ```sh export TAG=\"v1.21.0\" # Change to the release version you are generating. export OTELSEMCONVREPO=\"/absolute/path/to/opentelemetry/semantic-conventions\" docker pull otel/semconvgen:latest make semconv-generate # Uses the exported TAG and OTELSEMCONVREPO. ``` This should create a new sub-package of . Ensure things look correct before submitting a pull request to include the addition. You can run `make gorelease` that runs to ensure that there are no unwanted changes done in the public API. You can check/report problems with `gorelease` . First, decide which module sets will be released and update their versions in `versions.yaml`. Commit this change to a new branch. Update go.mod for submodules to depend on the new release which will happen in the next step. Run the `prerelease` make target. It creates a branch `prerelease<module set><new tag>` that will contain all release changes. ``` make prerelease MODSET=<module set> ``` Verify the changes. ``` git diff ...prerelease<module set><new tag> ``` This should have changed the version for all modules to be `<new tag>`. If these changes look correct, merge them into your pre-release branch: ```go git merge prerelease<module set><new tag> ``` Update the . Make sure all relevant changes for this release are included and are in language that non-contributors to the project can understand. To verify this, you can look directly at the commits since the `<last tag>`. ``` git --no-pager log --pretty=oneline \"<last tag>..HEAD\" ``` Move all the `Unreleased` changes into a new section following the title scheme (`[<new tag>] - <date of release>`). Update all the appropriate links at the" }, { "data": "Push the changes to upstream and create a Pull Request on GitHub. Be sure to include the curated changes from the in the description. Once the Pull Request with all the version changes has been approved and merged it is time to tag the merged commit. *IMPORTANT*: It is critical you use the same tag that you used in the Pre-Release step! Failure to do so will leave things in a broken state. As long as you do not change `versions.yaml` between pre-release and this step, things should be fine. *IMPORTANT*: . It is critical you make sure the version you push upstream is correct. . For each module set that will be released, run the `add-tags` make target using the `<commit-hash>` of the commit on the main branch for the merged Pull Request. ``` make add-tags MODSET=<module set> COMMIT=<commit hash> ``` It should only be necessary to provide an explicit `COMMIT` value if the current `HEAD` of your working directory is not the correct commit. Push tags to the upstream remote (not your fork: `github.com/open-telemetry/opentelemetry-go.git`). Make sure you push all sub-modules as well. ``` git push upstream <new tag> git push upstream <submodules-path/new tag> ... ``` Finally create a Release for the new `<new tag>` on GitHub. The release body should include all the release notes from the Changelog for this release. After releasing verify that examples build outside of the repository. 
``` ./verify_examples.sh ``` The script copies examples into a different directory, removes any `replace` declarations in `go.mod`, and builds them. This ensures they build with the published release, not the local copy. Once verified, be sure to that uses this release. Update the [Go instrumentation documentation] in the OpenTelemetry website under [content/en/docs/languages/go]. Importantly, bump any package versions referenced to be the latest one you just released and ensure all code examples still compile and are accurate. Bump the dependencies in the following Go services:" } ]
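Putting the steps above together, a release for a single module set looks roughly like the sketch below. The module set name (`stable-v1`), the tag (`v1.21.0`), and the commit hash are placeholders, and the exact name of the generated pre-release branch may differ; substitute the values for your release.

```sh
# Hypothetical end-to-end flow for one module set; names and tag are examples only.
make prerelease MODSET=stable-v1                     # generates the pre-release branch
git merge <generated-prerelease-branch>              # merge the version bumps into your branch
# ...update CHANGELOG.md, open the PR, and merge it; then, on the merge commit:
make add-tags MODSET=stable-v1 COMMIT=<commit-hash>  # tag every module in the set
git push upstream v1.21.0                            # push the root module tag
git push upstream <submodule-path>/v1.21.0           # repeat for each sub-module tag
```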
{ "category": "Runtime", "file_name": "RELEASING.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "rkt has support for executing pods with KVM hypervisor - or as a . rkt employs this to run a pod within a virtual machine with its own operating system kernel and hypervisor isolation, rather than creating a container using Linux cgroups and namespaces. The KVM stage1 does not yet implement all of the default stage1's features and semantics. While the same app container can be executed under isolation by either stage1, it may require different configuration, especially for networking. However, several deployments of the KVM stage1 are operational outside of CoreOS, and we encourage testing of this feature and welcome your contributions. Provided you have hardware virtualization support and the loaded (refer to your distribution for instructions), you can then run an image like you would normally do with rkt: ``` sudo rkt run --debug --insecure-options=image --stage1-name=coreos.com/rkt/stage1-kvm:1.30.0 docker://redis ``` This output is the same you'll get if you run a container-based rkt. If you want to see the kernel and boot messages, run rkt with the `--debug` flag. You can exit pressing `<Ctrl-a x>`. By default, processes will start working on all CPUs if at least one app does not have specified CPUs. In the other case, container will be working on aggregate amount of CPUs. Currently, the memory allocated to the virtual machine is a sum of memory required by each app in pod and additional 128MB required by system. If memory of some app is not specified, app memory will be set on default value (128MB). It leverages the work done by Intel with their . Stage1 contains a Linux kernel that is executed under hypervisor (LKVM or QEMU). This kernel will then start systemd, which in turn will start the applications in the pod. A KVM-based rkt is very similar to a container-based one, it just uses hypervisor to execute pods instead of systemd-nspawn. Here's a comparison of the components involved between a container-based and a KVM based rkt. Container-based: ``` host OS rkt systemd-nspawn systemd chroot user-app1 ``` KVM based: ``` host OS rkt hypervisor kernel systemd chroot user-app1 ``` For LKVM you can use `stage1-kvm.aci` or `stage1-kvm-lkvm.aci`, for QEMU - `stage1-kvm-qemu.aci` from the official release. You can also build rkt yourself with the right options: ``` $ ./autogen.sh && ./configure --with-stage1-flavors=kvm --with-stage1-kvm-hypervisors=lkvm,qemu && make ``` For more details about configure parameters, see . This will build the rkt binary and the KVM stage1 aci image in `build-rkt-1.30.0+git/target/bin/`. Depending on the configuration options, it will be `stage1-kvm.aci` (if one hypervisor is set), or `stage1-kvm-lkvm.aci` and `stage1-kvm-qemu.aci` (if you want to have both images built once). The KVM stage1 has some hypervisor specific parameters that can change the execution environment. Additional can be passed via the environment variable `RKTHYPERVISOREXTRAKERNELPARAMS`: ``` sudo RKTHYPERVISOREXTRAKERNELPARAMS=\"systemd.unifiedcgrouphierarchy=true maxloop=12 possiblecpus=1\" \\ rkt run --stage1-name=coreos.com/rkt/stage1-kvm:1.30.0 \\ ... ``` The three command line parameters above are just examples and they are documented respectively in:" } ]
{ "category": "Runtime", "file_name": "running-kvm-stage1.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "name: Feature request about: Suggest an idea for this project title: '' labels: community, triage assignees: '' Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] Describe the solution you'd like A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Additional context Add any other context or screenshots about the feature request here." } ]
{ "category": "Runtime", "file_name": "feature_request.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "- https://github.com/heptio/ark/tree/v0.3.3 Treat the first field in a schedule's cron expression as minutes, not seconds https://github.com/heptio/ark/tree/v0.3.2 Add client-go auth provider plugins for Azure, GCP, OIDC https://github.com/heptio/ark/tree/v0.3.1 Fix Makefile VERSION https://github.com/heptio/ark/tree/v0.3.0 Initial Release" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-0.3.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "English SpiderIPPool resources represent the IP address ranges allocated by Spiderpool for Pods. To create SpiderIPPool resources in your cluster, refer to the . Single-stack, dual-stack, and IPv6 Support IP address range control Gateway route control Exclusive or global default pool control Compatible with various resource affinity settings Spiderpool supports three modes of IP address allocation: IPv4-only, IPv6-only, and dual-stack. Refer to for details. When installing Spiderpool via Helm, you can use configuration parameters to specify: `--set ipam.enableIPv4=true --set ipam.enableIPv6=true`. If dual-stack mode is enabled, you can manually specify which IPv4 and IPv6 pools should be used for IP address allocation: In a dual-stack environment, you can also configure Pods to only receive IPv4 or IPv6 addresses using the annotation `ipam.spidernet.io/ippool: '{\"ipv4\": [\"custom-ipv4-ippool\"]}'`. ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: custom-dual-ippool-deploy spec: replicas: 3 selector: matchLabels: app: custom-dual-ippool-deploy template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"custom-ipv4-ippool\"],\"ipv6\": [\"custom-ipv6-ippool\"] } labels: app: custom-dual-ippool-deploy spec: containers: name: custom-dual-ippool-deploy image: busybox imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\", \"trap : TERM INT; sleep infinity & wait\"] ``` This feature owns 4 usage options including: `Use Pod Annotation to Specify IP Pool`, `Use Namespace Annotation to Specify IP Pool`, `Use CNI Configuration File to Specify IP Pool` and `Set Cluster Default Level for SpiderIPPool`. For the priority rules when specifying the SpiderIPPool, refer to the . Additionally, with the following ways of specifying IPPools(Pod Annotation, Namespace Annotation, CNI configuration file) you can also use wildcards '', '?' and '[]' to match the desired IPPools. For example: ipam.spidernet.io/ippool: '{\"ipv4\": [\"demo-v4-ippool1\", \"backup-ipv4\"]}' '*': Matches zero or more characters. For example, \"ab\" can match \"ab\", \"abc\", \"abcd\", and so on. '?': Matches a single character. For example, \"a?c\" can match \"abc\", \"adc\", \"axc\", and so on. '[]': Matches a specified range of characters. You can specify the choices of characters inside the brackets, or use a hyphen to specify a character range. For example, \"[abc]\" can match any one of the characters \"a\", \"b\", or \"c\". You can use annotations like `ipam.spidernet.io/ippool` or `ipam.spidernet.io/ippools` on a Pod's annotation to indicate which IP pools should be used. The `ipam.spidernet.io/ippools` annotation is primarily used for specifying multiple network interfaces. Additionally, you can specify multiple pools as fallback options. If one pool's IP addresses are exhausted, addresses can be allocated from the other specified pools. ```yaml ipam.spidernet.io/ippool: |- { \"ipv4\": [\"demo-v4-ippool1\", \"backup-ipv4-ippool\", \"wildcard-v4?\"], \"ipv6\": [\"demo-v6-ippool1\", \"backup-ipv6-ippool\", \"wildcard-v6*\"] } ``` When using the annotation `ipam.spidernet.io/ippools` for specifying multiple network interfaces, you can explicitly indicate the interface name by specifying the `interface` field. Alternatively, you can use array ordering to determine which IP pools are assigned to which network interfaces. Additionally, the `cleangateway` field indicates whether a default route should be generated based on the `gateway` field of the IPPool. 
When `cleangateway` is set to true, it means that no default route needs to be generated (default is false). In scenarios with multiple network interfaces, it is generally not possible to generate two or more default routes in the `main` routing table. The plugin `Coordinator` already solved this problem and you can ignore `clengateway`" }, { "data": "If you want to use Spiderpool IPAM plugin alone, you can use `cleangateway: true` to indicate that a default route should not be generated based on the IPPool `gateway` field. ```yaml ipam.spidernet.io/ippools: |- [{ \"ipv4\": [\"demo-v4-ippool1\", \"wildcard-v4-ippool[123]\"], \"ipv6\": [\"demo-v6-ippool1\", \"wildcard-v6-ippool[123]\"] },{ \"ipv4\": [\"demo-v4-ippool2\", \"wildcard-v4-ippool[456]\"], \"ipv6\": [\"demo-v6-ippool2\", \"wildcard-v6-ippool[456]\"], \"cleangateway\": true }] ``` ```yaml ipam.spidernet.io/ippools: |- [{ \"interface\": \"eth0\", \"ipv4\": [\"demo-v4-ippool1\", \"wildcard-v4-ippool[123]\"], \"ipv6\": [\"demo-v6-ippool1\", \"wildcard-v6-ippool[123]\"], \"cleangateway\": true },{ \"interface\": \"net1\", \"ipv4\": [\"demo-v4-ippool2\", \"wildcard-v4-ippool[456]\"], \"ipv6\": [\"demo-v6-ippool2\", \"wildcard-v6-ippool[456]\"], \"cleangateway\": false }] ``` You can annotate the Namespace with `ipam.spidernet.io/default-ipv4-ippool` and `ipam.spidernet.io/default-ipv6-ippool`. When deploying applications, IP pools can be selected based on these annotations of the application's namespace: If IP pool is not explicitly specified, rules defined in the Namespace annotation take precedence. ```yaml apiVersion: v1 kind: Namespace metadata: annotations: ipam.spidernet.io/default-ipv4-ippool: '[\"ns-v4-ippool1\", \"ns-v4-ippool2\", \"wildcard-v4*\"]' ipam.spidernet.io/default-ipv6-ippool: '[\"ns-v6-ippool1\", \"ns-v6-ippool2\", \"wildcard-v6?\"]' name: kube-system ... ``` You can specify the default IPv4 and IPv6 pools for an application in the CNI configuration file. For more details, refer to If IP pool is not explicitly specified using Pod Annotation and no IP pool is specified through Namespace annotation, the rules defined in the CNI configuration file take precedence. ```yaml { \"name\": \"macvlan-vlan0\", \"type\": \"macvlan\", \"master\": \"eth0\", \"ipam\": { \"type\": \"spiderpool\", \"defaultipv4ippool\":[\"default-v4-ippool\", \"backup-ipv4-ippool\", \"wildcard-v4-ippool[123]\"], \"defaultipv6ippool\":[\"default-v6-ippool\", \"backup-ipv6-ippool\", \"wildcard-v6-ippool[456]\"] } } ``` In the , the `spec.default` field is a boolean type. It determines the cluster default pool when no specific IPPool is specified through annotations or the CNI configuration file: If no IP pool is specified using Pod annotations, and no IP pool is specified through Namespace annotations, and no IP pool is specified in the CNI configuration file, the system will use the pool defined by this field as the cluster default. Multiple IPPool resources can be set as the cluster default level. ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: master-172 spec: default: true ... ``` Refer to for details. Refer to for details. As a result, Pods will receive the default route based on the gateway, as well as custom routes defined in the IP pool. If the IP pool does not have a gateway configured, the default route will not take effect. 
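For reference, a minimal pool carrying a gateway and one custom route might be created as in the sketch below. The subnet, IP range, gateway, and route values are placeholders, and the field names follow the `v2beta1` CRD as an assumption to verify against your installed Spiderpool version:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: demo-v4-ippool1
spec:
  subnet: 10.6.0.0/16
  ips:
    - 10.6.168.101-10.6.168.110
  gateway: 10.6.0.1
  routes:
    - dst: 10.7.0.0/16
      gw: 10.6.0.254
EOF
```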
To simplify the viewing of properties related to SpiderIPPool resources, we have added some additional fields that can be displayed using the `kubectl get sp -o wide` command: `ALLOCATED-IP-COUNT` represents the number of allocated IP addresses in the pool. `TOTAL-IP-COUNT` represents the total number of IP addresses in the pool. `DEFAULT` indicates whether the pool is set as the cluster default level. `DISABLE` indicates whether the pool is disabled. `NODENAME` indicates the nodes have an affinity with the pool. `MULTUSNAME` indicates the Multus instances have an affinity with the pool. `APP-NAMESPACE` is specific to the feature. It signifies that the pool is automatically created by the system and corresponds to the namespace of the associated application. ```shell ~# kubectl get sp -o wide NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT DISABLE NODENAME MULTUSNAME APP-NAMESPACE auto4-demo-deploy-subnet-eth0-fcca4 4 172.100.0.0/16 1 2 false false kube-system test-pod-ippool 4 10.6.0.0/16 0 10 false false [\"master\",\"worker1\"] [\"kube-system/macvlan-vlan0\"] ``` We have also supplemented SpiderIPPool resources with relevant metric information. For more details, refer to" } ]
{ "category": "Runtime", "file_name": "spider-ippool.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Load-balancing configuration ``` -h, --help help for lb ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List load-balancing configuration - Maglev lookup table" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_lb.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"How Velero Works\" layout: docs Each Velero operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes and stored in . Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations. You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, and/or label. Velero is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster, like upgrades. The backup operation: Uploads a tarball of copied Kubernetes objects into cloud object storage. Calls the cloud provider API to make disk snapshots of persistent volumes, if specified. You can optionally specify backup hooks to be executed during the backup. For example, you might need to tell a database to flush its in-memory buffers to disk before taking a snapshot. . Note that cluster backups are not strictly atomic. If Kubernetes objects are being created or edited at the time of backup, they might not be included in the backup. The odds of capturing inconsistent information are low, but it is possible. The schedule operation allows you to back up your data at recurring intervals. You can create a scheduled backup at any time, and the first backup is then performed at the schedule's specified interval. These intervals are specified by a Cron expression. Velero saves backups created from a schedule with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as YYYYMMDDhhmmss. For more information see the . When you run `velero backup create test-backup`: The Velero client makes a call to the Kubernetes API server to create a `Backup` object. The `BackupController` notices the new `Backup` object and performs validation. The `BackupController` begins the backup process. It collects the data to back up by querying the API server for resources. The `BackupController` makes a call to the object storage service -- for example, AWS S3 -- to upload the backup file. By default, `velero backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `velero backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`. ![19] The restore operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a subset of objects and persistent volumes. Velero supports multiple namespace remapping--for example, in a single restore, objects in namespace \"abc\" can be recreated under namespace \"def\", and the objects in namespace \"123\" under \"456\". The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as YYYYMMDDhhmmss. You can also specify a custom name. A restored object also includes a label with key `velero.io/restore-name` and value `<RESTORE NAME>`. By default, backup storage locations are created in read-write mode. However, during a restore, you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for the storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario. You can optionally specify to be executed during a restore or after resources are restored. 
For example, you might need to perform a custom database restore operation before the database application containers" }, { "data": "When you run `velero restore create`: The Velero client makes a call to the Kubernetes API server to create a object. The `RestoreController` notices the new Restore object and performs validation. The `RestoreController` fetches the backup information from the object storage service. It then runs some preprocessing on the backed up resources to make sure the resources will work on the new cluster. For example, using the to verify that the restore resource will work on the target cluster. The `RestoreController` starts the restore process, restoring each eligible resource one at a time. By default, Velero performs a non-destructive restore, meaning that it won't delete any data on the target cluster. If a resource in the backup already exists in the target cluster, Velero will skip that resource. You can configure Velero to use an update policy instead using the restore flag. When this flag is set to `update`, Velero will attempt to update an existing resource in the target cluster to match the resource from the backup. For more details about the Velero restore process, see the page. Velero backs up resources using the Kubernetes API server's preferred version for each group/resource. When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful. For example, if the cluster being backed up has a `gizmos` resource in the `things` API group, with group/versions `things/v1alpha1`, `things/v1beta1`, and `things/v1`, and the server's preferred group/version is `things/v1`, then all `gizmos` will be backed up from the `things/v1` API endpoint. When backups from this cluster are restored, the target cluster must have the `things/v1` endpoint in order for `gizmos` to be restored. Note that `things/v1` does not need to be the preferred version in the target cluster; it just needs to exist. When you create a backup, you can specify a TTL (time to live) by adding the flag `--ttl <DURATION>`. If Velero sees that an existing backup resource is expired, it removes: The backup resource The backup file from cloud object storage All PersistentVolume snapshots All associated Restores The TTL flag allows the user to specify the backup retention period with the value specified in hours, minutes and seconds in the form `--ttl 24h0m0s`. If not specified, a default TTL value of 30 days will be applied. The effects of expiration are not applied immediately, they are applied when the gc-controller runs its reconciliation loop every hour. If backup fails to delete, a label `velero.io/gc-failure=<Reason>` will be added to the backup custom resource. You can use this label to filter and select backups that failed to delete. Implemented reasons are: BSLNotFound: Backup storage location not found BSLCannotGet: Backup storage location cannot be retrieved from the API server for reasons other than not found BSLReadOnly: Backup storage location is read-only Velero treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage to Kubernetes. 
This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster. Likewise, if a `Completed` backup object exists in Kubernetes but not in object storage, it will be deleted from Kubernetes since the backup tarball no longer exists. `Failed` or `PartiallyFailed` backup will not be removed by object storage sync." } ]
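To make the flows above concrete, a few illustrative invocations follow. The backup, schedule, and namespace names plus the cron expression are placeholders, and the flags shown should be double-checked against `velero --help` for your version:

```bash
# On-demand backup of a single namespace, retained for 24 hours.
velero backup create nginx-backup --include-namespaces nginx-example --ttl 24h0m0s

# Scheduled backup every day at 01:00 (backups named nginx-daily-<TIMESTAMP>).
velero schedule create nginx-daily --schedule="0 1 * * *" --include-namespaces nginx-example

# Restore into a remapped namespace, updating resources that already exist.
velero restore create --from-backup nginx-backup \
  --namespace-mappings nginx-example:nginx-restored \
  --existing-resource-policy=update
```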
{ "category": "Runtime", "file_name": "how-velero-works.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- A few sentences describing the overall goals of the pull request's commits. Please include the type of fix - (e.g. bug fix, new feature, documentation) some details on why this PR should be merged the details of the testing you've done on it (both manual and automated) which components are affected by this PR --> [ ] Tests [ ] Documentation [ ] Release note <!-- Writing a release note: By default, no release note action is required. If you're unsure whether or not your PR needs a note, ask your reviewer for guidance. If this PR requires a release note, update the block below to include a concise note describing the change and any important impacts this PR may have. --> ```release-note None required ```" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "Dear maintainer. Thank you for investing the time and energy to help make runc as useful as possible. Maintaining a project is difficult, sometimes unrewarding work. Sure, you will get to contribute cool features to the project. But most of your time will be spent reviewing, cleaning up, documenting, answering questions, justifying design decisions - while everyone has all the fun! But remember - the quality of the maintainers work is what distinguishes the good projects from the great. So please be proud of your work, even the unglamorous parts, and encourage a culture of appreciation and respect for every aspect of improving the project - not just the hot new features. This document is a manual for maintainers old and new. It explains what is expected of maintainers, how they should work, and what tools are available to them. This is a living document - if you see something out of date or missing, speak up! It is every maintainer's responsibility to: 1) Expose a clear roadmap for improving their component. 2) Deliver prompt feedback and decisions on pull requests. 3) Be available to anyone with questions, bug reports, criticism etc. on their component. This includes IRC and GitHub issues and pull requests. 4) Make sure their component respects the philosophy, design and roadmap of the project. Short answer: with pull requests to the runc repository. runc is an open-source project with an open design philosophy. This means that the repository is the source of truth for EVERY aspect of the project, including its philosophy, design, roadmap and APIs. *If it's part of the project, it's in the repo. It's in the repo, it's part of the project.* As a result, all decisions can be expressed as changes to the repository. An implementation change is a change to the source code. An API change is a change to the API specification. A philosophy change is a change to the philosophy manifesto. And so on. All decisions affecting runc, big and small, follow the same 3 steps: Step 1: Open a pull request. Anyone can do this. Step 2: Discuss the pull request. Anyone can do this. Step 3: Accept (`LGTM`) or refuse a pull request. The relevant maintainers do this (see below \"Who decides what?\") I'm a maintainer, should I make pull requests too? Yes. Nobody should ever push to master directly. All changes should be made through a pull request. All decisions are pull requests, and the relevant maintainers make decisions by accepting or refusing the pull request. Review and acceptance by anyone is denoted by adding a comment in the pull request:" }, { "data": "However, only currently listed `MAINTAINERS` are counted towards the required two LGTMs. Overall the maintainer system works because of mutual respect across the maintainers of the project. The maintainers trust one another to make decisions in the best interests of the project. Sometimes maintainers can disagree and this is part of a healthy project to represent the point of views of various people. In the case where maintainers cannot find agreement on a specific change the role of a Chief Maintainer comes into play. The Chief Maintainer for the project is responsible for overall architecture of the project to maintain conceptual integrity. Large decisions and architecture changes should be reviewed by the chief maintainer. The current chief maintainer for the project is Michael Crosby (@crosbymichael). 
Even though the maintainer system is built on trust, if there is a conflict with the chief maintainer on a decision, their decision can be challenged and brought to the technical oversight board if two-thirds of the maintainers vote for an appeal. It is expected that this would be a very exceptional event. The best maintainers have a vested interest in the project. Maintainers are first and foremost contributors that have shown they are committed to the long term success of the project. Contributors wanting to become maintainers are expected to be deeply involved in contributing code, pull request review, and triage of issues in the project for more than two months. Just contributing does not make you a maintainer, it is about building trust with the current maintainers of the project and being a person that they can depend on and trust to make decisions in the best interest of the project. The final vote to add a new maintainer should be approved by over 66% of the current maintainers with the chief maintainer having veto power. In case of a veto, conflict resolution rules expressed above apply. The voting period is five business days on the Pull Request to add the new maintainer. Part of a healthy project is to have active maintainers to support the community in contributions and perform tasks to keep the project running. Maintainers are expected to be able to respond in a timely manner if their help is required on specific issues where they are pinged. Being a maintainer is a time consuming commitment and should not be taken lightly. When a maintainer is unable to perform the required duties they can be removed with a vote by 66% of the current maintainers with the chief maintainer having veto power. The voting period is ten business days. Issues related to a maintainer's performance should be discussed with them among the other maintainers so that they are not surprised by a pull request removing them." } ]
{ "category": "Runtime", "file_name": "MAINTAINERS_GUIDE.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "title: Troubleshooting Cases sidebar_position: 6 Debugging process for some frequently encountered JuiceFS problems. If `juicefs format` has been run on the metadata engine, executing `juicefs format` command again might result in the following error: ``` cannot update volume XXX from XXX to XXX ``` In this case, clean up the metadata engine, and try again. When using Redis below 6.0.0, `juicefs format` will fail when `username` is specified: ``` format: ERR wrong number of arguments for 'auth' command ``` Username is supported in Redis 6.0.0 and above, you'll need to omit the `username` from the Redis URL, e.g. `redis://:password@host:6379/1`. If you encounter the following error when using : ``` sentinel: GetMasterAddrByName master=\"xxx\" failed: NOAUTH Authentication required. ``` Please confirm whether for the Redis Sentinel instance, if it is set, then you need to pass the `SENTINEL_PASSWORD` environment variable configures the password to connect to the Sentinel instance separately, and the password in the metadata engine URL will only be used to connect to the Redis server. When using to mount a directory on the host machine into a container, you may encounter the following error: ``` docker: Error response from daemon: error while creating mount source path 'XXX': mkdir XXX: file exists. ``` This is usually due to the `juicefs mount` command being executed with a non-root user, thus Docker daemon doesn't have permission to access this directory. You can deal with this using one of below methods: Execute `juicefs mount` command with root user Add option to both FUSE config file, and mount command. When executing `juicefs mount` command with a non-root user, you may see: ``` fuse: fuse: exec: \"/bin/fusermount\": stat /bin/fusermount: no such file or directory ``` This only occurs when a non-root user is trying to mount file system, meaning `fusermount` is not found, there are two solutions to this problem: Execute `juicefs mount` command with root user Install `fuse` package (e.g. `apt-get install fuse`, `yum install fuse`) If current user doesn't have permission to execute `fusermount` command, you'll see: ``` fuse: fuse: fork/exec /usr/bin/fusermount: permission denied ``` When this happens, check `fusermount` permission: ```shell $ ls -l /usr/bin/fusermount -rwsr-x. 1 root fuse 27968 Dec 7 2011 /usr/bin/fusermount $ ls -l /usr/bin/fusermount -rwsr-xr-x 1 root root 32096 Oct 30 2018 /usr/bin/fusermount ``` If JuiceFS Client cannot connect to object storage, or the bandwidth is simply not enough, JuiceFS will complain in logs: ```text <INFO>: slow request: PUT chunks/0/0/104194304 (%!s(<nil>), 20.512s) <ERROR>: flush 9902558 timeout after waited 8m0s <ERROR>: pending slice 9902558-80: ... ``` If the problem is a network connection issue, or the object storage has service issue, troubleshooting is relatively simple. But if the error was caused by low bandwidth, there's some more to consider. The first issue with slow connection is upload / download timeouts (demonstrated in the above error logs), to tackle this problem: Reduce upload concurrency, e.g. , to avoid upload timeouts. Reduce buffer size, e.g. or even lower. In a large bandwidth condition, increasing buffer size improves parallel performance. But in a low speed environment, this only makes `flush` operations slow and prone to timeouts. Default timeout for GET / PUT requests are 60 seconds, increasing `--get-timeout` and `--put-timeout` may help with read / write timeouts. 
In addition, the feature needs to be used with caution in low bandwidth" }, { "data": "Let's briefly go over the JuiceFS Client background job design: every JuiceFS Client runs background jobs by default, one of which is data compaction, and if the client has poor internet speed, it'll drag down performance for the whole system. A worse case is when client write cache is also enabled, compaction results are uploaded too slowly, forcing other clients into a read hang when accessing the affected files: ```text <ERROR>: read file 14029704: input/output error <INFO>: slow operation: read (14029704,131072,0): input/output error (0) <74.147891> <WARNING>: fail to read sliceId 1771585458 (off:4194304, size:4194304, clen: 37746372): get chunks/0/0/104194304: oss: service returned error: StatusCode=404, ErrorCode=NoSuchKey, ErrorMessage=\"The specified key does not exist.\", RequestId=62E8FB058C0B5C3134CB80B6 ``` To avoid this type of issue, we recommend disabling background jobs on low-bandwidth clients, i.e. adding option to the mount command. When using JuiceFS at scale, there will be some warnings in client logs: ``` <WARNING>: fail to read sliceId 1771585458 (off:4194304, size:4194304, clen: 37746372): get chunks/0/0/104194304: oss: service returned error: StatusCode=404, ErrorCode=NoSuchKey, ErrorMessage=\"The specified key does not exist.\", RequestId=62E8FB058C0B5C3134CB80B6 ``` When this type of warning occurs, but not accompanied by I/O errors (indicated by `input/output error` in client logs), you can safely ignore them and continue normal use, client will retry automatically and resolves this issue. This warning means that JuiceFS Client cannot read a particular slice, because a block does not exist, and object storage has to return a `NoSuchKey` error. Usually this is caused by: Clients carry out compaction asynchronously, which upon completion, will change the relationship between file and its corresponding blocks, causing problems for other clients that's already reading this file, hence the warning. Some clients enabled , they write a file, commit to the Metadata Service, but the corresponding blocks are still pending to upload (caused by for example, ). Meanwhile, other clients that are already accessing this file will meet this warning. Again, if no errors occur, just safely ignore this warning. In JuiceFS, a typical read amplification manifests as object storage traffic being much larger than JuiceFS Client read speed. For example, JuiceFS Client is reading at 200MiB/s, while S3 traffic grows up to 2GiB/s. JuiceFS is equipped with the : when reading a block at arbitrary position, the whole block is asynchronously scheduled for download. This is a read optimization enabled by default, but in some cases, this brings read amplification. Once we know this, we can start the diagnose. We'll collect JuiceFS access log (see ) to determine the file system access patterns of our application, and adjust JuiceFS configuration accordingly. 
Below is a diagnose process in an actual production environment: ```shell cat /jfs/.accesslog | grep -v \"^#$\" >> access.log wc -l access.log grep \"read (\" access.log | wc -l grep \"read (148153116,\" access.log ``` Access log looks like: ``` 2022.09.22 08:55:21.013121 [uid:0,gid:0,pid:0] read (148153116,131072,28668010496): OK (131072) <1.309992> 2022.09.22 08:55:21.577944 [uid:0,gid:0,pid:0] read (148153116,131072,14342746112): OK (131072) <1.385073> 2022.09.22 08:55:22.098133 [uid:0,gid:0,pid:0] read (148153116,131072,35781816320): OK (131072) <1.301371> 2022.09.22 08:55:22.883285 [uid:0,gid:0,pid:0] read (148153116,131072,3570397184): OK (131072) <1.305064> 2022.09.22 08:55:23.362654 [uid:0,gid:0,pid:0] read (148153116,131072,100420673536): OK (131072) <1.264290> 2022.09.22 08:55:24.068733 [uid:0,gid:0,pid:0] read (148153116,131072,48602152960): OK (131072) <1.185206> 2022.09.22 08:55:25.351035 [uid:0,gid:0,pid:0] read (148153116,131072,60529270784): OK (131072) <1.282066> 2022.09.22 08:55:26.631518 [uid:0,gid:0,pid:0] read (148153116,131072,4255297536): OK (131072) <1.280236> 2022.09.22 08:55:27.724882 [uid:0,gid:0,pid:0] read (148153116,131072,715698176): OK (131072) <1.093108> 2022.09.22 08:55:31.049944 [uid:0,gid:0,pid:0] read (148153116,131072,8233349120): OK (131072) <1.020763> 2022.09.22 08:55:32.055613 [uid:0,gid:0,pid:0] read (148153116,131072,119523176448): OK (131072) <1.005430> 2022.09.22 08:55:32.056935 [uid:0,gid:0,pid:0] read (148153116,131072,44287774720): OK (131072) <0.001099> 2022.09.22 08:55:33.045164 [uid:0,gid:0,pid:0] read (148153116,131072,1323794432): OK (131072) <0.988074>" }, { "data": "08:55:36.502687 [uid:0,gid:0,pid:0] read (148153116,131072,47760637952): OK (131072) <1.184290> 2022.09.22 08:55:38.525879 [uid:0,gid:0,pid:0] read (148153116,131072,53434183680): OK (131072) <0.096732> ``` Studying the access log, it's easy to conclude that our application performs frequent random small reads on a very large file, notice how the offset (the third argument of `read`) jumps significantly between each read, this means consecutive reads are accessing very different parts of the large file, thus prefetched data blocks is not being effectively utilized (a block is 4MiB by default, an offset of 4194304 bytes), only causing read amplifications. In this situation, we can safely set `--prefetch` to 0, so that prefetch concurrency is zero, which is essentially disabled. Re-mount and our problem is solved. If JuiceFS Client takes up too much memory, you may choose to optimize memory usage using below methods, but note that memory optimization is not free, and each setting adjustment will bring corresponding overhead, please do sufficient testing and verification before adjustment. Read/Write buffer size (`--buffer-size`) directly correlate to JuiceFS Client memory usage, using a lower `--buffer-size` will effectively decrease memory usage, but please note that the reduction may also affect the read and write performance. Read more at . JuiceFS mount client is an Go program, which means you can decrease `GOGC` (default to 100, in percentage) to adopt a more active garbage collection. This inevitably increase CPU usage and may even directly hinder performance. Read more at . If you use self-hosted Ceph RADOS as the data storage of JuiceFS, consider replacing glibc with , the latter comes with more efficient memory management and may decrease off-heap memory footprint in this scenario. 
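For example, a memory-constrained client could combine a smaller buffer with a more aggressive Go GC target, as sketched below; both values are illustrative and will trade some throughput and CPU for a lower memory footprint:

```shell
GOGC=50 juicefs mount --buffer-size 64 redis://localhost:6379/1 /jfs
```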
If a file or directory are opened when you unmount JuiceFS, you'll see below errors, assuming JuiceFS is mounted on `/jfs`: ```shell umount: /jfs: target is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) Resource busy -- try 'diskutil unmount' ``` In such case: Locate the files being opened using commands like `lsof /jfs`, deal with these processes (like force quit), and retry. Force close the FUSE connection by `echo 1 > /sys/fs/fuse/connections/[device-number]/abort`, and then retry. You might need to find out the `[device-number]` using `lsof /jfs`, but if JuiceFS is the only FUSE mount point in the system, then `/sys/fs/fuse/connections` will contain only a single directory, no need to check further. If you just want to unmount ASAP, and do not care what happens to opened files, run `juicefs umount --force` to forcibly umount, note that behavior is different between Linux and macOS: For Linux, `juicefs umount --force` is translated to `umount --lazy`, file system will be detached, but opened files remain, FUSE client will exit when file descriptors are released. For macOS, `juicefs umount --force` is translated to `umount -f`, file system will be forcibly unmounted and opened files will be closed immediately. Compiling JuiceFS requires GCC 5.4 and above, this error may occur when using lower versions: ``` /go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1 /go/pkg/tool/linux_amd64/compile: signal: killed ``` If glibc version is different between build environment and runtime, you may see below error: ``` $ juicefs juicefs: /lib/aarch64-linux-gnu/libc.so.6: version 'GLIBC_2.28' not found (required by juicefs) ``` This requires you to re-compile JuiceFS Client in your runtime host environment. Most Linux distributions comes with glibc by default, you can check its version with `ldd --version`." } ]
{ "category": "Runtime", "file_name": "troubleshooting.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- toc --> - - - - <!-- /toc --> Starting with Antrea v1.8, Antrea can be installed and updated using . We provide the following Helm charts: `antrea/antrea`: the Antrea network plugin. `antrea/flow-aggregator`: the Antrea Flow Aggregator; see for more details. `antrea/theia`: Theia, the Antrea network observability solution; refer to the sub-project for more details. Note that these charts are the same charts that we use to generate the YAML manifests for the `kubectl apply` installation method. Ensure that the necessary for running Antrea are met. Ensure that Helm 3 is . We recommend using a recent version of Helm if possible. Refer to the [Helm documentation](https://helm.sh/docs/topics/version_skew/) for compatibility between Helm and Kubernetes versions. Add the Antrea Helm chart repository: ```bash helm repo add antrea https://charts.antrea.io helm repo update ``` To install the Antrea Helm chart, use the following command: ```bash helm install antrea antrea/antrea --namespace kube-system ``` This will install the latest available version of Antrea. You can also install a specific version of Antrea (>= v1.8.0) with `--version <TAG>`. To upgrade the Antrea Helm chart, use the following commands: ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-crds.yml helm upgrade antrea antrea/antrea --namespace kube-system --version <TAG> ``` Helm 3 introduces \"special treatment\" for , with the ability to place CRD definitions (as plain YAML, not templated) in a special crds/ directory. When CRDs are defined this way, they will be installed before other resources (in case these other resources include CRs corresponding to these CRDs). CRDs defined this way will also never be deleted (to avoid accidental deletion of user-defined CRs) and will also never be upgraded (in case the chart author didn't ensure that the upgrade was backwards-compatible). The rationale for all of this is described in details in this [Helm community document](https://github.com/helm/community/blob/main/hips/hip-0011.md). Even though Antrea follows a , which reduces the likelihood of a serious issue when upgrading Antrea, we have decided to follow Helm best practices when it comes to CRDs. It means that an extra step is required for upgrading the chart: ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-crds.yml ``` When upgrading CRDs in production, it is recommended to make a backup of your Custom Resources (CRs) first. The Flow Aggregator is on the same release schedule as Antrea. Please ensure that you use the same released version for the Flow Aggregator chart as for the Antrea chart. To install the Flow Aggregator Helm chart, use the following command: ```bash helm install flow-aggregator antrea/flow-aggregator --namespace flow-aggregator --create-namespace ``` This will install the latest available version of the Flow Aggregator. You can also install a specific version (>= v1.8.0) with `--version <TAG>`. To upgrade the Flow Aggregator Helm chart, use the following command: ```bash helm upgrade flow-aggregator antrea/flow-aggregator --namespace flow-aggregator --version <TAG> ``` Refer to the [Theia documentation](https://github.com/antrea-io/theia/blob/main/docs/getting-started.md)." } ]
{ "category": "Runtime", "file_name": "helm.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Name | Type | Description | Notes | - | - | - Id | Pointer to string | | [optional] `func NewVmRemoveDevice() *VmRemoveDevice` NewVmRemoveDevice instantiates a new VmRemoveDevice object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVmRemoveDeviceWithDefaults() *VmRemoveDevice` NewVmRemoveDeviceWithDefaults instantiates a new VmRemoveDevice object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VmRemoveDevice) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o VmRemoveDevice) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmRemoveDevice) SetId(v string)` SetId sets Id field to given value. `func (o *VmRemoveDevice) HasId() bool` HasId returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "VmRemoveDevice.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "All community contributions in this pull request are licensed to the project maintainers under the terms of the . By creating this pull request I represent that I have the right to license the contributions to the project maintainers under the Apache 2 license. [ ] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Optimization (provides speedup with no functional changes) [ ] Breaking change (fix or feature that would cause existing functionality to change) [ ] Fixes a regression (If yes, please add `commit-id` or `PR #` here) [ ] Unit tests added/updated [ ] Internal documentation updated" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "[TOC] gVisor implements a large portion of the Linux surface and while we strive to make it broadly compatible, there are (and always will be) unimplemented features and bugs. The only real way to know if it will work is to try. If you find a container that doesnt work and there is no known issue, please indicating the full command you used to run the image. You can view open issues related to compatibility . If you're able to provide the , the problem likely to be fixed much faster. The following applications/images have been tested: elasticsearch golang httpd java8 jenkins mariadb memcached mongo mysql nginx node php postgres prometheus python redis registry rust tomcat wordpress Most common utilities work. Note that: Some tools, such as `tcpdump` and old versions of `ping`, require explicitly enabling raw sockets via the unsafe `--net-raw` runsc flag. In case of tcpdump the following invocations will work tcpdump -i any tcpdump -i \\<device-name\\> -p (-p disables promiscuous mode) Different Docker images can behave differently. For example, Alpine Linux and Ubuntu have different `ip` binaries. Specific tools include: <!-- mdformat off(don't wrap the table) --> | Tool | Status | | :--: | :--: | | apt-get | Working. | | bundle | Working. | | cat | Working. | | curl | Working. | | dd | Working. | | df | Working. | | dig | Working. | | drill | Working. | | env | Working. | | find | Working. | | gcore | Working. | | gdb | Working. | | gosu | Working. | | grep | Working. | | ifconfig | Works partially, like ip. Full support . | | ip | Some subcommands work (e.g. addr, route). Full support . | | less | Working. | | ls | Working. | | lsof | Working. | | mount | Works in readonly mode. gVisor doesn't currently support creating new mounts at runtime. | | nc | Working. | | nmap | Not working. | | netstat | . | | nslookup | Working. | | ping | Working. | | ps | Working. | | route | Working. | | ss | . | | sshd | Partially working. Job control . | | strace | Working. | | tar | Working. | | tcpdump | Working , . | | top | Working. | | uptime | Working. | | vim | Working. | | wget | Working. | <!-- mdformat on -->" } ]
{ "category": "Runtime", "file_name": "compatibility.md", "project_name": "gVisor", "subcategory": "Container Runtime" }