[ { "data": "rkt can run multiple applications in a pod, under a supervising process and alongside with a sidecar service which takes care of multiplexing its I/O toward the outside world. Historically this has been done via systemd-journald only, meaning that all logging was handled via journald and interactive applications had to re-use a parent TTY. Starting from systemd v232, it is possible to connect a service streams to arbitrary socket units and let custom sidecar multiplex all the I/O. This document describes the architectural design for the current logging and attaching subsystem, which allows custom logging and attaching logic. In order to be able to attach or apply custom logging logic to applications, an appropriate runtime mode must be specified when adding/preparing an application inside a pod. This is done via stage0 CLI arguments (`--stdin`, `--stdout`, and `--stder`) which translate into per-application stage2 annotations. This mode results in the application having the corresponding stream attached to the parent terminal. For historical reasons and backward compatibility, this is a special mode activated via `--interactive` and only supports single-app pods. Interactive mode does not support attaching and ties the runtime to the lifetime of the parent terminal. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"interactive\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"interactive\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"interactive\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardInput=tty StandardOutput=tty StandardError=tty ... ``` No further sidecar dependencies are introduced in this case. This mode results in the application having the corresponding stream attached to a dedicated pseudo-terminal. This is different from the \"interactive\" mode because: it allocates a new pseudo-terminal accounted towards pod resources it supports external attaching/detaching it supports multiple applications running inside a single pod it does not tie the pod lifetime to the parent terminal one Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"tty\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"tty\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"tty\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] TTYPath=/rkt/iomux/<appname>/stage2-pts StandardInput=tty StandardOutput=tty StandardError=tty ... ``` A sidecar dependency to `ttymux@.service` is introduced in this case. Application has a `Wants=` and `After=` relationship to it. 
This mode results in the application having each of the corresponding streams separately handled by a muxing" }, { "data": "This is different from the \"interactive\" and \"tty\" modes because: it does not allocate any terminal for the application single streams can be separately handled it supports multiple applications running inside a single pod Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"stream\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"stream\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"stream\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardInput=fd Sockets=<appname>-stdin.socket StandardOutput=fd Sockets=<appname>-stdout.socket StandardError=fd Sockets=<appname>-stderr.socket ... ``` A sidecar dependency to `iomux@.service` is introduced in this case. Application has a `Wants=` and `Before=` relationship to it. Additional per-stream socket units are generated, as follows: ``` [Unit] Description=<stream> socket for <appname> DefaultDependencies=no StopWhenUnneeded=yes RefuseManualStart=yes RefuseManualStop=yes BindsTo=<appname>.service [Socket] RemoveOnStop=yes Service=<appname>.service FileDescriptorName=<stream> ListenFIFO=/rkt/iottymux/<appname>/stage2-<stream> ``` This mode results in the application having the corresponding stream attached to systemd-journald. This is the default mode for stdout/stderr, for historical reasons and backward compatibility. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"log\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"log\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardOutput=journal StandardError=journal ... ``` A sidecar dependency to `systemd-journald.service` is introduced in this case. Application has a `Wants=` and `After=` relationship to it. Logging is not a valid mode for stdin. This mode results in the application having the corresponding stream closed. This is the default mode for stdin, for historical reasons and backward compatibility. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"null\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"null\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"null\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardInput=null StandardOutput=null StandardError=null [...] ``` No further sidecar dependencies are introduced in this case. The following per-app annotations are defined for internal use, with the corresponding set of allowed values: `coreos.com/rkt/stage2/stdin` `interactive` `null` `stream` `tty` `coreos.com/rkt/stage2/stdout` `interactive` `log` `null` `stream` `tty` `coreos.com/rkt/stage2/stderr` `interactive` `log` `null` `stream` `tty` All the logging and attaching logic is handled by the stage1 `iottymux` binary. Each main application may additionally have a dedicated sidecar for I/O multiplexing, which proxies I/O to external clients over sockets. Sidecar state is persisted at `/rkt/iottymux/<appname>` while the main application is running. 
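The core job of the streaming-mode sidecar described above — reading an application's per-stream FIFO and exposing it on a socket that external clients can attach to — can be pictured with a small Go sketch. This is a deliberately stripped-down illustration, not the real iottymux: it handles a single stream and a single client, and the two paths simply mirror the socket unit and state directory mentioned above.

```go
package main

import (
	"io"
	"log"
	"net"
	"os"
)

// A stripped-down illustration of what an iomux-style sidecar does for one
// stream in streaming mode: read the application's stdout from the per-app
// FIFO created by the socket unit and expose it on a unix socket that an
// external client can attach to.
func main() {
	const (
		fifoPath = "/rkt/iottymux/alpine-sh/stage2-stdout" // ListenFIFO path from the socket unit above
		sockPath = "/rkt/iottymux/alpine-sh/sock-stdout"   // socket offered to attaching clients
	)

	fifo, err := os.Open(fifoPath) // blocks until the app opens the FIFO for writing
	if err != nil {
		log.Fatalf("open fifo: %v", err)
	}
	defer fifo.Close()

	ln, err := net.Listen("unix", sockPath)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	defer ln.Close()

	conn, err := ln.Accept() // single attached client, for brevity
	if err != nil {
		log.Fatalf("accept: %v", err)
	}
	defer conn.Close()

	if _, err := io.Copy(conn, fifo); err != nil {
		log.Printf("copy: %v", err)
	}
}
```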
`rkt attach` can auto-discover endpoints by reading the content of the status file located at" }, { "data": "This file provides a versioned JSON document, whose content varies depending on the I/O for the specific application. For example, an application with all streams available for attaching will have a status file similar to the following: ``` { \"version\": 1, \"targets\": [ { \"name\": \"stdin\", \"domain\": \"unix\", \"address\": \"/rkt/iottymux/alpine-sh/sock-stdin\" }, { \"name\": \"stdout\", \"domain\": \"unix\", \"address\": \"/rkt/iottymux/alpine-sh/sock-stdout\" }, { \"name\": \"stderr\", \"domain\": \"unix\", \"address\": \"/rkt/iottymux/alpine-sh/sock-stderr\" } ] } ``` Its `--mode=list` option just reads the file and prints it back to the user. `rkt attach --mode=auto` performs the auto-discovery mechanism described above, and then proceeds to attach stdin/stdout/stderr of the current process (itself) to all available corresponding endpoints. This is the default attaching mode. `rkt attach --mode=<stream>` performs the auto-discovery mechanism described above, and then proceeds to attach to the corresponding available endpoints. systemd-journald is the default output multiplexer for stdout/stderr in logging mode, for historical reasons and backward compatibility. Restrictions: it requires journalctl (or a similar libsystemd-based helper) to decode output entries; it requires libsystemd on the host compiled with LZ4 support; systemd-journald does not support distinguishing between entries from stdout and stderr. TODO(lucab): k8s logmode This is the standard systemd-journald service. It is the default output handler for the \"logging\" mode. iottymux is a multi-purpose stage1 binary. It currently serves the following purposes: multiplexing I/O over TTY (in TTY mode); multiplexing I/O from streams (in streaming mode); attaching to existing attachable applications (in TTY or streaming mode). This component takes care of multiplexing dedicated streams and receiving clients for attaching. It is started as an instance of the templated `iomux@.service` service by a `Before=` dependency from the application. Internally, it attaches to the available FIFOs and proxies them to separate sockets for external clients. It is implemented as a sub-action of the main `iottymux` binary and runs completely in stage1 context. This component takes care of multiplexing the TTY and receiving clients for attaching. It is started as an instance of the templated `ttymux@.service` service by an `After=` dependency from the application. Internally, it creates a pseudo-tty pair (whose slave is used by the main application) and proxies the master to a socket for external clients. It is implemented as a sub-action of the main `iottymux` binary and runs completely in stage1 context. This component takes care of discovering endpoints and attaching to them, both for TTY and streaming modes. It is invoked by the \"stage1\" attach entrypoint and runs completely in stage1 context. It is implemented as a sub-action of the main `iottymux` binary." } ]
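As a sketch of what the attach auto-discovery looks like from a client's point of view, the Go program below parses a version-1 status document like the one above and streams the stdout endpoint to its own stdout. The status-file path used here is a placeholder (the actual location is not spelled out above), and the real `rkt attach` also wires up stdin/stderr and handles TTY mode.

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net"
	"os"
)

// statusFile mirrors the version-1 endpoint document shown above.
type statusFile struct {
	Version int `json:"version"`
	Targets []struct {
		Name    string `json:"name"`
		Domain  string `json:"domain"`
		Address string `json:"address"`
	} `json:"targets"`
}

func main() {
	// Placeholder path: the real status-file location is the one referenced above.
	raw, err := os.ReadFile("/rkt/iottymux/alpine-sh/endpoints")
	if err != nil {
		log.Fatal(err)
	}
	var st statusFile
	if err := json.Unmarshal(raw, &st); err != nil {
		log.Fatal(err)
	}
	for _, t := range st.Targets {
		if t.Name != "stdout" || t.Domain != "unix" {
			continue
		}
		conn, err := net.Dial(t.Domain, t.Address) // attach to the stdout socket
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		if _, err := io.Copy(os.Stdout, conn); err != nil {
			log.Fatal(err)
		}
	}
}
```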
{ "category": "Runtime", "file_name": "log-attach-design.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "`ScaleWorkload` function can be used to scale a workload to specified replicas. It automatically sets the original replica count of the workload as output artifact, which makes using `ScaleWorkload` function in blueprints a lot easier. Below is an example of how this function can be used ``` yaml apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: my-blueprint actions: backup: outputArtifacts: backupOutput: keyValue: origReplicas: \"{{ .Phases.shutdownPod.Output.originalReplicaCount }}\" phases: func: ScaleWorkload name: shutdownPod args: namespace: \"{{ .StatefulSet.Namespace }}\" name: \"{{ .StatefulSet.Name }}\" kind: StatefulSet replicas: 0 # this is the replica count, the STS will scaled to restore: inputArtifactNames: backupOutput phases: func: ScaleWorkload name: bringUpPod args: namespace: \"{{ .StatefulSet.Namespace }}\" name: \"{{ .StatefulSet.Name }}\" kind: StatefulSet replicas: \"{{ .ArtifactsIn.backupOutput.KeyValue.origReplicas }}\" ```" } ]
{ "category": "Runtime", "file_name": "scaleworkload.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "CURVE client serves as the entrance of the services provided by CURVE, providing dynamic link library for QEMU/CURVE-NBD. As a result, restarting QEMU/CURVE-NBD is necessary when CURVE Client needs to be updated. In order to relief the impact of updating on applications based on CURVE, we decoupled CURVE client and its applications, and imported the hot upgrade module NEBD in between. <p align=\"center\"> <img src=\"../images/nebd-overview.jpg\" alt=\"nebd-overview\" width=\"650\" /><br> <font size=3> Figure 1: NEBD structure</font> </p> Figure1 shows the deployment structure of NEBD. NEBD Client (part1 in source code directory): NEBD Client corresponds to applications based on CURVE, including QEMU and CURVE-NBD. An NEBD client connects to specified NEBD server through Unix Domain Socket. NEBD Server(part2 in source code directory)NEBD Server is responsible for receiving the requests from part1, then call CURVE client for corresponding operations. An NEBD server can receive requests from different NEBD clients. Also, figure 1 shows that instead of CURVE client, NEBD client is now the component that serves the application above. In this case, the applications will still be influenced when NEBD client is being upgraded. So in our design, we simplified the business logic of NEBD client as much as possible, which means it will only be responsible for request forwarding and limited retries for requests if needed. There are few steps for NEBD server/CURVE client's upgrade: Install the latest version of CURVE client/NEBD server Stop running processes of part2 Restart processes of part2 In our practice, we use daemon to monitor the processes of part2, and start them if not exist. Also, notice that from the stop of process of part2 to the start of the new one, only 1 to 5 seconds are required in our test and production environment. <p align=\"center\"> <img src=\"../images/nebd-modules.png\" alt=\"nebd-modules\" width=\"500\" /><br> <font size=3> Figure 2: Structure of each module</font> </p> Figure 2 show the components of NEBD client and NEBD server. libnebd: API interface for upper level applications, including open/close and read/write. File Client: The implementations of libnebd interface, send users' requests to NEBD server. MetaCache Manager: Record information of files already opened" }, { "data": "Heartbeat Client: Sent regular heartbeat carrying opened file info to NEBD server. File Service: Receive and deal with file requests from NEBD client. Heartbeat Service: Receive and process heartbeat from NEBD client. File Manager: Manage opened files on NEBD server. IO Executor: Responsible for the actual execution of file requests, which calls the interface of CURVE client and send requests to storage clusters. Metafile Manager: Manage metadata files, and also responsible for metadata persistence, or load persistent data from files. Retry policy of part1: As what we've mentioned above, part1 only execute limited retries, and this characteristic can be reflected in two aspects: There's no time out for RPC requests from part1. Part1 only executes retries for errors of RPC requests themselves, and forward error codes returned by RPC to upper level directly. Use Write request as an example, and figure3 is the flow chart of the request: Forward Write request from upper level to part2 through RPC requests, and wait without setting the time out. If the RPC requests return successfully, return to upper level corresponding to the RPC response. 
If disconnection occurs or it is unable to connect, wait for a while and retry. <p align=\"center\"> <img src=\"../images/nebd-part1-write-request-en.png\" alt=\"images\\nebd-part1-write-request-en\" width=\"400\" /><br> <font size=3> Figure 3: Flow chart of write request sent by NEBD client</font> </p> Other requests follow a similar procedure to the write request. Heartbeat management: In order to avoid upper level applications exiting without closing the files they opened, part2 checks the heartbeat status of files (opened file info is reported by part1 through regular heartbeats), and closes the files whose last heartbeat time has exceeded the threshold. The difference between closing files with a timed-out heartbeat and files that upper level applications requested to close is that the time-out closing does not remove the file info from the metafile. There is also a case in which an upper level application is suspended somehow and recovers later; this, too, causes a heartbeat time out, and the corresponding files are therefore closed. Thus, when part2 receives requests from part1, it first checks whether the metafile owns records for the current files. If it does and the corresponding files are in closed status, part2 will first open the corresponding files and then execute the following requests." } ]
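The retry policy described above — retry only transport-level failures, forward part2's return codes untouched, and set no RPC timeout — can be summarized with a short sketch. NEBD itself is not written in Go; the snippet below is purely illustrative, and the `writeRPC` stub stands in for the real part1-to-part2 RPC call.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// writeRPC stands in for the part1->part2 RPC stub; everything here is
// illustrative, not the real NEBD client code.
type writeRPC func(ctx context.Context) (retCode int, err error)

// forwardWrite sketches part1's limited-retry policy: transport errors
// (disconnection, failure to connect) are retried after a short wait, while
// any response from part2 -- including error return codes -- is forwarded
// to the caller unchanged. No RPC timeout is set.
func forwardWrite(ctx context.Context, call writeRPC) (int, error) {
	for {
		ret, err := call(ctx)
		if err == nil {
			return ret, nil // pass part2's return code straight up, no retry
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(time.Second): // wait for a while, then retry
		}
	}
}

func main() {
	attempts := 0
	ret, err := forwardWrite(context.Background(), func(ctx context.Context) (int, error) {
		attempts++
		if attempts < 3 {
			return 0, errors.New("connection refused") // transport failure: retried
		}
		return 0, nil // part2 answered; its code is returned as-is
	})
	fmt.Println(ret, err, attempts)
}
```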
{ "category": "Runtime", "file_name": "nebd_en.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "All notable changes to this project will be documented in this file. This project adheres to . Enhancements: : Add `WithLazy` method for `SugaredLogger`. : zaptest: Add `NewTestingWriter` for customizing TestingWriter with more flexibility than `NewLogger`. : Add `Log`, `Logw`, `Logln` methods for `SugaredLogger`. : Add `WithPanicHook` option for testing panic logs. Thanks to @defval, @dimmo, @arxeiss, and @MKrupauskas for their contributions to this release. Enhancements: : Add Dict as a Field. : Add `WithLazy` method to `Logger` which lazily evaluates the structured context. : String encoding is much (~50%) faster now. Thanks to @hhk7734, @jquirke, and @cdvr1993 for their contributions to this release. This release contains several improvements including performance, API additions, and two new experimental packages whose APIs are unstable and may change in the future. Enhancements: : Add `zap/exp/zapslog` package for integration with slog. : Add `Name` to `Logger` which returns the Logger's name if one is set. : Add `zap/exp/expfield` package which contains helper methods `Str` and `Strs` for constructing String-like zap.Fields. : Reduce stack size on `Any`. Thanks to @knight42, @dzakaammar, @bcspragu, and @rexywork for their contributions to this release. Enhancements: : Add `Level` to both `Logger` and `SugaredLogger` that reports the current minimum enabled log level. : `SugaredLogger` turns errors to zap.Error automatically. Thanks to @Abirdcfly, @craigpastro, @nnnkkk7, and @sashamelentyev for their contributions to this release. Enhancements: : Add a `zapcore.LevelOf` function to determine the level of a `LevelEnabler` or `Core`. : Add `zap.Stringers` field constructor to log arrays of objects that implement `String() string`. Enhancements: : Add `zap.Objects` and `zap.ObjectValues` field constructors to log arrays of objects. With these two constructors, you don't need to implement `zapcore.ArrayMarshaler` for use with `zap.Array` if those objects implement `zapcore.ObjectMarshaler`. : Add `SugaredLogger.WithOptions` to build a copy of an existing `SugaredLogger` with the provided options applied. : Add `ln` variants to `SugaredLogger` for each log level. These functions provide a string joining behavior similar to `fmt.Println`. : Add `zap.WithFatalHook` option to control the behavior of the logger for `Fatal`-level log entries. This defaults to exiting the program. : Add a `zap.Must` function that you can use with `NewProduction` or `NewDevelopment` to panic if the system was unable to build the logger. : Add a `Logger.Log` method that allows specifying the log level for a statement dynamically. Thanks to @cardil, @craigpastro, @sashamelentyev, @shota3506, and @zhupeijun for their contributions to this release. Enhancements: : Add `zapcore.ParseLevel` to parse a `Level` from a string. : Add `zap.ParseAtomicLevel` to parse an `AtomicLevel` from a string. Bugfixes: : Fix panic in JSON encoder when `EncodeLevel` is unset. Other changes: : Improve encoding performance when the `AddCaller` and `AddStacktrace` options are used together. Thanks to @aerosol and @Techassi for their contributions to this release. Enhancements: : Add `EncoderConfig.SkipLineEnding` flag to disable adding newline characters between log statements. : Add `EncoderConfig.NewReflectedEncoder` field to customize JSON encoding of reflected log fields. Bugfixes: : Fix inaccurate precision when encoding complex64 as JSON. 
, : Close JSON namespaces opened in `MarshalLogObject` methods when the methods return. : Avoid panicking in Sampler core if `thereafter` is zero. Other changes: : Drop support for Go < 1.15. Thanks to @psrajat, @lruggieri, @sammyrnycreal for their contributions to this release. Bugfixes: : JSON: Fix complex number encoding with negative imaginary part. Thanks to @hemantjadon. : JSON: Fix inaccurate precision when encoding float32. Enhancements: : Avoid panicking in Sampler core if the level is out of bounds. : Reduce the size of BufferedWriteSyncer by aligning the fields" }, { "data": "Thanks to @lancoLiu and @thockin for their contributions to this release. Bugfixes: : Fix nil dereference in logger constructed by `zap.NewNop`. Enhancements: : Add `zapcore.BufferedWriteSyncer`, a new `WriteSyncer` that buffers messages in-memory and flushes them periodically. : Add `zapio.Writer` to use a Zap logger as an `io.Writer`. : Add `zap.WithClock` option to control the source of time via the new `zapcore.Clock` interface. : Avoid panicking in `zap.SugaredLogger` when arguments of `w` methods don't match expectations. : Add support for filtering by level or arbitrary matcher function to `zaptest/observer`. : Comply with `io.StringWriter` and `io.ByteWriter` in Zap's `buffer.Buffer`. Thanks to @atrn0, @ernado, @heyanfu, @hnlq715, @zchee for their contributions to this release. Bugfixes: : Encode `<nil>` for nil `error` instead of a panic. , : Update minimum version constraints to address vulnerabilities in dependencies. Enhancements: : Improve alignment of fields of the Logger struct, reducing its size from 96 to 80 bytes. : Support `grpclog.LoggerV2` in zapgrpc. : Support URL-encoded POST requests to the AtomicLevel HTTP handler with the `application/x-www-form-urlencoded` content type. : Support multi-field encoding with `zap.Inline`. : Speed up SugaredLogger for calls with a single string. : Add support for filtering by field name to `zaptest/observer`. Thanks to @ash2k, @FMLS, @jimmystewpot, @Oncilla, @tsoslow, @tylitianrui, @withshubh, and @wziww for their contributions to this release. Bugfixes: : Fix missing newline in IncreaseLevel error messages. : Fix panic in JSON encoder when encoding times or durations without specifying a time or duration encoder. : Honor CallerSkip when taking stack traces. : Fix the default file permissions to use `0666` and rely on the umask instead. : Encode `<nil>` for nil `Stringer` instead of a panic error log. Enhancements: : Added `zapcore.TimeEncoderOfLayout` to easily create time encoders for custom layouts. : Added support for a configurable delimiter in the console encoder. : Optimize console encoder by pooling the underlying JSON encoder. : Add ability to include the calling function as part of logs. : Add `StackSkip` for including truncated stacks as a field. : Add options to customize Fatal behaviour for better testability. Thanks to @SteelPhase, @tmshn, @lixingwang, @wyxloading, @moul, @segevfiner, @andy-retailnext and @jcorbin for their contributions to this release. Bugfixes: : Fix handling of `Time` values out of `UnixNano` range. : Fix `IncreaseLevel` being reset after a call to `With`. Enhancements: : Add `WithCaller` option to supersede the `AddCaller` option. This allows disabling annotation of log entries with caller information if previously enabled with `AddCaller`. : Deprecate `NewSampler` constructor in favor of `NewSamplerWithOptions` which supports a `SamplerHook` option. 
This option adds support for monitoring sampling decisions through a hook. Thanks to @danielbprice for their contributions to this release. Bugfixes: : Fix panic on attempting to build a logger with an invalid Config. : Vendoring Zap with `go mod vendor` no longer includes Zap's development-time dependencies. : Fix issue introduced in 1.14.0 that caused invalid JSON output to be generated for arrays of `time.Time` objects when using string-based time formats. Thanks to @YashishDua for their contributions to this release. Enhancements: : Optimize calls for disabled log levels. : Add millisecond duration encoder. : Add option to increase the level of a logger. : Optimize time formatters using `Time.AppendFormat` where possible. Thanks to @caibirdme for their contributions to this release. Enhancements: : Add `Intp`, `Stringp`, and other similar `p` field constructors to log pointers to primitives with support for `nil` values. Thanks to @jbizzle for their contributions to this release. Enhancements: : Migrate to Go modules. Enhancements: : Add `zapcore.OmitKey` to omit keys in an `EncoderConfig`. : Add `RFC3339` and `RFC3339Nano` time" }, { "data": "Thanks to @juicemia, @uhthomas for their contributions to this release. Bugfixes: : Fix `MapObjectEncoder.AppendByteString` not adding value as a string. : Fix incorrect call depth to determine caller in Go 1.12. Enhancements: : Add `zaptest.WrapOptions` to wrap `zap.Option` for creating test loggers. : Don't panic when encoding a String field. : Disable HTML escaping for JSON objects encoded using the reflect-based encoder. Thanks to @iaroslav-ciupin, @lelenanam, @joa, @NWilson for their contributions to this release. Bugfixes: : MapObjectEncoder should not ignore empty slices. Enhancements: : Reduce number of allocations when logging with reflection. , : Expose a registry for third-party logging sinks. Thanks to @nfarah86, @AlekSi, @JeanMertz, @philippgille, @etsangsplk, and @dimroc for their contributions to this release. Enhancements: : Make log level configurable when redirecting the standard library's logger. : Add a logger that writes to a `testing.TB`. : Add a top-level alias for `zapcore.Field` to clean up GoDoc. Bugfixes: : Add a missing import comment to `go.uber.org/zap/buffer`. Thanks to @DiSiqueira and @djui for their contributions to this release. Bugfixes: : Store strings when using AddByteString with the map encoder. Enhancements: : Add `NewStdLogAt`, which extends `NewStdLog` by allowing the user to specify the level of the logged messages. Enhancements: : Omit zap stack frames from stacktraces. : Add a `ContextMap` method to observer logs for simpler field validation in tests. Enhancements: and : Support errors produced by `go.uber.org/multierr`. : Support user-supplied encoders for logger names. Bugfixes: : Fix a bug that incorrectly truncated deep stacktraces. Thanks to @richard-tunein and @pavius for their contributions to this release. This release fixes two bugs. Bugfixes: : Support a variety of case conventions when unmarshaling levels. : Fix a panic in the observer. This release adds a few small features and is fully backward-compatible. Enhancements: : Add a `LineEnding` field to `EncoderConfig`, allowing users to override the Unix-style default. : Preserve time zones when logging times. : Make `zap.AtomicLevel` implement `fmt.Stringer`, which makes a variety of operations a bit simpler. This release adds an enhancement to zap's testing helpers as well as the ability to marshal an AtomicLevel. 
It is fully backward-compatible. Enhancements: : Add a substring-filtering helper to zap's observer. This is particularly useful when testing the `SugaredLogger`. : Make `AtomicLevel` implement `encoding.TextMarshaler`. This release adds a gRPC compatibility wrapper. It is fully backward-compatible. Enhancements: : Add a `zapgrpc` package that wraps zap's Logger and implements `grpclog.Logger`. This release fixes two bugs and adds some enhancements to zap's testing helpers. It is fully backward-compatible. Bugfixes: : Fix caller path trimming on Windows. : Fix a panic when attempting to use non-existent directories with zap's configuration struct. Enhancements: : Add filtering helpers to zaptest's observing logger. Thanks to @moitias for contributing to this release. This is zap's first stable release. All exported APIs are now final, and no further breaking changes will be made in the 1.x release series. Anyone using a semver-aware dependency manager should now pin to `^1`. Breaking changes: : Add byte-oriented APIs to encoders to log UTF-8 encoded text without casting from `[]byte` to `string`. : To support buffering outputs, add `Sync` methods to `zapcore.Core`, `zap.Logger`, and `zap.SugaredLogger`. : Rename the `testutils` package to `zaptest`, which is less likely to clash with other testing helpers. Bugfixes: : Make the ISO8601 time formatters fixed-width, which is friendlier for tab-separated console output. : Remove the automatic locks in `zapcore.NewCore`, which allows zap to work with concurrency-safe `WriteSyncer` implementations. : Stop reporting errors when trying to `fsync` standard out on Linux systems. : Report the correct caller from zap's standard library interoperability" }, { "data": "Enhancements: : Add a registry allowing third-party encodings to work with zap's built-in `Config`. : Make the representation of logger callers configurable (like times, levels, and durations). : Allow third-party encoders to use their own buffer pools, which removes the last performance advantage that zap's encoders have over plugins. : Add `CombineWriteSyncers`, a convenience function to tee multiple `WriteSyncer`s and lock the result. : Make zap's stacktraces compatible with mid-stack inlining (coming in Go 1.9). : Export zap's observing logger as `zaptest/observer`. This makes it easier for particularly punctilious users to unit test their application's logging. Thanks to @suyash, @htrendev, @flisky, @Ulexus, and @skipor for their contributions to this release. This is the third release candidate for zap's stable release. There are no breaking changes. Bugfixes: : Byte slices passed to `zap.Any` are now correctly treated as binary blobs rather than `[]uint8`. Enhancements: : Users can opt into colored output for log levels. : In addition to hijacking the output of the standard library's package-global logging functions, users can now construct a zap-backed `log.Logger` instance. : Frames from common runtime functions and some of zap's internal machinery are now omitted from stacktraces. Thanks to @ansel1 and @suyash for their contributions to this release. This is the second release candidate for zap's stable release. It includes two breaking changes. Breaking changes: : Zap's global loggers are now fully concurrency-safe (previously, users had to ensure that `ReplaceGlobals` was called before the loggers were in use). However, they must now be accessed via the `L()` and `S()` functions. Users can update their projects with ``` gofmt -r \"zap.L -> zap.L()\" -w . 
gofmt -r \"zap.S -> zap.S()\" -w . ``` and : RC1 was mistakenly shipped with invalid JSON and YAML struct tags on all config structs. This release fixes the tags and adds static analysis to prevent similar bugs in the future. Bugfixes: : Redirecting the standard library's `log` output now correctly reports the logger's caller. Enhancements: and : Zap now transparently supports non-standard, rich errors like those produced by `github.com/pkg/errors`. : Though `New(nil)` continues to return a no-op logger, `NewNop()` is now preferred. Users can update their projects with `gofmt -r 'zap.New(nil) -> zap.NewNop()' -w .`. : Incorrectly importing zap as `github.com/uber-go/zap` now returns a more informative error. Thanks to @skipor and @chapsuk for their contributions to this release. This is the first release candidate for zap's stable release. There are multiple breaking changes and improvements from the pre-release version. Most notably: Zap's import path is now \"go.uber.org/zap\"* &mdash; all users will need to update their code. User-facing types and functions remain in the `zap` package. Code relevant largely to extension authors is now in the `zapcore` package. The `zapcore.Core` type makes it easy for third-party packages to use zap's internals but provide a different user-facing API. `Logger` is now a concrete type instead of an interface. A less verbose (though slower) logging API is included by default. Package-global loggers `L` and `S` are included. A human-friendly console encoder is included. A declarative config struct allows common logger configurations to be managed as configuration instead of code. Sampling is more accurate, and doesn't depend on the standard library's shared timer heap. This is a minor version, tagged to allow users to pin to the pre-1.0 APIs and upgrade at their leisure. Since this is the first tagged release, there are no backward compatibility concerns and all functionality is new. Early zap adopters should pin to the 0.1.x minor version until they're ready to upgrade to the upcoming stable release." } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This is a small experiment to have a wrapper CLI which can call both API functions as well as debug CLI. To facilitate tab completion and help, the API call names are broken up with spaces replacing the underscores." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "FD.io", "subcategory": "Cloud Native Network" }
[ { "data": "Microsoft Azure AWS Alibaba VMWare Netflix Hashi Corp Admiralty Elotl Tencent Games Since end-users are specific per provider within VK we have many end-user customers that we don't have permission to list publically. Please contact ribhatia@microsoft.com for more informtation. Are you currently using Virtual Kubelet in production? Please let us know by adding your company name and a description of your use case to this document!" } ]
{ "category": "Runtime", "file_name": "ADOPTERS.md", "project_name": "Virtual Kubelet", "subcategory": "Container Runtime" }
[ { "data": "Firecracker uses a to standardize the build process. This also fixes the build tools and dependencies to specific versions. Every once in a while, something needs to be updated. To do this, a new container image needs to be built locally, then published to the registry. The Firecracker CI suite must also be updated to use the new image. Access to the . The `docker` package installed locally. You should already have this if you've ever built Firecracker from source. Access to both an `x86_64` and `aarch64` machines to build the container images. Ensure `aws --version` is >=1.17.10. This step is optional but recommended, to be on top of Python package changes. ```sh ./tools/devtool shell --privileged poetry update --lock --directory tools/devctr/ ``` This will change `poetry.lock`, which you can commit with your changes. Login to the Docker organization in a shell. Make sure that your account has access to the repository: ```bash aws ecr-public get-login-password --region us-east-1 \\ | docker login --username AWS --password-stdin public.ecr.aws ``` For non-TTY devices, although not recommended a less secure approach can be used: ```bash docker login --username AWS --password \\ $(aws ecr-public get-login-password --region us-east-1) public.ecr.aws ``` Navigate to the Firecracker directory. Verify that you have the latest container image locally. ```bash docker images REPOSITORY TAG IMAGE ID CREATED SIZE public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a 2 weeks ago 2.41GB ``` Make your necessary changes, if any, to the . There's one for all the architectures in the Firecracker source tree. Commit the changes, if any. Build a new container image with the updated Dockerfile. ```bash tools/devtool build_devctr ``` Verify that the new image exists. ```bash docker images REPOSITORY TAG IMAGE ID CREATED SIZE public.ecr.aws/firecracker/fcuvm latest 1f9852368efb 2 weeks ago 2.36GB public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a 2 weeks ago 2.41GB ``` Tag the new image with the next available version `X` and the architecture you're on. Note that this will not always be \"current version in devtool + 1\", as sometimes that version might already be used on feature branches. Always check the \"Image Tags\" on to make sure you do not accidentally overwrite an existing image. As a sanity check, run: ```bash docker pull public.ecr.aws/firecracker/fcuvm:vX ``` and verify that you get an error message along the lines of ``` Error response from daemon: manifest for public.ecr.aws/firecracker/fcuvm:vX not found: manifest unknown: Requested image not found ``` This means the version you've chosen does not exist yet, and you are good to go. ```bash docker tag 1f9852368efb public.ecr.aws/firecracker/fcuvm:v27x8664 docker images REPOSITORY TAG IMAGE ID CREATED public.ecr.aws/firecracker/fcuvm latest 1f9852368efb 1 week ago public.ecr.aws/firecracker/fcuvm v27x8664 1f9852368efb 1 week ago public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a 2 weeks ago ``` Push the image. ```bash docker push public.ecr.aws/firecracker/fcuvm:v27x8664 ``` Login to the `aarch64` build machine. Steps 1-4 are identical across architectures, change `x86_64` to `aarch64`. Then continue with the above steps: Build a new container image with the updated Dockerfile. ```bash tools/devtool build_devctr ``` Verify that the new image exists. 
```bash docker images REPOSITORY TAG IMAGE ID CREATED public.ecr.aws/firecracker/fcuvm latest 1f9852368efb 2 minutes ago" }, { "data": "v26 8d00deb17f7a 2 weeks ago ``` Tag the new image with the next available version `X` and the architecture you're on. Note that this will not always be \"current version in devtool + 1\", as sometimes that version might already be used on feature branches. Always check the \"Image Tags\" on to make sure you do not accidentally overwrite an existing image. As a sanity check, run: ```bash docker pull public.ecr.aws/firecracker/fcuvm:vX ``` and verify that you get an error message along the lines of ``` Error response from daemon: manifest for public.ecr.aws/firecracker/fcuvm:vX not found: manifest unknown: Requested image not found ``` This means the version you've chosen does not exist yet, and you are good to go. ```bash docker tag 1f9852368efb public.ecr.aws/firecracker/fcuvm:v27_aarch64 docker images REPOSITORY TAG IMAGE ID public.ecr.aws/firecracker/fcuvm latest 1f9852368efb public.ecr.aws/firecracker/fcuvm v27_aarch64 1f9852368efb public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a ``` Push the image. ```bash docker push public.ecr.aws/firecracker/fcuvm:v27_aarch64 ``` Create a manifest to point the latest container version to each specialized image, per architecture. ```bash docker manifest create public.ecr.aws/firecracker/fcuvm:v27 \\ public.ecr.aws/firecracker/fcuvm:v27x8664 public.ecr.aws/firecracker/fcuvm:v27_aarch64 docker manifest push public.ecr.aws/firecracker/fcuvm:v27 ``` Update the image tag in the . Commit and push the change. ```bash PREV_TAG=v26 CURR_TAG=v27 sed -i \"s%DEVCTRIMAGETAG=\\\"$PREVTAG\\\"%DEVCTRIMAGETAG=\\\"$CURRTAG\\\"%\" tools/devtool ``` Check out the for additional troubleshooting steps and guidelines. ```bash docker manifest is only supported when experimental cli features are enabled ``` See for explanations and fix. Either fetch and run it locally on another machine than the one you used to build it, or clean up any artifacts from the build machine and fetch. ```bash docker system prune -a docker images REPOSITORY TAG IMAGE ID CREATED SIZE tools/devtool shell [Firecracker devtool] About to pull docker image public.ecr.aws/firecracker/fcuvm:v15 [Firecracker devtool] Continue? ``` ```bash docker push public.ecr.aws/firecracker/fcuvm:v27 The push refers to repository [public.ecr.aws/firecracker/fcuvm] e2b5ee0c4e6b: Preparing 0fbb5fd5f156: Preparing ... a1aa3da2a80a: Waiting denied: requested access to the resource is denied ``` Only a Firecracker maintainer can update the container image. If you are one, ask a member of the team to add you to the AWS ECR repository and retry. Tags can be deleted from the . Also, pushing the same tag twice will overwrite the initial content. If you see unrelated `Python` errors, it's likely because the dev container pulls `Python 3` at build time. `Python 3` means different minor versions on different platforms, and is not backwards compatible. So it's entirely possible that `docker build` has pulled in unwanted `Python` dependencies. To include only your changes, an alternative to the method described above is to make the changes inside the container, instead of in the `Dockerfile`. Let's say you want to update (random example). Enter the container as `root`. ```bash tools/devtool shell -p ``` Make the changes locally. Do not exit the container. ```bash cargo install cargo-audit --force ``` Find your running container. 
```bash docker ps CONTAINER ID IMAGE COMMAND CREATED e9f0487fdcb9 fcuvm:v14 \"bash\" 53 seconds ago ``` Commit the modified container to a new image. Use the `container ID`. ```bash docker commit e9f0487fdcb9 fcuvm:v15x8664 ``` ```bash docker image ls REPOSITORY TAG IMAGE ID CREATED fcuvm v15x8664 514581e654a6 18 seconds ago fcuvm v14 c8581789ead3 2 months ago ``` Repeat for `aarch64`. Create and push the manifest." } ]
{ "category": "Runtime", "file_name": "devctr-image.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage compiled BPF template objects ``` -h, --help help for sha ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Get datapath SHA header - List BPF template objects." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_sha.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "To simplify the communication between velero users and developers, this document proposes the `velero debug` command to generate a tarball including the logs needed for debugging. Github issue: https://github.com/vmware-tanzu/velero/issues/675 Gathering information to troubleshoot a Velero deployment is currently spread across multiple commands, and is not very efficient. Logs for the Velero server itself are accessed via a kubectl logs command, while information on specific backups or restores are accessed via a Velero subcommand. Restic logs are even more complicated to retrieve, since one must gather logs for every instance of the daemonset, and theres currently no good mechanism to locate which node a particular restic backup ran against. A dedicated subcommand can lower this effort and reduce back-and-forth between user and developer for collecting the logs. Enable efficient log collection for Velero and associated components, like plugins and restic. Collecting logs for components that do not belong to velero such as storage service. Automated log analysis. With the introduction of the new command `velero debug`, the command would download all of the following information: velero deployment logs restic DaemonSet logs plugin logs All the resources in the group `velero.io` that are created such as: Backup Restore BackupStorageLocation PodVolumeBackup PodVolumeRestore etc ... Log of the backup and restore, if specified in the param A project called `crash-diagnostics` (or `crashd`) (https://github.com/vmware-tanzu/crash-diagnostics) implements the Kubernetes API queries and provides Starlark scripting language to abstract details, and collect the information into a local copy. It can be used as a standalone CLI executing a Starlark script file. With the capabilities of embedding files in Go 1.16, we can define a Starlark script gathering the necessary information, embed the script at build time, then the velero debug command will invoke `crashd`, passing in the scripts text contents. The Starlark script to be called by crashd: ```python def capturebackuplogs(cmd, namespace): if args.backup: log(\"Collecting log and information for backup: {}\".format(args.backup)) backupDescCmd = \"{} --namespace={} backup describe {} --details\".format(cmd, namespace, args.backup) capturelocal(cmd=backupDescCmd, filename=\"backupdescribe{}.txt\".format(args.backup)) backupLogsCmd = \"{} --namespace={} backup logs {}\".format(cmd, namespace, args.backup) capturelocal(cmd=backupLogsCmd, filename=\"backup_{}.log\".format(args.backup)) def capturerestorelogs(cmd, namespace): if args.restore: log(\"Collecting log and information for restore: {}\".format(args.restore)) restoreDescCmd = \"{} --namespace={} restore describe {} --details\".format(cmd, namespace, args.restore) capturelocal(cmd=restoreDescCmd, filename=\"restoredescribe{}.txt\".format(args.restore)) restoreLogsCmd = \"{} --namespace={} restore logs {}\".format(cmd, namespace, args.restore) capturelocal(cmd=restoreLogsCmd, filename=\"restore_{}.log\".format(args.restore)) ns = args.namespace if args.namespace else \"velero\" output = args.output if args.output else \"bundle.tar.gz\" cmd = args.cmd if args.cmd else \"velero\" crshd = crashd_config(workdir=\"./velero-bundle\") setdefaults(kubeconfig(path=args.kubeconfig, cluster_context=args.kubecontext)) log(\"Collecting velero resources in namespace: {}\". 
format(ns)) kube_capture(what=\"objects\", namespaces=[ns], groups=['velero.io']) capturelocal(cmd=\"{} version -n {}\".format(cmd, ns), filename=\"version.txt\") log(\"Collecting velero deployment logs in namespace: {}\". format(ns)) kube_capture(what=\"logs\", namespaces=[ns]) capturebackuplogs(cmd, ns) capturerestorelogs(cmd, ns) archive(outputfile=output," }, { "data": "log(\"Generated debug information bundle: {}\".format(output)) ``` The sample command to trigger the script via crashd: ```shell ./crashd run ./velero.cshd --args 'backup=harbor-backup-2nd,namespace=velero,basedir=,restore=,kubeconfig=/home/.kube/minikube-250-224/config,output=' ``` To trigger the script in `velero debug`, in the package `pkg/cmd/cli/debug` a struct `option` will be introduced ```go type option struct { // currCmd the velero command currCmd string // workdir for crashd will be $baseDir/velero-debug baseDir string // the namespace where velero server is installed namespace string // the absolute path for the log bundle to be generated outputPath string // the absolute path for the kubeconfig file that will be read by crashd for calling K8S API kubeconfigPath string // the kubecontext to be used for calling K8S API kubeContext string // optional, the name of the backup resource whose log will be packaged into the debug bundle backup string // optional, the name of the restore resource whose log will be packaged into the debug bundle restore string // optional, it controls whether to print the debug log messages when calling crashd verbose bool } ``` The code will consolidate the input parameters and execution context of the `velero` CLI to form the option struct, which can be transformed into the `argsMap` that can be used when calling the func `exec.Execute` in `crashd`: https://github.com/vmware-tanzu/crash-diagnostics/blob/v0.3.4/exec/executor.go#L17 The collection could be done via the Kubernetes client-go API, but such integration is not necessarily trivial to implement, therefore, `crashd` is preferred approach The starlark script will be embedded into the velero binary, and the byte slice will be passed to the `exec.Execute` func directly, so theres little risk that the script will be modified before being executed. As the `crashd` project evolves the behavior of the internal functions used in the Starlark script may change. Well ensure the correctness of the script via regular E2E tests. Bump up to use Go v1.16 to compile velero Embed the starlark script Implement the `velero debug` sub-command to call the script Add E2E test case Command dependencies: In the Starlark script, for collecting version info and backup logs, it calls the `velero backup logs` and `velero version`, which makes the call stack like velero debug -> crashd -> velero xxx. We need to make sure this works under different PATH settings. Progress and error handling: The log collection may take a relatively long time, log messages should be printed to indicate the progress when different items are being downloaded and packaged. Additionally, when an error happens, `crashd` may omit some errors, so before the script is executed we'll do some validation and make sure the `debug` command fail early if some parameters are incorrect." } ]
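To illustrate how the pieces above could fit together, here is a hedged Go sketch of the `velero debug` wiring: the Starlark script is embedded at build time (Go 1.16 `go:embed`), the `option` struct shown earlier (assumed to live in the same package) is flattened into the argument map the script reads via `args.<name>`, and the actual call into crashd's `exec` package is hidden behind a `runScript` callback because its exact signature is not reproduced here. The embedded file name and the validation check are assumptions, not the final implementation.

```go
package debug

import (
	_ "embed"
	"fmt"
)

//go:embed velero.cshd
var scriptBytes []byte // the Starlark script above; the file name is hypothetical

// argsMap flattens the option struct (defined earlier in this document)
// into the key/value arguments the script consumes via `args.<name>`.
func (o *option) argsMap() map[string]string {
	return map[string]string{
		"cmd":         o.currCmd,
		"basedir":     o.baseDir,
		"namespace":   o.namespace,
		"output":      o.outputPath,
		"kubeconfig":  o.kubeconfigPath,
		"kubecontext": o.kubeContext,
		"backup":      o.backup,
		"restore":     o.restore,
	}
}

// run validates the input early, then hands the embedded script and the
// argument map to crashd; runScript is a stand-in for that call.
func (o *option) run(runScript func(script []byte, args map[string]string) error) error {
	if o.kubeconfigPath == "" {
		return fmt.Errorf("a kubeconfig path is required") // example of failing early
	}
	if err := runScript(scriptBytes, o.argsMap()); err != nil {
		return fmt.Errorf("generating debug bundle: %w", err)
	}
	return nil
}
```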
{ "category": "Runtime", "file_name": "velero-debug.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "OpenEBS is the most widely deployed open source example of a category of storage solutions sometimes called . OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes. This document describes the high level architecture of OpenEBS and the links to the Source Code and its Dependencies. Some key aspects that make OpenEBS different compared to other traditional storage solutions: Built using the micro-services architecture like the applications it serves. Use Kubernetes itself to orchestrate and manage the OpenEBS components Built completely in userspace making it highly portable to run across any OS / Platform Completely intent driven, inheriting the same principles that drive the ease of use with Kubernetes The architecture of OpenEBS is container native and horizontally scalable. OpenEBS is a collection of different microservices that can be grouped into 3 major areas (or planes): The data engines are the containers responsible for interfacing with the underlying storage devices such as host filesystem, rotational drives, SSDs and NVMe devices. The data engines provide volumes with required capabilities like high availability, snapshots, clones, etc. Volume capabilities can be optimized based on the workload they serve. Depending on the capabilities requested, OpenEBS selects different data engines like cStor ( a CoW based) or Jiva or even Local PVs for a given volume. The high availability is achieved by abstracting the access to the volume into the target container - which in turn does the synchronous replication to multiple different replica containers. The replica containers save the data to the underlying storage devices. If a node serving the application container and the target container fails, the application and target are rescheduled to a new node. The target connects with the other available replicas and will start serving the IO. The Storage Management or Control Plane is responsible for interfacing between Kubernetes (Volume/CSI interface) and managing the volumes created using the OpenEBS Data Engines. The Storage Management Plane is implemented using a set of containers that are either running at the cluster level or the node level. Some of the storage management options are also provided by containers running as side-cars to the data engine containers. The storage management containers are responsible for providing APIs for gathering details about the volumes. The APIs can be used by Kubernetes Provisioners for managing volumes, snapshots, backups, so forth; used by Prometheus to collect metrics of volumes; used by custom programs like CLI or UI to provide insights into the OpenEBS Storage status or management. While this plane is an integral part of the OpenEBS Storage Management Plane, the containers and custom resources under this plane can be used by other projects that require a Kubernetes native way of managing the Storage Devices (rotational drives, SSDs and NVMe, etc.) attached to Kubernetes nodes. Storage Device Management Plane can be viewed as an Inventory Management tool, that discovers devices and keeps track of their usage via device claims (akin to PV/PVC" }, { "data": "All the operations like device listing, identifying the topology or details of a specific device can be accessed via kubectl and Kubernetes Custom Resources. OpenEBS source code is spread across multiple repositories, organized either by the storage engine or management layer. This section describes the various actively maintained repositories. 
is OpenEBS meta repository that contains design documents, project management, community and contributor documents, deployment and workload examples. contains the source code for OpenEBS Documentation portal (https://docs.openebs.io) implemented using Docusaurus framework and other libraries listed in . contains the source code for OpenEBS portal (https://openebs.io) implemented using Gatsby framework and other libraries listed in . contains the Helm chart source code for OpenEBS and also hosts a gh-pages website for install artifacts and Helm packages. contains OpenEBS Storage Management components that help with managing cStor, Jiva and Local Volumes. This repository contains the non-CSI drivers. The code is being moved from this repository to engine specific CSI drivers. Detailed dependency list can be found in: . OpenEBS also maintains a forked copy of the Kubernetes external-storage repository to support the external-provisioners for cStor and Jiva volumes. contains OpenEBS extensions for Kubernetes External Dynamic Provisioners. These provisioners will be deprecated in the near term in favor of the CSI drivers that are under beta and alpha stage at the moment. This is a forked repository from . has the plugin code to perform cStor and ZFS Local PV based Backup and Restore using Velero. is a general purpose alpine container used to launch some management jobs by OpenEBS Operators. contains the OpenEBS related Kubernetes custom resource specifications and the related Go-client API to manage those resources. This functionality is being split from the mono-repo into its own repository. contains management tools for upgrading and migrating OpenEBS volumes and pools. This functionality is being split from the mono-repo into its own repository. Go dependencies are listed . contains the Litmus based e2e tests that are executed on GitLab pipelines. Contains tests for Jiva, cStor and Local PV. contains the OpenEBS CLI that can be run as kubectl plugin. This functionality is being split from the mono-repo into its own repository. (Currently in Alpha). contains tools/scripts for running performance benchmarks on Kubernetes volumes. This is an experimental repo. The work in this repo can move to other repos like e2e-tests. is a prometheus exporter for sending capacity usage statistics using `du` from hostpath volumes. is wrapper around OpenEBS Helm to allow installation the . contains Kubernetes native Device Inventory Management functionality. A detailed dependency list can be found in . Along with being dependent on Kubernetes and Operator SDK for managing the Kubernetes custom resources, NDM also optionally depends on the following. (License: MPL 2.0) for discovering device attributes. OpenEBS maintains forked repositories of openSeaChest to fix/upstream the issues found in this" }, { "data": "is one of the data engines supported by OpenEBS which was forked from Rancher Longhorn engine and has diverged from the way Jiva volumes are managed within Kubernetes. At the time of the fork, Longhorn was focused towards Docker and OpenEBS was focused on supporting Kubernetes. Jiva engine depends on the following: A fork of Longhorn engine is maintained in OpenEBS to upstream the common changes from Jiva to Longhorn. for providing user space iSCSI Target support implemented in Go. A fork of the project is maintained in OpenEBS to keep the dependencies in sync and upstream the changes. fork is also maintained by OpenEBS to manage the differences between Jiva way of writing into the sparse files. 
provides a complete list of dependencies used by the Jiva project. contains the CSI Driver for Jiva Volumes. Currently in alpha. Dependencies are in: . contains Kubernetes custom resources and operators to manage Jiva volumes. Currently in alpha used by Jiva CSI Driver. This will replace the volume management functionality offered by OpenEBS API Server. Dependencies are in: . contains the cStor Replica functionality that makes use of uZFS - userspace ZFS to store the data on devices. is a fork of (License: CDDL). This fork contains the code that modifies ZFS to run in user space. contains the iSCSI Target functionality used by cStor volumes. This work is derived from earlier work available as FreeBSD port at http://www.peach.ne.jp/archives/istgt/ (archive link: https://web.archive.org/web/20190622064711/peach.ne.jp/archives/istgt/). The original work was licensed under BSD license. is the CSI Driver for cStor Volumes. This will deprecate the external provisioners. Currently in beta. Dependencies are in: . contain the Kubernetes custom resources and operators to manage cStor Pools volumes. Currently in beta and used by cStor CSI Driver. This will replace the volume management functionality offered by OpenEBS API Server. Dependencies are in: . contains the Mayastor data engine, CSI driver and management utilitities. is a forked repository of (License: BSD) for managing the upstream changes. (License: MIT) contains Rust bindings for SPDK. is forked from (License: MIT) for managing the upstream changes. is forked from (License: MIT) for managing the upstream changes. is forked from (License: MIT) for managing upstream changes. is forked from (License: MIT) for managing upstream changes. is forked from (License: MIT) for managing upstream changes. contains the CSI driver for provisioning Kubernetes Local Volumes on ZFS installed on the nodes. contains the CSI driver for provisioning Kubernetes Local Volumes on Hostpath by creating sparse files. (Currently in Alpha). Architectural overview on how each of the Data engines operate are provided in this Design Documents for various components and features are listed There is always something more that is required, to make it easier to suit your use-cases. Feel free to join the discussion on new features or raise a PR with your proposed change. - Already signed up? Head to our discussions at Pick an issue of your choice to work on from any of the repositories listed above. Here are some contribution ideas to start looking at: . . Help with backlogs from the by discussing requirements and design." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Libvirt is one of manager for StratoVirt, it manages StratoVirt by creating cmdlines to launch StratoVirt and giving commands via QMP. Currently, five virsh commands are supported to manage StratoVirt: `virsh create`, `virsh destroy`, `virsh suspend`, `virsh resume` and `virsh console`. StratoVirt can be configured by following ways: memory: ``` <memory unit='GiB'>8</memory> or <memory unit='MiB'>8192</memory> ``` CPU: CPU topology is not supported, please configure the number of VCPUs only. ``` <vcpu>4</vcpu> ``` Architecture: Optional value of `arch` are: `aarch64` and `x86_64`. On X86 platform, supported machine is `q35`; on aarch64 platform, supported machine is `virt`. ``` <os> <type arch='x86_64' machine='q35'>hvm</type> </os> ``` Kernel and cmdline: `/path/to/standardvmkernel` is the path of standard vm kernel. ``` <kernel>/path/to/standardvmkernel</kernel> <cmdline>console=ttyS0 root=/dev/vda reboot=k panic=1 rw</cmdline> ``` feature: As the acpi is used in Standard VM, therefore the acpi feature must be configured. ``` <features> <acpi/> </features> ``` For aarch64 platform, as gicv3 is used the `gic` should also be added to feature. ``` <features> <acpi/> <gic version='3'/> </features> ``` emulator: Set emulator for libvirt, `/path/to/StratoVirtbinaryfile` is the path to StratoVirt binary file. ``` <devices> <emulator>/path/to/StratoVirtbinaryfile</emulator> </devices> ``` balloon ``` <controller type='pci' index='4' model='pcie-root-port' /> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x000' bus='0x04' slot='0x00' function='0x00'/> </memballoon> ``` pflash Pflash can be added by the following config. `/path/to/pflash` is the path of pflash file. ``` <loader readonly='yes' type='pflash'>/path/to/pflash</loader> <nvram template='/path/to/OVMFVARS'>/path/to/OVMFVARS</nvram> ``` iothread ``` <iothreads>1</iothreads> ``` block: ``` <controller type='pci' index='1' model='pcie-root-port' /> <disk type='file' device='disk'> <driver name='qemu' type='raw' iothread='1'/> <source file='/path/to/rootfs'/> <target dev='hda' bus='virtio'/> <iotune> <totaliopssec>1000</totaliopssec> </iotune> <address type='pci' domain='0x000' bus='0x01' slot='0x00' function='0x00'/> </disk> ``` net ``` <controller type='pci' index='2' model='pcie-root-port' /> <interface type='ethernet'> <mac address='de:ad:be:ef:00:01'/> <source bridge='qbr0'/> <target dev='tap0'/> <model type='virtio'/> <address type='pci' domain='0x000' bus='0x02' slot='0x00' function='0x00'/> </interface> ``` console To use `virsh console` command, the virtio console with redirect `pty` should be configured. 
``` <controller type='pci' index='3' model='pcie-root-port' /> <controller type='virtio-serial' index='0'> <alias name='virt-serial0'/> <address type='pci' domain='0x000' bus='0x03' slot='0x00' function='0x00'/> </controller> <console type='pty'> <target type='virtio' port='0'/> <alias name='console0'/> </console> ``` vhost-vsock ``` <controller type='pci' index='6' model='pcie-root-port' /> <vsock model='virtio'> <cid auto='no' address='3'/> <address type='pci' domain='0x000' bus='0x00' slot='0x06' function='0x00'/> </vsock> ``` rng ``` <controller type='pci' index='5' model='pcie-root-port' /> <rng model='virtio'> <rate period='1000' bytes='1234'/> <backend model='random'>/path/to/random_file</backend> <address type='pci' domain='0x000' bus='0x05' slot='0x00' function='0x00'/> </rng> ``` vfio ``` <controller type='pci' index='7' model='pcie-root-port' /> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </source> </hostdev> ```" } ]
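With the domain XML assembled from the snippets above, the five supported virsh commands listed at the top of this document can be exercised as follows. This is only a sketch: the file name `stratovirt-vm.xml` and the domain name `stratovirt-vm` (taken from the XML's `<name>` element, which is not shown above) are placeholders.

```bash
virsh create stratovirt-vm.xml    # define and start a transient StratoVirt guest from the domain XML
virsh console stratovirt-vm       # attach to the pty-backed virtio console configured above
virsh suspend stratovirt-vm       # pause the guest
virsh resume stratovirt-vm        # resume the guest
virsh destroy stratovirt-vm       # stop the guest
```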
{ "category": "Runtime", "file_name": "interconnect_with_libvirt.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "title: Prerequisites Rook can be installed on any existing Kubernetes cluster as long as it meets the minimum version and Rook is granted the required privileges (see below for more information). Kubernetes versions v1.25 through v1.30 are supported. Architectures supported are `amd64 / x86_64` and `arm64`. To configure the Ceph storage cluster, at least one of these local storage types is required: Raw devices (no partitions or formatted filesystems) Raw partitions (no formatted filesystem) LVM Logical Volumes (no formatted filesystem) Persistent Volumes available from a storage class in `block` mode Confirm whether the partitions or devices are formatted with filesystems with the following command: ```console $ lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINT vda vda1 LVM2_member >eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB ubuntu--vg-root ext4 c2366f76-6e21-4f10-a8f3-6776212e2fe4 / ubuntu--vg-swap_1 swap 9492a3dc-ad75-47cd-9596-678e8cf17ff9 [SWAP] vdb ``` If the `FSTYPE` field is not empty, there is a filesystem on top of the corresponding device. In this example, `vdb` is available to Rook, while `vda` and its partitions have a filesystem and are not available. Ceph OSDs have a dependency on LVM in the following scenarios: If encryption is enabled (`encryptedDevice: \"true\"` in the cluster CR) A `metadata` device is specified `osdsPerDevice` is greater than 1 LVM is not required for OSDs in these scenarios: OSDs are created on raw devices or partitions OSDs are created on PVCs using the `storageClassDeviceSets` If LVM is required, LVM needs to be available on the hosts where OSDs will be running. Some Linux distributions do not ship with the `lvm2` package. This package is required on all storage nodes in the k8s cluster to run Ceph OSDs. Without this package even though Rook will be able to successfully create the Ceph OSDs, when a node is rebooted the OSD pods running on the restarted node will fail to start. Please install LVM using your Linux distribution's package manager. For example: CentOS: ```console sudo yum install -y lvm2 ``` Ubuntu: ```console sudo apt-get install -y lvm2 ``` RancherOS: Since version LVM is supported Logical volumes during the boot process. You need to add an for that. ```yaml runcmd: [ \"vgchange\", \"-ay\" ] ``` Ceph requires a Linux kernel built with the RBD module. Many Linux distributions have this module, but not all. For example, the GKE Container-Optimised OS (COS) does not have RBD. Test your Kubernetes nodes by running `modprobe rbd`. If the rbd module is 'not found', rebuild the kernel to include the `rbd` module, install a newer kernel, or choose a different Linux distribution. Rook's default RBD configuration specifies only the `layering` feature, for broad compatibility with older kernels. If your Kubernetes nodes run a 5.4 or later kernel, additional feature flags can be enabled in the storage class. The `fast-diff` and `object-map` features are especially useful. ```yaml imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock ``` If creating RWX volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is 4.17. If the kernel version is less than 4.17, the requested PVC sizes will not be enforced. Storage quotas will only be enforced on newer kernels. Specific configurations for some distributions. For NixOS, the kernel modules will be found in the non-standard path `/run/current-system/kernel-modules/lib/modules/`, and they'll be symlinked inside the also non-standard path `/nix`. 
Rook containers require read access to those locations to be able to load the required modules. They have to be bind-mounted as volumes in the CephFS and RBD plugin pods. If installing Rook with Helm, uncomment these example settings in `values.yaml`: `csi.csiCephFSPluginVolume` `csi.csiCephFSPluginVolumeMount` `csi.csiRBDPluginVolume` `csi.csiRBDPluginVolumeMount` If deploying without Helm, add those same values to the settings in the `rook-ceph-operator-config` ConfigMap found in operator.yaml: `CSI_CEPHFS_PLUGIN_VOLUME` `CSI_CEPHFS_PLUGIN_VOLUME_MOUNT` `CSI_RBD_PLUGIN_VOLUME` `CSI_RBD_PLUGIN_VOLUME_MOUNT`" } ]
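Before wiring up those bind mounts, it can help to confirm on the NixOS node itself that the module tree really lives at the non-standard path and that the `rbd` module loads. This is just a quick sanity check, not part of the upstream steps:

```console
# run on each NixOS storage node
ls /run/current-system/kernel-modules/lib/modules/
sudo modprobe rbd
```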
{ "category": "Runtime", "file_name": "prerequisites.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Prerequisites: Tests must be run on a Linux OS Docker installed with IPv6 enabled You will need to restart your Docker engine after updating the config Target kube-vip Docker image exists locally. Either build the image locally with `make dockerx86Local` or `docker pull` the image from a registry. Run the tests from the repo root: ``` make e2e-tests ``` Note: To preserve the test cluster after a test run, run the following: ``` make E2EPRESERVECLUSTER=true e2e-tests ``` The E2E tests: Start a local kind cluster Load the local docker image into kind Test connectivity to the control plane using the VIP Kills the current leader This causes leader election to occur Attempts to connect to the control plane using the VIP The new leader will need send ndp advertisements before this can succeed within a timeout" } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "kube-vip", "subcategory": "Cloud Native Network" }
[ { "data": "Architecture of the library === ELF -> Specifications -> Objects -> Links ELF BPF is usually produced by using Clang to compile a subset of C. Clang outputs an ELF file which contains program byte code (aka BPF), but also metadata for maps used by the program. The metadata follows the conventions set by libbpf shipped with the kernel. Certain ELF sections have special meaning and contain structures defined by libbpf. Newer versions of clang emit additional metadata in BPF Type Format (aka BTF). The library aims to be compatible with libbpf so that moving from a C toolchain to a Go one creates little friction. To that end, the is tested against the Linux selftests and avoids introducing custom behaviour if possible. The output of the ELF reader is a `CollectionSpec` which encodes all of the information contained in the ELF in a form that is easy to work with in Go. The BPF Type Format describes more than just the types used by a BPF program. It includes debug aids like which source line corresponds to which instructions and what global variables are used. lives in a separate internal package since exposing it would mean an additional maintenance burden, and because the API still has sharp corners. The most important concept is the `btf.Type` interface, which also describes things that aren't really types like `.rodata` or `.bss` sections. `btf.Type`s can form cyclical graphs, which can easily lead to infinite loops if one is not careful. Hopefully a safe pattern to work with `btf.Type` emerges as we write more code that deals with it. Specifications `CollectionSpec`, `ProgramSpec` and `MapSpec` are blueprints for in-kernel objects and contain everything necessary to execute the relevant `bpf(2)` syscalls. Since the ELF reader outputs a `CollectionSpec` it's possible to modify clang-compiled BPF code, for example to rewrite constants. At the same time the package provides an assembler that can be used to generate `ProgramSpec` on the fly. Creating a spec should never require any privileges or be restricted in any way, for example by only allowing programs in native endianness. This ensures that the library stays flexible. Objects `Program` and `Map` are the result of loading specs into the kernel. Sometimes loading a spec will fail because the kernel is too old, or a feature is not enabled. There are multiple ways the library deals with that: Fallback: older kernels don't allowing naming programs and maps. The library automatically detects support for names, and omits them during load if necessary. This works since name is primarily a debug aid. Sentinel error: sometimes it's possible to detect that a feature isn't available. In that case the library will return an error wrapping `ErrNotSupported`. This is also useful to skip tests that can't run on the current kernel. Once program and map objects are loaded they expose the kernel's low-level API, e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer wrappers on top of the low-level API, like `MapIterator`. The low-level API is useful as an out when our higher-level API doesn't support a particular use case. Links BPF can be attached to many different points in the kernel and newer BPF hooks tend to use bpf_link to do so. Older hooks unfortunately use a combination of syscalls, netlink messages, etc. Adding support for a new link type should not pull in large dependencies like netlink, so XDP programs or tracepoints are out of scope." } ]
{ "category": "Runtime", "file_name": "ARCHITECTURE.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "title: \"Cluster migration\" layout: docs Velero's backup and restore capabilities make it a valuable tool for migrating your data between clusters. Cluster migration with Velero is based on Velero's functionality, which is responsible for syncing Velero resources from your designated object storage to your cluster. This means that to perform cluster migration with Velero you must point each Velero instance running on clusters involved with the migration to the same cloud object storage location. This page outlines a cluster migration scenario and some common configurations you will need to start using Velero to begin migrating data. Before migrating you should consider the following, Velero does not natively support the migration of persistent volumes snapshots across cloud providers. If you would like to migrate volume data between cloud platforms, enable , which will backup volume contents at the filesystem level. Velero doesn't support restoring into a cluster with a lower Kubernetes version than where the backup was taken. Migrating workloads across clusters that are not running the same version of Kubernetes might be possible, but some factors need to be considered before migration, including the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core/native API groups, migrating with Velero will not be possible without first updating the impacted custom resources. For more information about API group versions, please see . The Velero plugin for AWS and Azure does not support migrating data between regions. If you need to do this, you must use . This scenario steps through the migration of resources from Cluster 1 to Cluster 2. In this scenario, both clusters are using the same cloud provider, AWS, and Velero's . On Cluster 1, make sure Velero is installed and points to an object storage location using the `--bucket` flag. ``` velero install --provider aws --image velero/velero:v1.8.0 --plugins velero/velero-plugin-for-aws:v1.4.0 --bucket velero-migration-demo --secret-file xxxx/aws-credentials-cluster1 --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2 ``` During installation, Velero creates a Backup Storage Location called `default` inside the `--bucket` your provided in the install command, in this case `velero-migration-demo`. This is the location that Velero will use to store backups. Running `velero backup-location get` will show the backup location of Cluster 1. ``` velero backup-location get NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT default aws velero-migration-demo Available 2022-05-13 13:41:30 +0800 CST ReadWrite true ``` Still on Cluster 1, make sure you have a backup of your cluster. Replace `<BACKUP-NAME>` with a name for your backup. ``` velero backup create <BACKUP-NAME> ``` Alternatively, you can create a of your data with the Velero `schedule`" }, { "data": "This is the recommended way to make sure your data is automatically backed up according to the schedule you define. The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See for more information about backup expiry. On Cluster 2, make sure that Velero is installed. Note that the install command below has the same `region` and `--bucket` location as the install command for Cluster 1. 
The Velero plugin for AWS does not support migrating data between regions. ``` velero install --provider aws --image velero/velero:v1.8.0 --plugins velero/velero-plugin-for-aws:v1.4.0 --bucket velero-migration-demo --secret-file xxxx/aws-credentials-cluster2 --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2 ``` Alternatively you could configure `BackupStorageLocations` and `VolumeSnapshotLocations` after installing Velero on Cluster 2, pointing to the `--bucket` location and `region` used by Cluster 1. To do this you can use the `velero backup-location create` and `velero snapshot-location create` commands. ``` velero backup-location create bsl --provider aws --bucket velero-migration-demo --config region=us-east-2 --access-mode=ReadOnly ``` It's recommended that you configure the `BackupStorageLocations` as read-only by using the `--access-mode=ReadOnly` flag for `velero backup-location create`. This will make sure that the backup is not deleted from the object store by mistake during the restore. See `velero backup-location help` for more information about the available flags for this command. ``` velero snapshot-location create vsl --provider aws --config region=us-east-2 ``` See `velero snapshot-location help` for more information about the available flags for this command. Continuing on Cluster 2, make sure that the Velero Backup object created on Cluster 1 is available. `<BACKUP-NAME>` should be the same name used to create your backup of Cluster 1. ``` velero backup describe <BACKUP-NAME> ``` Velero resources are synced with the backup files in object storage. This means that the Velero resources created by Cluster 1's backup will be synced to Cluster 2 through the shared Backup Storage Location. Once the sync occurs, you will be able to access the backup from Cluster 1 on Cluster 2 using Velero commands. The default sync interval is 1 minute, so you may need to wait before checking for the backup's availability on Cluster 2. You can configure this interval with the `--backup-sync-period` flag to the Velero server on Cluster 2. On Cluster 2, once you have confirmed that the right backup is available, you can restore everything to Cluster 2. ``` velero restore create --from-backup <BACKUP-NAME> ``` Make sure `<BACKUP-NAME>` is the same backup name from Cluster 1. Check that Cluster 2 is behaving as expected: On Cluster 2, run: ``` velero restore get ``` Then run: ``` velero restore describe <RESTORE-NAME-FROM-GET-COMMAND> ``` Your data that was backed up from Cluster 1 should now be available on Cluster 2. If you encounter issues, make sure that Velero is running in the same namespace in both clusters." } ]
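When the restored workloads do not look right, two quick checks can help narrow things down (a sketch, assuming the default `velero` namespace; the restore name comes from `velero restore get`):

```bash
velero restore logs <RESTORE-NAME-FROM-GET-COMMAND>   # review warnings and errors recorded for the restore
kubectl get deployment velero -n velero               # confirm Velero runs in the same namespace on both clusters
```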
{ "category": "Runtime", "file_name": "migration-case.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- toc --> - - - - - - - - <!-- /toc --> Antrea Multi-cluster implements , which allows users to create multi-cluster Services that can be accessed cross clusters in a ClusterSet. Antrea Multi-cluster also extends Antrea-native NetworkPolicy to support Multi-cluster NetworkPolicy rules that apply to cross-cluster traffic, and ClusterNetworkPolicy replication that allows a ClusterSet admin to create ClusterNetworkPolicies which are replicated across the entire ClusterSet and enforced in all member clusters. Antrea Multi-cluster was first introduced in Antrea v1.5.0. In Antrea v1.7.0, the Multi-cluster Gateway feature was added that supports routing multi-cluster Service traffic through tunnels among clusters. The ClusterNetworkPolicy replication feature is supported since Antrea v1.6.0, and Multi-cluster NetworkPolicy rules are supported since Antrea v1.10.0. Antrea v1.13 promoted the ClusterSet CRD version from v1alpha1 to v1alpha2. If you plan to upgrade from a previous version to v1.13 or later, please check the . Please refer to the to learn how to build a ClusterSet with two clusters quickly. In this guide, all Multi-cluster installation and ClusterSet configuration are done by applying Antrea Multi-cluster YAML manifests. Actually, all operations can also be done with `antctl` Multi-cluster commands, which may be more convenient in many cases. You can refer to the and to learn how to use the Multi-cluster commands. We assume an Antrea version >= `v1.8.0` is used in this guide, and the Antrea version is set to an environment variable `TAG`. For example, the following command sets the Antrea version to `v1.8.0`. ```bash export TAG=v1.8.0 ``` To use the latest version of Antrea Multi-cluster from the Antrea main branch, you can change the YAML manifest path to: `https://github.com/antrea-io/antrea/tree/main/multicluster/build/yamls/` when applying or downloading an Antrea YAML manifest. and , in particular configuration (please check the corresponding sections to learn more information), requires an Antrea Multi-cluster Gateway to be set up in each member cluster by default to route Service and Pod traffic across clusters. To support Multi-cluster Gateways, `antrea-agent` must be deployed with the `Multicluster` feature enabled in a member cluster. You can set the following configuration parameters in `antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster` feature: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true namespace: \"\" # Change to the Namespace where antrea-mc-controller is deployed. ``` In order for Multi-cluster features to work, it is necessary for `enableGateway` to be set to true by the user, except when Pod-to-Pod direct connectivity already exists (e.g., provided by the cloud provider) and `endpointIPType` is configured as `PodIP`. Details can be found in . Please note that always requires Gateway. Prior to Antrea v1.11.0, Multi-cluster Gateway only works with Antrea `encap` traffic mode, and all member clusters in a ClusterSet must use the same tunnel type. Since Antrea v1.11.0, Multi-cluster Gateway also works with the Antrea `noEncap`, `hybrid` and `networkPolicyOnly` modes. For `noEncap` and `hybrid` modes, Antrea Multi-cluster deployment is the same as `encap` mode. For `networkPolicyOnly` mode, we need extra Antrea configuration changes to support Multi-cluster Gateway. 
Please check for more information. When using Multi-cluster Gateway, it is not possible to enable WireGuard for inter-Node traffic within the same member cluster. It is however possible to [enable WireGuard for" }, { "data": "traffic](#multi-cluster-wireguard-encryption) between member clusters. A Multi-cluster ClusterSet is comprised of a single leader cluster and at least two member clusters. Antrea Multi-cluster Controller needs to be deployed in the leader and all member clusters. A cluster can serve as the leader, and meanwhile also be a member cluster of the ClusterSet. To deploy Multi-cluster Controller in a dedicated leader cluster, please refer to [Deploy in a Dedicated Leader cluster](#deploy-in-a-dedicated-leader-cluster). To deploy Multi-cluster Controller in a member cluster, please refer to . To deploy Multi-cluster Controller in a dual-role cluster, please refer to . Since Antrea v1.14.0, you can run the following command to install Multi-cluster Controller in the leader cluster. Multi-cluster Controller is deployed into a Namespace. You must create the Namespace first, and then apply the deployment manifest in the Namespace. For a version older than v1.14, please check the user guide document of the version: `https://github.com/antrea-io/antrea/blob/release-$version/docs/multicluster/user-guide.md`, where `$version` can be `1.12`, `1.13` etc. ```bash kubectl create ns antrea-multicluster kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml ``` The Multi-cluster Controller in the leader cluster will be deployed in Namespace `antrea-multicluster` by default. If you'd like to use another Namespace, you can change `antrea-multicluster` to the desired Namespace in `antrea-multicluster-leader-namespaced.yml`, for example: ```bash kubectl create ns '<desired-namespace>' curl -L https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-namespaced.yml > antrea-multicluster-leader-namespaced.yml sed 's/antrea-multicluster/<desired-namespace>/g' antrea-multicluster-leader-namespaced.yml | kubectl apply -f - ``` You can run the following command to install Multi-cluster Controller in a member cluster. The command will run the controller in the \"member\" mode in the `kube-system` Namespace. If you want to use a different Namespace other than `kube-system`, you can edit `antrea-multicluster-member.yml` and change `kube-system` to the desired Namespace. ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml ``` We need to run two instances of Multi-cluster Controller in the dual-role cluster, one in leader mode and another in member mode. Follow the steps in section to deploy the leader controller and import the Multi-cluster CRDs. Follow the steps in section to deploy the member controller. An Antrea Multi-cluster ClusterSet should include at least one leader cluster and two member clusters. As an example, in the following sections we will create a ClusterSet `test-clusterset` which has two member clusters with cluster ID `test-cluster-east` and `test-cluster-west` respectively, and one leader cluster with ID `test-cluster-north`. Please note that the name of a ClusterSet CR must match the ClusterSet ID. In all the member and leader clusters of a ClusterSet, the ClusterSet CR must use the ClusterSet ID as the name, e.g. `test-clusterset` in the example of this guide. 
We first need to set up access to the leader cluster's API server for all member clusters. We recommend creating one ServiceAccount for each member for fine-grained access control. The Multi-cluster Controller deployment manifest for a leader cluster also creates a default member cluster token. If you prefer to use the default token, you can skip step 1 and replace the Secret name `member-east-token` to the default token Secret `antrea-mc-member-access-token` in step 2. Apply the following YAML manifest in the leader cluster to set up access for `test-cluster-east`: ```yml apiVersion: v1 kind: ServiceAccount metadata: name: member-east namespace: antrea-multicluster apiVersion: v1 kind: Secret metadata: name: member-east-token namespace: antrea-multicluster annotations: kubernetes.io/service-account.name: member-east type: kubernetes.io/service-account-token apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: member-east namespace: antrea-multicluster roleRef: apiGroup:" }, { "data": "kind: Role name: antrea-mc-member-cluster-role subjects: kind: ServiceAccount name: member-east namespace: antrea-multicluster ``` Generate the token Secret manifest from the leader cluster, and create a Secret with the manifest in member cluster `test-cluster-east`, e.g.: ```bash kubectl get secret member-east-token -n antrea-multicluster -o yaml | grep -w -e '^apiVersion' -e '^data' -e '^metadata' -e '^ *name:' -e '^kind' -e ' ca.crt' -e ' token:' -e '^type' -e ' namespace' | sed -e 's/kubernetes.io\\/service-account-token/Opaque/g' -e 's/antrea-multicluster/kube-system/g' > member-east-token.yml kubectl apply -f member-east-token.yml --kubeconfig=/path/to/kubeconfig-of-member-test-cluster-east ``` Replace all `east` to `west` and repeat step 1/2 for the other member cluster `test-cluster-west`. In all clusters, a `ClusterSet` CR must be created to define the ClusterSet and claim the cluster is a member of the ClusterSet. Create `ClusterSet` in the leader cluster `test-cluster-north` with the following YAML manifest (you can also refer to ): ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: antrea-multicluster spec: clusterID: test-cluster-north leaders: clusterID: test-cluster-north ``` Create `ClusterSet` in member cluster `test-cluster-east` with the following YAML manifest (you can also refer to ): ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: kube-system spec: clusterID: test-cluster-east leaders: clusterID: test-cluster-north secret: \"member-east-token\" server: \"https://172.18.0.1:6443\" namespace: antrea-multicluster ``` Note: update `server: \"https://172.18.0.1:6443\"` in the `ClusterSet` spec to the correct leader cluster API server address. Create `ClusterSet` in member cluster `test-cluster-west`: ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: kube-system spec: clusterID: test-cluster-west leaders: clusterID: test-cluster-north secret: \"member-west-token\" server: \"https://172.18.0.1:6443\" namespace: antrea-multicluster ``` If you want to make the leader cluster `test-cluster-north` also a member cluster of the ClusterSet, make sure you follow the steps in [Deploy Leader and Member in One Cluster](#deploy-leader-and-member-in-one-cluster) and repeat the steps in as well (don't forget replace all `east` to `north` when you repeat the steps). 
Then create the `ClusterSet` CR in cluster `test-cluster-north` in the `kube-system` Namespace (where the member Multi-cluster Controller runs): ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: kube-system spec: clusterID: test-cluster-north leaders: clusterID: test-cluster-north secret: \"member-north-token\" server: \"https://172.18.0.1:6443\" namespace: antrea-multicluster ``` Multi-cluster Gateways are responsible for establishing tunnels between clusters. Each member cluster should have one Node serving as its Multi-cluster Gateway. Multi-cluster Service traffic is routed among clusters through the tunnels between Gateways. Below is a table about communication support for different configurations. | Pod-to-Pod connectivity provided by underlay | Gateway Enabled | MC EndpointTypes | Cross-cluster Service/Pod communications | | -- | | -- | - | | No | No | N/A | No | | Yes | No | PodIP | Yes | | No | Yes | PodIP/ClusterIP | Yes | | Yes | Yes | PodIP/ClusterIP | Yes | After a member cluster joins a ClusterSet, and the `Multicluster` feature is enabled on `antrea-agent`, you can select a Node of the cluster to serve as the Multi-cluster Gateway by adding an annotation: `multicluster.antrea.io/gateway=true` to the K8s Node. For example, you can run the following command to annotate Node `node-1` as the Multi-cluster Gateway: ```bash kubectl annotate node node-1 multicluster.antrea.io/gateway=true ``` You can annotate multiple Nodes in a member cluster as the candidates for Multi-cluster Gateway, but only one Node will be selected as the active Gateway. Before Antrea" }, { "data": "the Gateway Node is just randomly selected and will never change unless the Node or its `gateway` annotation is deleted. Starting with Antrea v1.9.0, Antrea Multi-cluster Controller will guarantee a \"ready\" Node is selected as the Gateway, and when the current Gateway Node's status changes to not \"ready\", Antrea will try selecting another \"ready\" Node from the candidate Nodes to be the Gateway. Once a Gateway Node is decided, Multi-cluster Controller in the member cluster will create a `Gateway` CR with the same name as the Node. You can check it with command: ```bash $ kubectl get gateway -n kube-system NAME GATEWAY IP INTERNAL IP AGE node-1 10.17.27.55 10.17.27.55 10s ``` `internalIP` of the Gateway is used for the tunnels between the Gateway Node and other Nodes in the local cluster, while `gatewayIP` is used for the tunnels to remote Gateways of other member clusters. Multi-cluster Controller discovers the IP addresses from the K8s Node resource of the Gateway Node. It will always use `InternalIP` of the K8s Node as the Gateway's `internalIP`. For `gatewayIP`, there are several possibilities: By default, the K8s Node's `InternalIP` is used as `gatewayIP` too. You can choose to use the K8s Node's `ExternalIP` as `gatewayIP`, by changing the configuration option `gatewayIPPrecedence` to value: `external`, when deploying the member Multi-cluster Controller. The configration option is defined in ConfigMap `antrea-mc-controller-config` in `antrea-multicluster-member.yml`. When the Gateway Node has a separate IP for external communication or is associated with a public IP (e.g. an Elastic IP on AWS), but the IP is not added to the K8s Node, you can still choose to use the IP as `gatewayIP`, by adding an annotation: `multicluster.antrea.io/gateway-ip=<ip-address>` to the K8s Node. 
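For example, assuming the Gateway Node is `node-1` and its public IP is `3.120.45.67` (both placeholders), the annotation can be added with:

```bash
kubectl annotate node node-1 multicluster.antrea.io/gateway-ip=3.120.45.67
```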
When choosing a candidate Node for Multi-cluster Gateway, you need to make sure the resulted `gatewayIP` can be reached from the remote Gateways. You may need to properly to allow the tunnels between Gateway Nodes. As of now, only IPv4 Gateway IPs are supported. After the Gateway is created, Multi-cluster Controller will be responsible for exporting the cluster's network information to other member clusters through the leader cluster, including the cluster's Gateway IP and Service CIDR. Multi-cluster Controller will try to discover the cluster's Service CIDR automatically, but you can also manually specify the `serviceCIDR` option in ConfigMap `antrea-mc-controller-config`. In other member clusters, a ClusterInfoImport CR will be created for the cluster which includes the exported network information. For example, in cluster `test-cluster-west`, you you can see a ClusterInfoImport CR with name `test-cluster-east-clusterinfo` is created for cluster `test-cluster-east`: ```bash $ kubectl get clusterinfoimport -n kube-system NAME CLUSTER ID SERVICE CIDR AGE test-cluster-east-clusterinfo test-cluster-east 110.96.0.0/20 10s ``` Make sure you repeat the same step to assign a Gateway Node in all member clusters. Once you confirm that all `Gateway` and `ClusterInfoImport` are created correctly, you can follow the section to create multi-cluster Services and verify cross-cluster Service access. Since Antrea v1.12.0, Antrea Multi-cluster supports WireGuard tunnel between member clusters. If WireGuard is enabled, the WireGuard interface and routes will be created by Antrea Agent on the Gateway Node, and all cross-cluster traffic will be encrypted and forwarded to the WireGuard tunnel. Please note that WireGuard encryption requires the `wireguard` kernel module be present on the Kubernetes Nodes. `wireguard` module is part of mainline kernel since Linux" }, { "data": "Or, you can compile the module from source code with a kernel version >= 3.10. documents how to install WireGuard together with the kernel module on various operating systems. To enable the WireGuard encryption, the `TrafficEncryptMode` in Multi-cluster configuration should be set to `wireGuard` and the `enableGateway` field should be set to `true` as follows: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true trafficEncryptionMode: \"wireGuard\" wireGuard: port: 51821 ``` When WireGuard encryption is enabled for cross-cluster traffic as part of the Multi-cluster feature, in-cluster encryption (for traffic within a given member cluster) is no longer supported, not even with IPsec. After you set up a ClusterSet properly, you can create a `ServiceExport` CR to export a Service from one cluster to other clusters in the Clusterset, like the example below: ```yaml apiVersion: multicluster.x-k8s.io/v1alpha1 kind: ServiceExport metadata: name: nginx namespace: default ``` For example, once you export the `default/nginx` Service in member cluster `test-cluster-west`, it will be automatically imported in member cluster `test-cluster-east`. A Service and an Endpoints with name `default/antrea-mc-nginx` will be created in `test-cluster-east`, as well as a ServcieImport CR with name `default/nginx`. Now, Pods in `test-cluster-east` can access the imported Service using its ClusterIP, and the requests will be routed to the backend `nginx` Pods in `test-cluster-west`. 
You can check the imported Service and ServiceImport with commands: ```bash $ kubectl get serviceimport antrea-mc-nginx -n default NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE antrea-mc-nginx ClusterIP 10.107.57.62 <none> 443/TCP 10s $ kubectl get serviceimport nginx -n default NAME TYPE IP AGE nginx ClusterSetIP [\"10.19.57.62\"] 10s ``` As part of the Service export/import process, in the leader cluster, two ResourceExport CRs will be created in the Multi-cluster Controller Namespace, for the exported Service and Endpoints respectively, as well as two ResourceImport CRs. You can check them in the leader cluster with commands: ```bash $ kubectl get resourceexport -n antrea-multicluster NAME CLUSTER ID KIND NAMESPACE NAME AGE test-cluster-west-default-nginx-endpoints test-cluster-west Endpoints default nginx 30s test-cluster-west-default-nginx-service test-cluster-west Service default nginx 30s $ kubectl get resourceimport -n antrea-multicluster NAME KIND NAMESPACE NAME AGE default-nginx-endpoints Endpoints default nginx 99s default-nginx-service ServiceImport default nginx 99s ``` When there is any new change on the exported Service, the imported multi-cluster Service resources will be updated accordingly. Multiple member clusters can export the same Service (with the same name and Namespace). In this case, the imported Service in a member cluster will include endpoints from all the export clusters, and the Service requests will be load-balanced to all these clusters. Even when the client Pod's cluster also exported the Service, the Service requests may be routed to other clusters, and the endpoints from the local cluster do not take precedence. A Service cannot have conflicted definitions in different export clusters, otherwise only the first export will be replicated to other clusters; other exports as well as new updates to the Service will be ingored, until user fixes the conflicts. For example, after a member cluster exported a Service: `default/nginx` with TCP Port `80`, other clusters can only export the same Service with the same Ports definition including Port names. At the moment, Antrea Multi-cluster supports only IPv4 multi-cluster" }, { "data": "By default, a multi-cluster Service will use the exported Services' ClusterIPs (the original Service ClusterIPs in the export clusters) as Endpoints. Since Antrea v1.9.0, Antrea Multi-cluster also supports using the backend Pod IPs as the multi-cluster Service endpoints. You can change the value of configuration option `endpointIPType` in ConfigMap `antrea-mc-controller-config` from `ClusterIP` to `PodIP` to use Pod IPs as endpoints. All member clusters in a ClusterSet should use the same endpoint type. Existing ServiceExports should be re-exported after changing `endpointIPType`. `ClusterIP` type requires that Service CIDRs (ClusterIP ranges) must not overlap among member clusters, and always requires Multi-cluster Gateways to be configured. `PodIP` type requires Pod CIDRs not to overlap among clusters, and it also requires Multi-cluster Gateways when there is no direct Pod-to-Pod connectivity across clusters. Also refer to for more information. Since Antrea v1.9.0, Multi-cluster supports routing Pod traffic across clusters through Multi-cluster Gateways. Pod IPs can be reached in all member clusters within a ClusterSet. 
To enable this feature, the cluster's Pod CIDRs must be set in ConfigMap `antrea-mc-controller-config` of each member cluster and `multicluster.enablePodToPodConnectivity` must be set to `true` in the `antrea-agent` configuration. Note, Pod CIDRs must not overlap among clusters to enable cross-cluster Pod-to-Pod connectivity. ```yaml apiVersion: v1 kind: ConfigMap metadata: labels: app: antrea name: antrea-mc-controller-config namespace: kube-system data: controllermanagerconfig.yaml: | apiVersion: multicluster.crd.antrea.io/v1alpha1 kind: MultiClusterConfig podCIDRs: \"10.10.1.1/16\" ``` ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enablePodToPodConnectivity: true ``` You can edit , or use `kubectl edit` to change the ConfigMap: ```bash kubectl edit configmap -n kube-system antrea-mc-controller-config ``` Normally, `podCIDRs` should be the value of `kube-controller-manager`'s `cluster-cidr` option. If it's left empty, the Pod-to-Pod connectivity feature will not be enabled. If you use `kubectl edit` to edit the ConfigMap, then you need to restart the `antrea-mc-controller` Pod to load the latest configuration. Antrea-native policies can be enforced on cross-cluster traffic in a ClusterSet. To enable Multi-cluster NetworkPolicy features, check the Antrea Controller and Agent ConfigMaps and make sure that `enableStretchedNetworkPolicy` is set to `true` in addition to enabling the `multicluster` feature gate: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-controller.conf: | featureGates: Multicluster: true multicluster: enableStretchedNetworkPolicy: true # required by both egress and ingres rules antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true enableStretchedNetworkPolicy: true # required by only ingress rules namespace: \"\" ``` Restricting Pod egress traffic to backends of a Multi-cluster Service (which can be on the same cluster of the source Pod or on a different cluster) is supported by Antrea-native policy's `toServices` feature in egress rules. To define such a policy, simply put the exported Service name and Namespace in the `toServices` field of an Antrea-native policy, and set `scope` of the `toServices` peer to `ClusterSet`: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-drop-tenant-to-secured-mc-service spec: priority: 1 tier: securityops appliedTo: podSelector: matchLabels: role: tenant egress: action: Drop toServices: name: secured-service # an exported Multi-cluster Service namespace: svcNamespace scope: ClusterSet ``` The `scope` field of `toServices` rules is supported since Antrea v1.10. For earlier versions of Antrea, an equivalent rule can be written by not specifying `scope` and providing the imported Service name instead (i.e. `antrea-mc-[svcName]`). Note that the scope of policy's `appliedTo` field will still be restricted to the cluster where the policy is created" }, { "data": "To enforce such a policy for all `role=tenant` Pods in the entire ClusterSet, use the feature described in the later section, and set the `clusterNetworkPolicy` field of the ResourceExport to the `acnp-drop-tenant-to-secured-mc-service` spec above. Such replication should only be performed by ClusterSet admins, who have clearance of creating ClusterNetworkPolicies in all clusters of a ClusterSet. 
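A quick way to confirm that the sample egress policy above has been created and realized in the local cluster (a sketch; the file name is hypothetical, and `acnp` is the short name for Antrea ClusterNetworkPolicy):

```bash
kubectl apply -f acnp-drop-tenant-to-secured-mc-service.yaml
kubectl get acnp acnp-drop-tenant-to-secured-mc-service
```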
Antrea-native policies now support selecting ingress peers in the ClusterSet scope (since v1.10.0). Policy rules can be created to enforce security postures on ingress traffic from all member clusters in a ClusterSet: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: drop-tenant-access-to-admin-namespace spec: appliedTo: namespaceSelector: matchLabels: role: admin priority: 1 tier: securityops ingress: action: Deny from: scope: ClusterSet namespaceSelector: matchLabels: role: tenant ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: db-svc-allow-ingress-from-client-only namespace: prod-us-west spec: appliedTo: podSelector: matchLabels: app: db priority: 1 tier: application ingress: action: Allow from: scope: ClusterSet podSelector: matchLabels: app: client action: Deny ``` As shown in the examples above, setting `scope` to `ClusterSet` expands the scope of the `podSelector` or `namespaceSelector` of an ingress peer to the entire ClusterSet that the policy is created in. Similar to egress rules, the scope of an ingress rule's `appliedTo` is still restricted to the local cluster. To use the ingress cross-cluster NetworkPolicy feature, the `enableStretchedNetworkPolicy` option needs to be set to `true` in `antrea-mc-controller-config`, for each `antrea-mc-controller` running in the ClusterSet. Refer to the on how to change the ConfigMap: ```yaml controllermanagerconfig.yaml: | apiVersion: multicluster.crd.antrea.io/v1alpha1 kind: MultiClusterConfig enableStretchedNetworkPolicy: true ``` Note that currently ingress stretched NetworkPolicy only works with the Antrea `encap` traffic mode. Since Antrea v1.6.0, Multi-cluster admins can specify certain ClusterNetworkPolicies to be replicated and enforced across the entire ClusterSet. This is especially useful for ClusterSet admins who want all clusters in the ClusterSet to be applied with a consistent security posture (for example, all Namespaces in all clusters can only communicate with Pods in their own Namespaces). For more information regarding Antrea ClusterNetworkPolicy (ACNP), please refer to . To achieve such ACNP replication across clusters, admins can, in the leader cluster of a ClusterSet, create a `ResourceExport` CR of kind `AntreaClusterNetworkPolicy` which contains the ClusterNetworkPolicy spec they wish to be replicated. The `ResourceExport` should be created in the Namespace where the ClusterSet's leader Multi-cluster Controller runs. ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha1 kind: ResourceExport metadata: name: strict-namespace-isolation-for-test-clusterset namespace: antrea-multicluster # Namespace that Multi-cluster Controller is deployed spec: kind: AntreaClusterNetworkPolicy name: strict-namespace-isolation # In each importing cluster, an ACNP of name antrea-mc-strict-namespace-isolation will be created with the spec below clusterNetworkPolicy: priority: 1 tier: securityops appliedTo: namespaceSelector: {} # Selects all Namespaces in the member cluster ingress: action: Pass from: namespaces: match: Self # Skip drop rule for traffic from Pods in the same Namespace podSelector: matchLabels: k8s-app: kube-dns # Skip drop rule for traffic from the core-dns components action: Drop from: namespaceSelector: {} # Drop from Pods from all other Namespaces ``` The above sample spec will create an ACNP in each member cluster which implements strict Namespace isolation for that cluster. 
Note that because the Tier that an ACNP refers to must exist before the ACNP is applied, an importing cluster may fail to create the ACNP to be replicated, if the Tier in the ResourceExport spec cannot be found in that particular" }, { "data": "If there are such failures, the ACNP creation status of failed member clusters will be reported back to the leader cluster as K8s Events, and can be checked by describing the `ResourceImport` of the original `ResourceExport`: ```bash $ kubectl describe resourceimport -A Name: strict-namespace-isolation-antreaclusternetworkpolicy Namespace: antrea-multicluster API Version: multicluster.crd.antrea.io/v1alpha1 Kind: ResourceImport Spec: Clusternetworkpolicy: Applied To: Namespace Selector: Ingress: Action: Pass Enable Logging: false From: Namespaces: Match: Self Pod Selector: Match Labels: k8s-app: kube-dns Action: Drop Enable Logging: false From: Namespace Selector: Priority: 1 Tier: random Kind: AntreaClusterNetworkPolicy Name: strict-namespace-isolation ... Events: Type Reason Age From Message - - - Warning ACNPImportFailed 2m11s resourceimport-controller ACNP Tier random does not exist in the importing cluster test-cluster-west ``` In future releases, some additional tooling may become available to automate the creation of ResourceExports for ACNPs, and provide a user-friendly way to define Multi-cluster NetworkPolicies to be enforced in the ClusterSet. If you'd like to build Multi-cluster Controller Docker image locally, you can follow the following steps: Go to your local `antrea` source tree, run `make build-antrea-mc-controller`, and you will get a new image named `antrea/antrea-mc-controller:latest` locally. Run `docker save antrea/antrea-mc-controller:latest > antrea-mcs.tar` to save the image. Copy the image file `antrea-mcs.tar` to the Nodes of your local cluster. Run `docker load < antrea-mcs.tar` in each Node of your local cluster. If you want to remove a member cluster from a ClusterSet and uninstall Antrea Multi-cluster, please follow the following steps. Note: please replace `kube-system` with the right Namespace in the example commands and manifest if Antrea Multi-cluster is not deployed in the default Namespace. Delete all ServiceExports and the Multi-cluster Gateway annotation on the Gateway Nodes. Delete the ClusterSet CR. Antrea Multi-cluster Controller will be responsible for cleaning up all resources created by itself automatically. Delete the Antrea Multi-cluster Deployment: ```bash kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml ``` If you want to delete a ClusterSet and uninstall Antrea Multi-cluster in a leader cluster, please follow the following steps. You should first before removing a leader cluster from a ClusterSet. Note: please replace `antrea-multicluster` with the right Namespace in the following example commands and manifest if Antrea Multi-cluster is not deployed in the default Namespace. Delete AntreaClusterNetworkPolicy ResourceExports in the leader cluster. Verify that there is no remaining MemberClusterAnnounces. ```bash kubectl get memberclusterannounce -n antrea-multicluster ``` Delete the ClusterSet CR. Antrea Multi-cluster Controller will be responsible for cleaning up all resources created by itself automatically. 
Check there is no remaining ResourceExports and ResourceImports: ```bash kubectl get resourceexports -n antrea-multicluster kubectl get resourceimports -n antrea-multicluster ``` Note: you can follow the to delete the left-over ResourceExports. Delete the Antrea Multi-cluster Deployment: ```bash kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml ``` We recommend user to redeploy or update Antrea Multi-cluster Controller through `kubectl apply`. If you are using `kubectl delete -f ` and `kubectl create -f ` to redeploy Controller in the leader cluster, you might encounter in `ResourceExport` CRD cleanup. To avoid this issue, please delete any `ResourceExport` CRs in the leader cluster first, and make sure `kubectl get resourceexport -A` returns empty result before you can redeploy Multi-cluster Controller. All `ResourceExports` can be deleted with the following command: ```bash kubectl get resourceexport -A -o json | jq -r '.items[]|[.metadata.namespace,.metadata.name]|join(\" \")' | xargs -n2 bash -c 'kubectl delete -n $0 resourceexport/$1' ```" } ]
{ "category": "Runtime", "file_name": "user-guide.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "name: Release task about: Create a release task title: \"[RELEASE]\" labels: release/task assignees: '' What's the task? Please describe. Action items for releasing v<x.y.z> Roles Release captain: <!--responsible for RD efforts of release development and coordinating with QA captain--> QA captain: <!--responsible for coordinating QA efforts of release testing tasks--> Describe the sub-tasks. Pre-Release (QA captain needs to coordinate the following efforts and finish these items) [ ] Regression test plan (manual) - QA captain [ ] Run e2e regression for pre-GA milestones (`install`, `upgrade`) - @yangchiu [ ] Run security testing of container images for pre-GA milestones - @roger-ryao [ ] Verify longhorn chart PR to ensure all artifacts are ready for GA (`install`, `upgrade`) @chriscchien [ ] Run core testing (install, upgrade) for the GA build from the previous patch (1.5.4) and the last patch of the previous feature release (1.4.4). - @yangchiu Release (Release captain needs to finish the following items) [ ] Release longhorn/chart from the release branch to publish to ArtifactHub [ ] Release note [ ] Deprecation note [ ] Upgrade notes including highlighted notes, deprecation, compatible changes, and others impacting the current users Post-Release (Release captain needs to coordinate the following items) [ ] Create a new release branch of manager/ui/tests/engine/longhorn instance-manager/share-manager/backing-image-manager when creating the RC1 (only for new feature release) [ ] Update https://github.com/longhorn/longhorn/blob/master/deploy/upgraderesponderserver/chart-values.yaml @PhanLe1010 [ ] Add another request for the rancher charts for the next patch release (`1.5.6`) @rebeccazzzz Rancher charts: verify the chart can be installed & upgraded - @khushboo-rancher [ ] rancher/image-mirrors update @PhanLe1010 [ ] rancher/charts active branches (2.7 & 2.8) for rancher marketplace @mantissahz @PhanLe1010 cc @longhorn/qa @longhorn/dev" } ]
{ "category": "Runtime", "file_name": "release.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Describe the PR e.g. add cool parser. Relation issue e.g. https://github.com/swaggo/gin-swagger/pull/123/files Additional context Add any other context about the problem here." } ]
{ "category": "Runtime", "file_name": "pull_request_template.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "This document defines a high level roadmap for rkt development. The dates below should not be considered authoritative, but rather indicative of the projected timeline of the project. The represent the most up-to-date state of affairs. rkt's version 1.0 release marks the command line user interface and on-disk data structures as stable and reliable for external development. Adapting rkt to offer first-class implementation of the Kubernetes Container Runtime Interface. Supporting OCI specs natively in rkt. Following OCI evolution and stabilization, it will become the preferred way over appc. However, rkt will continue to support the ACI image format and distribution mechanism. There is currently no plans to remove that support from rkt. Future tasks without a specific timeline are tracked at https://github.com/rkt/rkt/milestone/30." } ]
{ "category": "Runtime", "file_name": "ROADMAP.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "We follow the , and use the corresponding tooling. For the purposes of the aforementioned guidelines, controller-runtime counts as a \"library project\", but otherwise follows the guidelines exactly. For release branches, we generally tend to support backporting one (1) major release (`release-{X-1}` or `release-0.{Y-1}`), but may go back further if the need arises and is very pressing (e.g. security updates). Note the . Particularly: We DO guarantee Kubernetes REST API compatibility -- if a given version of controller-runtime stops working with what should be a supported version of Kubernetes, this is almost certainly a bug. We DO NOT guarantee any particular compatibility matrix between kubernetes library dependencies (client-go, apimachinery, etc); Such compatibility is infeasible due to the way those libraries are versioned." } ]
{ "category": "Runtime", "file_name": "VERSIONING.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "The initial roadmap of CRI-O was lightweight and followed the main Kubernetes Container Runtime Interface (CRI) development lifecycle. This is partially because additional features on top of that are either integrated into a CRI-O release as soon as theyre ready, or are tracked through the Milestone mechanism in GitHub. Another reason is that feature availability is mostly tied to Kubernetes releases, and thus most of its long-term goals are already tracked in through the Kubernetes Enhancement Proposal (KEP) process. Finally, CRI-Os long-term roadmap outside of features being added by SIG-Node is in part described by its mission: to be a secure, performant and stable implementation of the Container Runtime Interface. However, all of these together do construct a roadmap, and this document will describe how. CRI-Os primary internal planning mechanism is the Milestone feature in GitHub, along with Issues. Since CRI-O releases in lock-step with Kubernetes minor releases, where the CRI-O community aims to have a x.y.0 release released within three days after the corresponding Kubernetes x.y.0 release, there is a well established deadline that must be met. For PRs and Issues that are targeted at a particular x.y.0 release can be added to the x.y Milestone and they will be considered for priority in leading up to the actual release. However, since CRI-Os releases are time bound and partially out of the CRI-O communities control, tagging a PR or issue with a Milestone does not guarantee it will be included. Users or contributors who dont have permissions to add the Milestone can request an Approver to do so. If there is disagreement, the standard mechanism will be used. Pull requests targeting patch release-x.y branches are not part of any milestone. The release branches are decoupled from the and fixes can be merged ad-hoc. The support for patch release branches follows the yearly Kubernetes period and can be longer based on contributor bandwidth. CRI-Os primary purpose is to be a CRI compliant runtime for Kubernetes, and thus most of the features that CRI-O adds are added to remain conformant to the" }, { "data": "Often, though not always, CRI-O will attempt to support new features in Kubernetes while theyre in the Alpha stage, though sometimes this target is missed and support is added while the feature is in Beta. To track the features that may be added to CRI-O from upstream, one can watch for a given release. If a particular feature interests you, the CRI-O community recommends you open an issue in CRI-O so it can be included in the Milestone for that given release. CRI-O maintainers are involved in SIG-Node in driving various upstream initiatives that can be tracked in the . There still exist features that CRI-O will add that exist outside of the purview of SIG-Node, and span multiple releases. These features are usually implemented to fulfill CRI-Os purpose of being secure, performant, and stable. As all of these are aspirational and seek to improve CRI-O structurally, as opposed to fixing a bug or clearly adding a feature, its less appropriate to have an issue for them. As such, updates to this document will be made once per release cycle. Finally, it is worth noting that the integration of these features will be deliberate, slow, and strictly opted into. CRI-O does aim to constantly improve, but also aims to never compromise its stability in the process. 
Some of these features can be seen below:
- Improve upstream documentation
- Automate release process
- Improved seccomp notification support
- Increase pod density on nodes:
  - Reduce overhead of relisting pods and containers (alongside )
  - Reduce overhead of metrics collection (alongside )
  - Reduce process overhead of multiple conmons per pod (through )
- Improve maintainability by ensuring the code is easy to understand and follow
- Improve observability and tracing internally
- Evaluate a Rust reimplementation of different pieces of the codebase.

Relying on different SIGs for CRI-O features: We need to discuss our enhancements with different SIGs to get all required information and drive the change. This can lead to helpful, but perhaps unexpected, input and delay the deliverable.
Some features require initial research: We're not completely aware of all technical aspects of the changes. This means that there is a risk of delay because of investing more time in pre-research." } ]
{ "category": "Runtime", "file_name": "roadmap.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "<!-- Thank you for contributing to curve! --> Issue Number: #xxx <!-- replace xxx with issue number --> Problem Summary: What's Changed: How it Works: Side effects(Breaking backward compatibility? Performance regression?): [ ] Relevant documentation/comments is changed or added [ ] I acknowledge that all my contributions will be made under the project's license" } ]
{ "category": "Runtime", "file_name": "pull_request_template.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|" } ]
{ "category": "Runtime", "file_name": "fuzzy_mode_convert_table.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "This document outlines the Open vSwitch (OVS) pipeline Antrea uses to implement its networking functionalities. The following assumptions are currently in place: Antrea is deployed in encap mode, establishing an overlay network across all Nodes. All the Nodes are Linux Nodes. IPv6 is disabled. Option `antreaProxy.proxyAll` (referred to as `proxyAll` later in this document) is enabled. Two Alpha features `TrafficControl` and `L7NetworkPolicy` are enabled. Default settings are maintained for other features and options. The document references version v1.15 of Antrea. Node Route Controller: the which is a part of antrea-agent and watches for updates to Nodes. When a Node is added, it updates the local networking configurations (e.g. configure the tunnel to the new Node). When a Node is deleted, it performs the necessary clean-ups. peer Node: this is how we refer to other Nodes in the cluster, to which the local Node is connected through a Geneve, VXLAN, GRE, or STT tunnel. Antrea-native NetworkPolicy: Antrea ClusterNetworkPolicy and Antrea NetworkPolicy CRDs, as documented . Service session affinity: a Service attribute that selects the same backend Pods for connections from a particular client. For a K8s Service, session affinity can be enabled by setting `service.spec.sessionAffinity` to `ClientIP` (default is `None`). See for more information about session affinity. table-miss flow: a \"catch-all\" flow in an OpenFlow table, which is used if no other flow is matched. If the table-miss flow does not exist, by default packets unmatched by flows are dropped (discarded). action `conjunction`: an efficient way in OVS to implement conjunctive matches, is a match for which multiple fields are required to match conjunctively, each within a set of acceptable values. See [OVS fields](http://www.openvswitch.org/support/dist-docs/ovs-fields.7.txt) for more information. action `normal`: OpenFlow defines this action to submit a packet to \"the traditional non-OpenFlow pipeline of the switch\". In other words, if a flow uses this action, the packets matched by the flow traverse the switch in the same manner as they would if OpenFlow were not configured on the switch. Antrea uses this action to process ARP packets as a regular learning L2 switch would. action `group`: an action used to process forwarding decisions on multiple OVS ports. Examples include: load-balancing, multicast, and active/standby. See [OVS group action](https://docs.openvswitch.org/en/latest/ref/ovs-actions.7/#the-group-action) for more information. action `IN_PORT`: an action to output packets to the port on which they were received. This is the only standard way to output the packets to the input port. action `ct`: an action to commit connections to the connection tracking module, which OVS can use to match the state of a TCP, UDP, ICMP, etc., connection. See the [OVS Conntrack tutorial](https://docs.openvswitch.org/en/latest/tutorials/ovs-conntrack/) for more information. reg mark: a value stored in an OVS register conveying information for a packet across the pipeline. Explore all reg marks in the pipeline in the [OVS Registers] section. ct mark: a value stored in the field `ct_mark` of OVS conntrack, conveying information for a connection throughout its entire lifecycle across the pipeline. Explore all values used in the pipeline in the [Ct Marks] section. ct label: a value stored in the field `ct_label` of OVS conntrack, conveying information for a connection throughout its entire lifecycle across the pipeline. 
Explore all values used in the pipeline in the [Ct Labels] section. ct zone: a zone is to isolate connection tracking rules stored in the field `ct_zone` of OVS conntrack. It is conceptually similar to the more generic Linux network namespace but is specific to conntrack and has less" }, { "data": "Explore all the zones used in the pipeline in the [Ct Zones] section. dmac table: a traditional L2 switch has a \"dmac\" table that maps the learned destination MAC address to the appropriate egress port. It is often the same physical table as the \"smac\" table (which matches the source MAC address and initiates MAC learning if the address is unknown). Global Virtual MAC: a virtual MAC address that is used as the destination MAC for all tunneled traffic across all Nodes. This simplifies networking by enabling all Nodes to use this MAC address instead of the actual MAC address of the appropriate remote gateway. This allows each OVS to act as a \"proxy\" for the local gateway when receiving tunneled traffic and directly take care of the packet forwarding. Currently, we use a hard-coded value of `aa:bb:cc:dd:ee:ff`. Virtual Service IP: a virtual IP address used as the source IP address for hairpin Service connections through the Antrea gateway port. Currently, we use a hard-coded value of `169.254.0.253`. Virtual NodePort DNAT IP: a virtual IP address used as a DNAT IP address for NodePort Service connections through Antrea gateway port. Currently, we use a hard-coded value of `169.254.0.252`. This guide includes a representative flow dump for every table in the pipeline, to illustrate the function of each table. If you have a cluster running Antrea, you can dump the flows or groups on a given Node as follows: ```bash kubectl exec -n kube-system <ANTREAAGENTPODNAME> -c antrea-ovs -- ovs-ofctl dump-flows <BRIDGENAME> -O Openflow15 [--no-stats] [--names] kubectl exec -n kube-system <ANTREAAGENTPODNAME> -c antrea-ovs -- ovs-ofctl dump-groups <BRIDGENAME> -O Openflow15 [--names] ``` where `<ANTREAAGENTPODNAME>` is the name of the antrea-agent Pod running on that Node, and `<BRIDGENAME>` is the name of the bridge created by Antrea (`br-int` by default). You can also dump the flows for a specific table or group as follows: ```bash kubectl exec -n kube-system <ANTREAAGENTPODNAME> -c antrea-ovs -- ovs-ofctl dump-flows <BRIDGENAME> table=<TABLE_NAME> -O Openflow15 [--no-stats] [--names] kubectl exec -n kube-system <ANTREAAGENTPODNAME> -c antrea-ovs -- ovs-ofctl dump-groups <BRIDGENAME> <GROUP_ID> -O Openflow15 [--names] ``` where `<TABLENAME>` is the name of a table in the pipeline, and `<GROUPID>` is the ID of a group. We use some OVS registers to carry information throughout the pipeline. To enhance usability, we assign friendly names to the registers we use. | Register | Field Range | Field Name | Reg Mark Value | Reg Mark Name | Description | ||-||-||| | NXMNXREG0 | bits 0-3 | PktSourceField | 0x1 | FromTunnelRegMark | Packet source is tunnel port. | | | | | 0x2 | FromGatewayRegMark | Packet source is the local Antrea gateway port. | | | | | 0x3 | FromPodRegMark | Packet source is local Pod port. | | | | | 0x4 | FromUplinkRegMark | Packet source is uplink port. | | | | | 0x5 | FromBridgeRegMark | Packet source is local bridge port. | | | | | 0x6 | FromTCReturnRegMark | Packet source is TrafficControl return port. | | | bits 4-7 | PktDestinationField | 0x1 | ToTunnelRegMark | Packet destination is tunnel port. | | | | | 0x2 | ToGatewayRegMark | Packet destination is the local Antrea gateway port. 
| | | | | 0x3 | ToLocalRegMark | Packet destination is local Pod port. | | | | | 0x4 | ToUplinkRegMark | Packet destination is uplink port. | | | | | 0x5 | ToBridgeRegMark | Packet destination is local bridge" }, { "data": "| | | bit 9 | | 0b0 | NotRewriteMACRegMark | Packet's source/destination MAC address does not need to be rewritten. | | | | | 0b1 | RewriteMACRegMark | Packet's source/destination MAC address needs to be rewritten. | | | bit 10 | | 0b1 | APDenyRegMark | Packet denied (Drop/Reject) by Antrea NetworkPolicy. | | | bits 11-12 | APDispositionField | 0b00 | DispositionAllowRegMark | Indicates Antrea NetworkPolicy disposition: allow. | | | | | 0b01 | DispositionDropRegMark | Indicates Antrea NetworkPolicy disposition: drop. | | | | | 0b11 | DispositionPassRegMark | Indicates Antrea NetworkPolicy disposition: pass. | | | bit 13 | | 0b1 | GeneratedRejectPacketOutRegMark | Indicates packet is a generated reject response packet-out. | | | bit 14 | | 0b1 | SvcNoEpRegMark | Indicates packet towards a Service without Endpoint. | | | bit 19 | | 0b1 | RemoteSNATRegMark | Indicates packet needs SNAT on a remote Node. | | | bit 22 | | 0b1 | L7NPRedirectRegMark | Indicates L7 Antrea NetworkPolicy disposition of redirect. | | | bits 21-22 | OutputRegField | 0b01 | OutputToOFPortRegMark | Output packet to an OVS port. | | | | | 0b10 | OutputToControllerRegMark | Send packet to Antrea Agent. | | | bits 25-32 | PacketInOperationField | | | Field to store NetworkPolicy packetIn operation. | | NXMNXREG1 | bits 0-31 | TargetOFPortField | | | Egress OVS port of packet. | | NXMNXREG2 | bits 0-31 | SwapField | | | Swap values in flow fields in OpenFlow actions. | | | bits 0-7 | PacketInTableField | | | OVS table where it was decided to send packets to the controller (Antrea Agent). | | NXMNXREG3 | bits 0-31 | EndpointIPField | | | Field to store IPv4 address of the selected Service Endpoint. | | | | APConjIDField | | | Field to store Conjunction ID for Antrea Policy. | | NXMNXREG4 | bits 0-15 | EndpointPortField | | | Field store TCP/UDP/SCTP port of a Service's selected Endpoint. | | | bits 16-18 | ServiceEPStateField | 0b001 | EpToSelectRegMark | Packet needs to do Service Endpoint selection. | | | bits 16-18 | ServiceEPStateField | 0b010 | EpSelectedRegMark | Packet has done Service Endpoint selection. | | | bits 16-18 | ServiceEPStateField | 0b011 | EpToLearnRegMark | Packet has done Service Endpoint selection and the selected Endpoint needs to be cached. | | | bits 0-18 | EpUnionField | | | The union value of EndpointPortField and ServiceEPStateField. | | | bit 19 | | 0b1 | ToNodePortAddressRegMark | Packet is destined for a Service of type NodePort. | | | bit 20 | | 0b1 | AntreaFlexibleIPAMRegMark | Packet is from local Antrea IPAM Pod. | | | bit 20 | | 0b0 | NotAntreaFlexibleIPAMRegMark | Packet is not from local Antrea IPAM Pod. | | | bit 21 | | 0b1 | ToExternalAddressRegMark | Packet is destined for a Service's external IP. | | | bits 22-23 | TrafficControlActionField | 0b01 | TrafficControlMirrorRegMark | Indicates packet needs to be mirrored (used by TrafficControl). | | | | | 0b10 | TrafficControlRedirectRegMark | Indicates packet needs to be redirected (used by TrafficControl). | | | bit 24 | | 0b1 | NestedServiceRegMark | Packet is destined for a Service using other Services as Endpoints. 
| | | bit 25 | | 0b1 | DSRServiceRegMark | Packet is destined for a Service working in DSR" }, { "data": "| | | | | 0b0 | NotDSRServiceRegMark | Packet is destined for a Service working in non-DSR mode. | | | bit 26 | | 0b1 | RemoteEndpointRegMark | Packet is destined for a Service selecting a remote non-hostNetwork Endpoint. | | | bit 27 | | 0b1 | FromExternalRegMark | Packet is from Antrea gateway, but its source IP is not the gateway IP. | | | bit 28 | | 0b1 | FromLocalRegMark | Packet is from a local Pod or the Node. | | NXMNXREG5 | bits 0-31 | TFEgressConjIDField | | | Egress conjunction ID hit by TraceFlow packet. | | NXMNXREG6 | bits 0-31 | TFIngressConjIDField | | | Ingress conjunction ID hit by TraceFlow packet. | | NXMNXREG7 | bits 0-31 | ServiceGroupIDField | | | GroupID corresponding to the Service. | | NXMNXREG8 | bits 0-11 | VLANIDField | | | VLAN ID. | | | bits 12-15 | CtZoneTypeField | 0b0001 | IPCtZoneTypeRegMark | Ct zone type is IPv4. | | | | | 0b0011 | IPv6CtZoneTypeRegMark | Ct zone type is IPv6. | | | bits 0-15 | CtZoneField | | | Ct zone ID is a combination of VLANIDField and CtZoneTypeField. | | NXMNXREG9 | bits 0-31 | TrafficControlTargetOFPortField | | | Field to cache the OVS port to output packets to be mirrored or redirected (used by TrafficControl). | | NXMNXXXREG3 | bits 0-127 | EndpointIP6Field | | | Field to store IPv6 address of the selected Service Endpoint. | Note that reg marks that have overlapped bits will not be used at the same time, such as `SwapField` and `PacketInTableField`. We use some bits of the `ct_mark` field of OVS conntrack to carry information throughout the pipeline. To enhance usability, we assign friendly names to the bits we use. | Field Range | Field Name | Ct Mark Value | Ct Mark Name | Description | |-|--||--|--| | bits 0-3 | ConnSourceCTMarkField | 0b0010 | FromGatewayCTMark | Connection source is the Antrea gateway port. | | | | 0b0101 | FromBridgeCTMark | Connection source is the local bridge port. | | bit 4 | | 0b1 | ServiceCTMark | Connection is for Service. | | | | 0b0 | NotServiceCTMark | Connection is not for Service. | | bit 5 | | 0b1 | ConnSNATCTMark | SNAT'd connection for Service. | | bit 6 | | 0b1 | HairpinCTMark | Hair-pin connection. | | bit 7 | | 0b1 | L7NPRedirectCTMark | Connection should be redirected to an application-aware engine. | We use some bits of the `ct_label` field of OVS conntrack to carry information throughout the pipeline. To enhance usability, we assign friendly names to the bits we use. | Field Range | Field Name | Description | |-|--|| | bits 0-31 | IngressRuleCTLabel | Ingress rule ID. | | bits 32-63 | EgressRuleCTLabel | Egress rule ID. | | bits 64-75 | L7NPRuleVlanIDCTLabel | VLAN ID for L7 NetworkPolicy rule. | We use some OVS conntrack zones to isolate connection tracking rules. To enhance usability, we assign friendly names to the ct zones. | Zone ID | Zone Name | Description | ||--|-| | 65520 | CtZone | Tracking IPv4 connections that don't require SNAT. | | 65521 | SNATCtZone | Tracking IPv4 connections that require SNAT. | Several tables of the pipeline are dedicated to [Kubernetes NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) implementation (tables [EgressRule], [EgressDefaultRule], [IngressRule], and" }, { "data": "Throughout this document, the following K8s NetworkPolicy example is used to demonstrate how simple ingress and egress policy rules are mapped to OVS flows. 
This K8s NetworkPolicy is applied to Pods with the label `app: web` in the `default` Namespace. For these Pods, only TCP traffic on port 80 from Pods with the label `app: client` and to Pods with the label `app: db` is allowed. Because Antrea will only install OVS flows for this K8s NetworkPolicy on Nodes that have Pods selected by the policy, we have scheduled an `app: web` Pod on the current Node from which the sample flows in this document are dumped. The Pod has been assigned an IP address `10.10.0.19` from the Antrea CNI, so you will see the IP address shown in the associated flows. ```yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: web-app-db-network-policy namespace: default spec: podSelector: matchLabels: app: web policyTypes: Ingress Egress ingress: from: podSelector: matchLabels: app: client ports: protocol: TCP port: 80 egress: to: podSelector: matchLabels: app: db ports: protocol: TCP port: 3306 ``` Like K8s NetworkPolicy, several tables of the pipeline are dedicated to [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/) implementation (tables [NodePortMark], [SessionAffinity], [ServiceLB], and [EndpointDNAT]). By enabling `proxyAll`, ClusterIP, NodePort, LoadBalancer, and ExternalIP are all handled by AntreaProxy. Otherwise, only in-cluster ClusterIP is handled. In this document, we use the sample K8s Services below. These Services select Pods with the label `app: web` as Endpoints. A sample Service with `clusterIP` set to `10.101.255.29` does not have any associated Endpoint. ```yaml apiVersion: v1 kind: Service metadata: name: sample-clusterip-no-ep spec: ports: protocol: TCP port: 80 targetPort: 80 clusterIP: 10.101.255.29 ``` A sample ClusterIP Service with `clusterIP` set to `10.105.31.235`. ```yaml apiVersion: v1 kind: Service metadata: name: sample-clusterip spec: selector: app: web ports: protocol: TCP port: 80 targetPort: 80 clusterIP: 10.105.31.235 ``` A sample NodePort Service with `nodePort` set to `30004`. ```yaml apiVersion: v1 kind: Service metadata: name: sample-nodeport spec: selector: app: web ports: protocol: TCP port: 80 targetPort: 80 nodePort: 30004 type: NodePort ``` A sample LoadBalancer Service with ingress IP `192.168.77.150` assigned by an ingress controller. ```yaml apiVersion: v1 kind: Service metadata: name: sample-loadbalancer spec: selector: app: web ports: protocol: TCP port: 80 targetPort: 80 type: LoadBalancer status: loadBalancer: ingress: ip: 192.168.77.150 ``` A sample Service with external IP `192.168.77.200`. ```yaml apiVersion: v1 kind: Service metadata: name: sample-service-externalip spec: selector: app: web ports: protocol: TCP port: 80 targetPort: 80 externalIPs: 192.168.77.200 ``` A sample Service configured with session affinity. ```yaml apiVersion: v1 kind: Service metadata: name: sample-service-session-affinity spec: selector: app: web ports: protocol: TCP port: 80 targetPort: 80 clusterIP: 10.96.76.15 sessionAffinity: ClientIP sessionAffinityConfig: clientIP: timeoutSeconds: 300 ``` A sample Service configured `externalTrafficPolicy` to `Local`. Only `externalTrafficPolicy` of NodePort/LoadBalancer Service can be configured with `Local`. 
```yaml apiVersion: v1 kind: Service metadata: name: sample-service-etp-local spec: selector: app: web ports: protocol: TCP port: 80 targetPort: 80 type: LoadBalancer externalTrafficPolicy: Local status: loadBalancer: ingress: ip: 192.168.77.151 ``` In addition to the tables created for K8s NetworkPolicy, Antrea creates additional dedicated tables to support (tables [AntreaPolicyEgressRule] and [AntreaPolicyIngressRule]). Consider the following Antrea ClusterNetworkPolicy (ACNP) in the Application Tier as an example for the remainder of this document. This ACNP is applied to all Pods with the label `app: web` in all Namespaces. For these Pods, only TCP traffic on port 80 from the Pods with the label `app: client` and to the Pods with the label `app: db` is allowed. Similar to K8s NetworkPolicy, Antrea will only install OVS flows for this policy on Nodes that have Pods selected by the policy. This policy has very similar rules as the K8s NetworkPolicy example shown" }, { "data": "This is intentional to simplify this document and to allow easier comparison between the flows generated for both types of policies. Additionally, we should emphasize that this policy applies to Pods across all Namespaces, while a K8s NetworkPolicy is always scoped to a specific Namespace (in the case of our example, the default Namespace). ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: web-app-db-network-policy spec: priority: 5 tier: application appliedTo: podSelector: matchLabels: app: web ingress: action: Allow from: podSelector: matchLabels: app: client ports: protocol: TCP port: 80 name: AllowFromClient action: Drop egress: action: Allow to: podSelector: matchLabels: app: db ports: protocol: TCP port: 3306 name: AllowToDB action: Drop ``` In addition to layer 3 and layer 4 policies mentioned above, [Antrea-native Layer 7 NetworkPolicy](../antrea-l7-network-policy.md) is also supported in Antrea. The main difference is that Antrea-native L7 NetworkPolicy uses layer 7 protocol to filter traffic, not layer 3 or layer 4 protocol. Consider the following Antrea-native L7 NetworkPolicy in the Application Tier as an example for the remainder of this document. This ACNP is applied to all Pods with the label `app: web` in all Namespaces. It allows only HTTP ingress traffic on port 8080 from Pods with the label `app: client`, limited to the `GET` method and `/api/v2/*` path. Any other HTTP ingress traffic on port 8080 from Pods with the label `app: client` will be dropped. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: ingress-allow-http-request-to-api-v2 spec: priority: 4 tier: application appliedTo: podSelector: matchLabels: app: web ingress: name: AllowFromClientL7 action: Allow from: podSelector: matchLabels: app: client ports: protocol: TCP port: 8080 l7Protocols: http: path: \"/api/v2/*\" method: \"GET\" ``` is a CRD API that manages and manipulates the transmission of Pod traffic. Antrea creates a dedicated table [TrafficControl] to implement feature `TrafficControl`. We will use the following TrafficControls as examples for the remainder of this document. This is a TrafficControl applied to Pods with the label `app: web`. For these Pods, both ingress and egress traffic will be redirected to port `antrea-tc-tap0`, and returned through port `antrea-tc-tap1`. 
```yaml apiVersion: crd.antrea.io/v1alpha2 kind: TrafficControl metadata: name: redirect-web-to-local spec: appliedTo: podSelector: matchLabels: app: web direction: Both action: Redirect targetPort: ovsInternal: name: antrea-tc-tap0 returnPort: ovsInternal: name: antrea-tc-tap1 ``` This is a TrafficControl applied to Pods with the label `app: db`. For these Pods, both ingress and egress will be mirrored (duplicated) to port `antrea-tc-tap2`. ```yaml apiVersion: crd.antrea.io/v1alpha2 kind: TrafficControl metadata: name: mirror-db-to-local spec: appliedTo: podSelector: matchLabels: app: db direction: Both action: Mirror targetPort: ovsInternal: name: antrea-tc-tap2 ``` Table [EgressMark] is dedicated to the implementation of feature `Egress`. Consider the following Egresses as examples for the remainder of this document. This is an Egress applied to Pods with the label `app: web`. For these Pods, all egress traffic (traffic leaving the cluster) will be SNAT'd on the Node `k8s-node-control-plane` using Egress IP `192.168.77.112`. In this context, `k8s-node-control-plane` is known as the \"Egress Node\" for this Egress resource. Note that the flows presented in the rest of this document were dumped on Node `k8s-node-control-plane`. Egress flows are different on the \"source Node\" (Node running a workload Pod to which the Egress resource is applied) and on the \"Egress Node\" (Node enforcing the SNAT policy). ```yaml apiVersion: crd.antrea.io/v1beta1 kind: Egress metadata: name: egress-web spec: appliedTo: podSelector: matchLabels: app: web egressIP: 192.168.77.112 status: egressNode: k8s-node-control-plane ``` This is an Egress applied to Pods with the label `app: client`. For these Pods, all egress traffic will be SNAT'd on the Node `k8s-node-worker-1` using Egress IP `192.168.77.113`. ```yaml apiVersion:" }, { "data": "kind: Egress metadata: name: egress-client spec: appliedTo: podSelector: matchLabels: app: client egressIP: 192.168.77.113 status: egressNode: k8s-node-worker-1 ``` This table serves as the primary entry point in the pipeline, forwarding packets to different tables based on their respective protocols. If you dump the flows of this table, you may see the following: ```text table=PipelineRootClassifier, priority=200,arp actions=goto_table:ARPSpoofGuard table=PipelineRootClassifier, priority=200,ip actions=goto_table:Classifier table=PipelineRootClassifier, priority=0 actions=drop ``` Flow 1 forwards ARP packets to table [ARPSpoofGuard]. Flow 2 forwards IP packets to table [Classifier]. Flow 3 is the table-miss flow to drop other unsupported protocols, not normally used. This table is designed to drop ARP packets from local Pods or the local Antrea gateway. We ensure that the advertised IP and MAC addresses are correct, meaning they match the values configured on the interface when Antrea sets up networking for a local Pod or the local Antrea gateway. 
If you dump the flows of this table, you may see the following: ```text table=ARPSpoofGuard, priority=200,arp,inport=\"antrea-gw0\",arpspa=10.10.0.1,arpsha=ba:5e:d1:55:aa:c0 actions=gototable:ARPResponder table=ARPSpoofGuard, priority=200,arp,inport=\"client-6-3353ef\",arpspa=10.10.0.26,arpsha=5e:b5:e3:a6:90:b7 actions=gototable:ARPResponder table=ARPSpoofGuard, priority=200,arp,inport=\"web-7975-274540\",arpspa=10.10.0.24,arpsha=fa:b7:53:74:21:a6 actions=gototable:ARPResponder table=ARPSpoofGuard, priority=200,arp,inport=\"db-755c6-5080e3\",arpspa=10.10.0.25,arpsha=36:48:21:a2:9d:b4 actions=gototable:ARPResponder table=ARPSpoofGuard, priority=0 actions=drop ``` Flow 1 matches legitimate ARP packets from the local Antrea gateway. Flows 2-4 match legitimate ARP packets from local Pods. Flow 5 is the table-miss flow to drop ARP spoofing packets, which are not matched by flows 1-4. The purpose of this table is to handle ARP requests from the local Antrea gateway or local Pods, addressing specific cases: Responding to ARP requests from the local Antrea gateway seeking the MAC address of a remote Antrea gateway located on a different Node. This ensures that the local Node can reach any remote Pods. Ensuring the normal layer 2 (L2) learning among local Pods and the local Antrea gateway. If you dump the flows of this table, you may see the following: ```text table=ARPResponder, priority=200,arp,arptpa=10.10.1.1,arpop=1 actions=move:NXMOFETHSRC[]->NXMOFETHDST[],setfield:aa:bb:cc:dd:ee:ff->ethsrc,setfield:2->arpop,move:NXMNXARPSHA[]->NXMNXARPTHA[],setfield:aa:bb:cc:dd:ee:ff->arpsha,move:NXMOFARPSPA[]->NXMOFARPTPA[],setfield:10.10.1.1->arpspa,IN_PORT table=ARPResponder, priority=190,arp actions=NORMAL table=ARPResponder, priority=0 actions=drop ``` Flow 1 is designed for case 1, matching ARP request packets for the MAC address of a remote Antrea gateway with IP address `10.10.1.1`. It programs an ARP reply packet and sends it back to the port where the request packet was received. Note that both the source hardware address and the source MAC address in the ARP reply packet are set to the *Global Virtual MAC* `aa:bb:cc:dd:ee:ff`, not the actual MAC address of the remote Antrea gateway. This ensures that once the traffic is received by the remote OVS bridge, it can be directly forwarded to the appropriate Pod without actually going through the local Antrea gateway. The Global Virtual MAC is used as the destination MAC address for all the traffic being tunneled or routed. This flow serves as the \"ARP responder\" for the peer Node whose local Pod subnet is `10.10.1.0/24`. If we were to look at the routing table for the local Node, we would find the following \"onlink\" route: ```text 10.10.1.0/24 via 10.10.1.1 dev antrea-gw0 onlink ``` A similar route is installed on the local Antrea gateway (antrea-gw0) interface every time the Antrea Node Route Controller is notified that a new Node has joined the cluster. The route must be marked as \"onlink\" since the kernel does not have a route to the peer gateway `10.10.1.1`. We \"trick\" the kernel into believing that `10.10.1.1` is directly connected to the local Node, even though it is on the other side of the tunnel. Flow 2 is designed for case 2, ensuring that OVS handles the remainder of ARP traffic as a regular L2 learning switch (using the `normal`" }, { "data": "In particular, this takes care of forwarding ARP requests and replies among local Pods. 
Flow 3 is the table-miss flow, which should never be used since ARP packets will be matched by either flow 1 or 2. This table is designed to determine the \"category\" of IP packets by matching on their ingress port. It addresses specific cases: Packets originating from the local Node through the local Antrea gateway port, requiring IP spoof legitimacy verification. Packets originating from the external network through the Antrea gateway port. Packets received through an overlay tunnel. Packets received through a return port defined in a user-provided TrafficControl CR (for feature `TrafficControl`). Packets returned from an application-aware engine through a specific port (for feature `L7NetworkPolicy`). Packets originating from local Pods, requiring IP spoof legitimacy verification. If you dump the flows of this table, you may see the following: ```text table=Classifier, priority=210,ip,inport=\"antrea-gw0\",nwsrc=10.10.0.1 actions=setfield:0x2/0xf->reg0,setfield:0x10000000/0x10000000->reg4,goto_table:SpoofGuard table=Classifier, priority=200,inport=\"antrea-gw0\" actions=setfield:0x2/0xf->reg0,setfield:0x8000000/0x8000000->reg4,gototable:SpoofGuard table=Classifier, priority=200,inport=\"antrea-tun0\" actions=setfield:0x1/0xf->reg0,setfield:0x200/0x200->reg0,gototable:UnSNAT table=Classifier, priority=200,inport=\"antrea-tc-tap2\" actions=setfield:0x6/0xf->reg0,goto_table:L3Forwarding table=Classifier, priority=200,inport=\"antrea-l7-tap1\",vlantci=0x1000/0x1000 actions=popvlan,setfield:0x6/0xf->reg0,goto_table:L3Forwarding table=Classifier, priority=190,inport=\"client-6-3353ef\" actions=setfield:0x3/0xf->reg0,setfield:0x10000000/0x10000000->reg4,gototable:SpoofGuard table=Classifier, priority=190,inport=\"web-7975-274540\" actions=setfield:0x3/0xf->reg0,setfield:0x10000000/0x10000000->reg4,gototable:SpoofGuard table=Classifier, priority=190,inport=\"db-755c6-5080e3\" actions=setfield:0x3/0xf->reg0,setfield:0x10000000/0x10000000->reg4,gototable:SpoofGuard table=Classifier, priority=0 actions=drop ``` Flow 1 is designed for case 1, matching the source IP address `10.10.0.1` to ensure that the packets are originating from the local Antrea gateway. The following reg marks are loaded: `FromGatewayRegMark`, indicating that the packets are received on the local Antrea gateway port, which will be consumed in tables [L3Forwarding], [L3DecTTL], [SNATMark] and [SNAT]. `FromLocalRegMark`, indicating that the packets are from the local Node, which will be consumed in table [ServiceLB]. Flow 2 is designed for case 2, matching packets originating from the external network through the Antrea gateway port and forwarding them to table [SpoofGuard]. Since packets originating from the local Antrea gateway are matched by flow 1, flow 2 can only match packets originating from the external network. The following reg marks are loaded: `FromGatewayRegMark`, the same as flow 1. `FromExternalRegMark`, indicating that the packets are from the external network, not the local Node. Flow 3 is for case 3, matching packets through an overlay tunnel (i.e., from another Node) and forwarding them to table [UnSNAT]. This approach is based on the understanding that these packets originate from remote Nodes, potentially bearing varying source IP addresses. These packets undergo legitimacy verification before being tunneled. As a consequence, packets from the tunnel should be seamlessly forwarded to table [UnSNAT]. 
The following reg marks are loaded: `FromTunnelRegMark`, indicating that the packets are received on a tunnel, consumed in table [L3Forwarding]. `RewriteMACRegMark`, indicating that the source and destination MAC addresses of the packets should be rewritten, and consumed in table [L3Forwarding]. Flow 4 is for case 4, matching packets from a TrafficControl return port and forwarding them to table [L3Forwarding] to decide the egress port. It's important to note that a forwarding decision for these packets was already made before redirecting them to the TrafficControl target port in table [Output], and at this point, the source and destination MAC addresses of these packets have already been set to the correct values. The only purpose of forwarding the packets to table [L3Forwarding] is to load the tunnel destination IP for packets destined for remote Nodes. This ensures that the returned packets destined for remote Nodes are forwarded through the tunnel. `FromTCReturnRegMark`, which will be used in table [TrafficControl], is loaded to mark the packet source. Flow 5 is for case 5, matching packets returned from an application-aware engine through a specific port, stripping the VLAN ID used by the application-aware engine, and forwarding them to table [L3Forwarding] to decide the egress port. Like flow 4, the purpose of forwarding the packets to table [L3Forwarding] is to load the tunnel destination IP for packets destined for remote Nodes, and `FromTCReturnRegMark` is also loaded. Flows 6-8 are for case 6, matching packets from local Pods and forwarding them to table [SpoofGuard] to do legitimacy verification. The following reg marks are loaded: `FromPodRegMark`, indicating that the packets are received on the ports connected to the local Pods, consumed in tables [L3Forwarding] and [SNATMark]. `FromLocalRegMark`, indicating that the packets are from the local Pods, consumed in table [ServiceLB]. Flow 9 is the table-miss flow to drop packets that are not matched by flows 1-8. This table is crafted to prevent IP spoofing from local Pods. It addresses specific cases: Allowing all packets from the local Antrea gateway. We do not perform checks for this interface as we need to accept external traffic with a source IP address that does not match the gateway IP. Ensuring that the source IP and MAC addresses are correct, i.e., matching the values configured on the interface when Antrea sets up networking for a Pod. If you dump the flows of this table, you may see the following:
```text
table=SpoofGuard, priority=200,ip,inport=\"antrea-gw0\" actions=gototable:UnSNAT
table=SpoofGuard, priority=200,ip,inport=\"client-6-3353ef\",dlsrc=5e:b5:e3:a6:90:b7,nwsrc=10.10.0.26 actions=gototable:UnSNAT
table=SpoofGuard, priority=200,ip,inport=\"web-7975-274540\",dlsrc=fa:b7:53:74:21:a6,nwsrc=10.10.0.24 actions=gototable:UnSNAT
table=SpoofGuard, priority=200,ip,inport=\"db-755c6-5080e3\",dlsrc=36:48:21:a2:9d:b4,nwsrc=10.10.0.25 actions=gototable:UnSNAT
table=SpoofGuard, priority=0 actions=drop
```
Flow 1 is for case 1, matching packets received on the local Antrea gateway port without checking the source IP and MAC addresses.
There are some cases where the source IP of the packets through the local Antrea gateway port is not the local Antrea gateway IP address: When Antrea is deployed with kube-proxy, and `AntreaProxy` is not enabled, packets from local Pods destined for Services will first go through the gateway port, get load-balanced by the kube-proxy data path (undergoes DNAT) then re-enter the OVS pipeline through the gateway port (through an \"onlink\" route, installed by Antrea, directing the DNAT'd packets to the gateway port), resulting in the source IP being that of a local Pod. When Antrea is deployed without kube-proxy, and both `AntreaProxy` and `proxyAll` are enabled, packets from the external network destined for Services will be routed to OVS through the gateway port without masquerading source IP. When Antrea is deployed with kube-proxy, packets from the external network destined for Services whose `externalTrafficPolicy` is set to `Local` will get load-balanced by the kube-proxy data path (undergoes DNAT with a local Endpoint selected by the kube-proxy) and then enter the OVS pipeline through the gateway (through a \"onlink\" route, installed by Antrea, directing the DNAT'd packets to the gateway port) without masquerading source IP. Flows 2-4 are for case 2, matching legitimate IP packets from local Pods. Flow 5 is the table-miss flow to drop IP spoofing packets. This table is used to undo SNAT on reply packets by invoking action `ct` on them. The packets are from SNAT'd Service connections that have been committed to `SNATCtZone` in table [SNAT]. After invoking action `ct`, the packets will be in a \"tracked\" state, restoring all [connection tracking fields](https://www.openvswitch.org/support/dist-docs/ovs-fields.7.txt) (such as `ctstate`, `ctmark`, `ct_label`, etc.) to their original values. The packets with a \"tracked\" state are then forwarded to table [ConntrackZone]. If you dump the flows of this table, you may see the following: ```text table=UnSNAT, priority=200,ip,nw_dst=169.254.0.253 actions=ct(table=ConntrackZone,zone=65521,nat) table=UnSNAT, priority=200,ip,nw_dst=10.10.0.1 actions=ct(table=ConntrackZone,zone=65521,nat) table=UnSNAT, priority=0 actions=goto_table:ConntrackZone ``` Flow 1 matches reply packets for Service connections which were SNAT'd with the Virtual Service IP `169.254.0.253` and invokes action `ct` on" }, { "data": "Flow 2 matches packets for Service connections which were SNAT'd with the local Antrea gateway IP `10.10.0.1` and invokes action `ct` on them. This flow also matches request packets destined for the local Antrea gateway IP from local Pods by accident. However, this is harmless since such connections will never be committed to `SNATCtZone`, and therefore, connection tracking fields for the packets are unset. Flow 3 is the table-miss flow. For reply packets from SNAT'd connections, whose destination IP is the translated SNAT IP, after invoking action `ct`, the destination IP of the packets will be restored to the original IP before SNAT, stored in the connection tracking field `ctnwdst`. The main purpose of this table is to invoke action `ct` on packets from all connections. After invoking `ct` action, packets will be in a \"tracked\" state, restoring all connection tracking fields to their appropriate values. When invoking action `ct` with `CtZone` to the packets that have a \"tracked\" state associated with `SNATCtZone`, then the \"tracked\" state associated with `SNATCtZone` will be inaccessible. 
This transition occurs because the \"tracked\" state shifts to another state associated with `CtZone`. A ct zone is similar in spirit to the more generic Linux network namespaces, uniquely containing a \"tracked\" state within each ct zone. If you dump the flows of this table, you may see the following: ```text table=ConntrackZone, priority=200,ip actions=ct(table=ConntrackState,zone=65520,nat) table=ConntrackZone, priority=0 actions=goto_table:ConntrackState ``` Flow 1 invokes `ct` action on packets from all connections, and the packets are then forwarded to table [ConntrackState] with the \"tracked\" state associated with `CtZone`. Note that for packets in an established Service (DNATed) connection, not the first packet of a Service connection, DNAT or un-DNAT is performed on them before they are forwarded. Flow 2 is the table-miss flow that should remain unused. This table handles packets from the connections that have a \"tracked\" state associated with `CtZone`. It addresses specific cases: Dropping invalid packets reported by conntrack. Forwarding tracked packets from all connections to table [AntreaPolicyEgressRule] directly, bypassing the tables like [PreRoutingClassifier], [NodePortMark], [SessionAffinity], [ServiceLB], and [EndpointDNAT] for Service Endpoint selection. Forwarding packets from new connections to table [PreRoutingClassifier] to start Service Endpoint selection since Service connections are not identified at this stage. If you dump the flows of this table, you may see the following: ```text table=ConntrackState, priority=200,ct_state=+inv+trk,ip actions=drop table=ConntrackState, priority=190,ctstate=-new+trk,ctmark=0/0x10,ip actions=goto_table:AntreaPolicyEgressRule table=ConntrackState, priority=190,ctstate=-new+trk,ctmark=0x10/0x10,ip actions=setfield:0x200/0x200->reg0,gototable:AntreaPolicyEgressRule table=ConntrackState, priority=0 actions=goto_table:PreRoutingClassifier ``` Flow 1 is for case 1, dropping invalid packets. Flow 2 is for case 2, matching packets from non-Service connections with `NotServiceCTMark` and forwarding them to table [AntreaPolicyEgressRule] directly, bypassing the tables for Service Endpoint selection. Flow 3 is also for case 2, matching packets from Service connections with `ServiceCTMark` loaded in table [EndpointDNAT] and forwarding them to table [AntreaPolicyEgressRule], bypassing the tables for Service Endpoint selection. `RewriteMACRegMark`, which is used in table [L3Forwarding], is loaded in this flow, indicating that the source and destination MAC addresses of the packets should be rewritten. Flow 4 is the table-miss flow for case 3, matching packets from all new connections and forwarding them to table [PreRoutingClassifier] to start the processing of Service Endpoint selection. This table handles the first packet from uncommitted Service connections before Service Endpoint selection. It sequentially resubmits the packets to tables [NodePortMark] and [SessionAffinity] to do some pre-processing, including the loading of specific reg marks. Subsequently, it forwards the packets to table [ServiceLB] to perform Service Endpoint" }, { "data": "If you dump the flows of this table, you may see the following: ```text table=PreRoutingClassifier, priority=200,ip actions=resubmit(,NodePortMark),resubmit(,SessionAffinity),resubmit(,ServiceLB) table=PreRoutingClassifier, priority=0 actions=goto_table:NodePortMark ``` Flow 1 sequentially resubmits packets to tables [NodePortMark], [SessionAffinity], and [ServiceLB]. 
Note that packets are ultimately forwarded to table [ServiceLB]. In tables [NodePortMark] and [SessionAffinity], only reg marks are loaded. Flow 2 is the table-miss flow that should remain unused. This table is designed to potentially mark packets destined for NodePort Services. It is only created when `proxyAll` is enabled. If you dump the flows of this table, you may see the following:
```text
table=NodePortMark, priority=200,ip,nwdst=192.168.77.102 actions=setfield:0x80000/0x80000->reg4
table=NodePortMark, priority=200,ip,nwdst=169.254.0.252 actions=setfield:0x80000/0x80000->reg4
table=NodePortMark, priority=0 actions=goto_table:SessionAffinity
```
Flow 1 matches packets destined for the local Node from local Pods. `NodePortRegMark` is loaded, indicating that the packets are potentially destined for NodePort Services. We assume only one valid IP address, `192.168.77.102` (the Node's transport IP), can serve as the host IP address for NodePort based on the option `antreaProxy.nodePortAddresses`. If there are multiple valid IP addresses specified in the option, a flow similar to flow 1 will be installed for each IP address. Flow 2 matches packets destined for the Virtual NodePort DNAT IP. Packets destined for NodePort Services from the local Node or the external network are DNAT'd to the Virtual NodePort DNAT IP by iptables before entering the pipeline. Flow 3 is the table-miss flow. Note that packets of NodePort Services have not been identified in this table by matching destination IP address. The identification of NodePort Services will ultimately be done in table [ServiceLB] by matching `NodePortRegMark` and the specific destination port of a NodePort. This table is designed to implement Service session affinity. The learned flows that cache the information of the selected Endpoints are installed here. If you dump the flows of this table, you may see the following:
```text
table=SessionAffinity, hardtimeout=300, priority=200,tcp,nwsrc=10.10.0.1,nwdst=10.96.76.15,tpdst=80 \\
actions=setfield:0x50/0xffff->reg4,setfield:0/0x4000000->reg4,setfield:0xa0a0001->reg3,setfield:0x20000/0x70000->reg4,set_field:0x200/0x200->reg0
table=SessionAffinity, priority=0 actions=set_field:0x10000/0x70000->reg4
```
Flow 1 is a learned flow generated by flow 7 in table [ServiceLB], designed for the sample Service [ClusterIP with Session Affinity], to implement Service session affinity. Here are some details about the flow: The "hard timeout" of the learned flow should be equal to the value of `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` defined in the Service. This means that until the hard timeout expires, this flow is present in the pipeline, and the session affinity of the Service takes effect. Unlike an "idle timeout", the "hard timeout" does not reset whenever the flow is matched. Source IP address, destination IP address, destination port, and transport protocol are used to match packets of connections sourced from the same client and destined for the Service during the affinity time window. Endpoint IP address and Endpoint port are loaded into `EndpointIPField` and `EndpointPortField` respectively. `EpSelectedRegMark` is loaded, indicating that the Service Endpoint selection is done, and ensuring that the packets will only match the last flow in table [ServiceLB]. `RewriteMACRegMark`, which will be consumed in table [L3Forwarding], is loaded here, indicating that the source and destination MAC addresses of the packets should be rewritten. The hex values loaded by this learned flow are decoded in the short sketch below.
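The `set_field` values in the learned flow above can be decoded against the register layout from the OVS Registers section. A small illustrative Python sketch (not Antrea code; field names follow that section):

```python
# Decode the values loaded by the learned SessionAffinity flow above,
# using the register layout from the "OVS Registers" section (illustrative only).

endpoint_ip = 0x0a0a0001                 # set_field:0xa0a0001->reg3 (EndpointIPField)
print(".".join(str((endpoint_ip >> s) & 0xFF) for s in (24, 16, 8, 0)))  # 10.10.0.1

endpoint_port = 0x50 & 0xFFFF            # set_field:0x50/0xffff->reg4 (EndpointPortField, bits 0-15)
print(endpoint_port)                     # 80

ep_state = (0x20000 >> 16) & 0b111       # set_field:0x20000/0x70000->reg4 (ServiceEPStateField, bits 16-18)
print(bin(ep_state))                     # 0b10 -> EpSelectedRegMark

rewrite_mac = (0x200 >> 9) & 0b1         # set_field:0x200/0x200->reg0 (bit 9)
print(rewrite_mac)                       # 1 -> RewriteMACRegMark
```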
Flow 2 is the table-miss flow to match the first packet of connections destined for Services. The loading of `EpToSelectRegMark`, to be consumed in table [ServiceLB], indicating that the packet needs to do Service Endpoint selection. This table is used to implement Service Endpoint selection. It addresses specific cases: ClusterIP, as demonstrated in the examples [ClusterIP without Endpoint] and [ClusterIP]. NodePort, as demonstrated in the example [NodePort]. LoadBalancer, as demonstrated in the example [LoadBalancer]. Service configured with external IPs, as demonstrated in the example [Service with" }, { "data": "Service configured with session affinity, as demonstrated in the example [Service with session affinity]. Service configured with externalTrafficPolicy to `Local`, as demonstrated in the example [Service with ExternalTrafficPolicy Local]. If you dump the flows of this table, you may see the following: ```text table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nwdst=10.101.255.29,tpdst=80 actions=setfield:0x200/0x200->reg0,setfield:0x20000/0x70000->reg4,set_field:0x9->reg7,group:9 table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nwdst=10.105.31.235,tpdst=80 actions=setfield:0x200/0x200->reg0,setfield:0x20000/0x70000->reg4,set_field:0xc->reg7,group:10 table=ServiceLB, priority=200,tcp,reg4=0x90000/0xf0000,tpdst=30004 actions=setfield:0x200/0x200->reg0,setfield:0x20000/0x70000->reg4,setfield:0x200000/0x200000->reg4,set_field:0xc->reg7,group:12 table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nwdst=192.168.77.150,tpdst=80 actions=setfield:0x200/0x200->reg0,setfield:0x20000/0x70000->reg4,set_field:0xe->reg7,group:14 table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nwdst=192.168.77.200,tpdst=80 actions=setfield:0x200/0x200->reg0,setfield:0x20000/0x70000->reg4,set_field:0x10->reg7,group:16 table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nwdst=10.96.76.15,tpdst=80 actions=setfield:0x200/0x200->reg0,setfield:0x30000/0x70000->reg4,set_field:0xa->reg7,group:11 table=ServiceLB, priority=190,tcp,reg4=0x30000/0x70000,nwdst=10.96.76.15,tpdst=80 actions=learn(table=SessionAffinity,hardtimeout=300,priority=200,deletelearned,cookie=0x203000000000a,\\ ethtype=0x800,nwproto=6,NXMOFTCPDST[],NXMOFIPDST[],NXMOFIPSRC[],load:NXMNXREG4[0..15]->NXMNXREG4[0..15],load:NXMNXREG4[26]->NXMNXREG4[26],load:NXMNXREG3[]->NXMNXREG3[],load:0x2->NXMNXREG4[16..18],load:0x1->NXMNX_REG0[9]),\\ setfield:0x20000/0x70000->reg4,gototable:EndpointDNAT table=ServiceLB, priority=210,tcp,reg4=0x10010000/0x10070000,nwdst=192.168.77.151,tpdst=80 actions=setfield:0x200/0x200->reg0,setfield:0x20000/0x70000->reg4,set_field:0x11->reg7,group:17 table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nwdst=192.168.77.151,tpdst=80 actions=setfield:0x200/0x200->reg0,setfield:0x20000/0x70000->reg4,set_field:0x12->reg7,group:18 table=ServiceLB, priority=0 actions=goto_table:EndpointDNAT ``` Flow 1 and flow 2 are designed for case 1, matching the first packet of connections destined for the sample [ClusterIP without Endpoint] or [ClusterIP]. This is achieved by matching `EpToSelectRegMark` loaded in table [SessionAffinity], clusterIP, and port. The target of the packet matched by the flow is an OVS group where the Endpoint will be selected. Before forwarding the packet to the OVS group, `RewriteMACRegMark`, which will be consumed in table [L3Forwarding], is loaded, indicating that the source and destination MAC addresses of the packets should be rewritten. 
`EpSelectedRegMark` , which will be consumed in table [EndpointDNAT], is also loaded, indicating that the Endpoint is selected. Note that the Service Endpoint selection is not completed yet, as it will be done in the target OVS group. Flow 3 is for case 2, matching the first packet of connections destined for the sample [NodePort]. This is achieved by matching `EpToSelectRegMark` loaded in table [SessionAffinity], `NodePortRegMark` loaded in table [NodePortMark], and NodePort port. Similar to flows 1-2, `RewriteMACRegMark` and `EpSelectedRegMark` are also loaded. Flow 4 is for case 3, processing the first packet of connections destined for the ingress IP of the sample [LoadBalancer], similar to flow 1. Flow 5 is for case 4, processing the first packet of connections destined for the external IP of the sample [Service with ExternalIP], similar to flow 1. Flow 6 is the initial process for case 5, matching the first packet of connections destined for the sample [Service with Session Affinity]. This is achieved by matching the conditions similar to flow 1. Like flow 1, the target of the flow is also an OVS group, and `RewriteMACRegMark` is loaded. The difference is that `EpToLearnRegMark` is loaded, rather than `EpSelectedRegMark`, indicating that the selected Endpoint needs to be cached. Flow 7 is the final process for case 5, matching the packet previously matched by flow 6, resubmitted back from the target OVS group after selecting an Endpoint. Then a learned flow will be generated in table [SessionAffinity] to match the packets of the subsequent connections from the same client IP, ensuring that the packets are always forwarded to the same Endpoint selected the first time. `EpSelectedRegMark`, which will be consumed in table [EndpointDNAT], is loaded, indicating that Service Endpoint selection has been done. Flow 8 and flow 9 are for case 6. Flow 8 has higher priority than flow 9, prioritizing matching the first packet of connections sourced from a local Pod or the local Node with `FromLocalRegMark` loaded in table [Classifier] and destined for the sample [Service with ExternalTrafficPolicy Local]. The target of flow 8 is an OVS group that has all the Endpoints across the cluster, ensuring accessibility for Service connections originating from local Pods or Nodes, even though `externalTrafficPolicy` is set to `Local` for the Service. Due to the existence of flow 8, consequently, flow 9 exclusively matches packets sourced from the external network, resembling the pattern of flow 1. The target of flow 9 is an OVS group that has only the local Endpoints since `externalTrafficPolicy` of the Service is" }, { "data": "Flow 10 is the table-miss flow. As mentioned above, the Service Endpoint selection is performed within OVS groups. 
3 typical OVS groups are listed below: ```text group_id=9,type=select,\\ bucket=bucketid:0,weight:100,actions=setfield:0x4000/0x4000->reg0,resubmit(,EndpointDNAT) group_id=10,type=select,\\ bucket=bucketid:0,weight:100,actions=setfield:0xa0a0018->reg3,set_field:0x50/0xffff->reg4,resubmit(,EndpointDNAT),\\ bucket=bucketid:1,weight:100,actions=setfield:0x4000000/0x4000000->reg4,setfield:0xa0a0106->reg3,setfield:0x50/0xffff->reg4,resubmit(,EndpointDNAT) group_id=11,type=select,\\ bucket=bucketid:0,weight:100,actions=setfield:0xa0a0018->reg3,set_field:0x50/0xffff->reg4,resubmit(,ServiceLB),\\ bucket=bucketid:1,weight:100,actions=setfield:0x4000000/0x4000000->reg4,setfield:0xa0a0106->reg3,setfield:0x50/0xffff->reg4,resubmit(,ServiceLB) ``` The first group with `group_id` 9 is the destination of packets matched by flow 1, designed for a Service without Endpoints. The group only has a single bucket where `SvcNoEpRegMark` which will be used in table [EndpointDNAT] is loaded, indicating that the Service has no Endpoint, and then packets are forwarded to table [EndpointDNAT]. The second group with `group_id` 10 is the destination of packets matched by flow 2, designed for a Service with Endpoints. The group has 2 buckets, indicating the availability of 2 selectable Endpoints. Each bucket has an equal chance of being chosen since they have the same weights. For every bucket, the Endpoint IP and Endpoint port are loaded into `EndpointIPField` and `EndpointPortField`, respectively. These loaded values will be consumed in table [EndpointDNAT] to which the packets are forwarded and in which DNAT will be performed. `RemoteEndpointRegMark` is loaded for remote Endpoints, like the bucket with `bucket_id` 1 in this group. The third group with `group_id` 11 is the destination of packets matched by flow 6, designed for a Service that has Endpoints and is configured with session affinity. The group closely resembles the group with `group_id` 10, except that the destination of the packets is table [ServiceLB], rather than table [EndpointDNAT]. After being resubmitted back to table [ServiceLB], they will be matched by flow 7. The table implements DNAT for Service connections after Endpoint selection is performed in table [ServiceLB]. If you dump the flows of this table, you may see the following:: ```text table=EndpointDNAT, priority=200,reg0=0x4000/0x4000 actions=controller(reason=no_match,id=62373,userdata=04) table=EndpointDNAT, priority=200,tcp,reg3=0xa0a0018,reg4=0x20050/0x7ffff actions=ct(commit,table=AntreaPolicyEgressRule,zone=65520,nat(dst=10.10.0.24:80),exec(setfield:0x10/0x10->ctmark,move:NXMNXREG0[0..3]->NXMNXCT_MARK[0..3])) table=EndpointDNAT, priority=200,tcp,reg3=0xa0a0106,reg4=0x20050/0x7ffff actions=ct(commit,table=AntreaPolicyEgressRule,zone=65520,nat(dst=10.10.1.6:80),exec(setfield:0x10/0x10->ctmark,move:NXMNXREG0[0..3]->NXMNXCT_MARK[0..3])) table=EndpointDNAT, priority=190,reg4=0x20000/0x70000 actions=set_field:0x10000/0x70000->reg4,resubmit(,ServiceLB) table=EndpointDNAT, priority=0 actions=goto_table:AntreaPolicyEgressRule ``` Flow 1 is designed for Services without Endpoints. It identifies the first packet of connections destined for such Service by matching `SvcNoEpRegMark`. Subsequently, the packet is forwarded to the OpenFlow controller (Antrea Agent). For TCP Service traffic, the controller will send a TCP RST, and for all other cases the controller will send an ICMP Destination Unreachable message. Flows 2-3 are designed for Services that have selected an Endpoint. 
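The hexadecimal match values used by flows 2-3 can be decoded against the register layout from the OVS Registers section; a short illustrative Python sketch (not Antrea code):

```python
# Decode the reg3/reg4 matches of the EndpointDNAT flows above (illustrative only).

def reg3_to_ip(value: int) -> str:
    # EndpointIPField (NXM_NX_REG3) holds the selected Endpoint IPv4 address.
    return ".".join(str((value >> s) & 0xFF) for s in (24, 16, 8, 0))

print(reg3_to_ip(0x0a0a0018))     # 10.10.0.24 -> the local Endpoint
print(reg3_to_ip(0x0a0a0106))     # 10.10.1.6  -> the remote Endpoint

reg4 = 0x20050                    # matched with mask 0x7ffff (EpUnionField, bits 0-18)
print(reg4 & 0xFFFF)              # 80   -> EndpointPortField (bits 0-15)
print(bin((reg4 >> 16) & 0b111))  # 0b10 -> ServiceEPStateField = EpSelectedRegMark
```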
These flows identify the first packet of connections destined for such Services by matching `EndpointIPField`, which stores the Endpoint IP, and `EpUnionField` (a combination of `EndpointPortField` storing the Endpoint port and `EpSelectedRegMark`). Then `ct` action is invoked on the packet, performing DNAT and forwarding it to table [AntreaPolicyEgressRule] with the "tracked" state associated with `CtZone`. Some bits of ct mark are persisted: `ServiceCTMark`, to be consumed in tables [L3Forwarding] and [ConntrackCommit], indicating that the current packet and subsequent packets of the connection are for a Service. The value of `PktSourceField` is persisted to `ConnSourceCTMarkField`, storing the source of the connection for the current packet and subsequent packets of the connection. Flow 4 is to resubmit the packets which are not matched by flows 1-3 back to table [ServiceLB] to select an Endpoint again. Flow 5 is the table-miss flow to match non-Service packets. This table is used to implement the egress rules across all Antrea-native NetworkPolicies, except for NetworkPolicies that are created in the Baseline Tier. Antrea-native NetworkPolicies created in the Baseline Tier will be enforced after K8s NetworkPolicies, and their egress rules are installed in tables [EgressDefaultRule] and [EgressRule] respectively, i.e.
```text
Antrea-native NetworkPolicy other Tiers -> AntreaPolicyEgressRule
K8s NetworkPolicy -> EgressRule
Antrea-native NetworkPolicy Baseline Tier -> EgressDefaultRule
```
Antrea-native NetworkPolicy relies on the OVS built-in `conjunction` action to implement policies efficiently. This enables us to do a conjunctive match across multiple dimensions (source IP, destination IP, port, etc.) efficiently without "exploding" the number of flows. For our use case, we have at most 3 dimensions. The only requirement of `conj_id` is to be a unique 32-bit integer within the table. At the moment we use a single custom allocator, which is common to all tables that can have NetworkPolicy flows installed ([AntreaPolicyEgressRule], [EgressRule], [EgressDefaultRule], [AntreaPolicyIngressRule], [IngressRule], and [IngressDefaultRule]). For this table, you will need to keep in mind the Antrea-native NetworkPolicy .
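To make the "no flow explosion" point concrete, here is a back-of-the-envelope comparison for a hypothetical rule with many peers (illustrative arithmetic only, not Antrea code; the set sizes are made up):

```python
# Flow count for one rule with 3 match dimensions, with and without conjunction
# (hypothetical set sizes, for illustration only).
src_ips, dst_ips, ports = 50, 40, 3

without_conjunction = src_ips * dst_ips * ports    # one flow per (src, dst, port) combination
with_conjunction = src_ips + dst_ips + ports + 1   # one flow per set member, plus one conj_id flow

print(without_conjunction)  # 6000
print(with_conjunction)     # 94
```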
Since the sample egress policy resides in the Application Tier, if you dump the flows of this table, you may see the following: ```text table=AntreaPolicyEgressRule, priority=64990,ctstate=-new+est,ip actions=gototable:EgressMetric table=AntreaPolicyEgressRule, priority=64990,ctstate=-new+rel,ip actions=gototable:EgressMetric table=AntreaPolicyEgressRule, priority=14500,ip,nw_src=10.10.0.24 actions=conjunction(7,1/3) table=AntreaPolicyEgressRule, priority=14500,ip,nw_dst=10.10.0.25 actions=conjunction(7,2/3) table=AntreaPolicyEgressRule, priority=14500,tcp,tp_dst=3306 actions=conjunction(7,3/3) table=AntreaPolicyEgressRule, priority=14500,conjid=7,ip actions=setfield:0x7->reg5,ct(commit,table=EgressMetric,zone=65520,exec(setfield:0x700000000/0xffffffff00000000->ctlabel)) table=AntreaPolicyEgressRule, priority=14499,ip,nw_src=10.10.0.24 actions=conjunction(5,1/2) table=AntreaPolicyEgressRule, priority=14499,ip actions=conjunction(5,2/2) table=AntreaPolicyEgressRule, priority=14499,conjid=5 actions=setfield:0x5->reg3,setfield:0x400/0x400->reg0,gototable:EgressMetric table=AntreaPolicyEgressRule, priority=0 actions=goto_table:EgressRule ``` Flows 1-2, which are installed by default with the highest priority, match non-new and "tracked" packets and forward them to table [EgressMetric] to bypass the check from egress rules. This means that if a connection is established, its packets go straight to table [EgressMetric], with no other match required. In particular, this ensures that reply traffic is never dropped because of an Antrea-native NetworkPolicy or K8s NetworkPolicy rule. However, this also means that ongoing connections are not affected if the Antrea-native NetworkPolicy or the K8s NetworkPolicy is updated. The priorities of flows 3-9 installed for the egress rules are decided by the following: The `spec.tier` value in an Antrea-native NetworkPolicy determines the primary level for flow priority. The `spec.priority` value in an Antrea-native NetworkPolicy determines the secondary level for flow priority within the same `spec.tier`. A lower value in this field corresponds to a higher priority for the flow. The rule's position within an Antrea-native NetworkPolicy also influences flow priority. Rules positioned closer to the beginning have higher priority for the flow. Flows 3-6, whose priorities are all 14500, are installed for the egress rule `AllowToDB` in the sample policy. These flows are described as follows: Flow 3 is used to match packets with the source IP address in set {10.10.0.24}, which has all IP addresses of the Pods selected by the label `app: web`, constituting the first dimension for `conjunction` with `conj_id` 7. Flow 4 is used to match packets with the destination IP address in set {10.10.0.25}, which has all IP addresses of the Pods selected by the label `app: db`, constituting the second dimension for `conjunction` with `conj_id` 7. Flow 5 is used to match packets with the destination TCP port in set {3306} specified in the rule, constituting the third dimension for `conjunction` with `conj_id` 7. Flow 6 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 7 and forward them to table [EgressMetric], persisting `conj_id` to `EgressRuleCTLabel`, which will be consumed in table [EgressMetric]. Flows 7-9, whose priorities are all 14499, are installed for the egress rule with a `Drop` action defined after the rule `AllowToDB` in the sample policy, and serve as a default rule.
Antrea-native NetworkPolicy does not have the same default isolated behavior as K8s NetworkPolicy (implemented in the [EgressDefaultRule] table). As soon as a rule is matched, we apply the corresponding action. If no rule is matched, there is no implicit drop for Pods to which an Antrea-native NetworkPolicy" }, { "data": "These flows are described as follows: Flow 7 is used to match packets with the source IP address in set {10.10.0.24}, which is from the Pods selected by the label `app: web`, constituting the first dimension for `conjunction` with `conj_id` 5. Flow 8 is used to match any IP packets, constituting the second dimension for `conjunction` with `conj_id` 5. This flow, which matches all IP packets, exists because we need at least 2 dimensions for a conjunctive match. Flow 9 is used to match packets meeting both dimensions of `conjunction` with `conj_id` 5. `APDenyRegMark` is loaded and will be consumed in table [EgressMetric] to which the packets are forwarded. Flow 10 is the table-miss flow to forward packets not matched by other flows to table [EgressMetric]. For this table, you will need to keep in mind the K8s NetworkPolicy that we are using. This table is used to implement the egress rules across all K8s NetworkPolicies. If you dump the flows for this table, you may see the following: ```text table=EgressRule, priority=200,ip,nw_src=10.10.0.24 actions=conjunction(2,1/3) table=EgressRule, priority=200,ip,nw_dst=10.10.0.25 actions=conjunction(2,2/3) table=EgressRule, priority=200,tcp,tp_dst=3306 actions=conjunction(2,3/3) table=EgressRule, priority=190,conjid=2,ip actions=setfield:0x2->reg5,ct(commit,table=EgressMetric,zone=65520,exec(setfield:0x200000000/0xffffffff00000000->ctlabel)) table=EgressRule, priority=0 actions=goto_table:EgressDefaultRule ``` Flows 1-4 are installed for the egress rule in the sample K8s NetworkPolicy. These flows are described as follows: Flow 1 is to match packets with the source IP address in set {10.10.0.24}, which has all IP addresses of the Pods selected by the label `app: web` in the `default` Namespace, constituting the first dimension for `conjunction` with `conj_id` 2. Flow 2 is to match packets with the destination IP address in set {10.10.0.25}, which has all IP addresses of the Pods selected by the label `app: db` in the `default` Namespace, constituting the second dimension for `conjunction` with `conj_id` 2. Flow 3 is to match packets with the destination TCP port in set {3306} specified in the rule, constituting the third dimension for `conjunction` with `conj_id` 2. Flow 4 is to match packets meeting all the three dimensions of `conjunction` with `conj_id` 2 and forward them to table [EgressMetric], persisting `conj_id` to `EgressRuleCTLabel`. Flow 5 is the table-miss flow to forward packets not matched by other flows to table [EgressDefaultRule]. This table complements table [EgressRule] for K8s NetworkPolicy egress rule implementation. When a NetworkPolicy is applied to a set of Pods, then the default behavior for egress connections for these Pods becomes \"deny\" (they become [isolated Pods](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)). This table is in charge of dropping traffic originating from Pods to which a NetworkPolicy (with an egress rule) is applied, and which did not match any of the \"allowed\" list rules. 
If you dump the flows of this table, you may see the following: ```text table=EgressDefaultRule, priority=200,ip,nw_src=10.10.0.24 actions=drop table=EgressDefaultRule, priority=0 actions=goto_table:EgressMetric ``` Flow 1, based on our sample K8s NetworkPolicy, is to drop traffic originating from 10.10.0.24, an IP address associated with a Pod selected by the label `app: web`. If there are multiple Pods being selected by the label `app: web`, you will see multiple similar flows for each IP address. Flow 2 is the table-miss flow to forward packets to table [EgressMetric]. This table is also used to implement Antrea-native NetworkPolicy egress rules that are created in the Baseline Tier. Since the Baseline Tier is meant to be enforced after K8s NetworkPolicies, the corresponding flows will be created at a lower priority than K8s NetworkPolicy default drop flows. These flows are similar to flows 3-9 in table [AntreaPolicyEgressRule]. For the sake of simplicity, we have not defined any example Baseline policies in this" }, { "data": "This table is used to collect egress metrics for Antrea-native NetworkPolicies and K8s NetworkPolicies. If you dump the flows of this table, you may see the following: ```text table=EgressMetric, priority=200,ctstate=+new,ctlabel=0x200000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding table=EgressMetric, priority=200,ctstate=-new,ctlabel=0x200000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding table=EgressMetric, priority=200,ctstate=+new,ctlabel=0x700000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding table=EgressMetric, priority=200,ctstate=-new,ctlabel=0x700000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding table=EgressMetric, priority=200,reg0=0x400/0x400,reg3=0x5 actions=drop table=EgressMetric, priority=0 actions=goto_table:L3Forwarding ``` Flows 1-2, matching packets with `EgressRuleCTLabel` set to 2, the `conj_id` allocated for the sample K8s NetworkPolicy egress rule and loaded in table [EgressRule] flow 4, are used to collect metrics for the egress rule. Flows 3-4, matching packets with `EgressRuleCTLabel` set to 7, the `conj_id` allocated for the sample Antrea-native NetworkPolicy egress rule and loaded in table [AntreaPolicyEgressRule] flow 6, are used to collect metrics for the egress rule. Flow 5 serves as the drop rule for the sample Antrea-native NetworkPolicy egress rule. It drops the packets by matching `APDenyRegMark` loaded in table [AntreaPolicyEgressRule] flow 9 and `APConjIDField` set to 5 which is the `conj_id` allocated the egress rule and loaded in table [AntreaPolicyEgressRule] flow 9. These flows have no explicit action besides the `goto_table` action. This is because we rely on the \"implicit\" flow counters to keep track of connection / packet statistics. Ct label is used in flows 1-4, while reg is used in flow 5. The distinction lies in the fact that the value persisted in the ct label can be read throughout the entire lifecycle of a connection, but the reg mark is only valid for the current packet. For a connection permitted by a rule, all its packets should be collected for metrics, thus a ct label is used. For a connection denied or dropped by a rule, the first packet and the subsequent retry packets will be blocked, therefore a reg is enough. Flow 6 is the table-miss flow. This table, designated as the L3 routing table, serves to assign suitable source and destination MAC addresses to packets based on their destination IP addresses, as well as their reg marks or ct marks. 
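As noted above for table [EgressMetric] (and later for [IngressMetric]), the metric flows rely on the per-flow packet and byte counters that OVS keeps for every flow. A hedged way to read those counters directly, with the agent Pod name as a placeholder:

```shell
AGENT_POD=antrea-agent-xxxxx   # placeholder

# Without --no-stats, every flow is printed with its n_packets / n_bytes
# counters, which is the statistic the metric tables are built on.
kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-ofctl -O OpenFlow15 dump-flows br-int | grep 'table=EgressMetric'
```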
If you dump the flows of this table, you may see the following: ```text table=L3Forwarding, priority=210,ip,nwdst=10.10.0.1 actions=setfield:ba:5e:d1:55:aa:c0->ethdst,setfield:0x20/0xf0->reg0,goto_table:L3DecTTL table=L3Forwarding, priority=210,ctstate=+rpl+trk,ctmark=0x2/0xf,ip actions=setfield:ba:5e:d1:55:aa:c0->ethdst,setfield:0x20/0xf0->reg0,gototable:L3DecTTL table=L3Forwarding, priority=200,ip,reg0=0/0x200,nwdst=10.10.0.0/24 actions=gototable:L2ForwardingCalc table=L3Forwarding, priority=200,ip,nwdst=10.10.1.0/24 actions=setfield:ba:5e:d1:55:aa:c0->ethsrc,setfield:aa:bb:cc:dd:ee:ff->ethdst,setfield:192.168.77.103->tundst,setfield:0x10/0xf0->reg0,goto_table:L3DecTTL table=L3Forwarding, priority=200,ip,reg0=0x200/0x200,nwdst=10.10.0.24 actions=setfield:ba:5e:d1:55:aa:c0->ethsrc,setfield:fa:b7:53:74:21:a6->ethdst,gototable:L3DecTTL table=L3Forwarding, priority=200,ip,reg0=0x200/0x200,nwdst=10.10.0.25 actions=setfield:ba:5e:d1:55:aa:c0->ethsrc,setfield:36:48:21:a2:9d:b4->ethdst,gototable:L3DecTTL table=L3Forwarding, priority=200,ip,reg0=0x200/0x200,nwdst=10.10.0.26 actions=setfield:ba:5e:d1:55:aa:c0->ethsrc,setfield:5e:b5:e3:a6:90:b7->ethdst,gototable:L3DecTTL table=L3Forwarding, priority=190,ctstate=-rpl+trk,ip,reg0=0x3/0xf,reg4=0/0x100000 actions=gototable:EgressMark table=L3Forwarding, priority=190,ctstate=-rpl+trk,ip,reg0=0x1/0xf actions=setfield:ba:5e:d1:55:aa:c0->ethdst,gototable:EgressMark table=L3Forwarding, priority=190,ctmark=0x10/0x10,reg0=0x202/0x20f actions=setfield:ba:5e:d1:55:aa:c0->ethdst,setfield:0x20/0xf0->reg0,goto_table:L3DecTTL table=L3Forwarding, priority=0 actions=setfield:0x20/0xf0->reg0,gototable:L2ForwardingCalc ``` Flow 1 matches packets destined for the local Antrea gateway IP, rewrites their destination MAC address to that of the local Antrea gateway, loads `ToGatewayRegMark`, and forwards them to table [L3DecTTL] to decrease TTL value. The action of rewriting the destination MAC address is not necessary but not harmful for Pod-to-gateway request packets because the destination MAC address is already the local gateway MAC address. In short, the action is only necessary for `AntreaIPAM` Pods, not required by the sample NodeIPAM Pods in this document. Flow 2 matches reply packets with corresponding ct \"tracked\" states and `FromGatewayCTMark` from connections initiated through the local Antrea gateway. In other words, these are connections for which the first packet of the connection (SYN packet for TCP) was received through the local Antrea gateway. It rewrites the destination MAC address to that of the local Antrea gateway, loads `ToGatewayRegMark`, and forwards them to table [L3DecTTL]. This ensures that reply packets can be forwarded back to the local Antrea gateway in subsequent tables. This flow is required to handle the following cases when AntreaProxy is not enabled: Reply traffic for connections from a local Pod to a ClusterIP Service, which are handled by kube-proxy and go through" }, { "data": "In this case, the destination IP address of the reply traffic is the Pod which initiated the connection to the Service (no SNAT by kube-proxy). These packets should be forwarded back to the local Antrea gateway to the third-party module to complete the DNAT processes, e.g., kube-proxy. The destination MAC of the packets is rewritten in the table to avoid it is forwarded to the original client Pod by mistake. When hairpin is involved, i.e. connections between 2 local Pods, for which NAT is performed. 
One example is a Pod accessing a NodePort Service for which externalTrafficPolicy is set to `Local` using the local Node's IP address, as there will be no SNAT for such traffic. Another example could be hostPort support, depending on how the feature is implemented. Flow 3 matches packets from intra-Node connections (excluding Service connections) and marked with `NotRewriteMACRegMark`, indicating that the destination and source MACs of packets should not be overwritten, and forwards them to table [L2ForwardingCalc] instead of table [L3DecTTL]. The deviation is due to local Pods connections not traversing any router device or undergoing NAT process. For packets from Service or inter-Node connections, `RewriteMACRegMark`, mutually exclusive with `NotRewriteMACRegMark`, is loaded. Therefore, the packets will not be matched by the flow. Flow 4 is designed to match packets destined for a remote Pod CIDR. This involves installing a separate flow for each remote Node, with each flow matching the destination IP address of the packets against the Pod subnet for the respective Node. For the matched packets, the source MAC address is set to that of the local Antrea gateway MAC, and the destination MAC address is set to the Global Virtual MAC. The Openflow `tun_dst` field is set to the appropriate value (i.e. the IP address of the remote Node). Additionally, `ToTunnelRegMark` is loaded, signifying that the packets will be forwarded to remote Nodes through a tunnel. The matched packets are then forwarded to table [L3DecTTL] to decrease the TTL value. Flow 5-7 matches packets destined for local Pods and marked by `RewriteMACRegMark`, which signifies that the packets may originate from Service or inter-Node connections. For the matched packets, the source MAC address is set to that of the local Antrea gateway MAC, and the destination MAC address is set to the associated local Pod MAC address. The matched packets are then forwarded to table [L3DecTTL] to decrease the TTL value. Flow 8 matches request packets originating from local Pods and destined for the external network, and then forwards them to table [EgressMark] dedicated to feature `Egress`. In table [EgressMark], SNAT IPs for Egress are looked up for the packets. To match the expected packets, `FromPodRegMark` is used to exclude packets that are not from local Pods. Additionally, `NotAntreaFlexibleIPAMRegMark`, mutually exclusive with `AntreaFlexibleIPAMRegMark` which is used to mark packets from Antrea IPAM Pods, is used since Egress can only be applied to Node IPAM Pods. It's worth noting that packets sourced from local Pods and destined for the Services listed in the option `antreaProxy.skipServices` are unexpectedly matched by flow 8 due to the fact that there is no flow in [ServiceLB] to handle these Services. Consequently, the destination IP address of the packets, allocated from the Service CIDR, is considered part of the \"external network\". No need to worry about the mismatch, as flow 3 in table [EgressMark] is designed to match these packets and prevent them from undergoing SNAT by" }, { "data": "Flow 9 matches request packets originating from remote Pods and destined for the external network, and then forwards them to table [EgressMark] dedicated to feature `Egress`. To match the expected packets, `FromTunnelRegMark` is used to include packets that are from remote Pods through a tunnel. 
Considering that the packets from remote Pods traverse a tunnel, the destination MAC address of the packets, represented by the Global Virtual MAC, needs to be rewritten to MAC address of the local Antrea gateway. Flow 10 matches packets from Service connections that are originating from the local Antrea gateway and destined for the external network. This is accomplished by matching `RewriteMACRegMark`, `FromGatewayRegMark`, and `ServiceCTMark`. The destination MAC address is then set to that of the local Antrea gateway. Additionally, `ToGatewayRegMark`, which will be used with `FromGatewayRegMark` together to identify hairpin connections in table [SNATMark], is loaded. Finally, the packets are forwarded to table [L3DecTTL]. Flow 11 is the table-miss flow, and is used for packets originating from local Pods and destined for the external network, and then forwarding them to table [L2ForwardingCalc]. `ToGatewayRegMark` is loaded as the matched packets traverse the local Antrea gateway. This table is dedicated to feature `Egress`. It includes flows to select the right SNAT IPs for egress traffic originating from Pods and destined for the external network. If you dump the flows of this table, you may see the following: ```text table=EgressMark, priority=210,ip,nwdst=192.168.77.102 actions=setfield:0x20/0xf0->reg0,goto_table:L2ForwardingCalc table=EgressMark, priority=210,ip,nwdst=192.168.77.103 actions=setfield:0x20/0xf0->reg0,goto_table:L2ForwardingCalc table=EgressMark, priority=210,ip,nwdst=10.96.0.0/12 actions=setfield:0x20/0xf0->reg0,goto_table:L2ForwardingCalc table=EgressMark, priority=200,ip,inport=\"client-6-3353ef\" actions=setfield:ba:5e:d1:55:aa:c0->ethsrc,setfield:aa:bb:cc:dd:ee:ff->ethdst,setfield:192.168.77.113->tundst,setfield:0x10/0xf0->reg0,setfield:0x80000/0x80000->reg0,gototable:L2ForwardingCalc table=EgressMark, priority=200,ctstate=+new+trk,ip,tundst=192.168.77.112 actions=setfield:0x1/0xff->pktmark,setfield:0x20/0xf0->reg0,gototable:L2ForwardingCalc table=EgressMark, priority=200,ctstate=+new+trk,ip,inport=\"web-7975-274540\" actions=setfield:0x1/0xff->pktmark,setfield:0x20/0xf0->reg0,gototable:L2ForwardingCalc table=EgressMark, priority=190,ct_state=+new+trk,ip,reg0=0x1/0xf actions=drop table=EgressMark, priority=0 actions=setfield:0x20/0xf0->reg0,gototable:L2ForwardingCalc ``` Flows 1-2 match packets originating from local Pods and destined for the transport IP of remote Nodes, and then forward them to table [L2ForwardingCalc] to bypass Egress SNAT. `ToGatewayRegMark` is loaded, indicating that the output port of the packets is the local Antrea gateway. Flow 3 matches packets originating from local Pods and destined for the Services listed in the option `antreaProxy.skipServices`, and then forwards them to table [L2ForwardingCalc] to bypass Egress SNAT. Similar to flows 1-2, `ToGatewayRegMark` is also loaded. The packets, matched by flows 1-3, are forwarded to this table by flow 8 in table [L3Forwarding], as they are classified as part of traffic destined for the external network. However, these packets are not intended to undergo Egress SNAT. Consequently, flows 1-3 are used to bypass Egress SNAT for these packets. Flow 4 match packets originating from local Pods selected by the sample [Egress egress-client], whose SNAT IP is configured on a remote Node, which means that the matched packets should be forwarded to the remote Node through a tunnel. 
Before sending the packets to the tunnel, the source and destination MAC addresses are set to the local Antrea gateway MAC and the Global Virtual MAC respectively. Additionally, `ToTunnelRegMark`, indicating that the output port is a tunnel, and `EgressSNATRegMark`, indicating that packets should undergo SNAT on a remote Node, are loaded. Finally, the packets are forwarded to table [L2ForwardingCalc]. Flow 5 matches the first packet of connections originating from remote Pods selected by the sample [Egress egress-web] whose SNAT IP is configured on the local Node, and then loads an 8-bit ID allocated for the associated SNAT IP defined in the sample Egress to the `pkt_mark`, which will be consumed by iptables on the local Node to perform SNAT with the SNAT IP. Subsequently, `ToGatewayRegMark`, indicating that the output port is the local Antrea gateway, is loaded. Finally, the packets are forwarded to table [L2ForwardingCalc]. Flow 6 matches the first packet of connections originating from local Pods selected by the sample [Egress egress-web], whose SNAT IP is configured on the local Node. Similar to flow 5, the 8-bit ID allocated for the SNAT IP is loaded to `pkt_mark`, `ToGatewayRegMark` is loaded, and the packets are finally forwarded to table [L2ForwardingCalc]. Flow 7 drops all other packets tunneled from remote Nodes (identified with `FromTunnelRegMark`, indicating that the packets are from remote Pods through a tunnel). These packets were not matched by any of flows 1-6, which means that they are unexpected here and should be dropped. Flow 8 is the table-miss flow, which matches "tracked" and non-new packets from Egress connections and forwards them to table [L2ForwardingCalc]. `ToGatewayRegMark` is also loaded for these packets. This is the table to decrement TTL for IP packets. If you dump the flows of this table, you may see the following: ```text table=L3DecTTL, priority=210,ip,reg0=0x2/0xf actions=goto_table:SNATMark table=L3DecTTL, priority=200,ip actions=decttl,gototable:SNATMark table=L3DecTTL, priority=0 actions=goto_table:SNATMark ``` Flow 1 matches packets with `FromGatewayRegMark`, which means that these packets enter the OVS pipeline from the local Antrea gateway; the host IP stack should have already decremented the TTL for such packets, so the TTL should not be decremented again. Flow 2 is to decrement TTL for packets which are not matched by flow 1. Flow 3 is the table-miss flow that should remain unused. This table marks connections requiring SNAT within the OVS pipeline, distinct from Egress SNAT handled by iptables.
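As the last paragraph notes, Egress SNAT itself is performed by iptables on the Node that holds the SNAT IP, keyed by the `pkt_mark` loaded in table [EgressMark]. The sketch below only lists rules and does not assume any particular chain layout beyond grepping for "ANTREA"; the antrea-agent container is used simply because it runs in the host network namespace.

```shell
AGENT_POD=antrea-agent-xxxxx   # placeholder, on the Node holding the SNAT IP

# List NAT rules installed by Antrea; the SNAT rule matches the same
# packet mark value loaded by the EgressMark flows (e.g. 0x1/0xff above).
kubectl exec -n kube-system "$AGENT_POD" -c antrea-agent -- \
  iptables -t nat -S | grep -i antrea
```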
If you dump the flows of this table, you may see the following: ```text table=SNATMark, priority=200,ctstate=+new+trk,ip,reg0=0x22/0xff actions=ct(commit,table=SNAT,zone=65520,exec(setfield:0x20/0x20->ctmark,setfield:0x40/0x40->ct_mark)) table=SNATMark, priority=200,ctstate=+new+trk,ip,reg0=0x12/0xff,reg4=0x200000/0x2200000 actions=ct(commit,table=SNAT,zone=65520,exec(setfield:0x20/0x20->ct_mark)) table=SNATMark, priority=190,ctstate=+new+trk,ip,nwsrc=10.10.0.23,nwdst=10.10.0.23 actions=ct(commit,table=SNAT,zone=65520,exec(setfield:0x20/0x20->ctmark,setfield:0x40/0x40->ct_mark)) table=SNATMark, priority=190,ctstate=+new+trk,ip,nwsrc=10.10.0.24,nwdst=10.10.0.24 actions=ct(commit,table=SNAT,zone=65520,exec(setfield:0x20/0x20->ctmark,setfield:0x40/0x40->ct_mark)) table=SNATMark, priority=0 actions=goto_table:SNAT ``` Flow 1 matches the first packet of hairpin Service connections, identified by `FromGatewayRegMark` and `ToGatewayRegMark`, indicating that both the input and output ports of the connections are the local Antrea gateway port. Such hairpin connections will undergo SNAT with the Virtual Service IP in table [SNAT]. Before forwarding the packets to table [SNAT], `ConnSNATCTMark`, indicating that the connection requires SNAT, and `HairpinCTMark`, indicating that this is a hairpin connection, are persisted to mark the connections. These two ct marks will be consumed in table [SNAT]. Flow 2 matches the first packet of Service connections requiring SNAT, identified by `FromGatewayRegMark` and `ToTunnelRegMark`, indicating that the input port is the local Antrea gateway and the output port is a tunnel. Such connections will undergo SNAT with the IP address of the local Antrea gateway in table [SNAT]. Before forwarding the packets to table [SNAT], `ToExternalAddressRegMark` and `NotDSRServiceRegMark` are loaded, indicating that the packets are destined for a Service's external IP, like NodePort, LoadBalancerIP or ExternalIP, but it is not DSR mode. Additionally, `ConnSNATCTMark`, indicating that the connection requires SNAT, is persisted to mark the connections. It's worth noting that flows 1-2 are specific to `proxyAll`, but they are harmless when `proxyAll` is disabled since these flows should be never matched by in-cluster Service traffic. Flow 3-4 match the first packet of hairpin Service connections, identified by the same source and destination Pod IP addresses. Such hairpin connections will undergo SNAT with the IP address of the local Antrea gateway in table [SNAT]. Similar to flow 1, `ConnSNATCTMark` and `HairpinCTMark` are persisted to mark the connections. Flow 5 is the table-miss flow. This table performs SNAT for connections requiring SNAT within the pipeline. 
If you dump the flows of this table, you may see the following: ```text table=SNAT, priority=200,ctstate=+new+trk,ctmark=0x40/0x40,ip,reg0=0x2/0xf actions=ct(commit,table=L2ForwardingCalc,zone=65521,nat(src=169.254.0.253),exec(setfield:0x10/0x10->ctmark,setfield:0x40/0x40->ctmark)) table=SNAT, priority=200,ctstate=+new+trk,ctmark=0x40/0x40,ip,reg0=0x3/0xf actions=ct(commit,table=L2ForwardingCalc,zone=65521,nat(src=10.10.0.1),exec(setfield:0x10/0x10->ctmark,setfield:0x40/0x40->ctmark)) table=SNAT, priority=200,ctstate=-new-rpl+trk,ctmark=0x20/0x20,ip actions=ct(table=L2ForwardingCalc,zone=65521,nat) table=SNAT, priority=190,ctstate=+new+trk,ctmark=0x20/0x20,ip,reg0=0x2/0xf" }, { "data": "table=SNAT, priority=0 actions=goto_table:L2ForwardingCalc ``` Flow 1 matches the first packet of hairpin Service connections through the local Antrea gateway, identified by `HairpinCTMark` and `FromGatewayRegMark`. It performs SNAT with the Virtual Service IP `169.254.0.253` and forwards the SNAT'd packets to table [L2ForwardingCalc]. Before SNAT, the \"tracked\" state of packets is associated with `CtZone`. After SNAT, their \"track\" state is associated with `SNATCtZone`, and then `ServiceCTMark` and `HairpinCTMark` persisted in `CtZone` are not accessible anymore. As a result, `ServiceCTMark` and `HairpinCTMark` need to be persisted once again, but this time they are persisted in `SNATCtZone` for subsequent tables to consume. Flow 2 matches the first packet of hairpin Service connection originating from local Pods, identified by `HairpinCTMark` and `FromPodRegMark`. It performs SNAT with the IP address of the local Antrea gateway and forwards the SNAT'd packets to table [L2ForwardingCalc]. Similar to flow 1, `ServiceCTMark` and `HairpinCTMark` are persisted in `SNATCtZone`. Flow 3 matches the subsequent request packets of connections for which SNAT was performed for the first packet, and then invokes `ct` action on the packets again to restore the \"tracked\" state in `SNATCtZone`. The packets with the appropriate \"tracked\" state are forwarded to table [L2ForwardingCalc]. Flow 4 matches the first packet of Service connections requiring SNAT, identified by `ConnSNATCTMark` and `FromGatewayRegMark`, indicating the connection is destined for an external Service IP initiated through the Antrea gateway and the Endpoint is a remote Pod. It performs SNAT with the IP address of the local Antrea gateway and forwards the SNAT'd packets to table [L2ForwardingCalc]. Similar to other flow 1 or 2, `ServiceCTMark` is persisted in `SNATCtZone`. Flow 5 is the table-miss flow. This is essentially the \"dmac\" table of the switch. We program one flow for each port (tunnel port, the local Antrea gateway port, and local Pod ports). 
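The SNAT flows above use two separate conntrack zones: `CtZone` (65520 in the dumps) and `SNATCtZone` (65521). To check in which zone a given connection was committed, the datapath conntrack entries can be dumped per zone; the zone numbers below are simply the defaults that appear in the dumps, and the agent Pod name is a placeholder.

```shell
AGENT_POD=antrea-agent-xxxxx   # placeholder

# Entries committed by ct(commit, zone=65520, ...) actions:
kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-appctl dpctl/dump-conntrack zone=65520 | head

# Entries that were translated in SNATCtZone:
kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-appctl dpctl/dump-conntrack zone=65521 | head
```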
If you dump the flows of this table, you may see the following: ```text table=L2ForwardingCalc, priority=200,dldst=ba:5e:d1:55:aa:c0 actions=setfield:0x2->reg1,setfield:0x200000/0x600000->reg0,gototable:TrafficControl table=L2ForwardingCalc, priority=200,dldst=aa:bb:cc:dd:ee:ff actions=setfield:0x1->reg1,setfield:0x200000/0x600000->reg0,gototable:TrafficControl table=L2ForwardingCalc, priority=200,dldst=5e:b5:e3:a6:90:b7 actions=setfield:0x24->reg1,setfield:0x200000/0x600000->reg0,gototable:TrafficControl table=L2ForwardingCalc, priority=200,dldst=fa:b7:53:74:21:a6 actions=setfield:0x25->reg1,setfield:0x200000/0x600000->reg0,gototable:TrafficControl table=L2ForwardingCalc, priority=200,dldst=36:48:21:a2:9d:b4 actions=setfield:0x26->reg1,setfield:0x200000/0x600000->reg0,gototable:TrafficControl table=L2ForwardingCalc, priority=0 actions=goto_table:TrafficControl ``` Flow 1 matches packets destined for the local Antrea gateway, identified by the destination MAC address being that of the local Antrea gateway. It loads `OutputToOFPortRegMark`, indicating that the packets should output to an OVS port, and also loads the port number of the local Antrea gateway to `TargetOFPortField`. Both of these two values will be consumed in table [Output]. Flow 2 matches packets destined for a tunnel, identified by the destination MAC address being that of the *Global Virtual MAC*. Similar to flow 1, `OutputToOFPortRegMark` is loaded, and the port number of the tunnel is loaded to `TargetOFPortField`. Flows 3-5 match packets destined for local Pods, identified by the destination MAC address being that of one of the local Pods. Similar to flow 1, `OutputToOFPortRegMark` is loaded, and the port number of the local Pods is loaded to `TargetOFPortField`. Flow 6 is the table-miss flow. This table is dedicated to `TrafficControl`. If you dump the flows of this table, you may see the following: ```text table=TrafficControl, priority=210,reg0=0x200006/0x60000f actions=goto_table:Output table=TrafficControl, priority=200,reg1=0x25 actions=setfield:0x22->reg9,setfield:0x800000/0xc00000->reg4,goto_table:IngressSecurityClassifier table=TrafficControl, priority=200,inport=\"web-7975-274540\" actions=setfield:0x22->reg9,setfield:0x800000/0xc00000->reg4,gototable:IngressSecurityClassifier table=TrafficControl, priority=200,reg1=0x26 actions=setfield:0x27->reg9,setfield:0x400000/0xc00000->reg4,goto_table:IngressSecurityClassifier table=TrafficControl, priority=200,inport=\"db-755c6-5080e3\" actions=setfield:0x27->reg9,setfield:0x400000/0xc00000->reg4,gototable:IngressSecurityClassifier table=TrafficControl, priority=0 actions=goto_table:IngressSecurityClassifier ``` Flow 1 matches packets returned from TrafficControl return ports and forwards them to table [Output], where the packets are output to the port to which they are destined. To identify such packets, `OutputToOFPortRegMark`, indicating that the packets should be output to an OVS port, and `FromTCReturnRegMark` loaded in table [Classifier], indicating that the packets are from a TrafficControl return port, are" }, { "data": "Flows 2-3 are installed for the sample [TrafficControl redirect-web-to-local] to mark the packets associated with the Pods labeled by `app: web` using `TrafficControlRedirectRegMark`. Flow 2 handles the ingress direction, while flow 3 handles the egress direction. In table [Output], these packets will be redirected to a TrafficControl target port specified in `TrafficControlTargetOFPortField`, of which value is loaded in these 2 flows. 
Flows 4-5 are installed for the sample [TrafficControl mirror-db-to-local] to mark the packets associated with the Pods labeled by `app: db` using `TrafficControlMirrorRegMark`. Similar to flows 2-3, flows 4-5 also handle the two directions. In table [Output], these packets will be mirrored (duplicated) to a TrafficControl target port specified in `TrafficControlTargetOFPortField`, whose value is loaded in these 2 flows. Flow 6 is the table-miss flow. This table is to classify packets before they enter the tables for ingress security. If you dump the flows of this table, you may see the following: ```text table=IngressSecurityClassifier, priority=210,pktmark=0x80000000/0x80000000,ctstate=-rpl+trk,ip actions=goto_table:ConntrackCommit table=IngressSecurityClassifier, priority=201,reg4=0x80000/0x80000 actions=goto_table:AntreaPolicyIngressRule table=IngressSecurityClassifier, priority=200,reg0=0x20/0xf0 actions=goto_table:IngressMetric table=IngressSecurityClassifier, priority=200,reg0=0x10/0xf0 actions=goto_table:IngressMetric table=IngressSecurityClassifier, priority=200,reg0=0x40/0xf0 actions=goto_table:IngressMetric table=IngressSecurityClassifier, priority=200,ctmark=0x40/0x40 actions=gototable:ConntrackCommit table=IngressSecurityClassifier, priority=0 actions=goto_table:AntreaPolicyIngressRule ``` Flow 1 matches locally generated request packets for liveness/readiness probes from kubelet, identified by `pkt_mark` which is set by iptables in the host network namespace. It forwards the packets to table [ConntrackCommit] directly to bypass all tables for ingress security. Flow 2 matches packets destined for NodePort Services and forwards them to table [AntreaPolicyIngressRule] to enforce Antrea-native NetworkPolicies applied to NodePort Services. Without this flow, if the selected Endpoint is not a local Pod, the packets might be matched by one of the flows 3-5, skipping table [AntreaPolicyIngressRule]. Flows 3-5 match packets destined for the local Antrea gateway, tunnel, or uplink port with `ToGatewayRegMark`, `ToTunnelRegMark` or `ToUplinkRegMark`, respectively, and forward them to table [IngressMetric] directly to bypass all tables for ingress security. Flow 6 matches packets from hairpin connections with `HairpinCTMark` and forwards them to table [ConntrackCommit] directly to bypass all tables for ingress security. Refer to this PR for more information. Flow 7 is the table-miss flow. This table is very similar to table [AntreaPolicyEgressRule] but implements the ingress rules of Antrea-native NetworkPolicies. Depending on the tier to which the policy belongs, the rules will be installed in a table corresponding to that tier. The mapping from tier to ingress table is as follows: ```text Antrea-native NetworkPolicy other Tiers -> AntreaPolicyIngressRule K8s NetworkPolicy -> IngressRule Antrea-native NetworkPolicy Baseline Tier -> IngressDefaultRule ``` Again for this table, you will need to keep in mind the Antrea-native NetworkPolicy and Antrea-native L7 NetworkPolicy that we are using.
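The ingress rule flows shown next identify Pods by OVS port number (values such as `reg1=0x25`) rather than by IP address. To translate between a Pod interface name and that port number on a Node, the following sketch can help; the interface name is the sample one used throughout this document and the agent Pod name is a placeholder.

```shell
AGENT_POD=antrea-agent-xxxxx   # placeholder

# List every port on the bridge with its OpenFlow port number...
kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-ofctl show br-int

# ...or query one interface directly (0x25 == 37 decimal).
kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-vsctl get Interface web-7975-274540 ofport
```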
Since these sample ingress policies reside in the Application Tier, if you dump the flows for this table, you may see the following: ```text table=AntreaPolicyIngressRule, priority=64990,ctstate=-new+est,ip actions=gototable:IngressMetric table=AntreaPolicyIngressRule, priority=64990,ctstate=-new+rel,ip actions=gototable:IngressMetric table=AntreaPolicyIngressRule, priority=14500,reg1=0x7 actions=conjunction(14,2/3) table=AntreaPolicyIngressRule, priority=14500,ip,nw_src=10.10.0.26 actions=conjunction(14,1/3) table=AntreaPolicyIngressRule, priority=14500,tcp,tp_dst=8080 actions=conjunction(14,3/3) table=AntreaPolicyIngressRule, priority=14500,conjid=14,ip actions=setfield:0xd->reg6,ct(commit,table=IngressMetric,zone=65520,exec(setfield:0xd/0xffffffff->ctlabel,setfield:0x80/0x80->ctmark,setfield:0x20000000000000000/0xfff0000000000000000->ctlabel)) table=AntreaPolicyIngressRule, priority=14600,ip,nw_src=10.10.0.26 actions=conjunction(6,1/3) table=AntreaPolicyIngressRule, priority=14600,reg1=0x25 actions=conjunction(6,2/3) table=AntreaPolicyIngressRule, priority=14600,tcp,tp_dst=80 actions=conjunction(6,3/3) table=AntreaPolicyIngressRule, priority=14600,conjid=6,ip actions=setfield:0x6->reg6,ct(commit,table=IngressMetric,zone=65520,exec(setfield:0x6/0xffffffff->ctlabel)) table=AntreaPolicyIngressRule, priority=14600,ip actions=conjunction(4,1/2) table=AntreaPolicyIngressRule, priority=14599,reg1=0x25 actions=conjunction(4,2/2) table=AntreaPolicyIngressRule, priority=14599,conjid=4 actions=setfield:0x4->reg3,setfield:0x400/0x400->reg0,gototable:IngressMetric table=AntreaPolicyIngressRule, priority=0 actions=goto_table:IngressRule ``` Flows 1-2, which are installed by default with the highest priority, match non-new and \"tracked\" packets and forward them to table [IngressMetric] to bypass the check from egress rules. This means that if a connection is established, its packets go straight to table [IngressMetric], with no other match required. In particular, this ensures that reply traffic is never dropped because of an Antrea-native NetworkPolicy or K8s NetworkPolicy rule. However, this also means that ongoing connections are not affected if the Antrea-native NetworkPolicy or the K8s NetworkPolicy is" }, { "data": "Similar to table [AntreaPolicyEgressRule], the priorities of flows 3-13 installed for the ingress rules are decided by the following: The `spec.tier` value in an Antrea-native NetworkPolicy determines the primary level for flow priority. The `spec.priority` value in an Antrea-native NetworkPolicy determines the secondary level for flow priority within the same `spec.tier`. A lower value in this field corresponds to a higher priority for the flow. The rule's position within an Antrea-native NetworkPolicy also influences flow priority. Rules positioned closer to the beginning have higher priority for the flow. Flows 3-6, whose priories are all 14500, are installed for the egress rule `AllowFromClientL7` in the sample policy. These flows are described as follows: Flow 3 is used to match packets with the source IP address in set {10.10.0.26}, which has all IP addresses of the Pods selected by the label `app: client`, constituting the first dimension for `cojunction` with `conj_id` 14. Flow 4 is used to match packets with the output OVS port in set {0x25}, which has all the ports of the Pods selected by the label `app: web`, constituting the second dimension for `conjunction` with `conj_id` 14. 
Flow 5 is used to match packets with the destination TCP port in set {8080} specified in the rule, constituting the third dimension for `conjunction` with `conj_id` 14. Flow 6 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 14 and forward them to table [IngressMetric], persisting `conj_id` to `IngressRuleCTLabel` consumed in table [IngressMetric]. Additionally, for the L7 protocol: `L7NPRedirectCTMark` is persisted, indicating the packets should be redirected to an application-aware engine to be filtered according to L7 rules, such as method `GET` and path `/api/v2/*` in the sample policy. A VLAN ID allocated for the Antrea-native L7 NetworkPolicy is persisted in `L7NPRuleVlanIDCTLabel`, which will be consumed in table [Output]. Flows 7-10, whose priorities are all 14600, are installed for the ingress rule `AllowFromClient` in the sample policy. These flows are described as follows: Flow 7 is used to match packets with the source IP address in set {10.10.0.26}, which has all IP addresses of the Pods selected by the label `app: client`, constituting the first dimension for `conjunction` with `conj_id` 6. Flow 8 is used to match packets with the output OVS port in set {0x25}, which has all the ports of the Pods selected by the label `app: web`, constituting the second dimension for `conjunction` with `conj_id` 6. Flow 9 is used to match packets with the destination TCP port in set {80} specified in the rule, constituting the third dimension for `conjunction` with `conj_id` 6. Flow 10 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 6 and forward them to table [IngressMetric], persisting `conj_id` to `IngressRuleCTLabel` consumed in table [IngressMetric]. Flows 11-13, whose priorities are all 14599, are installed for the ingress rule with a `Drop` action defined after the rule `AllowFromClient` in the sample policy, serving as a default rule. Unlike the default of K8s NetworkPolicy, Antrea-native NetworkPolicy has no default rule, and all rules should be explicitly defined. Hence, they are evaluated as-is, and there is no need for a table [AntreaPolicyIngressDefaultRule]. These flows are described as follows: Flow 11 is used to match any IP packets, constituting the second dimension for `conjunction` with `conj_id` 4. This flow, which matches all IP packets, exists because we need at least 2 dimensions for a conjunctive match. Flow 12 is used to match packets with the output OVS port in set {0x25}, which has all the ports of the Pods selected by the label `app: web`, constituting the first dimension for `conjunction` with `conj_id` 4. Flow 13 is used to match packets meeting both dimensions of `conjunction` with `conj_id` 4. `APDenyRegMark` is loaded and will be consumed in table [IngressMetric], to which the packets are forwarded. Flow 14 is the table-miss flow to forward packets not matched by other flows to table [IngressMetric]. This table is very similar to table [EgressRule] but implements ingress rules for K8s NetworkPolicies. Once again, you will need to keep in mind the K8s NetworkPolicy that we are using.
If you dump the flows of this table, you should see something like this: ```text table=IngressRule, priority=200,ip,nw_src=10.10.0.26 actions=conjunction(3,1/3) table=IngressRule, priority=200,reg1=0x25 actions=conjunction(3,2/3) table=IngressRule, priority=200,tcp,tp_dst=80 actions=conjunction(3,3/3) table=IngressRule, priority=190,conjid=3,ip actions=setfield:0x3->reg6,ct(commit,table=IngressMetric,zone=65520,exec(setfield:0x3/0xffffffff->ctlabel)) table=IngressRule, priority=0 actions=goto_table:IngressDefaultRule ``` Flows 1-4 are installed for the ingress rule in the sample K8s NetworkPolicy. These flows are described as follows: Flow 1 is used to match packets with the source IP address in set {10.10.0.26}, which is from the Pods selected by the label `app: client` in the `default` Namespace, constituting the first dimension for `conjunction` with `conj_id` 3. Flow 2 is used to match packets with the output port OVS in set {0x25}, which has all ports of the Pods selected by the label `app: web` in the `default` Namespace, constituting the second dimension for `conjunction` with `conj_id` 3. Flow 3 is used to match packets with the destination TCP port in set {80} specified in the rule, constituting the third dimension for `conjunction` with `conj_id` 3. Flow 4 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 3 and forward them to table [IngressMetric], persisting `conj_id` to `IngressRuleCTLabel`. Flow 5 is the table-miss flow to forward packets not matched by other flows to table [IngressDefaultRule]. This table is similar in its purpose to table [EgressDefaultRule], and it complements table [IngressRule] for K8s NetworkPolicy ingress rule implementation. In Kubernetes, when a NetworkPolicy is applied to a set of Pods, then the default behavior for ingress connections for these Pods becomes \"deny\" (they become [isolated Pods](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)). This table is in charge of dropping traffic destined for Pods to which a NetworkPolicy (with an ingress rule) is applied, and which did not match any of the \"allow\" list rules. If you dump the flows of this table, you may see the following: ```text table=IngressDefaultRule, priority=200,reg1=0x25 actions=drop table=IngressDefaultRule, priority=0 actions=goto_table:IngressMetric ``` Flow 1, based on our sample K8s NetworkPolicy, is to drop traffic destined for OVS port 0x25, the port number associated with a Pod selected by the label `app: web`. Flow 2 is the table-miss flow to forward packets to table [IngressMetric]. This table is also used to implement Antrea-native NetworkPolicy ingress rules created in the Baseline Tier. Since the Baseline Tier is meant to be enforced after K8s NetworkPolicies, the corresponding flows will be created at a lower priority than K8s NetworkPolicy default drop flows. These flows are similar to flows 3-9 in table [AntreaPolicyIngressRule]. This table is very similar to table [EgressMetric], but used to collect ingress metrics for Antrea-native NetworkPolicies. 
If you dump the flows of this table, you may see the following: ```text table=IngressMetric, priority=200,ctstate=+new,ctlabel=0x3/0xffffffff,ip actions=goto_table:ConntrackCommit table=IngressMetric, priority=200,ctstate=-new,ctlabel=0x3/0xffffffff,ip actions=goto_table:ConntrackCommit table=IngressMetric, priority=200,ctstate=+new,ctlabel=0x6/0xffffffff,ip actions=goto_table:ConntrackCommit table=IngressMetric, priority=200,ctstate=-new,ctlabel=0x6/0xffffffff,ip actions=goto_table:ConntrackCommit table=IngressMetric, priority=200,reg0=0x400/0x400,reg3=0x4 actions=drop table=IngressMetric, priority=0 actions=goto_table:ConntrackCommit ``` Flows 1-2, matching packets with `IngressRuleCTLabel` set to 3 (the `conj_id` allocated for the sample K8s NetworkPolicy ingress rule and loaded in table [IngressRule] flow 4), are used to collect metrics for the ingress" }, { "data": "Flows 3-4, matching packets with `IngressRuleCTLabel` set to 6 (the `conj_id` allocated for the sample Antrea-native NetworkPolicy ingress rule and loaded in table [AntreaPolicyIngressRule] flow 10), are used to collect metrics for the ingress rule. Flow 5 is the drop rule for the sample Antrea-native NetworkPolicy ingress rule. It drops the packets by matching `APDenyRegMark` loaded in table [AntreaPolicyIngressRule] flow 13 and `APConjIDField` set to 4 which is the `conj_id` allocated for the ingress rule and loaded in table [AntreaPolicyIngressRule] flow 13. Flow 6 is the table-miss flow. This table is in charge of committing non-Service connections in `CtZone`. If you dump the flows of this table, you may see the following: ```text table=ConntrackCommit, priority=200,ctstate=+new+trk-snat,ctmark=0/0x10,ip actions=ct(commit,table=Output,zone=65520,exec(move:NXMNXREG0[0..3]->NXMNXCT_MARK[0..3])) table=ConntrackCommit, priority=0 actions=goto_table:Output ``` Flow 1 is designed to match the first packet of non-Service connections with the \"tracked\" state and `NotServiceCTMark`. Then it commits the relevant connections in `CtZone`, persisting the value of `PktSourceField` to `ConnSourceCTMarkField`, and forwards the packets to table [Output]. Flow 2 is the table-miss flow. This is the final table in the pipeline, responsible for handling the output of packets from OVS. It addresses the following cases: Output packets to an application-aware engine for further L7 protocol processing. Output packets to a target port and a mirroring port defined in a TrafficControl CR with `Mirror` action. Output packets to a port defined in a TrafficControl CR with `Redirect` action. Output packets from hairpin connections to the ingress port where they were received. Output packets to a target port. Output packets to the OpenFlow controller (Antrea Agent). Drop packets. 
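Packets handed to the OpenFlow controller (the Antrea Agent) in the case above are rate-limited with OVS meters, so packet-in bursts cannot overwhelm the agent. A hedged way to look at the meters and their statistics on the bridge; the agent Pod name is a placeholder and the meter IDs depend on your Antrea version.

```shell
AGENT_POD=antrea-agent-xxxxx   # placeholder

kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-ofctl -O OpenFlow15 dump-meters br-int

# Per-meter statistics, e.g. how many packets each band has dropped:
kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-ofctl -O OpenFlow15 meter-stats br-int
```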
If you dump the flows of this table, you may see the following: ```text table=Output, priority=212,ctmark=0x80/0x80,reg0=0x200000/0x600000 actions=pushvlan:0x8100,move:NXMNXCTLABEL[64..75]->OXMOFVLANVID[],output:\"antrea-l7-tap0\" table=Output, priority=211,reg0=0x200000/0x600000,reg4=0x400000/0xc00000 actions=output:NXMNXREG1[],output:NXMNXREG9[] table=Output, priority=211,reg0=0x200000/0x600000,reg4=0x800000/0xc00000 actions=output:NXMNXREG9[] table=Output, priority=210,ctmark=0x40/0x40 actions=INPORT table=Output, priority=200,reg0=0x200000/0x600000 actions=output:NXMNXREG1[] table=Output, priority=200,reg0=0x2400000/0xfe600000 actions=meter:256,controller(reason=no_match,id=62373,userdata=01.01) table=Output, priority=200,reg0=0x4400000/0xfe600000 actions=meter:256,controller(reason=no_match,id=62373,userdata=01.02) table=Output, priority=0 actions=drop ``` Flow 1 is for case 1. It matches packets with `L7NPRedirectCTMark` and `OutputToOFPortRegMark`, and then outputs them to the port `antrea-l7-tap0` specifically created for connecting to an application-aware engine. Notably, these packets are pushed with an 802.1Q header and loaded with the VLAN ID value persisted in `L7NPRuleVlanIDCTLabel` before being output, due to the implementation of Antrea-native L7 NetworkPolicy. The application-aware engine enforcing L7 policies (e.g., Suricata) can leverage the VLAN ID to determine which set of rules to apply to the packet. Flow 2 is for case 2. It matches packets with `TrafficControlMirrorRegMark` and `OutputToOFPortRegMark`, and then outputs them to the port specified in `TargetOFPortField` and the port specified in `TrafficControlTargetOFPortField`. Unlike the `Redirect` action, the `Mirror` action creates an additional copy of the packet. Flow 3 is for case 3. It matches packets with `TrafficControlRedirectRegMark` and `OutputToOFPortRegMark`, and then outputs them to the port specified in `TrafficControlTargetOFPortField`. Flow 4 is for case 4. It matches packets from hairpin connections by matching `HairpinCTMark` and outputs them back to the port where they were received. Flow 5 is for case 5. It matches packets by matching `OutputToOFPortRegMark` and outputs them to the OVS port specified by the value stored in `TargetOFPortField`. Flows 6-7 are for case 6. They match packets by matching `OutputToControllerRegMark` and the value stored in `PacketInOperationField`, then output them to the OpenFlow controller (Antrea Agent) with corresponding user data. In practice, you will see additional flows similar to these ones to accommodate different scenarios (different PacketInOperationField values). Note that packets sent to controller are metered to avoid overrunning the antrea-agent and using too many resources. Flow 8 is the table-miss flow for case 7. It drops packets that do not match any of the flows in this table." } ]
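To watch a concrete packet walk through all of the tables described in this document, `ofproto/trace` can replay a synthetic packet through the pipeline and print every table, match, and action it hits. The sketch below uses the sample addresses from this document; the agent Pod name is a placeholder, and supplying the Pod's real source MAC (`dl_src=...`) gives a more faithful trace, since a zeroed MAC may be rejected by earlier validation tables.

```shell
AGENT_POD=antrea-agent-xxxxx   # placeholder

# Trace a TCP packet from the client Pod (10.10.0.26) to the web Pod
# (10.10.0.24) on port 80, entering through the client Pod's OVS port.
kubectl exec -n kube-system "$AGENT_POD" -c antrea-ovs -- \
  ovs-appctl ofproto/trace br-int \
  "in_port=client-6-3353ef,tcp,nw_src=10.10.0.26,nw_dst=10.10.0.24,tp_dst=80"
```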
{ "category": "Runtime", "file_name": "ovs-pipeline.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Allow Recurring Backup Detached Volumes In the current Longhorn implementation, users cannot do recurring backup when volumes are detached. This enhancement gives the users an option to do recurring backup even when volumes are detached. https://github.com/longhorn/longhorn/issues/1509 Give the users an option to allow recurring backup to happen when the volume is detached. Don't overwrite the old backups with new identical backups. This happens when the volume is detached for a long time and has no new data. Avoid conflict with the data locality feature if the automatically attach happens. The volume will not be available while the recurring backup process is happening. Add new global boolean setting `Allow-Recurring-Backup-While-Volume-Detached`. When the users enable `Allow-Recurring-Backup-While-Volume-Detached`, allow recurring backup for detached volumes. Users use . OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes. Users deploy serverless functions using Longhorn as persistent storage. Users also set up recurring backup for those Longhorn volumes. OpenFaas has APIGateway that watches and manages request to serverless functions. The nature of serverless functions is that the container that those functions run on are only created when APIGateway sees client requests to those functions. When there is no demand (e.g there is no function calls because there is no users' request), OpenFaas will scale down the number of function replicas (similar to Pod concept in Kubernetes) to 0. As the result, the Longhorn volumes are attached when the functions are called and detached when no one is calling the functions. If the recurring backup can only apply to the attached volumes, there will be many miss backups. From the users' perspective, when they schedule a backup, they would expect that they can see the backup to be created automatically according to a certain schedule. The expectation is, the backup store should always have the latest data of the volume. But in the case of a volume was changed then detached before the last backup was done, the expectation will not be met. Then if the volume is lost during the time that volume is detached, then we lost the changed part of data of volume since the last backup. Users need to turn on the global setting `Allow-Recurring-Backup-While-Volume-Detached` to enable recurring backup for detached volumes. Longhorn will automatically attaches volumes, disables volume's frontend, takes a snapshot, creates a backup, then detaches the volume. During the time the volume was attached automatically, the volume is not ready for workload. Workload will have to wait until the recurring backup finish. There is no API change. The new global setting `Allow-Recurring-Backup-While-Volume-Detached` will use the same `/v1/settings` API. Add new global boolean setting `Allow-Recurring-Backup-While-Volume-Detached` In `volume-controller`, we don't suspend volume's recurring jobs when either of the following condition match: The volume is" }, { "data": "The volume is detached but users `Allow-Recurring-Backup-While-Volume-Detached` is set to `true`. Other than that, we suspend volume's recurring jobs. Modify the cronjob to do the following: Check to see if the volume is attached. If the volume is attached, we follow the same process as the current implementation. If the volume is detached, attach the volume to the node of the current Longhorn manager. 
Also, disable the volume's frontend in the attaching request. Disabling the volume's frontend makes sure that the pod cannot use the volume during the recurring backup process. This is necessary so that we can safely detach the volume when finishing the backup. Wait for the volume to be in attached state. Check the size of `VolumeHead`; if it is empty, skip the backup. We don't want to overwrite the old backups with new identical backups. This happens when the volume is detached for a long time and has no new data. Detach the volume when the backup finishes. Set `Allow-Recurring-Backup-While-Volume-Detached` to `false` Create a volume Attach the volume, write some data to the volume Detach the volume Set the recurring backup for the volume on every minute Wait for 2 minutes, verify that there is no new backup created Set `Allow-Recurring-Backup-While-Volume-Detached` to `true` Wait until the recurring job begins. Verify that Longhorn automatically attaches the volume, and does backup, then detaches the volume. On every subsequent minute, verify that Longhorn automatically attaches the volume, but doesn't do backup, then detaches the volume. The reason that Longhorn does not do backup is that there is no new data. Delete the recurring backup Create a PVC from the volume Create a deployment of 1 pod using the PVC Write 1GB data to the volume from the pod. Scale down the deployment. The volume is detached. Set the recurring backup for every 2 minutes. Wait until the recurring backup starts, scale up the deployment to 1 pod. Verify that the pod cannot start until the recurring backup finishes. On every subsequent 2 minutes, verify that Longhorn doesn't do backup. The reason that Longhorn does not do backup is that there is no new data. Delete the recurring job Turn on data locality for the volume Set NumberOfReplicas to 1 Let's say the volume is attached to node-1. Wait until there is a healthy replica on node-1 and there is no other replica. Write 200MB data to the volume. Detach the volume Turn off data locality for the volume Attach the volume to node-2 Detach the volume Set the recurring backup for every 1 minute. Wait until the recurring backup starts. Verify that Longhorn automatically attaches the volume to node-2, does backup, then detaches the volume. However, Longhorn doesn't trigger the replica rebuild, and there is no new replica on node-2. Set `Allow-Recurring-Backup-While-Volume-Detached` to `true` This enhancement doesn't require an upgrade strategy.
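Longhorn exposes its settings through the UI and as `settings.longhorn.io` custom resources, so the new option can also be flipped from the CLI. The sketch below assumes the setting's resource name follows Longhorn's usual lower-case, dash-separated convention; list the settings first to confirm the exact name on your version.

```shell
# Find the setting introduced by this enhancement.
kubectl -n longhorn-system get settings.longhorn.io | grep -i recurring

# Enable it (resource name below is an assumption; use the name printed above).
kubectl -n longhorn-system patch settings.longhorn.io \
  allow-recurring-backup-while-volume-detached \
  --type merge -p '{"value":"true"}'
```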
{ "category": "Runtime", "file_name": "20201002-allow-recurring-backup-detached-volumes.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 2 :::note This feature requires at least JuiceFS v1.0.0, for previous versions, you need to upgrade all JuiceFS clients, and then enable trash using the `config` subcommand, introduced in below sections. ::: JuiceFS enables the trash feature by default, files deleted will be moved in a hidden directory named `.trash` under the file system root, and kept for specified period of time before expiration. Until actual expiration, file system usage (check using `df -h`) will not change, this is also true with the corresponding object storage data. When using `juicefs format` command to initialize JuiceFS volume, users are allowed to specify `--trash-days <val>` to set the number of days which files are kept in the `.trash` directory. Within this period, user-removed files are not actually purged, so the file system usage shown in the output of `df` command will not decrease, and the blocks in the object storage will still exist. To control the expiration settings, use the option which is available for both `juicefs format` and `juicefs config`: ```shell juicefs format META-URL myjfs --trash-days=7 juicefs config META-URL --trash-days=7 juicefs config META-URL --trash-days=0 ``` In addition, the automatic cleaning of the trash relies on the background job of the JuiceFS client. To ensure that the background job can be executed properly, at least one online mount point is required, and the parameter should not be used when mounting the file system. When files are deleted, they will be moved to a directory that takes up the format of `.trash/YYYY-MM-DD-HH/[parent inode]-[file inode]-[file name]`, where `YYYY-MM-DD-HH` is the UTC time of the deletion. You can locate the deleted files and recover them if you remember when they are deleted. If you have found the desired files in Trash, you can recover them using `mv`: ```shells mv .trash/2022-11-30-10/[parent inode]-[file inode]-[file name] . ``` Files within the Trash directory lost all their directory structure information, and are stored in a \"flatten\" style, however the parent directory inode is preserved in the file name, if you have forgotten the file name, look for parent directory inode using , and then track down the desired files. Assuming the mount point being `/jfs`, and you've accidentally deleted `/jfs/data/config.json`, but you cannot directly recover this `config.json` because you've forgotten its name, use the following procedure to locate the parent directory inode, and then locate the corresponding trash files. ```shell juicefs info /jfs/data find /jfs/.trash -name '3-*' mv /jfs/.trash/2022-11-30-10/3-* /jfs/data ``` Keep in mind that only the root user have write access to the Trash directory, so the method introduced above is only available to the root user. If a normal user happens to have read permission to these deleted files, they can also recover them via a read-only method like `cp`, although this obviously wastes storage capacity. If you accidentally delete a complicated structured directory, using solely `mv` to recover can be a disaster, for example: ```shell $ tree data data app1 config config.json app2 config config.json $ juicefs rmr data $ tree .trash/2023-08-14-05 .trash/2023-08-14-05 1-12-data 12-13-app1 12-15-app2 13-14-config 14-17-config.json 15-16-config 16-18-config.json ``` To resolve such inconvenience, JuiceFS v1.1 provides the subcommand to quickly restore deleted files, while preserving its original directory structure. 
Run this procedure as follows: ```shell $ juicefs restore $META_URL 2023-08-14-05 $ tree .trash/2023-08-14-05 .trash/2023-08-14-05 1-12-data app1 config" }, { "data": "app2 config config.json juicefs restore $META_URL 2023-08-14-05 --put-back ``` When files in the trash directory reach their expiration time, they will be automatically cleaned up. It is important to note that the file cleaning is performed by the background job of the JuiceFS client, which is scheduled to run every hour by default. Therefore, when there are a large number of expired files, the cleaning speed of the object storage may not be as fast as expected, and it may take some time to see the change in storage capacity. If you want to permanently delete files before their expiration time, you need to have `root` privileges and use or the system's built-in `rm` command to delete the files in the `.trash` directory, so that storage space can be immediately released. For example, to permanently delete a directory in the trash: ```shell juicefs rmr .trash/2022-11-30-10/ ``` If you want to delete expired files more quickly, you can mount multiple mount points to exceed the deletion speed limit of a single client. Apart from user deleted files, there's another type of data which also resides in Trash, which isn't directly visible from the `.trash` directory, they are stale slices created by file edits and overwrites. Read more in . To sum up, if applications constantly delete or overwrite files, object storage usage will exceed file system usage. Although stale slices cannot be browsed or manipulated, you can use to observe its scale: ```shell $ juicefs status META-URL --more ... Trash Files: 0 0.0/s Trash Files: 0.0 b (0 Bytes) 0.0 b/s Pending Deleted Files: 0 0.0/s Pending Deleted Files: 0.0 b (0 Bytes) 0.0 b/s Trash Slices: 27 26322.2/s Trash Slices: 783.0 b (783 Bytes) 753.1 KiB/s Pending Deleted Slices: 0 0.0/s Pending Deleted Slices: 0.0 b (0 Bytes) 0.0 b/s ... ``` Stale slices are also kept according to the expiration settings, this adds another layer of data security: if files are erroneously edited or overwritten, original state can be recovered through metadata backups (provided that you have already set up metadata backup). If you do need to rollback this type of accident overwrites, you need to obtain a copy of the metadata backup, and then mount using this copy, so that you can visit the file system in its older state, and recover any files before they are tampered. See for more. Due to its invisibility, stale slices can grow to a very large size, if you do need to delete them, follow below procedure: ```shell juicefs config META-URL --trash-days 0 juicefs gc --compact juicefs gc --delete ``` All users are allowed to browse the trash directory and see the full list of removed files. However, only root has write privilege to the `.trash` directory. Since JuiceFS keeps the original permission modes even for the trashed files, normal users can read files that they have permission to. Several caveats on Trash privileges: When JuiceFS Client is started by a non-root user, add the `-o allow_root` option or trash cannot be emptied normally. The `.trash` directory can only be accessed from the file system root, thus not available for sub-directory mount points. User cannot create new files inside the trash directory, and only root are allowed to move or delete files in trash." } ]
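The document mentions that accidental overwrites can be rolled back by mounting a copy of a metadata backup, but does not show the commands. The following is a hedged sketch of that flow; the backup file name, the spare metadata engine URL (`redis://localhost:6379/2`), and the paths are placeholders, and it assumes the object storage still holds the referenced blocks (which is the point of keeping stale slices in trash).

```bash
# Load a metadata backup into a spare, empty metadata engine
# (the Redis database number and dump file name are examples only).
juicefs load redis://localhost:6379/2 dump-2023-08-14.json.gz

# Mount the restored metadata read-only and copy out the file
# as it was at backup time.
juicefs mount --read-only redis://localhost:6379/2 /jfs-restore
cp /jfs-restore/data/config.json /jfs/data/config.json
```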
{ "category": "Runtime", "file_name": "trash.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "rkt supports running containers using SELinux . At start-up, rkt will attempt to read `/etc/selinux/(policy)/contexts/lxc_contexts`. If this file doesn't exist, no SELinux transitions will be performed. If it does, rkt will generate a per-instance context. All mounts for the instance will be created using the file context defined in `lxccontexts`, and the instance processes will be run in a context derived from the process context defined in `lxccontexts`. Processes started in these contexts will be unable to interact with processes or files in any other instance's context, even though they are running as the same user. Individual Linux distributions may impose additional isolation constraints on these contexts. For example, given the following `lxc_contexts`: ``` process = \"systemu:systemr:svirtlxcnet_t:s0\" content = \"systemu:objectr:virtvarlib_t:s0\" file = \"systemu:objectr:svirtlxcfile_t:s0\" ``` You could define a policy where members of `svirtlxcnet_t` context cannot write on TCP sockets. Note that the policy is responsibility of your distribution and might differ from this example. To find out more about policies you can check the . Please refer to your distribution documentation for further details on its policy." } ]
{ "category": "Runtime", "file_name": "selinux.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "For detailed information about this project, please refer to the . This project is designed for locating the code for installing all required components to set up a cluster, including controller and nbp plugins. Currently we support several install tools for diversity. is a radically simple IT automation platform that makes your applications and systems easier to deploy. OpenSDS installer project holds all code related to `opensds-ansible` in ansible folder for installing and configuring OpenSDS cluster through ansible tool. is a popular tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources. OpenSDS installer project also holds all code related to `opensds-charts` in charts folder for installing and configuring OpenSDS cluster through helm tool. Mailing list: slack: # Ideas/Bugs:" } ]
{ "category": "Runtime", "file_name": "opensds-installer.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "Longhorn v1.5.0 is the latest version of Longhorn 1.5. It introduces many enhancements, improvements, and bug fixes as described below including performance, stability, maintenance, resilience, and so on. Please try it and feedback. Thanks for all the contributions! For the definition of stable or latest release, please check . > Please note that this is a preview feature, so should not be used in any production environment. A preview feature is disabled by default and would be changed in the following versions until it becomes general availability. In addition to the existing iSCSI stack (v1) data engine, we are introducing the v2 data engine based on SPDK (Storage Performance Development Kit). This release includes the introduction of volume lifecycle management, degraded volume handling, offline replica rebuilding, block device management, and orphaned replica management. For the performance benchmark and comparison with v1, check the report . Introducing the new Longhorn VolumeAttachment CR, which ensures exclusive attachment and supports automatic volume attachment and detachment for various headless operations such as volume cloning, backing image export, and recurring jobs. Cluster Autoscaler was initially introduced as an experimental feature in v1.3. After undergoing automatic validation on different public cloud Kubernetes distributions and receiving user feedback, it has now reached general availability. Previously, there were two separate instance manager pods responsible for volume engine and replica process management. However, this setup required high resource usage, especially during live upgrades. In this release, we have merged these pods into a single instance manager, reducing the initial resource requirements. Longhorn supports different compression methods for volume backups, including lz4, gzip, or no compression. This allows users to choose the most suitable method based on their data type and usage requirements. While volume filesystem trim was introduced in v1.4, users had to perform the operation manually. From this release, users can create a recurring job that automatically runs the trim process, improving space efficiency without requiring human intervention. Longhorn supports filesystem trim for RWX (Read-Write-Many) volumes, expanding the trim functionality beyond RWO (Read-Write-Once) volumes only. To ensure compatibility after an upgrade, we have implemented upgrade path enforcement. This prevents unintended downgrades and ensures the system and data remain intact. Users can now utilize the unified CSI VolumeSnapshot interface to manage Backing Images similar to volume snapshots and backups. Introducing two new recurring job types specifically designed for snapshot cleanup and deletion. These jobs allow users to remove unnecessary snapshots for better space efficiency. & To enhance users' backup strategies and align with data governance policies, Longhorn now supports additional backup storage protocols, including CIFS and" }, { "data": "The new Node Drain Policy provides flexible strategies to protect volume data during Kubernetes upgrades or node maintenance operations. This ensures the integrity and availability of your volumes. Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.5.0. Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions . 
Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.0 from v1.4.x. Only support upgrading from 1.4.x. Follow the upgrade instructions . Please check the to know more about deprecated, removed, incompatible features and important changes. If you upgrade indirectly from an older version like v1.3.x, please also check the corresponding important note for each upgrade version path. Please follow up on about any outstanding issues found after this" }, { "data": ") - @PhanLe1010 ) - @mantissahz @chriscchien ) - @yangchiu @PhanLe1010 ) - @derekbit @chriscchien ) - @yangchiu @PhanLe1010 ) - @c3y1huang @chriscchien ) - @yangchiu @ChanYiLin ) - @mantissahz @chriscchien ) - @ChanYiLin @chriscchien ) - @yangchiu @mantissahz ) - @derekbit @chriscchien ) - @c3y1huang @chriscchien ) - @derekbit @roger-ryao ) - @yangchiu @c3y1huang ) - @yangchiu @c3y1huang ) - @yangchiu @ejweber ) - @derekbit @shuo-wu @DamiaSan ) - @mantissahz @weizhe0422 ) - @PhanLe1010 @chriscchien ) - @DamiaSan ) - @mantissahz @chriscchien ) - @c3y1huang @chriscchien ) - @DamiaSan ) - @shuo-wu ) - @ChanYiLin @smallteeths @chriscchien ) - @yangchiu @ChanYiLin ) - @shuo-wu ) - @DamiaSan ) - @derekbit @roger-ryao ) - @derekbit ) - @shuo-wu ) - @derekbit @chriscchien ) - @yangchiu @derekbit ) - @derekbit @chriscchien ) - @yangchiu @DamiaSan ) - @yangchiu @derekbit ) - @yangchiu @derekbit ) - @smallteeths @chriscchien ) - @yangchiu @smallteeths ) - @shuo-wu ) - @yangchiu @derekbit ) - @yangchiu @mantissahz ) - @c3y1huang @chriscchien ) - @ChanYiLin @smallteeths @roger-ryao ) - @weizhe0422 @PhanLe1010 ) - @yangchiu @PhanLe1010 ) - @mantissahz @roger-ryao ) - @ejweber @roger-ryao ) - @smallteeths @roger-ryao ) - @mantissahz @chriscchien ) - @derekbit @chriscchien ) - @weizhe0422 @roger-ryao ) - @c3y1huang @chriscchien @roger-ryao ) - @PhanLe1010 @roger-ryao ) - @derekbit @roger-ryao ) - @c3y1huang @smallteeths @roger-ryao ) - @yangchiu @PhanLe1010 ) - @derekbit @roger-ryao ) - @derekbit @chriscchien ) - @derekbit @chriscchien ) - @ChanYiLin @roger-ryao ) - @ChanYiLin @chriscchien ) - @c3y1huang @chriscchien ) - @ChanYiLin @chriscchien ) - @PhanLe1010 @roger-ryao ) - @ChanYiLin @chriscchien ) - @mantissahz @smallteeths @roger-ryao ) - @PhanLe1010 @roger-ryao ) - @yangchiu @ChanYiLin ) - @yangchiu @ejweber ) - @ChanYiLin @weizhe0422 ) - @ChanYiLin @derekbit ) - @ChanYiLin @derekbit ) - @smallteeths @roger-ryao ) - @derekbit @roger-ryao ) - @yangchiu @derekbit ) - @derekbit @chriscchien ) - @yangchiu @shuo-wu ) - @derekbit ) - @ChanYiLin ) - @derekbit @chriscchien ) - @derekbit @weizhe0422 @chriscchien ) - @yangchiu @derekbit ) - @derekbit @chriscchien ) - @ChanYiLin @roger-ryao ) - @yangchiu @derekbit ) - @weizhe0422 @roger-ryao ) - @derekbit @roger-ryao ) - @chriscchien @DamiaSan ) - @derekbit @chriscchien ) - @derekbit @roger-ryao ) - @yangchiu @ChanYiLin @mantissahz ) - @yangchiu @derekbit ) - @derekbit @roger-ryao ) - @weizhe0422 @PhanLe1010 @smallteeths ) - @yangchiu @mantissahz ) - @yangchiu @ChanYiLin ) - @ChanYiLin @chriscchien ) - @PhanLe1010 @chriscchien ) - @PhanLe1010 @chriscchien ) - @mantissahz @roger-ryao ) - @ChanYiLin ) - @mantissahz @chriscchien ) - @ejweber ) - @derekbit @chriscchien ) - @mantissahz @roger-ryao ) - @shuo-wu @roger-ryao ) - @derekbit ) - @ChanYiLin @roger-ryao ) - @mantissahz @chriscchien ) - @c3y1huang @chriscchien ) - @yangchiu @weizhe0422 ) - @achims311 @roger-ryao ) - @derekbit @roger-ryao ) - @yangchiu @derekbit ) - @c3y1huang ) - @c3y1huang @chriscchien ) - @yangchiu 
@smallteeths ) - @derekbit ) - @yangchiu @derekbit ) - @yangchiu @c3y1huang ) - @hedefalk @shuo-wu @chriscchien ) - @yangchiu @derekbit ) - @yangchiu @derekbit ) - @ChanYiLin @roger-ryao ) - @mantissahz @roger-ryao ) - @yangchiu @derekbit ) - @derekbit @roger-ryao ) - @yangchiu @c3y1huang ) - @smallteeths @chriscchien ) - @yangchiu @ejweber ) - @derekbit ) - @PhanLe1010 @chriscchien ) - @yangchiu @PhanLe1010 ) - @PhanLe1010 @roger-ryao ) - @ejweber @roger-ryao ) - @c3y1huang ) - @shuo-wu @chriscchien ) - @c3y1huang @roger-ryao ) - @derekbit @roger-ryao ) - @ChanYiLin @mantissahz ) - @c3y1huang @roger-ryao ) - @yangchiu @derekbit ) - @c3y1huang @roger-ryao ) - @yangchiu @PhanLe1010 ) - @weizhe0422 @ejweber ) - @yangchiu @mantissahz ) - @weizhe0422 @PhanLe1010 ) - @c3y1huang @chriscchien ) - @WebberHuang1118 @chriscchien ) - @derekbit @weizhe0422 ) - @ejweber @chriscchien" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.5.0.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Troubleshooting\" layout: docs These tips can help you troubleshoot known issues. If they don't help, you can , or talk to us on the on the Kubernetes Slack server. You can use the `velero bug` command to open a by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available) and the `velero` client version. This information isn't submitted to Github until you click the `Submit new issue` button in the Github UI, so feel free to add, remove or update whatever information you like. You can use the `velero debug` command to generate a debug bundle, which is a tarball that contains: Version information Logs of velero server and plugins Resources managed by velero server such as backup, restore, podvolumebackup, podvolumerestore, etc. Logs of the backup and restore, if specified in the parameters Please use command `velero debug --help` to see more usage details. You can increase the verbosity of the Velero server by editing your Velero deployment to look like this: ``` kubectl edit deployment/velero -n velero ... containers: name: velero image: velero/velero:latest command: /velero args: server --log-level # Add this line debug # Add this line ... ``` Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore. Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See . Velero's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Run `velero install` again to install any missing custom resource definitions. Downloading artifacts from object storage utilizes temporary, signed URLs. In the case of S3-compatible providers, such as Ceph, there may be differences between their implementation and the official S3 API that cause errors. Here are some things to verify if you receive `SignatureDoesNotMatch` errors: Make sure your S3-compatible layer is using (such as Ceph RADOS v12.2.7) For Ceph, try using a native Ceph account for credentials instead of external providers such as OpenStack Keystone Velero cannot resume backups that were interrupted. Backups stuck in the `InProgress` phase can be deleted with `kubectl delete backup <name> -n <velero-namespace>`. Backups in the `InProgress` phase have not uploaded any files to object storage. Steps to troubleshoot: Confirm that your velero deployment has metrics publishing enabled. The have been setup with . Confirm that the Velero server pod exposes the port on which the metrics server listens on. By default, this value is 8085. ```yaml ports: containerPort: 8085 name: metrics protocol: TCP ``` Confirm that the metric server is listening for and responding to connections on this port. This can be done using as shown below ```bash $ kubectl -n <YOURVELERONAMESPACE> port-forward <YOURVELEROPOD> 8085:8085 Forwarding from 127.0.0.1:8085 -> 8085 Forwarding from [::1]:8085 -> 8085 . ." 
}, { "data": "``` Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero. Confirm that the Velero server pod has the necessary for prometheus to scrape metrics. Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus. Cloud provider credentials are given to Velero to store and retrieve backups from the object store and to perform volume snapshotting operations. These credentials are either passed to Velero at install time using: `--secret-file` flag to the `velero install` command. OR `--set-file credentials.secretContents.cloud` flag to the `helm install` command. Or, they are specified when creating a `BackupStorageLocation` using the `--credential` flag. If using the credentials provided at install time, they are stored in the cluster as a Kubernetes secret named `cloud-credentials` in the same namespace in which Velero is installed. Follow the below troubleshooting steps to confirm that Velero is using the correct credentials: Confirm that the `cloud-credentials` secret exists and has the correct content. ```bash $ kubectl -n velero get secrets cloud-credentials NAME TYPE DATA AGE cloud-credentials Opaque 1 11h $ kubectl -n velero get secrets cloud-credentials -ojsonpath={.data.cloud} | base64 --decode <Output should be your credentials> ``` Confirm that velero deployment is mounting the `cloud-credentials` secret. ```bash $ kubectl -n velero get deploy velero -ojson | jq .spec.template.spec.containers[0].volumeMounts [ { \"mountPath\": \"/plugins\", \"name\": \"plugins\" }, { \"mountPath\": \"/scratch\", \"name\": \"scratch\" }, { \"mountPath\": \"/credentials\", \"name\": \"cloud-credentials\" } ] ``` If is enabled, then, confirm that the restic daemonset is also mounting the `cloud-credentials` secret. ```bash $ kubectl -n velero get ds restic -ojson |jq .spec.template.spec.containers[0].volumeMounts [ { \"mountPath\": \"/host_pods\", \"mountPropagation\": \"HostToContainer\", \"name\": \"host-pods\" }, { \"mountPath\": \"/scratch\", \"name\": \"scratch\" }, { \"mountPath\": \"/credentials\", \"name\": \"cloud-credentials\" } ] ``` Confirm if the correct credentials are mounted into the Velero pod. ```bash $ kubectl -n velero exec -ti deploy/velero -- bash nobody@velero-69f9c874c-l8mqp:/$ cat /credentials/cloud <Output should be your credentials> ``` Follow the below troubleshooting steps to confirm that Velero is using the correct credentials if using credentials specific to a : Confirm that the object storage provider plugin being used supports multiple credentials. If the logs from the Velero deployment contain the error message `\"config has invalid keys credentialsFile\"`, the version of your object storage plugin does not yet support multiple credentials. The object storage plugins support this feature, so please update your plugin to the latest version if you see the above error message. If you are using a plugin from a different provider, please contact them for further advice. 
Confirm that the secret and key referenced by the `BackupStorageLocation` exists in the Velero namespace and has the correct content: ```bash BSL_SECRET=$(kubectl get backupstoragelocations.velero.io -n velero <bsl-name> -o yaml -o jsonpath={.spec.credential.name}) BSLSECRETKEY=$(kubectl get backupstoragelocations.velero.io -n velero <bsl-name> -o yaml -o jsonpath={.spec.credential.key}) kubectl -n velero get secret $BSL_SECRET kubectl -n velero get secret $BSLSECRET -ojsonpath={.data.$BSLSECRET_KEY} | base64 --decode ``` If the secret can't be found, the secret does not exist within the Velero namespace and must be created. If no output is produced when printing the contents of the secret, the key within the secret may not exist or may have no content. Ensure that the key exists within the secret's data by checking the output from `kubectl -n velero describe secret $BSL_SECRET`. If it does not exist, follow the instructions for to add the base64 encoded credentials data." } ]
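If the checks above show that the secret or key is missing, a sketch like the following creates a per-location credential and points a `BackupStorageLocation` at it. The secret name, key name, bucket, and provider values here are examples rather than required names, and the credentials file path is a placeholder.

```bash
# Create (or recreate) the secret holding the credentials for this location.
kubectl -n velero create secret generic my-bsl-credentials \
  --from-file=bsl-creds=./credentials-for-this-bucket

# Create a BackupStorageLocation that references that secret and key.
velero backup-location create secondary-bsl \
  --provider aws \
  --bucket my-velero-bucket \
  --credential=my-bsl-credentials=bsl-creds
```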
{ "category": "Runtime", "file_name": "troubleshooting.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Cgroup metadata ``` -h, --help help for cgroups ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Display cgroup metadata maintained by Cilium" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_cgroups.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Join the [kubernetes-security-announce] group for security and vulnerability announcements. You can also subscribe to an RSS feed of the above using . Instructions for reporting a vulnerability can be found on the [Kubernetes Security and Disclosure Information] page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "The LINSTOR controller exposes metrics that can be scraped by Prometheus. The endpoint used for scraping metrics is exposed on `127.0.0.1:3370/metrics` on the LINSTOR controller. `linstor_info`: Versioning information for the LINSTOR controller. `linstornodestate`: Node type and state for each node in LINSTOR. `linstorresourcedefinition_count`: Number of resources currently managed by LINSTOR. `linstorresourcestate`: State of each resource managed by LINSTOR. `linstorvolumestate`: State of each volume managed by LINSTOR. `linstorvolumeallocatedsizebytes`: Total storage in bytes currently allocated for given volume. `linstorstoragepoolcapacityfree_bytes`: Total free storage in bytes available in given LINSTOR storage-pool. `linstorstoragepoolcapacitytotal_bytes`: Total storage in bytes managed by given LINSTOR storage-pool. `linstorstoragepoolerrorcount`: Number or errors logged on given LINSTOR storage-pool. `linstorerrorreports_count`: Number or error-reports logged by LINSTOR. `linstorscraperequests_count`: Number of scrape requests on the LINSTOR metrics endpoint since last restart. `linstorscrapeduration_seconds`: Time spent scraping LINSTOR metrics in seconds. `jvmmemorybytes_used`: Used bytes of a given JVM memory area. `jvmmemorybytes_committed`: Committed bytes of a given JVM memory area. `jvmmemorybytes_max`: Max bytes of a given JVM memory area. `jvmmemorybytes_init`: Initial bytes of a given JVM memory area. `jvmmemorypoolbytesused`: Used bytes of a given JVM memory pool. `jvmmemorypoolbytescommitted`: Committed bytes of a given JVM memory pool. `jvmmemorypoolbytesmax`: Max bytes of a given JVM memory pool. `jvmmemorypoolbytesinit`: Initial bytes of a given JVM memory pool. `jvmclassesloaded`: Number of classes currently loaded in the JVM. `jvmclassesloaded_total`: Number of classes that have been loaded since JVM start. `jvmclassesunloaded_total`: Number of classes that have been unloaded since JVM start. `jvmthreadscurrent`: Current thread count of a JVM. `jvmthreadsdaemon`: Daemon thread count of a JVM. `jvmthreadspeak`: Peak thread count of a JVM. `jvmthreadsstarted_total`: Started thread count of a JVM. `jvmthreadsdeadlocked`: Cycles of JVM threads that are in a deadlock state. `jvmthreadsdeadlocked_monitor`: Cycles of JVM threads that are in a deadlock state waiting to acquire object monitors. `jvmthreadsstate`: Number of threads by state. `jvm_info`: JVM version info. `processcpuseconds_total`: Total user and system CPU time spent in seconds. `processstarttime_seconds`: Start time of the process since Unix epoch in seconds. `processopenfds`: Number of open file descriptors. `processmaxfds`: Maximum number of open file descriptors. `processvirtualmemory_bytes`: Virtual memory size in bytes. `processresidentmemory_bytes`: Resident memory size in bytes. `jvmbufferpoolusedbytes`: Used bytes of a given JVM buffer pool. `jvmbufferpoolcapacitybytes`: Capacity of JVM buffer pool in bytes. `jvmbufferpoolusedbuffers`: Used buffers of a given JVM buffer pool. `jvmmemorypoolallocatedbytes_total`: Total bytes allocated in given JVM memory pool. `jvmgccollection_seconds`: Time spent in a given JVM garbage collector in seconds." } ]
{ "category": "Runtime", "file_name": "prometheus.md", "project_name": "LINSTOR", "subcategory": "Cloud Native Storage" }
[ { "data": "This document is written specifically for developers: it is not intended for end users. If you want to contribute changes that you have made, please read the for information about our processes. You are working on a non-critical test or development system. The recommended way to create a development environment is to first to create a working system. The installation guide instructions will install all required Kata Containers components, plus a container manager, the hypervisor, and the Kata Containers image and guest kernel. Alternatively, you can perform a , or continue with to build the Kata Containers components from source. Note: If you decide to build from sources, you should be aware of the implications of using an unpackaged system which will not be automatically updated as new are made available. You need to install the following to build Kata Containers components: To view the versions of go known to work, see the `golang` entry in the . To view the versions of rust known to work, see the `rust` entry in the . `make`. `gcc` (required for building the shim and runtime). ```bash $ git clone https://github.com/kata-containers/kata-containers.git $ pushd kata-containers/src/runtime $ make && sudo -E \"PATH=$PATH\" make install $ sudo mkdir -p /etc/kata-containers/ $ sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers $ popd ``` The build will create the following: runtime binary: `/usr/local/bin/kata-runtime` and `/usr/local/bin/containerd-shim-kata-v2` configuration file: `/usr/share/defaults/kata-containers/configuration.toml` and `/etc/kata-containers/configuration.toml` Kata containers can run with either an initrd image or a rootfs image. If you want to test with `initrd`, make sure you have uncommented `initrd = /usr/share/kata-containers/kata-containers-initrd.img` in your configuration file, commenting out the `image` line in `/etc/kata-containers/configuration.toml`. For example: ```bash $ sudo sed -i 's/^\\(image =.*\\)/# \\1/g' /etc/kata-containers/configuration.toml $ sudo sed -i 's/^# \\(initrd =.*\\)/\\1/g' /etc/kata-containers/configuration.toml ``` You can create the initrd image as shown in the section. If you want to test with a rootfs `image`, make sure you have uncommented `image = /usr/share/kata-containers/kata-containers.img` in your configuration file, commenting out the `initrd` line. For example: ```bash $ sudo sed -i 's/^\\(initrd =.*\\)/# \\1/g' /etc/kata-containers/configuration.toml ``` The rootfs image is created as shown in the section. One of the `initrd` and `image` options in Kata runtime config file MUST be set but not both. The main difference between the options is that the size of `initrd`(10MB+) is significantly smaller than rootfs `image`(100MB+). Enable seccomp as follows: ```bash $ sudo sed -i '/^disableguestseccomp/ s/true/false/' /etc/kata-containers/configuration.toml ``` This will pass container seccomp profiles to the kata agent. Note: To enable SELinux on the guest, SELinux MUST be also enabled on the host. You MUST create and build a rootfs image for SELinux in advance. See and . SELinux on the guest is supported in only a rootfs image currently, so you cannot enable SELinux with the agent init (`AGENT_INIT=yes`) yet. 
Enable guest SELinux in Enforcing mode as follows: ``` $ sudo sed -i '/^disableguestselinux/ s/true/false/g' /etc/kata-containers/configuration.toml ``` The runtime automatically will set `selinux=1` to the kernel parameters and `xattr` option to `virtiofsd` when `disableguestselinux` is set to `false`. If you want to enable SELinux in Permissive mode, add `enforcing=0` to the kernel parameters. Enable full debug as follows: ```bash $ sudo sed -i -e 's/^# \\(enable_debug\\).=.*$/\\1 = true/g' /etc/kata-containers/configuration.toml $ sudo sed -i -e 's/^kernelparams = \"\\(.*\\)\"/kernelparams = \"\\1 agent.log=debug initcall_debug\"/g'" }, { "data": "``` If you are using `containerd` and the Kata `containerd-shimv2` to launch Kata Containers, and wish to enable Kata debug logging, there are two ways this can be enabled via the `containerd` configuration file, detailed below. The Kata logs appear in the `containerd` log files, along with logs from `containerd` itself. For more information about `containerd` debug, please see the . Enabling full `containerd` debug also enables the shimv2 debug. Edit the `containerd` configuration file to include the top level debug option such as: ```toml [debug] level = \"debug\" ``` If you only wish to enable debug for the `containerd` shims themselves, just enable the debug option in the `plugins.linux` section of the `containerd` configuration file, such as: ```toml [plugins.linux] shim_debug = true ``` Depending on the CRI-O version being used one of the following configuration files can be found: `/etc/crio/crio.conf` or `/etc/crio/crio.conf.d/00-default`. If the latter is found, the change must be done there as it'll take precedence, overriding `/etc/crio/crio.conf`. ```toml log_level = \"info\" ``` Switching the default `log_level` from `info` to `debug` enables shimv2 debug logs. CRI-O logs can be found by using the `crio` identifier, and Kata specific logs can be found by using the `kata` identifier. Enabling results in the Kata components generating large amounts of logging, which by default is stored in the system log. Depending on your system configuration, it is possible that some events might be discarded by the system logging daemon. The following shows how to determine this for `systemd-journald`, and offers possible workarounds and fixes. Note The method of implementation can vary between Operating System installations. Amend these instructions as necessary to your system implementation, and consult with your system administrator for the appropriate configuration. `systemd-journald` can be configured to rate limit the number of journal entries it stores. When messages are suppressed, it is noted in the logs. This can be checked for by looking for those notifications, such as: ```bash $ sudo journalctl --since today | fgrep Suppressed Jun 29 14:51:17 mymachine systemd-journald[346]: Suppressed 4150 messages from /system.slice/docker.service ``` This message indicates that a number of log messages from the `docker.service` slice were suppressed. In such a case, you can expect to have incomplete logging information stored from the Kata Containers components. In order to capture complete logs from the Kata Containers components, you need to reduce or disable the `systemd-journald` rate limit. Configure this at the global `systemd-journald` level, and it will apply to all system slices. 
To disable `systemd-journald` rate limiting at the global level, edit the file `/etc/systemd/journald.conf`, and add/uncomment the following lines: ``` RateLimitInterval=0s RateLimitBurst=0 ``` Restart `systemd-journald` for the changes to take effect: ```bash $ sudo systemctl restart systemd-journald ``` Note: You should only do this step if you are testing with the latest version of the agent. The agent is built with a statically linked `musl.` The default `libc` used is `musl`, but on `ppc64le` and `s390x`, `gnu` should be used. To configure this: ```bash $ export ARCH=\"$(uname -m)\" $ if [ \"$ARCH\" = \"ppc64le\" -o \"$ARCH\" = \"s390x\" ]; then export LIBC=gnu; else export LIBC=musl; fi $ [ \"${ARCH}\" == \"ppc64le\" ] && export ARCH=powerpc64le $ rustup target add \"${ARCH}-unknown-linux-${LIBC}\" ``` To build the agent: The agent is built with seccomp capability by" }, { "data": "If you want to build the agent without the seccomp capability, you need to run `make` with `SECCOMP=no` as follows. ```bash $ make -C kata-containers/src/agent SECCOMP=no ``` For building the agent with seccomp support using `musl`, set the environment variables for the . ```bash $ export LIBSECCOMPLINKTYPE=static $ export LIBSECCOMPLIBPATH=\"the path of the directory containing libseccomp.a\" $ make -C kata-containers/src/agent ``` If the compilation fails when the agent tries to link the `libseccomp` library statically against `musl`, you will need to build `libseccomp` manually with `-UFORTIFYSOURCE`. You can use to install `libseccomp` for the agent. ```bash $ mkdir -p ${seccompinstallpath} ${gperfinstallpath} $ pushd kata-containers/ci $ script -fec 'sudo -E ./installlibseccomp.sh ${seccompinstallpath} ${gperfinstall_path}\"' $ export LIBSECCOMPLIBPATH=\"${seccompinstallpath}/lib\" $ popd ``` On `ppc64le` and `s390x`, `glibc` is used. You will need to install the `libseccomp` library provided by your distribution. e.g. `libseccomp-dev` for Ubuntu, or `libseccomp-devel` for CentOS Note: If you enable seccomp in the main configuration file but build the agent without seccomp capability, the runtime exits conservatively with an error message. As a prerequisite, you need to install Docker. Otherwise, you will not be able to run the `rootfs.sh` script with `USE_DOCKER=true` as expected in the following example. ```bash $ export distro=\"ubuntu\" # example $ export ROOTFS_DIR=\"$(realpath kata-containers/tools/osbuilder/rootfs-builder/rootfs)\" $ sudo rm -rf \"${ROOTFS_DIR}\" $ pushd kata-containers/tools/osbuilder/rootfs-builder $ script -fec 'sudo -E USE_DOCKER=true ./rootfs.sh \"${distro}\"' $ popd ``` You MUST choose a distribution (e.g., `ubuntu`) for `${distro}`. You can get a supported distributions list in the Kata Containers by running the following. ```bash $ ./kata-containers/tools/osbuilder/rootfs-builder/rootfs.sh -l ``` If you want to build the agent without seccomp capability, you need to run the `rootfs.sh` script with `SECCOMP=no` as follows. ```bash $ script -fec 'sudo -E AGENTINIT=yes USEDOCKER=true SECCOMP=no ./rootfs.sh \"${distro}\"' ``` If you want to enable SELinux on the guest, you MUST choose `centos` and run the `rootfs.sh` script with `SELINUX=yes` as follows. ``` $ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true SELINUX=yes ./rootfs.sh centos' ``` Note: Check the before creating rootfs. You must ensure that the default Docker runtime is `runc` to make use of the `USE_DOCKER` variable. If that is not the case, remove the variable from the previous command. See . 
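The note above asks you to ensure the default Docker runtime is `runc` before building the rootfs under Docker. A quick way to check is shown below; it is the same check that appears later in this guide in a one-liner form.

```bash
# Print the default runtime configured for the Docker daemon.
docker info --format '{{.DefaultRuntime}}'
```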
Note: You should only do this step if you are testing with the latest version of the agent. ```bash $ sudo install -o root -g root -m 0550 -t \"${ROOTFSDIR}/usr/bin\" \"${ROOTFSDIR}/../../../../src/agent/target/x86_64-unknown-linux-musl/release/kata-agent\" $ sudo install -o root -g root -m 0440 \"${ROOTFSDIR}/../../../../src/agent/kata-agent.service\" \"${ROOTFSDIR}/usr/lib/systemd/system/\" $ sudo install -o root -g root -m 0440 \"${ROOTFSDIR}/../../../../src/agent/kata-containers.target\" \"${ROOTFSDIR}/usr/lib/systemd/system/\" ``` ```bash $ pushd kata-containers/tools/osbuilder/image-builder $ script -fec 'sudo -E USEDOCKER=true ./imagebuilder.sh \"${ROOTFS_DIR}\"' $ popd ``` If you want to enable SELinux on the guest, you MUST run the `image_builder.sh` script with `SELINUX=yes` to label the guest image as follows. To label the image on the host, you need to make sure that SELinux is enabled (`selinuxfs` is mounted) on the host and the rootfs MUST be created by running the `rootfs.sh` with `SELINUX=yes`. ``` $ script -fec 'sudo -E USEDOCKER=true SELINUX=yes ./imagebuilder.sh ${ROOTFS_DIR}' ``` Currently, the `imagebuilder.sh` uses `chcon` as an interim solution in order to apply `containerruntimeexect` to the `kata-agent`. Hence, if you run `restorecon` to the guest image after running the `image_builder.sh`, the `kata-agent` needs to be labeled `containerruntimeexec_t` again by yourself. Notes: You must ensure that the default Docker runtime is `runc` to make use of the `USE_DOCKER` variable. If that is not the case, remove the variable from the previous" }, { "data": "See . If you do not wish to build under Docker, remove the `USE_DOCKER` variable in the previous command and ensure the `qemu-img` command is available on your system. If `qemu-img` is not installed, you will likely see errors such as `ERROR: File /dev/loop19p1 is not a block device` and `losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools`. These can be mitigated by installing the `qemu-img` command (available in the `qemu-img` package on Fedora or the `qemu-utils` package on Debian). If `loop` module is not probed, you will likely see errors such as `losetup: cannot find an unused loop device`. Execute `modprobe loop` could resolve it. ```bash $ pushd kata-containers/tools/osbuilder/image-builder $ commit=\"$(git log --format=%h -1 HEAD)\" $ date=\"$(date +%Y-%m-%d-%T.%N%z)\" $ image=\"kata-containers-${date}-${commit}\" $ sudo install -o root -g root -m 0640 -D kata-containers.img \"/usr/share/kata-containers/${image}\" $ (cd /usr/share/kata-containers && sudo ln -sf \"$image\" kata-containers.img) $ popd ``` ```bash $ export distro=\"ubuntu\" # example $ export ROOTFS_DIR=\"$(realpath kata-containers/tools/osbuilder/rootfs-builder/rootfs)\" $ sudo rm -rf \"${ROOTFS_DIR}\" $ pushd kata-containers/tools/osbuilder/rootfs-builder/ $ script -fec 'sudo -E AGENTINIT=yes USEDOCKER=true ./rootfs.sh \"${distro}\"' $ popd ``` `AGENT_INIT` controls if the guest image uses the Kata agent as the guest `init` process. When you create an initrd image, always set `AGENT_INIT` to `yes`. You MUST choose a distribution (e.g., `ubuntu`) for `${distro}`. You can get a supported distributions list in the Kata Containers by running the following. ```bash $ ./kata-containers/tools/osbuilder/rootfs-builder/rootfs.sh -l ``` If you want to build the agent without seccomp capability, you need to run the `rootfs.sh` script with `SECCOMP=no` as follows. 
```bash $ script -fec 'sudo -E AGENTINIT=yes USEDOCKER=true SECCOMP=no ./rootfs.sh \"${distro}\"' ``` Note: Check the before creating rootfs. Optionally, add your custom agent binary to the rootfs with the following commands. The default `$LIBC` used is `musl`, but on ppc64le and s390x, `gnu` should be used. Also, Rust refers to ppc64le as `powerpc64le`: ```bash $ export ARCH=\"$(uname -m)\" $ [ \"${ARCH}\" == \"ppc64le\" ] || [ \"${ARCH}\" == \"s390x\" ] && export LIBC=gnu || export LIBC=musl $ [ \"${ARCH}\" == \"ppc64le\" ] && export ARCH=powerpc64le $ sudo install -o root -g root -m 0550 -T \"${ROOTFSDIR}/../../../../src/agent/target/${ARCH}-unknown-linux-${LIBC}/release/kata-agent\" \"${ROOTFSDIR}/sbin/init\" ``` ```bash $ pushd kata-containers/tools/osbuilder/initrd-builder $ script -fec 'sudo -E AGENTINIT=yes USEDOCKER=true ./initrdbuilder.sh \"${ROOTFSDIR}\"' $ popd ``` ```bash $ pushd kata-containers/tools/osbuilder/initrd-builder $ commit=\"$(git log --format=%h -1 HEAD)\" $ date=\"$(date +%Y-%m-%d-%T.%N%z)\" $ image=\"kata-containers-initrd-${date}-${commit}\" $ sudo install -o root -g root -m 0640 -D kata-containers-initrd.img \"/usr/share/kata-containers/${image}\" $ (cd /usr/share/kata-containers && sudo ln -sf \"$image\" kata-containers-initrd.img) $ popd ``` You can build and install the guest kernel image as shown . When setting up Kata using a , the `QEMU` VMM is installed automatically. Cloud-Hypervisor, Firecracker and StratoVirt VMMs are available from the , as well as through . You may choose to manually build your VMM/hypervisor. Kata Containers makes use of upstream QEMU branch. The exact version and repository utilized can be found by looking at the . Find the correct version of QEMU from the versions file: ```bash $ source kata-containers/tools/packaging/scripts/lib.sh $ qemuversion=\"$(getfromkatadeps \"assets.hypervisor.qemu.version\")\" $ echo \"${qemu_version}\" ``` Get source from the matching branch of QEMU: ```bash $ git clone -b \"${qemu_version}\" https://github.com/qemu/qemu.git $ yourqemudirectory=\"$(realpath qemu)\" ``` There are scripts to manage the build and packaging of" }, { "data": "For the examples below, set your environment as: ```bash $ packaging_dir=\"$(realpath kata-containers/tools/packaging)\" ``` Kata often utilizes patches for not-yet-upstream and/or backported fixes for components, including QEMU. These can be found in the , and it's recommended that you apply them. For example, suppose that you are going to build QEMU version 5.2.0, do: ```bash $ \"$packagingdir/scripts/applypatches.sh\" \"$packaging_dir/qemu/patches/5.2.x/\" ``` To build utilizing the same options as Kata, you should make use of the `configure-hypervisor.sh` script. For example: ```bash $ pushd \"$yourqemudirectory\" $ \"$packaging_dir/scripts/configure-hypervisor.sh\" kata-qemu > kata.cfg $ eval ./configure \"$(cat kata.cfg)\" $ make -j $(nproc --ignore=1) $ sudo -E make install $ popd ``` If you do not want to install the respective QEMU version, the configuration file can be modified to point to the correct binary. In `/etc/kata-containers/configuration.toml`, change `path = \"/path/to/qemu/build/qemu-system-x86_64\"` to point to the correct QEMU binary. See the for a reference on how to get, setup, configure and build QEMU for Kata. Note: You should only do this step if you are on aarch64/arm64. You should include which are under upstream review for supporting NVDIMM on aarch64. 
You could build the custom `qemu-system-aarch64` as required with the following command: ```bash $ git clone https://github.com/kata-containers/tests.git $ script -fec 'sudo -E tests/.ci/install_qemu.sh' ``` When using the file system type virtio-fs (default), `virtiofsd` is required ```bash $ pushd kata-containers/tools/packaging/static-build/virtiofsd $ ./build.sh $ popd ``` Modify `/etc/kata-containers/configuration.toml` and update value `virtiofsdaemon = \"/path/to/kata-containers/tools/packaging/static-build/virtiofsd/virtiofsd/virtiofsd\"` to point to the binary. You can check if your system is capable of creating a Kata Container by running the following: ```bash $ sudo kata-runtime check ``` If your system is not able to run Kata Containers, the previous command will error out and explain why. Refer to the how-to guide. Refer to the how-to guide. If you are unable to create a Kata Container first ensure you have before attempting to create a container. Then run the script and paste its output directly into a . Note: The `kata-collect-data.sh` script is built from the repository. To perform analysis on Kata logs, use the tool, which can convert the logs into formats (e.g. JSON, TOML, XML, and YAML). See . ```bash $ sudo docker info 2>/dev/null | grep -i \"default runtime\" | cut -d: -f2- | grep -q runc && echo \"SUCCESS\" || echo \"ERROR: Incorrect default Docker runtime\" ``` Kata containers provides two ways to connect to the guest. One is using traditional login service, which needs additional works. In contrast the simple debug console is easy to setup. Kata Containers 2.0 supports a shell simulated console for quick debug purpose. This approach uses VSOCK to connect to the shell running inside the guest which the agent starts. This method only requires the guest image to contain either `/bin/sh` or `/bin/bash`. Enable debugconsoleenabled in the `configuration.toml` configuration file: ```toml [agent.kata] debugconsoleenabled = true ``` This will pass `agent.debugconsole agent.debugconsole_vport=1026` to agent as kernel parameters, and sandboxes created using this parameters will start a shell in guest if new connection is accept from VSOCK. For Kata Containers `2.0.x` releases, the `kata-runtime exec` command depends on the`kata-monitor` running, in order to get the sandbox's `vsock` address to connect to. Thus, first start the `kata-monitor` process. ```bash $ sudo kata-monitor ``` `kata-monitor` will serve at `localhost:8090` by default. You need to start a container for example: ```bash $ sudo ctr run --runtime io.containerd.kata.v2 -d" }, { "data": "testdebug ``` Then, you can use the command `kata-runtime exec <sandbox id>` to connect to the debug console. ``` $ kata-runtime exec testdebug bash-4.2# id uid=0(root) gid=0(root) groups=0(root) bash-4.2# pwd / bash-4.2# exit exit ``` `kata-runtime exec` has a command-line option `runtime-namespace`, which is used to specify under which the particular pod was created. By default, it is set to `k8s.io` and works for containerd when configured with Kubernetes. For CRI-O, the namespace should set to `default` explicitly. This should not be confused with . For other CRI-runtimes and configurations, you may need to set the namespace utilizing the `runtime-namespace` option. If you want to access guest OS through a traditional way, see . By default you cannot login to a virtual machine, since this can be sensitive from a security perspective. 
Also, allowing logins would require additional packages in the rootfs, which would increase the size of the image used to boot the virtual machine. If you want to login to a virtual machine that hosts your containers, complete the following steps (using rootfs or initrd image). Note: The following debug console instructions assume a systemd-based guest O/S image. This means you must create a rootfs for a distro that supports systemd. Currently, all distros supported by support systemd except for Alpine Linux. Look for `INIT_PROCESS=systemd` in the `config.sh` osbuilder rootfs config file to verify an osbuilder distro supports systemd for the distro you want to build rootfs for. For an example, see the . For a non-systemd-based distro, create an equivalent system service using that distros init system syntax. Alternatively, you can build a distro that contains a shell (e.g. `bash(1)`). In this circumstance it is likely you need to install additional packages in the rootfs and add agent.debug_console to kernel parameters in the runtime config file. This tells the Kata agent to launch the console directly. Once these steps are taken you can connect to the virtual machine using the . To login to a virtual machine, you must or containing a shell such as `bash(1)`. For Clear Linux, you will need an additional `coreutils` package. For example using CentOS: ```bash $ pushd kata-containers/tools/osbuilder/rootfs-builder $ export ROOTFS_DIR=\"$(realpath ./rootfs)\" $ script -fec 'sudo -E USEDOCKER=true EXTRAPKGS=\"bash coreutils\" ./rootfs.sh centos' ``` Follow the instructions in the section when using rootfs, or when using initrd, complete the steps in the section. Install the image: Note: When using an initrd image, replace the below rootfs image name `kata-containers.img` with the initrd image name `kata-containers-initrd.img`. ```bash $ name=\"kata-containers-centos-with-debug-console.img\" $ sudo install -o root -g root -m 0640 kata-containers.img \"/usr/share/kata-containers/${name}\" $ popd ``` Next, modify the `image=` values in the `[hypervisor.qemu]` section of the to specify the full path to the image name specified in the previous code section. Alternatively, recreate the symbolic link so it points to the new debug image: ```bash $ (cd /usr/share/kata-containers && sudo ln -sf \"$name\" kata-containers.img) ``` Note: You should take care to undo this change after you finish debugging to avoid all subsequently created containers from using the debug image. Create a container as normal. For example using `crictl`: ```bash $ sudo crictl run -r kata container.yaml pod.yaml ``` The steps required to enable debug console for QEMU slightly differ with those for firecracker / cloud-hypervisor. Add `agent.debug_console` to the guest kernel command line to allow the agent process to start a debug" }, { "data": "```bash $ sudo sed -i -e 's/^kernelparams = \"\\(.*\\)\"/kernelparams = \"\\1 agent.debugconsole\"/g' \"${kataconfiguration_file}\" ``` Here `kataconfigurationfile` could point to `/etc/kata-containers/configuration.toml` or `/usr/share/defaults/kata-containers/configuration.toml` or `/opt/kata/share/defaults/kata-containers/configuration-{hypervisor}.toml`, if you installed Kata Containers using `kata-deploy`. Slightly different configuration is required in case of firecracker and cloud hypervisor. Firecracker and cloud-hypervisor don't have a UNIX socket connected to `/dev/console`. Hence, the kernel command line option `agent.debug_console` will not work for them. 
These hypervisors support `hybrid vsocks`, which can be used for communication between the host and the guest. The kernel command line option `agent.debugconsolevport` was added to allow developers specify on which `vsock` port the debugging console should be connected. Add the parameter `agent.debugconsolevport=1026` to the kernel command line as shown below: ```bash sudo sed -i -e 's/^kernelparams = \"\\(.*\\)\"/kernelparams = \"\\1 agent.debugconsolevport=1026\"/g' \"${kataconfigurationfile}\" ``` Note Ports 1024 and 1025 are reserved for communication with the agent and gathering of agent logs respectively. Next, connect to the debug console. The VSOCKS paths vary slightly between each VMM solution. In case of cloud-hypervisor, connect to the `vsock` as shown: ```bash $ sudo su -c 'cd /var/run/vc/vm/${sandbox_id}/root/ && socat stdin unix-connect:clh.sock' CONNECT 1026 ``` Note: You need to type `CONNECT 1026` and press `RETURN` key after entering the `socat` command. For firecracker, connect to the `hvsock` as shown: ```bash $ sudo su -c 'cd /var/run/vc/firecracker/${sandbox_id}/root/ && socat stdin unix-connect:kata.hvsock' CONNECT 1026 ``` Note: You need to press the `RETURN` key to see the shell prompt. For QEMU, connect to the `vsock` as shown: ```bash $ sudo su -c 'cd /var/run/vc/vm/${sandbox_id} && socat \"stdin,raw,echo=0,escape=0x11\" \"unix-connect:console.sock\"' ``` To disconnect from the virtual machine, type `CONTROL+q` (hold down the `CONTROL` key and press `q`). For developers interested in using a debugger with the runtime, please look at . If the image is created using , the following YAML file exists and contains details of the image and how it was created: ```bash $ cat /var/lib/osbuilder/osbuilder.yaml ``` Sometimes it is useful to capture the kernel boot messages from a Kata Container launch. If the container launches to the point whereby you can `exec` into it, and if the container has the necessary components installed, often you can execute the `dmesg` command inside the container to view the kernel boot logs. If however you are unable to `exec` into the container, you can enable some debug options to have the kernel boot messages logged into the system journal. Set `enable_debug = true` in the `[hypervisor.qemu]` and `[runtime]` sections For generic information on enabling debug in the configuration file, see the section. The kernel boot messages will appear in the `kata` logs (and in the `containerd` or `CRI-O` log appropriately). such as: ```bash $ sudo journalctl -t kata -- Logs begin at Thu 2020-02-13 16:20:40 UTC, end at Thu 2020-02-13 16:30:23 UTC. -- ... 
time=\"2020-09-15T14:56:23.095113803+08:00\" level=debug msg=\"reading guest console\" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole=\"[ 0.395399] brd: module loaded\" time=\"2020-09-15T14:56:23.102633107+08:00\" level=debug msg=\"reading guest console\" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole=\"[ 0.402845] random: fast init done\" time=\"2020-09-15T14:56:23.103125469+08:00\" level=debug msg=\"reading guest console\" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole=\"[ 0.403544] random: crng init done\" time=\"2020-09-15T14:56:23.105268162+08:00\" level=debug msg=\"reading guest console\" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole=\"[ 0.405599] loop: module loaded\" time=\"2020-09-15T14:56:23.121121598+08:00\" level=debug msg=\"reading guest console\" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole=\"[ 0.421324] memmapinitzone_device initialised 32768 pages in 12ms\" ... ``` Refer to the which is useful to fetch these." } ]
{ "category": "Runtime", "file_name": "Developer-Guide.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-operator-alibabacloud completion fish | source To load completions for every new session, execute once: cilium-operator-alibabacloud completion fish > ~/.config/fish/completions/cilium-operator-alibabacloud.fish You will need to start a new shell for this setup to take effect. ``` cilium-operator-alibabacloud completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
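As a convenience, the completion-loading steps described above can be run directly; this is simply the text's own commands grouped into one copy-paste block, to be executed from a fish shell:

```
# Current session only:
cilium-operator-alibabacloud completion fish | source

# Every new session (run once, then start a new shell):
cilium-operator-alibabacloud completion fish > ~/.config/fish/completions/cilium-operator-alibabacloud.fish
```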
{ "category": "Runtime", "file_name": "cilium-operator-alibabacloud_completion_fish.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark schedule delete\" layout: docs Delete a schedule Delete a schedule ``` ark schedule delete NAME [flags] ``` ``` -h, --help help for delete ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with schedules" } ]
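As a usage sketch (the schedule name `daily-backup` is hypothetical and not taken from this reference):

```
# Delete the schedule named "daily-backup" (hypothetical name); add -n/--namespace
# if Ark runs outside the default "heptio-ark" namespace.
ark schedule delete daily-backup
```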
{ "category": "Runtime", "file_name": "ark_schedule_delete.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Upgrading to Velero 1.4\" layout: docs Velero , or installed. Note: The v1.4.1 tag was created in code, but has no associated docker image due to misconfigured building infrastructure. v1.4.2 fixed this. If you're not yet running at least Velero v1.3, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Install the Velero v1.4 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.4.3 Git commit: <git SHA> ``` Update the container image used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.4.3 \\ --namespace velero kubectl set image daemonset/restic \\ restic=velero/velero:v1.4.3 \\ --namespace velero ``` Update the Velero custom resource definitions (CRDs) to include the new backup progress fields: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: If you are upgrading Velero in Kubernetes 1.14.x or earlier, you will need to use `kubectl apply`'s `--validate=false` option when applying the CRD configuration above. See and for more context. Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.4.3 Git commit: <git SHA> Server: Version: v1.4.3 ```" } ]
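On Kubernetes 1.14.x or earlier, the CRD update from the note above would look roughly as follows. This is a sketch that simply combines the documented command with the documented `--validate=false` option; verify it against your cluster before use:

```bash
velero install --crds-only --dry-run -o yaml | kubectl apply --validate=false -f -
```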
{ "category": "Runtime", "file_name": "upgrade-to-1.4.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Introduces support for Go modules. The `v4` version will be backwards compatible with `v3.x.y`. Starting from this release, we are adopting the policy to support the most 2 recent versions of Go currently available. By the time of this release, this is Go 1.15 and 1.16 (). Fixed a potential issue that could occur when the verification of `exp`, `iat` or `nbf` was not required and contained invalid contents, i.e. non-numeric/date. Thanks for @thaJeztah for making us aware of that and @giorgos-f3 for originally reporting it to the formtech fork (). Added support for EdDSA / ED25519 (). Optimized allocations (). Import Path Change*: See MIGRATION_GUIDE.md for tips on updating your code Changed the import path from `github.com/dgrijalva/jwt-go` to `github.com/golang-jwt/jwt` Fixed type confusing issue between `string` and `). This fixes CVE-2020-26160 Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. Initial set include `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before. Deprecated `ParseFromRequestWithClaims` to simplify API in the future. Improvements to `jwt` command line tool Added `SkipClaimsValidation` option to `Parser` Documentation updates Compatibility Breaking Changes*: See MIGRATION_GUIDE.md for tips on updating your code Dropped support for `[]byte` keys when using RSA signing methods. This convenience feature could contribute to security vulnerabilities involving mismatched key types with signing methods. `ParseFromRequest` has been moved to `request` subpackage and usage has changed The `Claims` property on `Token` is now type `Claims` instead of `map[string]interface{}`. The default value is type `MapClaims`, which is an alias to `map[string]interface{}`. This makes it possible to use a custom type when decoding claims. Other Additions and Changes Added `Claims` interface type to allow users to decode the claims into a custom type Added `ParseWithClaims`, which takes a third argument of type `Claims`. Use this function instead of `Parse` if you have a custom type you'd like to decode into. Dramatically improved the functionality and flexibility of `ParseFromRequest`, which is now in the `request` subpackage Added `ParseFromRequestWithClaims` which is the `FromRequest` equivalent of `ParseWithClaims` Added new interface type `Extractor`, which is used for extracting JWT strings from http requests. Used with `ParseFromRequest` and `ParseFromRequestWithClaims`. 
Added several new, more specific, validation errors to error type bitmask Moved examples from README to executable example files Signing method registry is now thread safe Added new property to `ValidationError`, which contains the raw error returned by calls made by parse/verify (such as those returned by keyfunc or json parser) This will likely be the last backwards compatible release before 3.0.0, excluding essential bug" }, { "data": "Added new option `-show` to the `jwt` command that will just output the decoded token without verifying Error text for expired tokens includes how long it's been expired Fixed incorrect error returned from `ParseRSAPublicKeyFromPEM` Documentation updates Exposed inner error within ValidationError Fixed validation errors when using UseJSONNumber flag Added several unit tests Added support for signing method none. You shouldn't use this. The API tries to make this clear. Updated/fixed some documentation Added more helpful error message when trying to parse tokens that begin with `BEARER ` Added new type, Parser, to allow for configuration of various parsing parameters You can now specify a list of valid signing methods. Anything outside this set will be rejected. You can now opt to use the `json.Number` type instead of `float64` when parsing token JSON Added support for Fixed some bugs with ECDSA parsing Added support for ECDSA signing methods Added support for RSA PSS signing methods (requires go v1.4) Gracefully handle a `nil` `Keyfunc` being passed to `Parse`. Result will now be the parsed token and an error, instead of a panic. Backwards compatible API change that was missed in 2.0.0. The `SignedString` method on `Token` now takes `interface{}` instead of `[]byte` There were two major reasons for breaking backwards compatibility with this update. The first was a refactor required to expand the width of the RSA and HMAC-SHA signing implementations. There will likely be no required code changes to support this change. The second update, while unfortunately requiring a small change in integration, is required to open up this library to other signing methods. Not all keys used for all signing methods have a single standard on-disk representation. Requiring `[]byte` as the type for all keys proved too limiting. Additionally, this implementation allows for pre-parsed tokens to be reused, which might matter in an application that parses a high volume of tokens with a small set of keys. Backwards compatibilty has been maintained for passing `[]byte` to the RSA signing methods, but they will also accept `rsa.PublicKey` and `rsa.PrivateKey`. It is likely the only integration change required here will be to change `func(t jwt.Token) ([]byte, error)` to `func(t jwt.Token) (interface{}, error)` when calling `Parse`. Compatibility Breaking Changes* `SigningMethodHS256` is now `SigningMethodHMAC` instead of `type struct` `SigningMethodRS256` is now `SigningMethodRSA` instead of `type struct` `KeyFunc` now returns `interface{}` instead of `[]byte` `SigningMethod.Sign` now takes `interface{}` instead of `[]byte` for the key `SigningMethod.Verify` now takes `interface{}` instead of `[]byte` for the key Renamed type `SigningMethodHS256` to `SigningMethodHMAC`. Specific sizes are now just instances of this type. Added public package global `SigningMethodHS256` Added public package global `SigningMethodHS384` Added public package global `SigningMethodHS512` Renamed type `SigningMethodRS256` to `SigningMethodRSA`. Specific sizes are now just instances of this type. 
Added public package global `SigningMethodRS256` Added public package global `SigningMethodRS384` Added public package global `SigningMethodRS512` Moved sample private key for HMAC tests from an inline value to a file on disk. Value is unchanged. Refactored the RSA implementation to be easier to read Exposed helper methods `ParseRSAPrivateKeyFromPEM` and `ParseRSAPublicKeyFromPEM` Fixed bug in parsing public keys from certificates Added more tests around the parsing of keys for RS256 Code refactoring in RS256 implementation. No functional changes Fixed panic if RS256 signing method was passed an invalid key First versioned release API stabilized Supports creating, signing, parsing, and validating JWT tokens Supports RS256 and HS256 signing methods" } ]
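To make the `Claims` / `ParseWithClaims` changes described in the 2.0.0 notes concrete, here is a small self-contained sketch. The HMAC key, claim values, and choice of `MapClaims` are illustrative assumptions; the import path is the post-fork `github.com/golang-jwt/jwt` path mentioned in the 3.2.1 notes (older code would import `github.com/dgrijalva/jwt-go` instead).

```go
package main

import (
	"fmt"

	"github.com/golang-jwt/jwt"
)

func main() {
	// Made-up HMAC key for illustration only.
	key := []byte("example-hmac-key")

	// Create and sign a token using MapClaims, the default Claims type noted above.
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{"sub": "example-user"})
	signed, err := token.SignedString(key)
	if err != nil {
		panic(err)
	}

	// Parse it back with ParseWithClaims; the Keyfunc returns interface{},
	// matching the 2.0.0 signature change described above.
	claims := jwt.MapClaims{}
	parsed, err := jwt.ParseWithClaims(signed, claims, func(t *jwt.Token) (interface{}, error) {
		// A real Keyfunc should also check t.Method before returning the key.
		return key, nil
	})
	if err != nil || !parsed.Valid {
		panic(err)
	}
	fmt.Println("subject:", claims["sub"])
}
```

Passing a `MapClaims` value keeps the pre-2.0 map-style access, while a custom struct implementing the `Claims` interface can be substituted for typed decoding.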
{ "category": "Runtime", "file_name": "VERSION_HISTORY.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "The GNU Lesser General Public License (LGPL v2.1) is applicable to the following component(s): libc.so.6 libpthread.so.0 nestybox/libseccomp Conditions for section 6 of this license are met by disclosure of the Sysbox source code in this repo. ``` GNU Lesser General Public License Version 2.1, February 1999 Copyright 1991, 1999 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. This is the first released version of the Lesser GPL. It also counts as the successor of the GNU Library Public License, version 2, hence the version number 2.1. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This license, the Lesser General Public License, applies to some specially designated software packages--typically libraries--of the Free Software Foundation and other authors who decide to use it. You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below. When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things. To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it. For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights. We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library. To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others. Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent" }, { "data": "Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license. 
Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs. When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library. We call this license the Lesser General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances. For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License. In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system. Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library. The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a work based on the library and a work that uses the library. The former contains code derived from the library, whereas the latter must be combined with the library in order to run. TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called this License). Each licensee is addressed as you. A library means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables. The Library, below, refers to any such software library or work which has been distributed under these terms. A work based on the Library means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. 
(Hereinafter, translation is included without limitation in the term" }, { "data": "Source code for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library. Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) The modified work must itself be a software library. b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change. c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License. d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful. (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. 
But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote" }, { "data": "Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library. In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices. Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy. This option is useful when you wish to copy part of the code of the Library into a program that is not a library. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange. If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a work that uses the Library. Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License. However, linking a work that uses the Library with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a work that uses the library. The executable is therefore covered by this License. Section 6 states terms for distribution of such executables. When a work that uses the Library uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law. 
If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section" }, { "data": "Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself. As an exception to the Sections above, you may also combine or link a work that uses the Library with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications. You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things: a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable work that uses the Library, as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.) b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with. c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution. d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place. e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy. For an executable, the required form of the work that uses the Library must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you" }, { "data": "You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above. b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties with this License. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances. 
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license" }, { "data": "Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and any later version, you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Libraries If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License). To apply these terms, attach the following notices to the" }, { "data": "It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the copyright line and a pointer to where the full notice is found. <one line to give the library's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. You should also get your employer (if you work as a programmer) or your school, if any, to sign a copyright disclaimer for the library, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker. <signature of Ty Coon>, 1 April 1990 Ty Coon, President of Vice That's all there is to it! ``` BSD 2-clause \"Simplified\" License is applicable to the following component(s): ``` Copyright 2013 Suryandaru Triandana <syndtr@gmail.com> All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2013, Georg Reinke (<guelfey at gmail dot com>), Google All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE" }, { "data": "IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2015, Dave Cheney <dave@cheney.net> All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2015 Matthew Heon <mheon@redhat.com> Copyright (c) 2015 Paul Moore <pmoore@redhat.com> All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Blackfriday is distributed under the Simplified BSD License: Copyright 2011 Russ Ross All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE" }, { "data": "IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` BSD 2-Clause License Copyright (c) 2017, Karrick McDermott All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` BSD 3-clause \"New\" or \"Revised\" License is applicable to the following component(s): ``` Copyright (c) 2009 The Go Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved. Copyright (C) 2017 SUSE LLC. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google" }, { "data": "nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright 2008 Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Code generated by the Protocol Buffer compiler is owned by the owner of the input file used when generating it. This code is not standalone and requires a support library to be linked with it. This support library is itself covered by the above license. ``` ``` Copyright (c) 2014 Will Fitzgerald. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE" }, { "data": "IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2013, Patrick Mezard All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. The names of its contributors may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2012 The Go Authors. All rights reserved. Copyright (c) 2012-2019 fsnotify Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2013, The GoGo Authors. All rights reserved. Protocol Buffers for Go with Gadgets Go support for Protocol Buffers - Google's data interchange format Copyright 2010 The Go Authors. All rights reserved. https://github.com/golang/protobuf Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google" }, { "data": "nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright 2010 The Go Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2009,2014 Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ``` Copyright (c) 2013-2019 Tommi" }, { "data": "Copyright (c) 2009, 2011, 2012 The Go Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
The following included software components have additional copyright notices and license terms that may differ from the above. File fuse.go: // Adapted from Plan 9 from User Space's src/cmd/9pfuse/fuse.c, // which carries this notice: // // The files in this directory are subject to the following license. // // The author of this software is Russ Cox. // // Copyright (c) 2006 Russ Cox // // Permission to use, copy, modify, and distribute this software for any // purpose without fee is hereby granted, provided that this entire notice // is included in all copies of any software which is or includes a copy // or modification of this software and in all copies of the supporting // documentation for such software. // // THIS SOFTWARE IS BEING PROVIDED \"AS IS\", WITHOUT ANY EXPRESS OR IMPLIED // WARRANTY. IN PARTICULAR, THE AUTHOR MAKES NO REPRESENTATION OR WARRANTY // OF ANY KIND CONCERNING THE MERCHANTABILITY OF THIS SOFTWARE OR ITS // FITNESS FOR ANY PARTICULAR PURPOSE. File fuse_kernel.go: // Derived from FUSE's fuse_kernel.h /* This file defines the kernel interface of FUSE Copyright (C) 2001-2007 Miklos Szeredi <miklos@szeredi.hu> This -- and only this -- header file may also be distributed under the terms of the BSD Licence as follows: Copyright (C) 2001-2007 Miklos Szeredi. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE" }, { "data": "IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ ``` Mozilla Public License 2.0 is applicable to the following component(s): ``` Mozilla Public License, version 2.0 Definitions 1.1. \"Contributor\" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. \"Contributor Version\" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. 1.3. \"Contribution\" means Covered Software of a particular Contributor. 1.4. \"Covered Software\" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. \"Incompatible With Secondary Licenses\" means a. that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or b. 
that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. \"Executable Form\" means any form of the work other than Source Code Form. 1.7. \"Larger Work\" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8. \"License\" means this document. 1.9. \"Licensable\" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. \"Modifications\" means any of the following: a. any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or b. any new file in Source Code Form that contains any Covered Software. 1.11. \"Patent Claims\" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. \"Secondary License\" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. \"Source Code Form\" means the form of the work preferred for making modifications. 1.14. \"You\" (or \"Your\") means an individual or a legal entity exercising rights under this License. For legal entities, \"You\" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, \"control\" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. License Grants and Conditions 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: a. under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and b. under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date The licenses granted in Section" }, { "data": "with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: a. for any code that a Contributor has removed from Covered Software; or b. for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or c. 
under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. Responsibilities 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: a. such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and b. You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s)." }, { "data": "Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. 
You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. Inability to Comply Due to Statute or Regulation If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. Termination 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. Disclaimer of Warranty Covered Software is provided under this License on an \"as is\" basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or" }, { "data": "This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. 
Limitation of Liability Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. Litigation Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. Miscellaneous This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. Versions of the License 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. 
Exhibit B - \"Incompatible With Secondary Licenses\" Notice This Source Code Form is \"Incompatible With Secondary Licenses\", as defined by the Mozilla Public License," } ]
{ "category": "Runtime", "file_name": "OSS_DISCLOSURES.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "Before continuing, make sure you've read the This is a (probably incomplete) list of software used to develop the operator: ...the operator is written in it... Provides all the plumbing and project structure Used to ensure all files are formatted, generated code is up to date and more Used for code formatting Lints for go code Some additional software you may find useful: can create virtual machines (for example a virtual kubernetes cluster) Commit hooks ensure that all new and changed code is formatted and tested before committing. To set up commit hooks install `pre-commit` (see above) and run: ``` $ pre-commit install ``` Now, if you try to commit a file that for example is not properly formatted, you will receive a message like: ``` $ git commit -a -m \"important fix; no time to check for errors\" Trim Trailing Whitespace.................................................Passed Fix End of Files.........................................................Passed Check Yaml...........................................(no files to check)Skipped Check for added large files..............................................Passed gofumpt..................................................................Failed hook id: gofumpt exit code: 1 files were modified by this hook version/version.go golangci-lint............................................................Failed hook id: golangci-lint exit code: 1 ``` Use `make test` to run our test suite. It will download the required control plane binaries to execute the webhook and controller tests. For basic unit testing use the basic go test framework. If something you want to test relies on the Kubernetes API, check out the test suite for the As of right now, there is no recommended way to end-to-end test the operator. It probably involves some virtual machines running a basic kubernetes cluster. On every pull request, we run a set of tests, specified in `.github/workflows`. The checks include `go test` `golandci-lint` `pre-commit run` This repository enforces for every commit. Every commit needs a `Signed-off-by` line. You can add them by passing the `-s` flag to `git commit`: ``` $ git commit -s -m 'This is my commit message' ```" } ]
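As a footnote to the testing section above: to iterate on a single change without the full `make test` round trip, the standard Go and pre-commit tooling can be used directly. This is a minimal sketch — the test name is illustrative, not taken from this repository:

```
$ go test ./... -run TestReconcile -v   # run only tests whose name matches, across all packages
$ pre-commit run --all-files            # run every commit hook without creating a commit
```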
{ "category": "Runtime", "file_name": "DEVELOPMENT.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "This week I (@crosbymichael) have been working on the OCI runtime specifications. Since containerd is to be fully compliant with the OCI specs we need to make sure we reach a 1.0 and have a solid base to build on. We are getting very close on the spec side and have a final rc out for review. If you have time please check it out. I have also been working on the `runc` side of things, fixing terminal handling for `exec` and implementing the changes in our go bindings. I have also worked on adding a global process monitor/reaper to containerd. The issues before with having a `SIGCHLD` reaper is that the Go `exec.Cmd` and especially its `Wait` method did not play well together. This would cause races between the reaper doing a `waitpid` and the `exec.Cmd` doing a `Wait`. I think we solved this problem fully now as long as code is written against the `reaper` api. It is a little more of a burden on developers of containerd but makes it much more robust when dealing with processes being reparented to `containerd` and its `shim`. We merged a PR making the snapshot registration dynamic. This allows users to compile containerd with additional shapshotters than the ones we have builtin to the core. Because of the current state of Go 1.8 plugins the best course of action for adding additional runtimes, snapshotters, and other extensions is to modify the `builtins.go` file for the `containerd` command and recompile. Hopefully the short comings will be fixed in later Go releases. ```go package main // register containerd builtins here import ( _ \"github.com/containerd/containerd/linux\" _ \"github.com/containerd/containerd/services/content\" _ \"github.com/containerd/containerd/services/execution\" _ \"github.com/containerd/containerd/snapshot/btrfs\" _ \"github.com/containerd/containerd/snapshot/overlay\" ) ``` We are working towards being feature complete mid April, with a Q2 1.0. The Beta milestone on github should reflect this goal and the dates associated with it. After we hit a feature complete state we hope to finish the Docker, swarm, and kube CRI integrations to make sure that we provide the feature set that is required for each, before locking down the APIs. There is going to be another containerd summit at Dockecon this year. I created a document in the repository so that everyone can add discussion points for the breakout sessions. We should have a much larger crowd than the previous summit, therefore, having a few pre-defined discussion points will help. We will still have ad-hoc discussions but it never hurts to be prepared. We are still moving forward so that we can have the distribution, content, and snapshotters all working together over the API. One of the challenges is to make sure things work well over GRPC. We need to be able to support things like `docker build` and making sure that it performs as good or better than what you expect." } ]
{ "category": "Runtime", "file_name": "2017-03-10.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "It provides feature of managing blocks used below a given namespace. (Expects the inodes to be linked to the proper namespace). It follows a simple logic for quota implementation: I manage only the local data, but if someone 'trusted' tells me, there is more usage in other places, I will consider it. I don't preserve the information about cluster view, so, everytime I am restarted, someone should give me global data to be proper. This translator is designed to be sitting on brick side (ie, somewhere above `storage/posix` in same graph). More on the reasons for this translator is given in One need to mark a directory as namespace first, and then set Quota on top of it. `setfattr -n trusted.glusterfs.namespace -v true ${mountpoint}/directory` To set 100MiB quota limit: `setfattr -n trusted.gfs.squota.limit -v 100000000 ${mountpoint}/directory` Updating the hard limit is as simple as calling above xattr again. `setfattr -n trusted.gfs.squota.limit -v 500000000 ${mountpoint}/directory` Call removexattr() on the directory, with the flag. `setfattr -x trusted.gfs.squota.limit ${mountpoint}/directory` This quota feature is not complete without a global view helper process, which sees the complete data from all bricks. Idea is to run it at regular frequency on all the directories which has Quota limit set ``` ... qdir=${mountpoint}/directory used_size=$(df --block-size=1 --output=used $qdir | tail -n1); setfattr -n glusterfs.quota.total-usage -v ${used_size} $qdir; ... ``` With this, the total usage would be updated in the translator and the new value would be considered for quota checks." } ]
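As a quick sanity check of the setup above, the same xattrs can be read back with `getfattr` (assuming the standard attr tools are installed; `--absolute-names` avoids the leading-slash warning):

```
getfattr -n trusted.glusterfs.namespace --absolute-names ${mountpoint}/directory
getfattr -n trusted.gfs.squota.limit --absolute-names ${mountpoint}/directory
```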
{ "category": "Runtime", "file_name": "README.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "This document describes how to configure runtime options for `containerd-shim-runsc-v1`. You can find the installation instructions and minimal requirements in . The shim can be provided with a configuration file containing options to the shim itself as well as a set of flags to runsc. Here is a quick example: ```shell cat <<EOF | sudo tee /etc/containerd/runsc.toml option = \"value\" [runsc_config] flag = \"value\" EOF ``` The set of options that can be configured can be found in . Values under `[runsc_config]` can be used to set arbitrary flags to runsc. `flag = \"value\"` is converted to `--flag=\"value\"` when runsc is invoked. Run `runsc flags` so see which flags are available Next, containerd needs to be configured to send the configuration file to the shim. Starting in 1.3, containerd supports a configurable `ConfigPath` in the runtime configuration. Here is an example: ```shell cat <<EOF | sudo tee /etc/containerd/config.toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc] runtime_type = \"io.containerd.runc.v2\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runsc] runtime_type = \"io.containerd.runsc.v1\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runsc.options] TypeUrl = \"io.containerd.runsc.v1.options\" ConfigPath = \"/etc/containerd/runsc.toml\" EOF ``` When you are done, restart containerd to pick up the changes. ```shell sudo systemctl restart containerd ``` When `shim_debug` is enabled in `/etc/containerd/config.toml`, containerd will forward shim logs to its own log. You can additionally set `level = \"debug\"` to enable debug logs. To see the logs run `sudo journalctl -u containerd`. Here is a containerd configuration file that enables both options: ```shell cat <<EOF | sudo tee /etc/containerd/config.toml version = 2 [debug] level = \"debug\" [plugins.\"io.containerd.runtime.v1.linux\"] shim_debug = true [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc] runtime_type = \"io.containerd.runc.v2\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runsc] runtime_type = \"io.containerd.runsc.v1\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runsc.options] TypeUrl = \"io.containerd.runsc.v1.options\" ConfigPath = \"/etc/containerd/runsc.toml\" EOF ``` It can be hard to separate containerd messages from the shim's though. To create a log file dedicated to the shim, you can set the `logpath` and `loglevel` values in the shim configuration file: `log_path` is the directory where the shim logs will be created. `%ID%` is the path is replaced with the container ID. `log_level` sets the logs level. It is normally set to \"debug\" as there is not much interesting happening with other log levels. gVisor debug logging can be enabled by setting the `debug` and `debug-log` flag. The shim will replace \"%ID%\" with the container ID, and \"%COMMAND%\" with the runsc command (run, boot, etc.) in the path of the `debug-log` flag. Find out more about debugging in the . ```shell cat <<EOF | sudo tee /etc/containerd/runsc.toml log_path = \"/var/log/runsc/%ID%/shim.log\" log_level = \"debug\" [runsc_config] debug = \"true\" debug-log = \"/var/log/runsc/%ID%/gvisor.%COMMAND%.log\" EOF ```" } ]
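With the paths configured above, each container gets its own directory under `/var/log/runsc/`. A simple way to inspect the logs for a given container (the container ID below is a placeholder):

```shell
sudo ls /var/log/runsc/<container-id>/
sudo tail -f /var/log/runsc/<container-id>/shim.log
```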
{ "category": "Runtime", "file_name": "configuration.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "<p>Packages:</p> <ul> <li> <a href=\"#cr%2fv1alpha1\">cr/v1alpha1</a> </li> </ul> <h2 id=\"cr/v1alpha1\">cr/v1alpha1</h2> <p> <p>Package v1alpha1 is the v1alpha1 version of the API.</p> </p> Resource Types: <ul><li> <a href=\"#cr/v1alpha1.ActionSet\">ActionSet</a> </li><li> <a href=\"#cr/v1alpha1.Blueprint\">Blueprint</a> </li><li> <a href=\"#cr/v1alpha1.Profile\">Profile</a> </li></ul> <h3 id=\"cr/v1alpha1.ActionSet\">ActionSet </h3> <p> <p>ActionSet describes kanister actions.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> cr/v1alpha1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>ActionSet</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#cr/v1alpha1.ActionSetSpec\"> ActionSetSpec </a> </em> </td> <td> <p>Spec defines the specification for the actionset. The specification includes a list of Actions to be performed. Each Action includes details about the referenced Blueprint and other objects used to perform the defined action.</p> <br/> <br/> <table> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#cr/v1alpha1.ActionSetStatus\"> ActionSetStatus </a> </em> </td> <td> <p>Status refers to the current status of the Kanister actions.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.Blueprint\">Blueprint </h3> <p> <p>Blueprint describes kanister actions.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> cr/v1alpha1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>Blueprint</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>actions</code><br/> <em> <a href=\"#cr/v1alpha1.*./pkg/apis/cr/v1alpha1.BlueprintAction\"> map[string]*./pkg/apis/cr/v1alpha1.BlueprintAction </a> </em> </td> <td> <p>Actions is the list of actions constructing the Blueprint.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.Profile\">Profile </h3> <p> <p>Profile captures information about a storage location for backup artifacts and corresponding credentials, that will be made available to a Blueprint phase.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> cr/v1alpha1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>Profile</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>location</code><br/> <em> <a href=\"#cr/v1alpha1.Location\"> Location </a> </em> </td> <td> <p>Location provides the information about the object storage that is going to be used by Kanister to upload the backup objects.</p> </td> </tr> <tr> <td> <code>credential</code><br/> <em> <a href=\"#cr/v1alpha1.Credential\"> Credential </a> </em> </td> <td> <p>Credential represents the credentials associated with the Location.</p> </td> </tr> <tr> <td> <code>skipSSLVerify</code><br/> <em> bool </em> </td> <td> <p>SkipSSLVerify is a boolean that specifies whether skipping SSL verification is allowed when operating with the Location. If omitted from the CR definition, it defaults to false</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.ActionProgress\">ActionProgress </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSetStatus\">ActionSetStatus</a>) </p> <p> <p>ActionProgress provides information on the progress of an action.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>runningPhase</code><br/> <em> string </em> </td> <td> <p>RunningPhase represents which phase of the action is being run</p> </td> </tr> <tr> <td> <code>percentCompleted</code><br/> <em> string </em> </td> <td> <p>PercentCompleted is computed by assessing the number of completed phases against the the total number of phases.</p> </td> </tr> <tr> <td> <code>lastTransitionTime</code><br/> <em> <a href=\"https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#time-v1-meta\"> Kubernetes meta/v1.Time </a> </em> </td> <td> <p>LastTransitionTime represents the last date time when the progress status was received.</p> </td> </tr> </tbody> </table> <h3" }, { "data": "</h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSet\">ActionSet</a>) </p> <p> <p>ActionSetSpec is the specification for the actionset.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>actions</code><br/> <em> <a href=\"#cr/v1alpha1.ActionSpec\"> []ActionSpec </a> </em> </td> <td> <p>Actions represents a list of Actions that need to be performed by the actionset.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.ActionSetStatus\">ActionSetStatus </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSet\">ActionSet</a>) </p> <p> <p>ActionSetStatus is the status for the actionset. This should only be updated by the controller.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>state</code><br/> <em> <a href=\"#cr/v1alpha1.State\"> State </a> </em> </td> <td> <p>State represents the current state of the actionset. There are four possible values: &ldquo;Pending&rdquo;, &ldquo;Running&rdquo;, &ldquo;Failed&rdquo;, and &ldquo;Complete&rdquo;.</p> </td> </tr> <tr> <td> <code>actions</code><br/> <em> <a href=\"#cr/v1alpha1.ActionStatus\"> []ActionStatus </a> </em> </td> <td> <p>Actions list represents the latest available observations of the current state of all the actions.</p> </td> </tr> <tr> <td> <code>error</code><br/> <em> <a href=\"#cr/v1alpha1.Error\"> Error </a> </em> </td> <td> <p>Error contains the detailed error message of an actionset failure.</p> </td> </tr> <tr> <td> <code>progress</code><br/> <em> <a href=\"#cr/v1alpha1.ActionProgress\"> ActionProgress </a> </em> </td> <td> <p>Progress provides information on the progress of a running actionset. 
This includes the percentage of completion of an actionset and the phase that is currently being executed.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.ActionSpec\">ActionSpec </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSetSpec\">ActionSetSpec</a>) </p> <p> <p>ActionSpec is the specification for a single Action.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name is the action we&rsquo;ll perform. For example: <code>backup</code> or <code>restore</code>.</p> </td> </tr> <tr> <td> <code>object</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> ObjectReference </a> </em> </td> <td> <p>Object refers to the thing we&rsquo;ll perform this action on.</p> </td> </tr> <tr> <td> <code>blueprint</code><br/> <em> string </em> </td> <td> <p>Blueprint with instructions on how to execute this action.</p> </td> </tr> <tr> <td> <code>artifacts</code><br/> <em> <a href=\"#cr/v1alpha1.Artifact\"> map[string]./pkg/apis/cr/v1alpha1.Artifact </a> </em> </td> <td> <p>Artifacts will be passed as inputs into this phase.</p> </td> </tr> <tr> <td> <code>configMaps</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> map[string]./pkg/apis/cr/v1alpha1.ObjectReference </a> </em> </td> <td> <p>ConfigMaps that we&rsquo;ll get and pass into the blueprint.</p> </td> </tr> <tr> <td> <code>secrets</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> map[string]./pkg/apis/cr/v1alpha1.ObjectReference </a> </em> </td> <td> <p>Secrets that we&rsquo;ll get and pass into the blueprint.</p> </td> </tr> <tr> <td> <code>profile</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> ObjectReference </a> </em> </td> <td> <p>Profile is use to specify the location where store artifacts and the credentials authorized to access them.</p> </td> </tr> <tr> <td> <code>podOverride</code><br/> <em> <a href=\"#cr/v1alpha1.JSONMap\"> JSONMap </a> </em> </td> <td> <p>PodOverride is used to specify pod specs that will override the default pod specs</p> </td> </tr> <tr> <td> <code>options</code><br/> <em> map[string]string </em> </td> <td> <p>Options will be used to specify additional values to be used in the Blueprint.</p> </td> </tr> <tr> <td> <code>preferredVersion</code><br/> <em> string </em> </td> <td> <p>PreferredVersion will be used to select the preferred version of Kanister functions to be executed for this action</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.ActionStatus\">ActionStatus </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSetStatus\">ActionSetStatus</a>) </p> <p> <p>ActionStatus is updated as we execute phases.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name is the action we&rsquo;ll perform. 
For example: <code>backup</code> or <code>restore</code>.</p> </td> </tr> <tr> <td> <code>object</code><br/> <em>" }, { "data": "href=\"#cr/v1alpha1.ObjectReference\"> ObjectReference </a> </em> </td> <td> <p>Object refers to the thing we&rsquo;ll perform this action on.</p> </td> </tr> <tr> <td> <code>blueprint</code><br/> <em> string </em> </td> <td> <p>Blueprint with instructions on how to execute this action.</p> </td> </tr> <tr> <td> <code>phases</code><br/> <em> <a href=\"#cr/v1alpha1.Phase\"> []Phase </a> </em> </td> <td> <p>Phases are sub-actions an are executed sequentially.</p> </td> </tr> <tr> <td> <code>artifacts</code><br/> <em> <a href=\"#cr/v1alpha1.Artifact\"> map[string]./pkg/apis/cr/v1alpha1.Artifact </a> </em> </td> <td> <p>Artifacts created by this phase.</p> </td> </tr> <tr> <td> <code>deferPhase</code><br/> <em> <a href=\"#cr/v1alpha1.Phase\"> Phase </a> </em> </td> <td> <p>DeferPhase is the phase that is executed at the end of an action irrespective of the status of other phases in the action</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.Artifact\">Artifact </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSpec\">ActionSpec</a>, <a href=\"#cr/v1alpha1.ActionStatus\">ActionStatus</a>, <a href=\"#cr/v1alpha1.BlueprintAction\">BlueprintAction</a>) </p> <p> <p>Artifact tracks objects produced by an action.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>keyValue</code><br/> <em> map[string]string </em> </td> <td> <p>KeyValue represents key-value pair artifacts produced by the action.</p> </td> </tr> <tr> <td> <code>kopiaSnapshot</code><br/> <em> string </em> </td> <td> <p>KopiaSnapshot captures the kopia snapshot information produced as a JSON string by kando command in phases of an action.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.BlueprintAction\">BlueprintAction </h3> <p> <p>BlueprintAction describes the set of phases that constitute an action.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name contains the name of the action.</p> </td> </tr> <tr> <td> <code>kind</code><br/> <em> string </em> </td> <td> <p>Kind contains the resource on which this action has to be performed.</p> </td> </tr> <tr> <td> <code>configMapNames</code><br/> <em> []string </em> </td> <td> <p>ConfigMapNames is used to specify the config map names that can be used later in the action phases.</p> </td> </tr> <tr> <td> <code>secretNames</code><br/> <em> []string </em> </td> <td> <p>List of Kubernetes secret names used in action phases.</p> </td> </tr> <tr> <td> <code>inputArtifactNames</code><br/> <em> []string </em> </td> <td> <p>InputArtifactNames is the list of Artifact names that were set from previous action and can be consumed in the current action.</p> </td> </tr> <tr> <td> <code>outputArtifacts</code><br/> <em> <a href=\"#cr/v1alpha1.Artifact\"> map[string]./pkg/apis/cr/v1alpha1.Artifact </a> </em> </td> <td> <p>OutputArtifacts is the map of rendered artifacts produced by the BlueprintAction.</p> </td> </tr> <tr> <td> <code>phases</code><br/> <em> <a href=\"#cr/v1alpha1.BlueprintPhase\"> []BlueprintPhase </a> </em> </td> <td> <p>Phases is the list of BlueprintPhases which are invoked in order when executing this action.</p> </td> </tr> <tr> <td> <code>deferPhase</code><br/> <em> <a href=\"#cr/v1alpha1.BlueprintPhase\"> BlueprintPhase </a> </em> </td> <td> <p>DeferPhase is invoked after the 
execution of Phases that are defined for an action. A DeferPhase is executed regardless of the statuses of the other phases of the action. A DeferPhase can be used for cleanup operations at the end of an action.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.BlueprintPhase\">BlueprintPhase </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.BlueprintAction\">BlueprintAction</a>) </p> <p> <p>BlueprintPhase is a an individual unit of execution.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>func</code><br/> <em> string </em> </td> <td> <p>Func is the name of a registered Kanister function.</p> </td> </tr> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name contains name of the phase.</p> </td> </tr> <tr> <td> <code>objects</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> map[string]./pkg/apis/cr/v1alpha1.ObjectReference </a> </em> </td> <td> <p>ObjectRefs represents a map of references to the Kubernetes objects that can later be used in the <code>Args</code> of the" }, { "data": "</td> </tr> <tr> <td> <code>args</code><br/> <em> map[string]interface{} </em> </td> <td> <p>Args represents a map of named arguments that the controller will pass to the Kanister function.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.Credential\">Credential </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.Profile\">Profile</a>) </p> <p> <p>Credential</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>type</code><br/> <em> <a href=\"#cr/v1alpha1.CredentialType\"> CredentialType </a> </em> </td> <td> <p>Type represents the information about how the credentials are provided for the respective object storage.</p> </td> </tr> <tr> <td> <code>keyPair</code><br/> <em> <a href=\"#cr/v1alpha1.KeyPair\"> KeyPair </a> </em> </td> <td> <p>KeyPair represents the key-value map used for the Credential of Type KeyPair.</p> </td> </tr> <tr> <td> <code>secret</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> ObjectReference </a> </em> </td> <td> <p>Secret represents the Kubernetes Secret Object used for the Credential of Type Secret.</p> </td> </tr> <tr> <td> <code>kopiaServerSecret</code><br/> <em> <a href=\"#cr/v1alpha1.KopiaServerSecret\"> KopiaServerSecret </a> </em> </td> <td> <p>KopiaServerSecret represents the secret being used by Credential of Type Kopia.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.CredentialType\">CredentialType (<code>string</code> alias)</p></h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.Credential\">Credential</a>) </p> <p> <p>CredentialType</p> </p> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>&#34;keyPair&#34;</p></td> <td></td> </tr><tr><td><p>&#34;kopia&#34;</p></td> <td></td> </tr><tr><td><p>&#34;secret&#34;</p></td> <td></td> </tr></tbody> </table> <h3 id=\"cr/v1alpha1.Error\">Error </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSetStatus\">ActionSetStatus</a>) </p> <p> <p>Error represents an error that occurred when executing an actionset.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>message</code><br/> <em> string </em> </td> <td> <p>Message is the actual error message that is displayed in case of errors.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.JSONMap\">JSONMap (<code>map[string]interface{}</code> alias)</p></h3> <p> (<em>Appears on:</em> <a 
href=\"#cr/v1alpha1.ActionSpec\">ActionSpec</a>) </p> <p> <p>JSONMap contains PodOverride specs.</p> </p> <h3 id=\"cr/v1alpha1.KeyPair\">KeyPair </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.Credential\">Credential</a>) </p> <p> <p>KeyPair</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>idField</code><br/> <em> string </em> </td> <td> <p>IDField specifies the corresponding key in the secret where the AWS Key ID value is stored.</p> </td> </tr> <tr> <td> <code>secretField</code><br/> <em> string </em> </td> <td> <p>SecretField specifies the corresponding key in the secret where the AWS Secret Key value is stored.</p> </td> </tr> <tr> <td> <code>secret</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> ObjectReference </a> </em> </td> <td> <p>Secret represents a Kubernetes Secret object storing the KeyPair credentials.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.KopiaServerSecret\">KopiaServerSecret </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.Credential\">Credential</a>) </p> <p> <p>KopiaServerSecret contains credentials to connect to Kopia server</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>username</code><br/> <em> string </em> </td> <td> <p>Username represents the username used to connect to the Kopia Server.</p> </td> </tr> <tr> <td> <code>hostname</code><br/> <em> string </em> </td> <td> <p>Hostname represents the hostname used to connect to the Kopia Server.</p> </td> </tr> <tr> <td> <code>userPassphrase</code><br/> <em> <a href=\"#cr/v1alpha1.KopiaServerSecretRef\"> KopiaServerSecretRef </a> </em> </td> <td> <p>UserPassphrase is the user password used to connect to the Kopia Server.</p> </td> </tr> <tr> <td> <code>tlsCert</code><br/> <em> <a href=\"#cr/v1alpha1.KopiaServerSecretRef\"> KopiaServerSecretRef </a> </em> </td> <td> <p>TLSCert is the certificate used to connect to the Kopia Server.</p> </td> </tr> <tr> <td> <code>connectOptions</code><br/> <em> map[string]int </em> </td> <td> <p>ConnectOptions represents a map of options which can be used to connect to the Kopia Server.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.KopiaServerSecretRef\">KopiaServerSecretRef </h3> <p> (<em>Appears on:</em>" }, { "data": "href=\"#cr/v1alpha1.KopiaServerSecret\">KopiaServerSecret</a>) </p> <p> <p>KopiaServerSecretRef refers to K8s secrets containing Kopia creds</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>key</code><br/> <em> string </em> </td> <td> <p>Key represents the corresponding key in the secret where the required credential or certificate value is stored.</p> </td> </tr> <tr> <td> <code>secret</code><br/> <em> <a href=\"#cr/v1alpha1.ObjectReference\"> ObjectReference </a> </em> </td> <td> <p>Secret is the K8s secret object where the creds related to the Kopia Server are stored.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.Location\">Location </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.Profile\">Profile</a>) </p> <p> <p>Location</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>type</code><br/> <em> <a href=\"#cr/v1alpha1.LocationType\"> LocationType </a> </em> </td> <td> <p>Type specifies the kind of object storage that would be used to upload the backup objects. 
Currently supported values are: &ldquo;GCS&rdquo;, &ldquo;S3Compliant&rdquo;, and &ldquo;Azure&rdquo;.</p> </td> </tr> <tr> <td> <code>bucket</code><br/> <em> string </em> </td> <td> <p>Bucket represents the bucket on the object storage where the backup is uploaded.</p> </td> </tr> <tr> <td> <code>endpoint</code><br/> <em> string </em> </td> <td> <p>Endpoint specifies the endpoint where the object storage is accessible at.</p> </td> </tr> <tr> <td> <code>prefix</code><br/> <em> string </em> </td> <td> <p>Prefix is the string that would be prepended to the object path in the bucket where the backup objects are uploaded.</p> </td> </tr> <tr> <td> <code>region</code><br/> <em> string </em> </td> <td> <p>Region represents the region of the bucket specified above.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.LocationType\">LocationType (<code>string</code> alias)</p></h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.Location\">Location</a>) </p> <p> <p>LocationType</p> </p> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>&#34;azure&#34;</p></td> <td></td> </tr><tr><td><p>&#34;gcs&#34;</p></td> <td></td> </tr><tr><td><p>&#34;kopia&#34;</p></td> <td></td> </tr><tr><td><p>&#34;s3Compliant&#34;</p></td> <td></td> </tr></tbody> </table> <h3 id=\"cr/v1alpha1.ObjectReference\">ObjectReference </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSpec\">ActionSpec</a>, <a href=\"#cr/v1alpha1.ActionStatus\">ActionStatus</a>, <a href=\"#cr/v1alpha1.BlueprintPhase\">BlueprintPhase</a>, <a href=\"#cr/v1alpha1.Credential\">Credential</a>, <a href=\"#cr/v1alpha1.KeyPair\">KeyPair</a>, <a href=\"#cr/v1alpha1.KopiaServerSecretRef\">KopiaServerSecretRef</a>) </p> <p> <p>ObjectReference refers to a kubernetes object.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> <em> string </em> </td> <td> <p>API version of the referent.</p> </td> </tr> <tr> <td> <code>group</code><br/> <em> string </em> </td> <td> <p>API Group of the referent.</p> </td> </tr> <tr> <td> <code>resource</code><br/> <em> string </em> </td> <td> <p>Resource name of the referent.</p> </td> </tr> <tr> <td> <code>kind</code><br/> <em> string </em> </td> <td> <p>Kind of the referent. More info: <a href=\"https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds\">https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds</a></p> </td> </tr> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name of the referent. More info: <a href=\"http://kubernetes.io/docs/user-guide/identifiers#names\">http://kubernetes.io/docs/user-guide/identifiers#names</a></p> </td> </tr> <tr> <td> <code>namespace</code><br/> <em> string </em> </td> <td> <p>Namespace of the referent. 
More info: <a href=\"http://kubernetes.io/docs/user-guide/namespaces\">http://kubernetes.io/docs/user-guide/namespaces</a></p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.Phase\">Phase </h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionStatus\">ActionStatus</a>) </p> <p> <p>Phase is subcomponent of an action.</p> </p> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name represents the name of the Blueprint phase.</p> </td> </tr> <tr> <td> <code>state</code><br/> <em> <a href=\"#cr/v1alpha1.State\"> State </a> </em> </td> <td> <p>State represents the current state of execution of the Blueprint phase.</p> </td> </tr> <tr> <td> <code>output</code><br/> <em> map[string]interface{} </em> </td> <td> <p>Output is the map of output artifacts produced by the Blueprint phase.</p> </td> </tr> </tbody> </table> <h3 id=\"cr/v1alpha1.State\">State (<code>string</code> alias)</p></h3> <p> (<em>Appears on:</em> <a href=\"#cr/v1alpha1.ActionSetStatus\">ActionSetStatus</a>, <a href=\"#cr/v1alpha1.Phase\">Phase</a>) </p> <p> <p>State is the current state of a phase of execution.</p> </p> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>&#34;complete&#34;</p></td> <td><p>StateComplete means this action or phase finished successfully.</p> </td> </tr><tr><td><p>&#34;failed&#34;</p></td> <td><p>StateFailed means this action or phase was unsuccessful.</p> </td> </tr><tr><td><p>&#34;pending&#34;</p></td> <td><p>StatePending mean this action or phase has yet to be executed.</p> </td> </tr><tr><td><p>&#34;running&#34;</p></td> <td><p>StateRunning means this action or phase is currently executing.</p> </td> </tr></tbody> </table> <hr/> <p><em> Generated with <code>gen-crd-api-reference-docs</code> . </em></p>" } ]
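For orientation, the sketch below shows how the types documented above typically fit together in an ActionSet. The field layout follows the tables above, but the `apiVersion` string, namespaces, and object names are assumptions — consult the Kanister documentation for an authoritative example:

```
cat <<EOF | kubectl create -f -
apiVersion: cr.kanister.io/v1alpha1   # assumed API group/version string
kind: ActionSet
metadata:
  generateName: backup-
  namespace: kanister                 # namespace of the controller (assumed)
spec:
  actions:
    - name: backup                    # name of a BlueprintAction
      blueprint: my-blueprint         # Blueprint containing that action
      object:                         # ObjectReference the action operates on
        kind: Deployment
        name: my-app
        namespace: default
      profile:                        # Profile with storage location and credentials
        name: my-profile
        namespace: kanister
EOF
```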
{ "category": "Runtime", "file_name": "API.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "The Sysbox test suite is made up of the following: Unit tests: written with Go's \"testing\" package. Integration tests: written using the framework. All tests run inside a privileged Docker container (i.e., Sysbox as well as the tests that exercise it execute inside that container). Host must have kernel >= 5.12. Install the following tools: Docker Make Upgrade the kernel: see https://aws.amazon.com/premiumsupport/knowledge-center/amazon-linux-2-kernel-upgrade/ Install the kernel headers into the host: ``` sudo yum install kernel-devel-$(uname -r) ``` To run the entire Sysbox test suite: ``` $ make test ``` This command runs all test targets (i.e., unit and integration tests). Without uid-shifting: ``` $ make test-sysbox ``` With uid-shifting: ``` $ make test-sysbox-shiftuid ``` It's also possible to run a specific integration test with: ``` $ make test-sysbox TESTPATH=<test-name> ``` For example, to run all sysbox-fs handler tests: ``` $ make test-sysbox TESTPATH=tests/sysfs ``` Or to run one specific handler test: ``` $ make test-sysbox TESTPATH=tests/sysfs/disable_ipv6.bats ``` To run unit tests for one of the Sysbox components (e.g., sysbox-fs, sysbox-mgr, etc.) ``` $ make test-runc $ make test-fs $ make test-mgr ``` The Sysbox integration Makefile target (`test-sysbox`) spawns a Docker privileged container using the image in tests/Dockerfile.[distro]. It then mounts the developer's Sysbox directory into the privileged container, builds and starts Sysbox inside of it, and runs the tests in directory `tests/`. These tests use the test framework, which is pre-installed in the privileged container image. In order to launch the privileged container, Docker must be present in the host and configured without userns-remap (as userns-remap is not compatible with privileged containers). In other words, make sure the `/etc/docker/daemon.json` file is not configured with the `userns-remap` option prior to running the Sysbox integration tests. In order to debug, it's very useful to launch the Docker privileged container and get a shell in it. This can be done with: ``` $ make test-shell ``` or ``` $ make test-shell-shiftuid ``` The former command configures Docker inside the test container in userns-remap mode. The latter command configures docker inside the privileged test container without userns remap, thus forcing Sysbox to use uid-shifting via the shiftfs module. From within the test shell, you can deploy a system container as usual: ``` ``` Or you can run a test with: ``` ``` The test suite creates directories on the host which it mounts into the privileged test container. The programs running inside the privileged container (e.g., docker, sysbox, etc) place data in these directories. The Sysbox test targets do not cleanup the contents of these directories so as to allow their reuse between test runs in order to speed up testing (e.g., to avoid having the test container download fresh docker images between subsequent test runs). Instead, cleanup of these directories must be done manually via the following make target: ``` $ sudo make test-cleanup ``` The target must be run as root, because some of the files being cleaned up were created by root processes inside the privileged test container." } ]
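For example — the runtime name and test path here are assumptions based on the surrounding text, not an exact transcript — launching a system container and running a single test from the test shell typically look like:

```
$ docker run --runtime=sysbox-runc -it --rm ubuntu bash   # deploy a system container
$ bats -t tests/sysfs/disable_ipv6.bats                   # run one bats test file
```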
{ "category": "Runtime", "file_name": "test.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 2 sidebar_label: \"Deploy with hwameistor-operator\" You can use hwameistor-operator to deploy and manage HwameiStor system. Perform Life Cycle Management (LCM) for HwameiStor components: LocalDiskManager LocalStorage Scheduler AdmissionController VolumeEvictor Exporter HA module Apiserver Graph UI Configure the disks for different purposes Set up the storage pools automatically by discovering the underlying disks' type (e.g. HDD, SSD) Set up the StorageClasses automatically according to the Hwameistor's configurations and capabilities Add hwameistor-operator Helm Repo ```console helm repo add hwameistor-operator https://hwameistor.io/hwameistor-operator helm repo update hwameistor-operator ``` Install HwameiStor with hwameistor-operator :::note If no available clean disk provided, the operator will not create StorageClass automatically. Operator will claim disk automatically while installing, the available disks will be added into pool of LocalStorageNode. If available clean disk provided after installing, it's needed to apply a LocalDiskClaim manually to add the disk into pool of LocalStorageNode. Once LocalStorageNode has any disk available in its pool, the operator will create StorageClass automatically. That is to say, no capacity, no StorageClass. ::: ```console helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace ``` Optional installation parameters: Disk Reserve Available clean disk will be claimed and added into pool of LocalStorageNode by default. If you want to reserve some disks for other use before installing, you can set diskReserveConfigurations by helm values. Method 1: ```console helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace \\ --set diskReserve\\[0\\].nodeName=node1 \\ --set diskReserve\\[0\\].devices={/dev/sdc\\,/dev/sdd} \\ --set diskReserve\\[1\\].nodeName=node2 \\ --set diskReserve\\[1\\].devices={/dev/sdc\\,/dev/sde} ``` This is a example to set diskReserveConfigurations by `helm install --set`, it may be hard to write `--set` options like that. If it's possible, we suggest write the diskReserveConfigurations values into a file. Method 2: ```yaml diskReserve: nodeName: node1 devices: /dev/sdc /dev/sdd nodeName: node2 devices: /dev/sdc /dev/sde ``` For example, you write values like this into a file call diskReserve.yaml, you can apply the file when running `helm install`. ```console helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace -f diskReserve.yaml ``` Enable authentication ```console helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace \\ --set apiserver.authentication.enable=true \\ --set apiserver.authentication.accessId={YourName} \\ --set apiserver.authentication.secretKey={YourPassword} ``` You can also enable authentication by editing deployment/apiserver. Install operator by using DaoCloud image registry: ```console helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace \\ --set global.hwameistorImageRegistry=ghcr.m.daocloud.io \\ --set global.k8sImageRegistry=m.daocloud.io/registry.k8s.io ```" } ]
{ "category": "Runtime", "file_name": "operator.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Container Storage Interface (CSI) link: https://github.com/vmware-tanzu/velero-plugin-for-csi objectStorage: false volumesnapshotter: true localStorage: true beta: true supportedByVeleroTeam: true This repository contains Velero plugins for snapshotting CSI backed PVCs using the CSI beta snapshot APIs. These plugins are currently in beta as of the Velero 1.4 release and will follow the CSI volumesnapshotting APIs in upstream Kubernetes to GA." } ]
{ "category": "Runtime", "file_name": "10-container-storage-interface.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This resource controls the state of a LINSTOR satellite. NOTE: This resource is not intended to be changed directly, instead it is created by the Piraeus Operator by merging all matching resources. Holds the desired state the satellite. Holds the default image registry to use for all Piraeus images. Inherited from . If empty (the default), the operator will use `quay.io/piraeusdatastore`. Holds a reference to the that controls this satellite. Holds the storage pools to configure on the node. Inherited from matching resources. Holds the properties which should be set on the node level. Inherited from matching resources. Configures a TLS secret used by the LINSTOR Satellite. Inherited from matching resources. Holds patches to apply to the Kubernetes resources. Inherited from matching resources. Reports the actual state of the satellite. The Operator reports the current state of the LINSTOR Satellite through a set of conditions. Conditions are identified by their `type`. | `type` | Explanation | |--|| | `Applied` | All Kubernetes resources were applied. | | `Available` | The LINSTOR Satellite is connected to the LINSTOR Controller | | `Configured` | Storage Pools and Properties are configured on the Satellite | | `EvacuationCompleted` | Only available when the Satellite is being deleted: Indicates progress of the eviction of resources. |" } ]
{ "category": "Runtime", "file_name": "linstorsatellite.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Bug report about: Create a report to help us improve title: '' labels: '' assignees: '' Describe the bug A clear and concise description of what the bug is. To Reproduce Steps to reproduce the behavior: Go to '...' Click on '....' Scroll down to '....' See error Expected behavior A clear and concise description of what you expected to happen. Screenshots If applicable, add screenshots to help explain your problem. Desktop (please complete the following information): OS: [e.g. iOS] Browser [e.g. chrome, safari] Version [e.g. 22] Smartphone (please complete the following information): Device: [e.g. iPhone6] OS: [e.g. iOS8.1] Browser [e.g. stock browser, safari] Version [e.g. 22] Additional context Add any other context about the problem here." } ]
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "outline: deep This page demonstrates usage of some of the runtime APIs provided by VitePress. The main `useData()` API can be used to access site, theme, and page data for the current page. It works in both `.md` and `.vue` files: ```md <script setup> import { useData } from 'vitepress' const { theme, page, frontmatter } = useData() </script> <pre>{{ theme }}</pre> <pre>{{ page }}</pre> <pre>{{ frontmatter }}</pre> ``` <script setup> import { useData } from 'vitepress' const { site, theme, page, frontmatter } = useData() </script> <pre>{{ theme }}</pre> <pre>{{ page }}</pre> <pre>{{ frontmatter }}</pre> Check out the documentation for the ." } ]
{ "category": "Runtime", "file_name": "api-examples.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "Name | Type | Description | Notes | - | - | - DestinationUrl | Pointer to string | | [optional] `func NewVmCoredumpData() *VmCoredumpData` NewVmCoredumpData instantiates a new VmCoredumpData object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVmCoredumpDataWithDefaults() *VmCoredumpData` NewVmCoredumpDataWithDefaults instantiates a new VmCoredumpData object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VmCoredumpData) GetDestinationUrl() string` GetDestinationUrl returns the DestinationUrl field if non-nil, zero value otherwise. `func (o VmCoredumpData) GetDestinationUrlOk() (string, bool)` GetDestinationUrlOk returns a tuple with the DestinationUrl field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmCoredumpData) SetDestinationUrl(v string)` SetDestinationUrl sets DestinationUrl field to given value. `func (o *VmCoredumpData) HasDestinationUrl() bool` HasDestinationUrl returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "VmCoredumpData.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "For all WasmEdge security-related defects, please send an email to security@secondstate.io. You will receive an acknowledgement mail within 24 hours. After that, we will give a detailed response about the subsequent process within 48 hours. Please do not submit security vulnerabilities directly as Github Issues. For known public security vulnerabilities, we will disclose the disclosure as soon as possible after receiving the report. Vulnerabilities discovered for the first time will be disclosed in accordance with the following process: The received security vulnerability report shall be handed over to the security team for follow-up coordination and repair work. After the vulnerability is confirmed, we will create a draft Security Advisory on Github that lists the details of the vulnerability. Invite related personnel to discuss about the fix. Fork the temporary private repository on Github, and collaborate to fix the vulnerability. After the fix code is merged into all supported versions, the vulnerability will be publicly posted in the GitHub Advisory Database." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "This enhancement is to simplify and automatically evict the replicas on the selected disabled disks or nodes to other suitable disks and nodes per user's request. Meanwhile keep the same level of fault tolerance during this eviction period of time. https://github.com/longhorn/longhorn/issues/292 https://github.com/longhorn/longhorn/issues/298 Allow user easily evict the replicas on the selected disks or nodes to other disks or nodes without impact the user defined `Volume.Spec.numberOfReplicas` and keep the same level of fault tolerance. This means we don't change the user defined replica number. Report any error to user during the eviction time. Allow user to cancel the eviction at any time. Add `Eviction Requested` with `true` and `false` selection buttons for disks and nodes. This is for user to evict or cancel the eviction of the disks or the nodes. Add new `evictionRequested` field to `Node.Spec`, `Node.Spec.disks` Spec and `Replica.Status`. These will help tracking the request from user and trigger replica controller to update `Replica.Status` and volume controller to do the eviction. And this will reconcile with `scheduledReplica` of selected disks on the nodes. Display `fail to evict` error message to `Dashboard` and any other eviction errors to the `Event log`. For disk replacement or node replacement, the eviction needs to be done successfully in order to guarantee Longhorn volume function properly. Before, when user wants to evict a disk or a node they need to do the following steps: User needs to disable the disk or the node. User needs to scale up the replica count for the volume which has replica on disabled disks or nodes, and wait for the rebuild complete, scale down the replica count, then delete the replicas on this disk or node. After this enhancement, user can click `true` to the `Eviction Requested` on scheduling disabled disks or nodes. Or select `Disable` for scheduling and `true` to the `Eviction Requested` at the same time then save this change. The backend will take care of the eviction for the disks or nodes and cleanup for all the replicas on disks or nodes. User can select `true` to the `Eviction Requested` from `Longhorn UI` for disks or nodes. And user has to make sure the selected disks or nodes have been disabled, or select the `Disable` Scheduling at the same time of `true` to the `Eviction Requested`. Once `Eviction Requested` has been set to `true` on the disks or nodes, they can not be enabled for `Scheduling`. If the disks or the nodes haven't been disabled for `Scheduling`, there will be error message showed in `Dashboard` immediately to indicate that user need to disable the disk or node for" }, { "data": "And user will wait for the replica number for the disks or nodes to be 0. If there is any error e.g. no space or couldn't find other schedulable disk, the error message will be logged in the `Event log`. And the eviction will be suspended until either user sets the `Eviction Requested` to `false` or cleanup more disk spaces for the new replicas. If user cancel the eviction by setting the `Eviction Requested` to `false`, the remaining replicas on the selected disks or nodes will remain on the disks or nodes. From an API perspective, the call to set `Eviction Requested` to `true` or `false` on the `Node` or `Disk` eviction should look the same. The logic for handling the new field `Eviction Requested` `true` or `false` should to be in the `Node Controller` and `Volume Controller`. 
On `Longhorn UI` `Node` page, for nodes eviction, adding `Eviction Requested` `true` and `false` options in the `Edit Node` sub-selection, next to `Node Scheduling`. For disks eviction, adding `Eviction Requested` `true` and `false` options in `Edit node and disks` sub-selection under `Operation` column next to each disk `Scheduling` options. This is for user to evict or cancel the eviction of the disks or the nodes. Add new `evictionRequested` field to `Node.Spec`, `Node.Spec.disks` Spec and `Replica.Status`. These will help tracking the request from user and trigger replica controller to update `Replica.Status` and volume controller to do the eviction. And this will reconcile with `scheduledReplica` of selected disks on the nodes. Add a informer in `Replica Controller` to get these information and update `evictionRequested` field in `Replica.Status`. Once `Eviction Requested` has been set to `true` for disks or nodes, the `evictionRequested` fields for the disks and nodes will be set to `true` (default is `false`). `Replica Controller` will update `evictionRequested` field in `Replica.Status` and `Volume Controller` to get these information from it's replicas. During reconcile the engine replica, based on `Replica.Status.EvictionRequested` of the volume replicas to trigger rebuild for different volumes' replicas. And remove one replica with `evictionRequested` `true`. Logged the errors to `Event log` during the reconcile process. By the end from `Longhorn UI`, the replica number on the eviction disks or nodes should be 0, this mean eviction is success. If the volume is 'Detached', Longhorn will 'Automatically Attach' the volume and do the eviction, after eviction success, the volume will be 'Automatically detach'. If there is any error during the eviction, it will get suspended, until user solve the problem, the 'Auto Detach' will be triggered at the end. Positive Case: For both `Replica Node Level Soft Anti-Affinity` has been enabled and disabled. Also the volume can be 'Attached' or 'Detached'. User can select one or more disks or nodes for" }, { "data": "Select `Eviction Requested` to `true` on the disabled disks or nodes, Longhorn should start rebuild replicas for the volumes which have replicas on the eviction disks or nodes, and after rebuild success, the replica number on the evicted disks or nodes should be 0. E.g. When there are 3 nodes in the cluster, and with `Replica Node Level Soft Anti-Affinity` is set to `false`, disable one node, and create a volume with replica count 2. And then evict one of them, the eviction should get stuck, then set `Replica Node Level Soft Anti-Affinity` to `true`, the eviction should go through. Negative Cases: If user selects the disks or nodes have not been disabled scheduling, Longhorn should display the error message on `Dashboard` immediately. Or during the eviction, the disabled disk or node can not be re-enabled again. If there is no enough disk spaces or nodes for disks or nodes eviction, Longhorn should log the error message in the `Event Log`. And once the disk spaces or nodes resources are good enough, the eviction should continue. Or if the user selects `Eviction Requested` to `false`, Longhorn should stop eviction and clear the `evictionRequested` fields for nodes, disks and volumes crd objects. E.g. When there are 3 nodes in the cluster, and the volume replica count is 3, the eviction should get stuck when the `Replica Node Level Soft Anti-Affinity` is `false`. 
For `Replica Node Level Soft Anti-Affinity` is enabled, create 2 replicas on the same disk or node, and then evict this disk or node, the 2 replicas should goto another disk of node. For `Replica Node Level Soft Anti-Affinity` is disabled, create 1 replica on a disk, and evict this disk or node, the replica should goto the other disk of node. For node eviction, Longhorn will process the eviction based on the disks for the node, this is like disk eviction. After eviction success, the replica number on the evicted node should be 0. During the eviction, user can click the `Replicas Number` on the `Node` page, and set which replicas are left from eviction, and click the `Replica Name` will redirect user to the `Volume` page to set if there is any error for this volume. If there is any error during the rebuild, Longhorn should display the error message from UI. The error could be `failed to schedule a replica` due to disk space or based on schedule policy, can not find a valid disk to put the replica. No special upgrade strategy is necessary. Once the user upgrades to the new version of `Longhorn`, these new capabilities will be accessible from the `longhorn-ui` without any special work." } ]
{ "category": "Runtime", "file_name": "20200727-add-replica-eviction-support-for-disks-and-nodes.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Kubernetes never force deletes pods of StatefulSet or Deployment on a down node. Since the pod on the down node wasn't removed, the volume will be stuck on the down node with it as well. The replacement pods cannot be started because the Longhorn volume is RWO (see more about access modes ), which can only be attached to one node at a time. We provide an option for users to help them automatically force delete terminating pods of StatefulSet/Deployment on the down node. After force deleting, Kubernetes will detach Longhorn volume and spin up replacement pods on a new node. https://github.com/longhorn/longhorn/issues/1105 The goal is to help the users to monitor node status and automatically force delete terminating pods on down nodes. Without this feature, users would have to manually force delete the pods so that new replacement pods can be started. Implemented a mechanism to force delete pods in the Deployment/StatefulSet on a down node. There are 4 options for `NodeDownPodDeletionPolicy`: `DoNothing` `DeleteStatefulSetPod` `DeleteDeploymentPod` `DeleteBothStatefulsetAndDeploymentPod` When the setting is enabled, Longhorn will monitor node status and force delete pods on the down node on the behalf of users. Before this feature is implemented, the users would have to manually monitor and force delete pods when a node down so that Longhorn volume can be detached and a new replacement pod can start. This process should be automated. After this feature is implemented, the users can have the option to allow Longhorn to monitor and force delete the pods on their behalf. To use this enhancement, users need to change the Longhorn setting `NodeDownPodDeletionPolicy`. The default setting is `DoNothing` which means Longhorn will not force delete any pods on a down node. As a side note, even when `NodeDownPodDeletionPolicy` is set to `do-nothing`, the still works so deployment pods are fine if users enable automatic `volumeattachment` removal. No API changes. We created a new controller, `Kubernetes POD Controller`, to watch pods and nodes status and handle the force deletion. Force delete a pod when all of the below conditions are met: The `NodeDownPodDeletionPolicy` and pods' owner are as in the below table: | Policy \\ Kind | `StatefulSet` | `ReplicaSet` | Other | | :- | :-: | :-: | :-: | | `DoNothing` | Don't delete | Don't delete | Don't delete | | `DeleteStatefulSetPod` | Force delete | Don't delete | Don't delete | | `DeleteDeploymentPod` | Don't delete | Force delete | Don't delete | | `DeleteBothStatefulsetAndDeploymentPod` | Force delete | Force delete | Don't delete | Node containing the pod is down which is determined by the" }, { "data": "The function `IsNodeDownOrDeleted` checks whether the node status is `NotReady` The pod is terminating (which means the pod has deletionTimestamp set) and the DeletionTimestamp has passed. Pod has a PV with provisioner `driver.longhorn.io` Same as the Design Setup a cluster of 3 nodes Install Longhorn and set `Default Replica Count = 2` (because we will turn off one node) Create a StatefulSet with 2 pods using the command: ``` kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/examples/statefulset.yaml ``` Create a volume + pv + pvc named `vol1` and create a deployment of default ubuntu named `shell` with the usage of pvc `vol1` mounted under `/mnt/vol1` Find the node which contains one pod of the StatefulSet/Deployment. 
Power off the node wait till the `pod.deletionTimestamp` has passed verify no replacement pod generated, the pod is stuck at terminating forever. wait till pod's status becomes `terminating` and the `pod.deletionTimestamp` has passed (around 7 minutes) verify that the pod is deleted and there is a new running replacement pod. Verify that you can access/read/write the volume on the new pod wait till the `pod.deletionTimestamp` has passed replacement pod will be stuck in `Pending` state forever force delete the terminating pod wait till replacement pod is running verify that you can access `vol1` via the `shell` replacement pod under `/mnt/vol1` once it is in the running state wait till replacement pod is generated (default is around 6 minutes, kubernetes setting) wait till the `pod.deletionTimestamp` has passed verify that you can access `vol1` via the `shell` replacement pod under `/mnt/vol1` once it is in the running state verify that the original `shell` pod is stuck in `Pending` state forever wait till replacement pod is generated (default is around 6 minutes, kubernetes setting) verify that you can access `vol1` via the `shell` replacement pod under `/mnt/vol1` once it is in the running state verify that the original `shell` pod is stuck in `Pending` state forever wait till the `pod.deletionTimestamp` has passed verify that the pod is deleted and there is a new running replacement pod. verify that you can access `vol1` via the `shell` replacement pod under `/mnt/vol1` Verify that Longhorn never deletes any other pod on the down node. One typical scenario when the enhancement has succeeded is as below. When a node (say `node-x`) goes down (assume using Kubernetes' default settings and user allows Longhorn to force delete pods): | Time | Event | | :- | :-: | | 0m:00s | `node-x`goes down and stops sending heartbeats to Kubernetes Node controller | | 0m:40s | Kubernetes Node controller reports `node-x` is `NotReady`. | | 5m:40s | Kubernetes Node controller starts evicting pods from `node-x` using graceful termination (set `DeletionTimestamp` and `deletionGracePeriodSeconds = 10s/30s`) | | 5m:50s/6m:10s | Longhorn forces delete the pod of StatefulSet/Deployment which uses Longhorn volume | Doesn't impact upgrade." } ]
{ "category": "Runtime", "file_name": "20200817-improve-node-failure-handling.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage the egress routing rules ``` -h, --help help for egress ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List egress policy entries" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_egress.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "We always provide security updates for the . Whenever there is a security update you just need to upgrade to the latest version. All security bugs in (or other minio/* repositories) should be reported by email to security@min.io. Your email will be acknowledged within 48 hours, and you'll receive a more detailed response to your email within 72 hours indicating the next steps in handling your report. Please, provide a detailed explanation of the issue. In particular, outline the type of the security issue (DoS, authentication bypass, information disclose, ...) and the assumptions you're making (e.g. do you need access credentials for a successful exploit). If you have not received a reply to your email within 48 hours or you have not heard from the security team for the past five days please contact the security team directly: Primary security coordinator: aead@min.io Secondary coordinator: harsha@min.io If you receive no response: dev@min.io MinIO uses the following disclosure process: Once the security report is received one member of the security team tries to verify and reproduce the issue and determines the impact it has. A member of the security team will respond and either confirm or reject the security report. If the report is rejected the response explains why. Code is audited to find any potential similar problems. Fixes are prepared for the latest release. On the date that the fixes are applied a security advisory will be published on <https://blog.min.io>. Please inform us in your report email whether MinIO should mention your contribution w.r.t. fixing the security issue. By default MinIO will not publish this information to protect your privacy. This process can take some time, especially when coordination is required with maintainers of other projects. Every effort will be made to handle the bug in as timely a manner as possible, however it's important that we follow the process described above to ensure that disclosures are handled consistently." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "WasmEdge can support customized SaaS extensions or applications using serverless functions instead of traditional network APIs. That dramatically improves SaaS users' and developers' productivity. WasmEdge could be embedded into SaaS products to execute user-defined functions. In this scenario, the WasmEdge function API replaces the SaaS web API. The embedded WasmEdge functions are much faster, safer, and easier to use than RPC functions over the web. Edge servers could provide WasmEdge-based containers to interact with existing SaaS or PaaS APIs without requiring the user to run his own servers (eg callback servers). The serverless API services can be co-located in the same networks as the SaaS to provide optimal performance and security. The examples below showcase how WasmEdge-based serverless functions connect together SaaS APIs from different services, and process data flows across those SaaS APIs according each user's business logic. It is also known as `` aka the Chinese Slack. It is created by Byte Dance, the parent company of Tiktok." } ]
{ "category": "Runtime", "file_name": "serverless_saas.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 1 sidebar_label: \"Volume Snapshot\" In HwameiStor, it allows users to create snapshots of data volumes and perform restore and rollback operations based on data volume snapshots. :::note Currently, only snapshots are supported for non highly available LVM type data volumes. To avoid data inconsistency, please pause or stop I/O before taking a snapshot. ::: Please follow the steps below to create a VolumeSnapshotClass and a VolumeSnapshot to use it. By default, HwameiStor does not automatically create a VolumeSnapshotClass during installation, so you need to create a VolumeSnapshotClass manually. A sample VolumeSnapshotClass is as follows: ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 metadata: name: hwameistor-storage-lvm-snapshot annotations: snapshot.storage.kubernetes.io/is-default-class: \"true\" parameters: snapsize: \"1073741824\" driver: lvm.hwameistor.io deletionPolicy: Delete ``` snapsizeIt specifies the size of VolumeSnapshot :::note If the snapsize parameter is not specified, the size of the created snapshot is consistent with the size of the source volume. ::: After you create a VolumeSnapshotClass, you can use it to create VolumeSnapshot. A sample VolumeSnapshot is as follows: ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-local-storage-pvc-lvm spec: volumeSnapshotClassName: hwameistor-storage-lvm-snapshot source: persistentVolumeClaimName: local-storage-pvc-lvm ``` persistentVolumeClaimNameIt specifies the PVC to create the VolumeSnapshot After creating a VolumeSnapshot, you can check the VolumeSnapshot using the following command. ```console $ kubectl get vs NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE snapshot-local-storage-pvc-lvm true local-storage-pvc-lvm 1Gi hwameistor-storage-lvm-snapshot snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 53y 2m57s ``` After creating a VolumeSnapshot, you can check the Hwameistor LocalvolumeSnapshot using the following command. ```console $ kubectl get lvs NAME CAPACITY SOURCEVOLUME STATE MERGING INVALID AGE snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 1073741824 pvc-967baffd-ce10-4739-b996-87c9ed24e635 Ready 5m31s ``` CAPACITY: The capacity size of the snapshot SourceVOLUME: The source volume name of the snapshot MERGING: Whether the snapshot is in a merged state (usually triggered by rollback operation) INVALID: Whether the snapshot is invalidated (usually triggered when the snapshot capacity is full) AGE: The actual creation time of the snapshot (different from the CR creation time, this time is the creation time of the underlying snapshot data volume) After creating a VolumeSnapshot, you can restore and rollback the VolumeSnapshot. You can create pvc to restore VolumeSnapshot, as follows: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-storage-pvc-lvm-restore spec: storageClassName: local-storage-hdd-lvm dataSource: name: snapshot-local-storage-pvc-lvm kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 1Gi ``` :::note To roll back a snapshot, you must first stop the I/O of the source volume, such as stopping the application and waiting for the rollback operation to complete, confirm data consistency before using the rolled back data volume. 
::: VolumeSnapshot can be rolled back by creating the resource LocalVolumeSnapshotRestore, as follows: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeSnapshotRestore metadata: name: rollback-test spec: sourceVolumeSnapshot: snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 restoreType: \"rollback\" ``` sourceVolumeSnapshotIt specifies the LocalVolumeSnapshot to be rollback. Observing the created LocalVolumeSnapshotRestore, you can understand the entire rollback process through the state. After the rollback is complete, the corresponding LocalVolumeSnapshotRestore will be deleted. ```console $ kubectl get LocalVolumeSnapshotRestore -w NAME TARGETVOLUME SOURCESNAPSHOT STATE AGE restore-test2 pvc-967baffd-ce10-4739-b996-87c9ed24e635 snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 Submitted 0s restore-test2 pvc-967baffd-ce10-4739-b996-87c9ed24e635 snapcontent-81a1f605-c28a-4e60-8c78-a3d504cbf6d9 InProgress 0s restore-test2 pvc-967baffd-ce10-4739-b996-87c9ed24e635 snapcontent-81a1f605-c28a-4e60-8c78-a3d504cbf6d9 Completed 2s ```" } ]
{ "category": "Runtime", "file_name": "volume_snapshot.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Usage Tracking sidebar_position: 4 JuiceFS by default collects and reports anonymous usage data. It only collects core metrics (e.g. version number, file system size), no user or any sensitive data will be collected. You could review related code . These data help us understand how the community is using this project. You could disable reporting easily by command line option `--no-usage-report`: ``` juicefs mount --no-usage-report ```" } ]
{ "category": "Runtime", "file_name": "usage_tracking.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: HDFS This guide describes the instructions to configure {:target=\"_blank\"} as Alluxio's under storage system. HDFS, or Hadoop Distributed File System, is the primary distributed storage used by Hadoop applications, providing reliable and scalable storage for big data processing in Hadoop ecosystems. For more information about HDFS, please read its {:target=\"_blank\"}. If you haven't already, please see before you get started. In preparation for using HDFS with Alluxio: <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<HDFS_NAMENODE>`</td> <td markdown=\"span\">The IP address of the NameNode that processes client connections to the cluster. NameNode is the master node in the Apache Hadoop HDFS Architecture that maintains and manages the blocks present on the DataNodes (slave nodes).</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<HDFS_PORT>`</td> <td markdown=\"span\">The port at which the NameNode accepts client connections.</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<HADOOP_VERSION>`</td> <td markdown=\"span\"></td> </tr> </table> To configure Alluxio to use HDFS as under storage, you will need to modify the configuration file `conf/alluxio-site.properties`. If the file does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Specify the HDFS namenode and the HDFS port as the underfs address by modifying `conf/alluxio-site.properties`. For example, the under storage address can be `hdfs://localhost:8020` if you are running the HDFS namenode locally with default port and mapping HDFS root directory to Alluxio, or `hdfs://localhost:8020/alluxio/data` if only the HDFS directory `/alluxio/data` is mapped to Alluxio. ```properties alluxio.dora.client.ufs.root=hdfs://<HDFSNAMENODE>:<HDFSPORT> ``` To find out where HDFS is running, use `hdfs getconf -confKey fs.defaultFS` to get the default hostname and port HDFS is listening on. Additionally, you may need to specify the following property to be your HDFS version. See . ```properties alluxio.underfs.version=<HADOOP_VERSION> ``` Once you have configured Alluxio to HDFS, try to see that everything works. When HDFS has non-default configurations, you need to configure Alluxio servers to access HDFS with the proper configuration file. Note that once this is set, your applications using Alluxio client do not need any special configuration. There are two possible approaches: Copy or make symbolic links from `hdfs-site.xml` and `core-site.xml` from your Hadoop installation into `${ALLUXIO_HOME}/conf`. Make sure this is set up on all servers running Alluxio. Alternatively, you can set the property `alluxio.underfs.hdfs.configuration` in `conf/alluxio-site.properties` to point to your `hdfs-site.xml` and `core-site.xml`. Make sure this configuration is set on all servers running Alluxio. ```properties alluxio.underfs.hdfs.configuration=/path/to/hdfs/conf/core-site.xml:/path/to/hdfs/conf/hdfs-site.xml ``` To configure Alluxio to work with HDFS namenodes in HA mode, first configure Alluxio servers to . In addition, set the under storage address to `hdfs://nameservice/` (`nameservice` is the {:target=\"_blank\"} already configured in `hdfs-site.xml`). To mount an HDFS subdirectory to Alluxio instead of the whole HDFS namespace, change the under storage address to something like `hdfs://nameservice/alluxio/data`. 
```properties alluxio.dora.client.ufs.root=hdfs://nameservice/ ``` To ensure that the permission information of files/directories including user, group and mode in HDFS is consistent with Alluxio (e.g., a file created by user Foo in Alluxio is persisted to HDFS also with owner as user Foo), the user to start Alluxio master and worker processes is required to be either: {:target=\"_blank\"}. Namely, use the same user that starts HDFS namenode process to also start Alluxio master and worker processes. A member of {:target=\"_blank\"}. Edit HDFS configuration file `hdfs-site.xml` and check the value of configuration property `dfs.permissions.superusergroup`. If this property is set with a group (e.g., \"hdfs\"), add the user to start Alluxio process" }, { "data": "\"alluxio\") to this group (\"hdfs\"); if this property is not set, add a group to this property where your Alluxio running user is a member of this newly added group. The user set above is only the identity that starts Alluxio master and worker processes. Once Alluxio servers are started, it is unnecessary to run your Alluxio client applications using this user. If your HDFS cluster is Kerberized, first configure Alluxio servers to . In addition, security configuration is needed for Alluxio to be able to communicate with the HDFS cluster. Set the following Alluxio properties in `alluxio-site.properties`: ```properties alluxio.master.keytab.file=<YOURHDFSKEYTABFILEPATH> alluxio.master.principal=hdfs/<_HOST>@<REALM> alluxio.worker.keytab.file=<YOURHDFSKEYTABFILEPATH> alluxio.worker.principal=hdfs/<_HOST>@<REALM> ``` If connecting to secure HDFS, run `kinit` on all Alluxio nodes. Use the principal `hdfs` and the keytab that you configured earlier in `alluxio-site.properties`. A known limitation is that the Kerberos TGT may expire after the max renewal lifetime. You can work around this by renewing the TGT periodically. Otherwise you may see `No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)` when starting Alluxio services. Another option is to set `alluxio.hadoop.kerberos.keytab.login.autorenewal=true` so the TGT is automatically refreshed. The user can also use `alluxio.hadoop.security.krb5.conf` to specify the krb5.conf file location and use `alluxio.hadoop.security.authentication` to specify authentication method. By default, Alluxio will use machine-level Kerberos configuration to determine the Kerberos realm and KDC. You can override these defaults by setting the JVM properties `java.security.krb5.realm` and `java.security.krb5.kdc`. To set these, set `ALLUXIOJAVAOPTS` in `conf/alluxio-env.sh`. ```sh ALLUXIOJAVAOPTS+=\" -Djava.security.krb5.realm=<YOURKERBEROSREALM> -Djava.security.krb5.kdc=<YOURKERBEROSKDC_ADDRESS>\" ``` There are multiple ways for a user to mount an HDFS cluster with a specified version as an under storage into Alluxio namespace. Before mounting HDFS with a specific version, make sure you have built a client with that specific version of HDFS. You can check the existence of this client by going to the `lib` directory under the Alluxio directory. If you have built Alluxio from source, you can build additional client jar files by running `mvn` command under the `underfs` directory in the Alluxio source tree. For example, issuing the following command would build the client jar for the 2.8.0 version. 
```shell $ mvn -T 4C clean install -Dmaven.javadoc.skip=true -DskipTests \\ -Dlicense.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true \\ -Pufs-hadoop-2 -Dufs.hadoop.version=2.8.0 ``` When mounting the under storage of Alluxio root directory with a specific HDFS version, one can add the following line to the site properties file (`conf/alluxio-site.properties`) ```properties alluxio.dora.client.ufs.root=hdfs://namenode1:8020 alluxio.underfs.version=2.2 ``` Alluxio supports the following versions of HDFS as a valid argument of mount option `alluxio.underfs.version`: Apache Hadoop: 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 2.10, 3.0, 3.1, 3.2, 3.3 Note: Apache Hadoop 1.0 and 1.2 are still supported, but not included in the default download. To build this module yourself, build the shaded hadoop client and then the UFS module. Hadoop comes with a native library that provides better performance and additional features compared to its Java implementation. For example, when the native library is used, the HDFS client can use native checksum function which is more efficient than the default Java implementation. To use the Hadoop native library with Alluxio HDFS under filesystem, first install the native library on Alluxio nodes by following the instructions on {:target=\"_blank\"}. Once the hadoop native library is installed on the machine, update Alluxio startup Java parameters in `conf/alluxio-env.sh` by adding the following line: ```sh ALLUXIOJAVAOPTS+=\" -Djava.library.path=<localpathcontaininghadoopnative_library> \" ``` Make sure to restart Alluxio services for the change to take effect." } ]
{ "category": "Runtime", "file_name": "HDFS.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "Configuration options encompass settings that provide control over various aspects of CRIO's behavior and functionality. These options allow for the customization of operational parameters, shaping how CRI-O operates within a Kubernetes cluster. In CRI-O, configuration options undergo deprecation exclusively during major and minor version changes. Removals are not implemented abruptly within patch releases, ensuring a seamless transition for users adapting to evolving configurations. A configuration option is labeled as deprecated for a minimum of one release cycle before its actual removal. This period serves as a warning, offering users an opportunity to adjust their configurations in preparation for upcoming changes. The deprecation is communicated through various channels, including documentation revisions, notifications indicating the deprecation of configuration options in the CRI-O CLI help text, and a corresponding log entry within CRI-O itself. In the domain of system configurations, options are typically excluded when they are no longer considered essential or have been superseded by alternatives, particularly those within the Kubelet. The management of runtime containers has shifted towards the Kubelet, leading to the replacement of several CRI-O configuration options by the Kubelet. Runtime Management, previously handled with the \"runtime_path\" configuration option, has been replaced with the \"--container-runtime\" flag in Kubelet. Pod PIDs Limit, formerly set with the \"pids_limit\" configuration option, has been replaced with the \"--pods_pids-limit\" flag in Kubelet. Image Pull Timeout, initially defined by the \"imagepulltimeout\" configuration option, has been replaced with the \"--image-pull-progress-deadline\" flag in Kubelet. CRI-O configuration files are composed in TOML. Typically, TOML libraries used by CRI-O ignore unfamiliar or unacknowledged configuration parameters. While these unacknowledged values are generally accepted, any unfamiliar flags in the Command Line Interface (CLI) might result in a failure in CRI-O. In specific cases, a Command Line Interface (CLI) flag designated for removal may be retained for an additional release; however, it will be deactivated during this period. This extension is provided to users, granting them additional time to update their configurations and scripts before the flag is ultimately eliminated in subsequent releases." } ]
{ "category": "Runtime", "file_name": "deprecating_process.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.6.0 `velero/velero:v1.6.0` https://velero.io/docs/v1.6/ https://velero.io/docs/v1.6/upgrade-to-1.6/ Support for per-BSL credentials Progress reporting for restores Restore API Groups by priority level Restic v0.12.0 upgrade End-to-end testing CLI usability improvements Add support for restic to use per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and use this path when setting provider specific environment variables for restic commands. (#3489, @zubron) Upgrade restic from v0.9.6 to v0.12.0. (#3528, @ashish-amarnath) Progress reporting added for Velero Restores (#3125, @pranavgaikwad) Add uninstall option for velero cli (#3399, @vadasambar) Add support for per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and pass this path through to Object Storage plugins via the `config` map using the `credentialsFile` key. (#3442, @zubron) Fixed a bug where restic volumes would not be restored when using a namespace mapping. (#3475, @zubron) Restore API group version by priority. Increase timeout to 3 minutes in DeploymentIsReady(...) function in the install package (#3133, @codegold79) Add field and cli flag to associate a credential with a BSL on BSL create|set. (#3190, @carlisia) Add colored output to `describe schedule/backup/restore` commands (#3275, @mike1808) Add CAPI Cluster and ClusterResourceSets to default restore priorities so that the capi-controller-manager does not panic on restores. (#3446, @nrb) Use label to select Velero deployment in plugin cmd (#3447, @codegold79) feat: support setting BackupStorageLocation CA certificate via `velero backup-location set --cacert` (#3167, @jenting) Add restic initContainer length check in pod volume restore to prevent restic plugin container disappear in runtime (#3198, @shellwedance) Bump versions of external snapshotter and others in order to make `go get` to succeed (#3202, @georgettica) Support fish shell completion (#3231, @jenting) Change the logging level of PV deletion timeout from Debug to Warn (#3316, @MadhavJivrajani) Set the BSL created at install time as the \"default\" (#3172, @carlisia) Capitalize all help messages (#3209, @jenting) Increased default Velero pod memory limit to 512Mi (#3234, @dsmithuchida) Fixed an issue where the deletion of a backup would fail if the backup tarball couldn't be downloaded from object" }, { "data": "Now the tarball is only downloaded if there are associated DeleteItemAction plugins and if downloading the tarball fails, the plugins are skipped. (#2993, @zubron) feat: add delete sub-command for BSL (#3073, @jenting) BSLs with validation disabled should be validated at least once (#3084, @ashish-amarnath) feat: support configures BackupStorageLocation custom resources to indicate which one is the default (#3092, @jenting) Added \"--preserve-nodeports\" flag to preserve original nodePorts when restoring. (#3095, @yusufgungor) Owner reference in backup when created from schedule (#3127, @matheusjuvelino) issue: add flag to the schedule cmd to configure the `useOwnerReferencesInBackup` option #3176 (#3182, @matheusjuvelino) cli: allow creating multiple instances of Velero across two different namespaces (#2886, @alaypatel07) Feature: It is possible to change the timezone of the container by specifying in the manifest.. env: [TZ: Zone/Country], or in the Helm Chart.. 
configuration: {extraEnvVars: [TZ: 'Zone/Country']} (#2944, @mickkael) Fix issue where bare `velero` command returned an error code. (#2947, @nrb) Restore CRD Resource name to fix CRD wait functionality. (#2949, @sseago) Fixed 'velero.io/change-pvc-node-selector' plugin to fetch configmap using label key \"velero.io/change-pvc-node-selector\" (#2970, @mynktl) Compile with Go 1.15 (#2974, @gliptak) Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992, @betta1) Ensure that bound PVCs and PVs remain bound on restore. (#3007, @nrb) Allows the restic-wait container to exist in any order in the pod being restored. Prints a warning message in the case where the restic-wait container isn't the first container in the list of initialization containers. (#3011, @doughepi) Add warning to velero version cmd if the client and server versions mismatch. (#3024, @cvhariharan) Use namespace and name to match PVB to Pod restore (#3051, @ashish-amarnath) Fixed various typos across codebase (#3057, @invidian) ItemAction plugins for unresolvable types should not be run for all types (#3059, @ashish-amarnath) Basic end-to-end tests, generate data/backup/remove/restore/verify. Uses distributed data generator (#3060, @dsu-igeek) Added GitHub Workflow running Codespell for spell checking (#3064, @invidian) Pass annotations from schedule to backup it creates the same way it is done for labels. Add WithannotationsMap function to builder to be able to pass map instead of key/val list (#3067, @funkycode) Add instructions to clone repository for examples in docs (#3074, @MadhavJivrajani) update setup-kind github actions CI (#3085, @ashish-amarnath) Modify wrong function name to correct one. (#3106, @shellwedance)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.6.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage fqdn proxy cache ``` cilium-dbg fqdn cache [flags] ``` ``` -h, --help help for cache ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage fqdn proxy - Clean fqdn cache - List fqdn cache contents" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_fqdn_cache.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "English This chapter introduces how POD access network with the RoCE interface of the host. RDMA devices' network namespaces have two modes: shared and exclusive. Containers can either share or exclusively access RDMA network cards. In Kubernetes, shared cards can be utilized with macvlan or ipvlan CNI, while the exclusive one can be used with SR-IOV CNI. Shared mode. Spiderpool leverages macvlan or ipvlan CNI to expose RoCE network cards on the host machine for all Pods. The is employed for exposing RDMA card resources and scheduling Pods. Exclusive mode. Spiderpool utilizes to expose RDMA cards on the host machine for Pods, providing access to RDMA resources. is used to ensure isolation of RDMA devices. For isolated RDMA network cards, at least one of the following conditions must be met: (1) Kernel based on 5.3.0 or newer, RDMA modules loaded in the system. rdma-core package provides means to automatically load relevant modules on system start (2) Mellanox OFED version 4.7 or newer is required. In this case it is not required to use a Kernel based on 5.3.0 or newer. The following steps demonstrate how to enable shared usage of RDMA devices by Pods in a cluster with two nodes via macvlan CNI: Ensure that the host machine has an RDMA card installed and the driver is properly installed, ensuring proper RDMA functioning. In our demo environment, the host machine is equipped with a Mellanox ConnectX-5 NIC with RoCE capabilities. Follow to install the latest OFED driver. To confirm the presence of RDMA devices, use the following command: To confirm the presence of RoCE devices, use the following command: ~# rdma link link mlx50/1 state ACTIVE physicalstate LINK_UP netdev ens6f0np0 link mlx51/1 state ACTIVE physicalstate LINK_UP netdev ens6f1np1 ~# ibstat mlx5_0 | grep \"Link layer\" Link layer: Ethernet Make sure that the RDMA subsystem of the host is in shared mode. If not, switch to shared mode. ~# rdma system netns shared copy-on-fork on ~# rdma system set netns shared Verify the details of the RDMA card for subsequent device resource discovery by the device plugin. Enter the following command with NIC vendors being 15b3 and its deviceIDs being 1017: ~# lspci -nn | grep Ethernet af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017] af:00.1 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017] Install Spiderpool and configure sriov-network-operator: helm upgrade spiderpool spiderpool/spiderpool --namespace kube-system --reuse-values \\ --set rdma.rdmaSharedDevicePlugin.install=true \\ --set rdma.rdmaSharedDevicePlugin.deviceConfig.resourcePrefix=\"spidernet.io\" \\ --set rdma.rdmaSharedDevicePlugin.deviceConfig.resourceName=\"hcashareddevices\" \\ --set rdma.rdmaSharedDevicePlugin.deviceConfig.rdmaHcaMax=500 \\ --set rdma.rdmaSharedDevicePlugin.deviceConfig.vendors=\"15b3\" \\ --set rdma.rdmaSharedDevicePlugin.deviceConfig.deviceIDs=\"1017\" > - If Macvlan is not installed in your cluster, you can specify the Helm parameter `--set plugins.installCNI=true` to install Macvlan in your cluster. > > - If you are a user from China, you can specify the parameter `--set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pull failures from Spiderpool. After completing the installation of Spiderpool, you can manually edit the spiderpool-rdma-shared-device-plugin configmap to reconfigure the RDMA shared device plugin. 
Once the installation is complete, the following components will be installed: ~# kubectl get pod -n kube-system spiderpool-agent-9sllh 1/1 Running 0 1m spiderpool-agent-h92bv 1/1 Running 0 1m spiderpool-controller-7df784cdb7-bsfwv 1/1 Running 0 1m spiderpool-init 0/1 Completed 0 1m spiderpool-rdma-shared-device-plugin-dr7w8 1/1 Running 0 1m spiderpool-rdma-shared-device-plugin-zj65g 1/1 Running 0 1m View the available resources on a node, including the reported RDMA device resources: ~# kubectl get no -o json | jq -r '[.items[] | {name:.metadata.name, allocable:.status.allocatable}]' [ { \"name\": \"10-20-1-10\", \"allocable\": { \"cpu\": \"40\", \"memory\": \"263518036Ki\", \"pods\": \"110\", \"spidernet.io/hcashareddevices\": \"500\", ... } }," }, { "data": "] > If the reported resource count is 0, it may be due to the following reasons: > > (1) Verify that the vendors and deviceID in the spiderpool-rdma-shared-device-plugin configmap match the actual values. > > (2) Check the logs of the rdma-shared-device-plugin. If you encounter errors related to RDMA NIC support, try installing apt-get install rdma-core or dnf install rdma-core on the host machine. > > `error creating new device: \"missing RDMA device spec for device 0000:04:00.0, RDMA device \\\"issm\\\" not found\"` Create macvlan CNI configuration with specifying `spec.macvlan.master` to be an RDMA of the node ,and set up the corresponding ippool resources: cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: v4-81 spec: gateway: 172.81.0.1 ips: 172.81.0.100-172.81.0.120 subnet: 172.81.0.0/16 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: macvlan-ens6f0np0 namespace: kube-system spec: cniType: macvlan macvlan: master: \"ens6f0np0\" ippools: ipv4: [\"v4-81\"] EOF Following the configurations from the previous step, create a DaemonSet application that spans across nodes for testing ANNOTATION_MULTUS=\"v1.multus-cni.io/default-network: kube-system/macvlan-ens6f0np0\" RESOURCE=\"spidernet.io/hcashareddevices\" NAME=rdma-macvlan cat <<EOF | kubectl apply -f - apiVersion: apps/v1 kind: DaemonSet metadata: name: ${NAME} labels: app: $NAME spec: selector: matchLabels: app: $NAME template: metadata: name: $NAME labels: app: $NAME annotations: ${ANNOTATION_MULTUS} spec: containers: image: docker.io/mellanox/rping-test imagePullPolicy: IfNotPresent name: mofed-test securityContext: capabilities: add: [ \"IPC_LOCK\" ] resources: limits: ${RESOURCE}: 1 command: sh -c | ls -l /dev/infiniband /sys/class/net sleep 1000000 EOF Verify that RDMA communication is correct between the Pods across nodes. Open a terminal and access one Pod to launch a service: ~# rdma link 0/1: mlx50/1: state ACTIVE physicalstate LINK_UP 1/1: mlx51/1: state ACTIVE physicalstate LINK_UP ~# ibreadlat Open a terminal and access another Pod to launch a service: ~# rdma link 0/1: mlx50/1: state ACTIVE physicalstate LINK_UP 1/1: mlx51/1: state ACTIVE physicalstate LINK_UP ~# ibreadlat 172.81.0.120 RDMA_Read Latency Test Dual-port : OFF Device : mlx5_0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF TX depth : 1 Mtu : 1024[B] Link type : Ethernet GID index : 12 Outstand reads : 16 rdma_cm QPs : OFF Data ex. 
method : Ethernet local address: LID 0000 QPN 0x0107 PSN 0x79dd10 OUT 0x10 RKey 0x1fddbc VAddr 0x000000023bd000 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:81:00:119 remote address: LID 0000 QPN 0x0107 PSN 0x40001a OUT 0x10 RKey 0x1fddbc VAddr 0x00000000bf9000 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:81:00:120 Conflicting CPU frequency values detected: 2200.000000 != 1040.353000. CPU Frequency is not max. Conflicting CPU frequency values detected: 2200.000000 != 1849.351000. CPU Frequency is not max. 2 1000 6.88 16.81 7.04 7.06 0.31 7.38 16.81 The following steps demonstrate how to enable isolated usage of RDMA devices by Pods in a cluster with two nodes via SR-IOV CNI: Ensure that the host machine has an RDMA and SR-IOV enabled card and the driver is properly installed. In our demo environment, the host machine is equipped with a Mellanox ConnectX-5 NIC with RoCE capabilities. Follow to install the latest OFED driver. To confirm the presence of RoCE devices, use the following command: ~# rdma link link mlx50/1 state ACTIVE physicalstate LINK_UP netdev ens6f0np0 link mlx51/1 state ACTIVE physicalstate LINK_UP netdev ens6f1np1 ~# ibstat mlx5_0 | grep \"Link layer\" Link layer: Ethernet Make sure that the RDMA subsystem on the host is operating in exclusive mode. If not, switch to exclusive mode. ~# rdma system set netns exclusive ~# echo \"options ibcore netnsmode=0\" >> /etc/modprobe.d/ib_core.conf ~# rdma system netns exclusive copy-on-fork on (Optional) in an SR-IOV scenario, applications can enable NVIDIA's GPUDirect RDMA feature. For instructions on installing the kernel module, please refer to . Install Spiderpool set the values `--set sriov.install=true` If you are a user from China, you can specify the parameter `--set global.imageRegistryOverride=ghcr.m.daocloud.io` to pull image from china" }, { "data": "After completing the installation of Spiderpool, you can manually edit the spiderpool-rdma-shared-device-plugin configmap to reconfigure the RDMA shared device plugin. Once the installation is complete, the following components will be installed: ~# kubectl get pod -n kube-system spiderpool-agent-9sllh 1/1 Running 0 1m spiderpool-agent-h92bv 1/1 Running 0 1m spiderpool-controller-7df784cdb7-bsfwv 1/1 Running 0 1m spiderpool-sriov-operator-65b59cd75d-89wtg 1/1 Running 0 1m spiderpool-init 0/1 Completed 0 1m Configure SR-IOV operator Look up the device information of the RoCE interface. Enter the following command to get NIC vendors 15b3 and deviceIDs 1017 ~# lspci -nn | grep Ethernet af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017] af:00.1 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017] By the way, the number of VFs determines how many SR-IOV network cards can be provided for PODs on a host. The network card from different manufacturers have different amount limit of VFs. For example, the Mellanox connectx5 used in this example can create up to 127 VFs. Apply the following configuration, and the VFs will be created on the host. Notice, this may cause the nodes to reboot, owing to taking effect the new configuration in the network card driver. 
cat <<EOF | kubectl apply -f - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: roce-sriov namespace: kube-system spec: nodeSelector: kubernetes.io/os: \"linux\" resourceName: mellanoxroce priority: 99 numVfs: 12 nicSelector: deviceID: \"1017\" rootDevices: 0000:af:00.0 vendor: \"15b3\" deviceType: netdevice isRdma: true EOF Verify the available resources on the node, including the reported SR-IOV device resources: ~# kubectl get no -o json | jq -r '[.items[] | {name:.metadata.name, allocable:.status.allocatable}]' [ { \"name\": \"10-20-1-10\", \"allocable\": { \"cpu\": \"40\", \"pods\": \"110\", \"spidernet.io/mellanoxroce\": \"12\", ... } }, ... ] Create macvlan CNI configuration and corresponding ippool resources. cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: v4-81 spec: gateway: 172.81.0.1 ips: 172.81.0.100-172.81.0.120 subnet: 172.81.0.0/16 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: roce-sriov namespace: kube-system spec: cniType: sriov sriov: resourceName: spidernet.io/mellanoxroce enableRdma: true ippools: ipv4: [\"v4-81\"] EOF Following the configurations from the previous step, create a DaemonSet application that spans across nodes for testing ANNOTATION_MULTUS=\"v1.multus-cni.io/default-network: kube-system/roce-sriov\" RESOURCE=\"spidernet.io/mellanoxroce\" NAME=rdma-sriov cat <<EOF | kubectl apply -f - apiVersion: apps/v1 kind: DaemonSet metadata: name: ${NAME} labels: app: $NAME spec: selector: matchLabels: app: $NAME template: metadata: name: $NAME labels: app: $NAME annotations: ${ANNOTATION_MULTUS} spec: containers: image: docker.io/mellanox/rping-test imagePullPolicy: IfNotPresent name: mofed-test securityContext: capabilities: add: [ \"IPC_LOCK\" ] resources: limits: ${RESOURCE}: 1 command: sh -c | ls -l /dev/infiniband /sys/class/net sleep 1000000 EOF Verify that RDMA communication is correct between the Pods across nodes. Open a terminal and access one Pod to launch a service: ~# rdma link 7/1: mlx53/1: state ACTIVE physicalstate LINK_UP netdev eth0 ~# ibreadlat Open a terminal and access another Pod to launch a service: ~# rdma link 10/1: mlx55/1: state ACTIVE physicalstate LINK_UP netdev eth0 ~# ibreadlat 172.81.0.118 libibverbs: Warning: couldn't stat '/sys/class/infiniband/mlx5_4'. libibverbs: Warning: couldn't stat '/sys/class/infiniband/mlx5_2'. libibverbs: Warning: couldn't stat '/sys/class/infiniband/mlx5_0'. libibverbs: Warning: couldn't stat '/sys/class/infiniband/mlx5_3'. libibverbs: Warning: couldn't stat '/sys/class/infiniband/mlx5_1'. RDMA_Read Latency Test Dual-port : OFF Device : mlx5_5 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF TX depth : 1 Mtu : 1024[B] Link type : Ethernet GID index : 2 Outstand reads : 16 rdma_cm QPs : OFF Data ex. method : Ethernet local address: LID 0000 QPN 0x0b69 PSN 0xd476c2 OUT 0x10 RKey 0x006f00 VAddr 0x00000001f91000 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:81:00:105 remote address: LID 0000 QPN 0x0d69 PSN 0xbe5c89 OUT 0x10 RKey 0x004f00 VAddr 0x0000000160d000 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:81:00:118 Conflicting CPU frequency values detected: 2200.000000 != 1338.151000. CPU Frequency is not max. Conflicting CPU frequency values detected: 2200.000000 != 2881.668000. CPU Frequency is" } ]
{ "category": "Runtime", "file_name": "rdma-roce.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "https://github.com/containerd/containerd/pull/646 We merged a PR to add our first pass of container level metrics to our prometheus output in containerd. We still have to review the metric names and structure before having something we are all comfortable supporting in the long run but we now have comprehensive metrics for all containers running on containerd. After hitting a major milestone of getting a proof of concept for end to end pull and run, this week the focus was on getting that code in the right place and figure out where the implementation gaps are. https://github.com/containerd/containerd/pull/660 We merged support for getting an image config that was pulled off of a registry and generating a spec based on the image properties in the `ctr` command. This will let you pull images off of a registry and run them based on the config and how the image was built. Its very simple at the moment but will will be porting over the default spec and generation code from Docker soon into a package that can be easily consumed by clients of containerd. You can test this by running: ```console bash sudo dist pull docker.io/library/redis:alpine sudo ctr run --id redis -t docker.io/library/redis:alpine ``` https://github.com/containerd/containerd/pull/638 We refactored the fetch command into a more generic image handler interface. As we look forward to supporting the full oci image spec as well as the Docker distribution specifications, we are removing any opinionated code to make distribution as generalized and efficient as possible. ```console $ dist images REF TYPE DIGEST SIZE docker.io/library/redis:latest application/vnd.docker.distribution.manifest.v2+json sha256:1b358a2b0dc2629af3ed75737e2f07e5b3408eabf76a8fa99606ec0c276a93f8 71.0 MiB ``` https://github.com/containerd/containerd/pull/635 The `overlay` and `btrfs` driver implementations are now fully implemented and share an implementation for metadata storage. This new metadata storage package allows not only making snapshot drivers easier, but allow us to focus on making our existing drivers more resilient and stable once." } ]
{ "category": "Runtime", "file_name": "2017-03-24.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "|](<https://goreportcard.com/report/github.com/kanisterio/kanister>)|](<https://github.com/kanisterio/kanister/actions>)| |-|--| The design of Kanister was driven by the following main goals: Application-Centric: Given the increasingly complex and distributed nature of cloud-native data services, there is a growing need for data management tasks to be at the application level. Experts who possess domain knowledge of a specific application\\'s needs should be able to capture these needs when performing data operations on that application. API Driven: Data management tasks for each specific application may vary widely, and these tasks should be encapsulated by a well-defined API so as to provide a uniform data management experience. Each application expert can provide an application-specific pluggable implementation that satisfies this API, thus enabling a homogeneous data management experience of diverse and evolving data services. Extensible: Any data management solution capable of managing a diverse set of applications must be flexible enough to capture the needs of custom data services running in a variety of environments. Such flexibility can only be provided if the solution itself can easily be extended. Follow the instructions in the section to get section to get Kanister up and running on your Kubernetes cluster. Then see Kanister in action by going through the walkthrough under . The section provides architectural insights into how things work. We recommend that you take a look at it." } ]
{ "category": "Runtime", "file_name": "overview.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "Casdoor is a UI-first centralized authentication / Single-Sign-On (SSO) platform supporting OAuth 2.0, OIDC and SAML, integrated with Casbin RBAC and ABAC permission management. This document covers configuring Casdoor identity provider support with MinIO. Configure and install casdoor server by following . For a quick installation, docker-compose reference configs are also available on the . Go to Applications Create or use an existing Casdoor application Edit the application Copy `Client ID` and `Client secret` Add your redirect url (callback url) to `Redirect URLs` Save Go to Users Edit the user Add your MinIO policy (ex: `readwrite`) in `Tag` Save Open your favorite browser and visit: http://`CASDOOR_ENDPOINT`/.well-known/openid-configuration, you will see the OIDC configure of Casdoor. ``` export MINIOROOTUSER=minio export MINIOROOTPASSWORD=minio123 minio server /mnt/export ``` Here are all the available options to configure OpenID connect ``` mc admin config set myminio/ identity_openid KEY: identity_openid enable OpenID SSO support ARGS: config_url* (url) openid discovery document e.g. \"https://accounts.google.com/.well-known/openid-configuration\" client_id (string) unique public identifier for apps e.g. \"292085223830.apps.googleusercontent.com\" claim_name (string) JWT canned policy claim name, defaults to \"policy\" claim_prefix (string) JWT claim namespace prefix e.g. \"customer1/\" scopes (csv) Comma separated list of OpenID scopes for server, defaults to advertised scopes from discovery document e.g. \"email,admin\" comment (sentence) optionally add a comment to this setting ``` and ENV based options ``` mc admin config set myminio/ identity_openid --env KEY: identity_openid enable OpenID SSO support ARGS: MINIOIDENTITYOPENIDCONFIGURL* (url) openid discovery document e.g. \"https://accounts.google.com/.well-known/openid-configuration\" MINIOIDENTITYOPENIDCLIENTID (string) unique public identifier for apps e.g. \"292085223830.apps.googleusercontent.com\" MINIOIDENTITYOPENIDCLAIMNAME (string) JWT canned policy claim name, defaults to \"policy\" MINIOIDENTITYOPENIDCLAIMPREFIX (string) JWT claim namespace prefix e.g. \"customer1/\" MINIOIDENTITYOPENID_SCOPES (csv) Comma separated list of OpenID scopes for server, defaults to advertised scopes from discovery document e.g. \"email,admin\" MINIOIDENTITYOPENID_COMMENT (sentence) optionally add a comment to this setting ``` Set `identityopenid` config with `configurl`, `client_id` and restart MinIO ``` ~ mc admin config set myminio identityopenid configurl=\"http://CASDOORENDPOINT/.well-known/openid-configuration\" clientid=<client id> clientsecret=<client secret> claimname=\"tag\" ``` NOTE: As MinIO needs to use a claim attribute in JWT for its policy, you should configure it in casdoor as well. Currently, casdoor uses `tag` as a workaround for configuring MinIO's policy. Once successfully set restart the MinIO instance. ``` mc admin service restart myminio ``` On another terminal run `web-identity.go` a sample client application which obtains JWT idtokens from an identity provider, in our case its Keycloak. Uses the returned idtoken response to get new temporary credentials from the MinIO server using the STS API call `AssumeRoleWithWebIdentity`. 
``` $ go run docs/sts/web-identity.go -cid account -csec 072e7f00-4289-469c-9ab2-bbe843c7f5a8 -config-ep \"http://CASDOOR_ENDPOINT/.well-known/openid-configuration\" -port 8888 2018/12/26 17:49:36 listening on http://localhost:8888/ ``` This will open the login page of Casdoor, upon successful login, STS credentials along with any buckets discovered using the credentials will be printed on the screen, for example: ``` { buckets: [ ], credentials: { AccessKeyID: \"EJOLVY3K3G4BF37YD1A0\", SecretAccessKey: \"1b+w8LlDqMQOquKxIlZ2ggP+bgE51iwNG7SUVPJJ\", SessionToken: \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJFSk9MVlkzSzNHNEJGMzdZRDFBMCIsImFkZHJlc3MiOltdLCJhZmZpbGlhdGlvbiI6IiIsImFwcGxlIjoiIiwiYXVkIjpbIjI0YTI1ZWEwNzE0ZDkyZTc4NTk1Il0sImF2YXRhciI6Imh0dHBzOi8vY2FzYmluLm9yZy9pbWcvY2FzYmluLnN2ZyIsImF6dXJlYWQiOiIiLCJiaW8iOiIiLCJiaXJ0aGRheSI6IiIsImNyZWF0ZWRJcCI6IiIsImNyZWF0ZWRUaW1lIjoiMjAyMS0xMi0wNlQyMzo1ODo0MyswODowMCIsImRpbmd0YWxrIjoiIiwiZGlzcGxheU5hbWUiOiJjYmMiLCJlZHVjYXRpb24iOiIiLCJlbWFpbCI6IjE5OTkwNjI2LmxvdmVAMTYzLmNvbSIsImV4cCI6MTY0MzIwMjIyMCwiZmFjZWJvb2siOiIiLCJnZW5kZXIiOiIiLCJnaXRlZSI6IiIsImdpdGh1YiI6IiIsImdpdGxhYiI6IiIsImdvb2dsZSI6IiIsImhhc2giOiIiLCJob21lcGFnZSI6IiIsImlhdCI6MTY0MzE5MjEwMSwiaWQiOiIxYzU1NTgxZS01ZmEyLTQ4NTEtOWM2NC04MjNhNjYyZDBkY2IiLCJpZENhcmQiOiIiLCJpZENhcmRUeXBlIjoiIiwiaXNBZG1pbiI6dHJ1ZSwiaXNEZWZhdWx0QXZhdGFyIjpmYWxzZSwiaXNEZWxldGVkIjpmYWxzZSwiaXNGb3JiaWRkZW4iOmZhbHNlLCJpc0dsb2JhbEFkbWluIjp0cnVlLCJpc09ubGluZSI6ZmFsc2UsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6ODAwMCIsImxhbmd1YWdlIjoiIiwibGFyayI6IiIsImxhc3RTaWduaW5JcCI6IiIsImxhc3RTaWduaW5UaW1lIjoiIiwibGRhcCI6IiIsImxpbmtlZGluIjoiIiwibG9jYXRpb24iOiIiLCJuYW1lIjoiY2JjIiwibmJmIjoxNjQzMTkyMTAxLCJub25jZSI6Im51bGwiLCJvd25lciI6ImJ1aWx0LWluIiwicGFzc3dvcmQiOiIiLCJwYXNzd29yZFNhbHQiOiIiLCJwZXJtYW5lbnRBdmF0YXIiOiIiLCJwaG9uZSI6IjE4ODE3NTgzMjA3IiwicHJlSGFzaCI6IjAwY2JiNGEyOTBjZDBjZDgwZmZkZWMyZjBhOWJlM2E2IiwicHJvcGVydGllcyI6e30sInFxIjoiIiwicmFua2luZyI6MCwicmVnaW9uIjoiIiwic2NvcmUiOjIwMDAsInNpZ251cEFwcGxpY2F0aW9uIjoiYXBwLWJ1aWx0LWluIiwic2xhY2siOiIiLCJzdWIiOiIxYzU1NTgxZS01ZmEyLTQ4NTEtOWM2NC04MjNhNjYyZDBkY2IiLCJ0YWciOiJyZWFkd3JpdGUiLCJ0aXRsZSI6IiIsInR5cGUiOiJub3JtYWwtdXNlciIsInVwZGF0ZWRUaW1lIjoiIiwid2VjaGF0IjoiIiwid2Vjb20iOiIiLCJ3ZWlibyI6IiJ9.C5ZoJrojpRSePgEf9O-JTnc9BgoDNC5JX5AxlE9npd2tNl3ftudhny47pG6GgNDeiCMiaxueNybHPEPltJTw\", SignerType: 1 } } ``` Open MinIO URL on the browser, lets say <http://localhost:9000/> Click on `Login with SSO` User will be redirected to the Casdoor user login page, upon successful login the user will be redirected to MinIO page and logged in automatically, the user should see now the buckets and objects they have access to." } ]
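As an illustration of using the temporary credentials printed above, any S3-compatible client can be pointed at the MinIO endpoint with them. A sketch using the AWS CLI (access key and secret key taken verbatim from the sample response; the session token is elided here and the endpoint is the local MinIO address used above):

```bash
export AWS_ACCESS_KEY_ID="EJOLVY3K3G4BF37YD1A0"
export AWS_SECRET_ACCESS_KEY="1b+w8LlDqMQOquKxIlZ2ggP+bgE51iwNG7SUVPJJ"
export AWS_SESSION_TOKEN="<SessionToken value from the STS response>"

# List buckets visible to the temporary credentials
aws --endpoint-url http://localhost:9000 s3 ls
```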
{ "category": "Runtime", "file_name": "casdoor.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "This document outlines the options for upgrading from a to a . Kata Containers 2.x is the new focus for the Kata Containers development community. Although Kata Containers 1.x releases will continue to be published for a period of time, once a stable release for Kata Containers 2.x is published, Kata Containers 1.x stable users should consider switching to the Kata 2.x release. To display the current Kata Containers version, run one of the following: ```bash $ kata-runtime --version $ containerd-shim-kata-v2 --version ``` Kata Containers 2.x releases are published on the . Alternatively, if you are using Kata Containers version 1.12.0 or newer, you can check for newer releases using the command line: ```bash $ kata-runtime check --check-version-only ``` There are various other related options. Run `kata-runtime check --help` for further details. The is compatible with the . However, if you have created a local configuration file (`/etc/kata-containers/configuration.toml`), this will mask the newer Kata Containers 2.x configuration file. Since Kata Containers 2.x introduces a number of new options and changes some default values, we recommend that you disable the local configuration file (by moving or renaming it) until you have reviewed the changes to the official configuration file and applied them to your local file if required. As shown in the , Kata Containers provide binaries for popular distributions in their native packaging formats. This allows Kata Containers to be upgraded using the standard package management tools for your distribution. Note: Users should prefer the distribution packaged version of Kata Containers unless they understand the implications of a manual installation. Note: Unless you are an advanced user, if you are using a static installation of Kata Containers, we recommend you remove it and install a instead. If the following command displays the output \"static\", you are using a static version of Kata Containers: ```bash $ ls /opt/kata/bin/kata-runtime &>/dev/null && echo static ``` Static installations are installed in `/opt/kata/`, so to uninstall simply remove this directory. If you understand the implications of using a static installation, to upgrade first , then . See the for details on how to automatically install and configuration a static release with containerd. Note: This section only applies to advanced users who have built their own guest kernel or image. If you are using custom , you must upgrade them to work with Kata Containers 2.x since Kata Containers 1.x assets will not work. See the following for further details: The official assets are packaged meaning they are automatically included in new releases." } ]
{ "category": "Runtime", "file_name": "Upgrading.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "containerd-config - information on the containerd config containerd config [command] The containerd config command has one subcommand, named default, which will display on standard output the default containerd config for this version of the containerd daemon. This output can be piped to a containerd-config.toml(5) file and placed in /etc/containerd to be used as the configuration for containerd on daemon startup. The configuration can be placed in any filesystem location and used with the --config option to the containerd daemon as well. See containerd-config.toml(5) for more information on the containerd configuration options. default : This subcommand will output the TOML formatted containerd configuration to standard output Please file any specific issues that you encounter at https://github.com/containerd/containerd. Phil Estes <estesp@gmail.com> ctr(8), containerd(8), containerd-config.toml(5)" } ]
{ "category": "Runtime", "file_name": "containerd-config.8.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "slug: minio title: HwameiStor Supports MinIO authors: [Simon, Michelle] tags: [Test] This blog introduces an MinIO storage solution built on HwameiStor, and clarifies the detailed test procedures about whether HwameiStor can properly support those basic features and tenant isolation function provided by MinIO. MinIO is a high performance object storage solution with native support for Kubernetes deployments. It can provide distributed, S3-compatible, and multi-cloud storage service in public cloud, private cloud, and edge computing scenarios. MinIO is a software-defined product and released under . It can also run well on x86 and other standard hardware. MinIO is designed to meet private cloud's requirements for high performance, in addition to all required features of object storage. MinIO features easy to use, cost-effective, and high performance in providing scalable cloud-native object storage services. MinIO works well in traditional object storage scenarios, such as secondary storage, disaster recovery, and archiving. It also shows competitive capabilities in machine learning, big data, private cloud, hybrid cloud, and other emerging fields to well support data analysis, high performance workloads, and cloud-native applications. MinIO is designed for the cloud-native architecture, so it can be run as a lightweight container and managed by external orchestration tools like Kubernetes. The MinIO package comprises of static binary files less than 100 MB. This small package enables it to efficiently use CPU and memory resources even with high workloads and can host a large number of tenants on shared hardware. MinIO's architecture is as follows: MinIO can run on a standard server that have installed proper local drivers (JBOD/JBOF). An MinIO cluster has a totally symmetric architecture. In other words, each server provide same functions, without any name node or metadata server. MinIO can write both data and metadata as objects, so there is no need to use metadata servers. MinIO provides erasure coding, bitrot protection, encryption and other features in a strict and consistent way. Each MinIO cluster is a set of distributed MinIO servers, one MinIO process running on each node. MinIO runs in a userspace as a single process, and it uses lightweight co-routines for high concurrence. It divides drivers into erasure sets (generally 16 drivers in each set), and uses the deterministic hash algorithm to place objects into these erasure sets. MinIO is specifically designed for large-scale and multi-datacenter cloud storage service. Tenants can run their own MinIO clusters separately from others, getting rid of interruptions from upgrade or security" }, { "data": "Tenants can scale up by connecting multi clusters across geographical regions. A Kubernetes cluster was deployed with three virtual machines: one as the master node and two as worker nodes. The kubelet version is 1.22.0. Deploy HwameiStor local storage on Kubernetes: Allocate five disks (SDB, SDC, SDD, SDE, and SDF) for each worker node to support HwameiStor local disk management: Check node status of local storage: Create storageClass: This section will show how to deploy minio-operator, how to create a tenant, and how to configure HwameiStor local volumes. 
Copy minio-operator repo to your local environment ``` git clone <https://github.com/minio/operator.git> ``` Enter helm operator directory `/root/operator/helm/operator` Deploy the minio-operator instance ``` helm install minio-operator \\ --namespace minio-operator \\ --create-namespace \\ --generate-name . --set persistence.storageClass=local-storage-hdd-lvm . ``` Check minio-operator running status Enter the `/root/operator/examples/kustomization/base` directory and change `tenant.yaml` Enter the `/root/operator/helm/tenant/` directory and change `values.yaml` Enter `/root/operator/examples/kustomization/tenant-lite` directory and change `kustomization.yaml` Change `tenant.yaml` Change `tenantNamePatch.yaml` Create a tenant ``` kubectl apply k . ``` Check resource status of the tenant minio-t1 To create another new tenant, you can first create a new directory `tenant` (in this example `tenant-lite-2`) under `/root/operator/examples/kustomization` and change the files listed above Run `kubectl apply k .` to create the new tenant `minio-t2` Run the following commands in sequence to finish this configuration: ``` kubectl get statefulset.apps/minio-t1-pool-0 -nminio-tenant -oyaml ``` ``` kubectl get pvc A ``` ``` kubectl get pvc export-minio6-0 -nminio-6 -oyaml ``` ``` kubectl get pv ``` ``` kubectl get pvc data0-minio-t1-pool-0-0 -nminio-tenant -oyaml ``` ``` kubectl get lv ``` ``` kubect get lvr ``` With the above settings in place, now let's test basic features and tenant isolation. Log in to `minio console10.6.163.52:30401/login` Get JWT by `kubectl minio proxy -n minio-operator` Browse and manage information about newly-created tenants Log in as tenant minio-t1 (Account: minio) Browse bucket bk-1 Create a new bucket bk-1-1 Create path path-1-2 Upload the file Upload the folder Create a user with read-only permission Log in as tenant minio-t2 Only minio-t2 information is visible. You cannot see information about tenant minio-t1. Create bucket Create path Upload the file Create a user Configure user policies Delete a bucket In this test, we successfully deployed MinIO distributed object storage on the basis of Kubernetes 1.22 and the HwameiStor local storage. We performed the basic feature test, system security test, and operation and maintenance management test. All tests are passed, proving HwameiStor can well support for MinIO." } ]
{ "category": "Runtime", "file_name": "2022-07-16_minio-test.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "The `snapshot-editor` is a program for modification of Firecracker snapshots. Firecracker snapshot consists of 2 files: `vmstate` file: file with Firecracker internal data such as vcpu states, devices states etc. `memory` file: file with guest memory. This command is used to merge a `diff` snapshot memory file on top of a base memory file. Note You can also use `rebase-snap` (deprecated) tool for this. Arguments: `MEMORY_PATH` - path to the `memory` file `DIFF_PATH` - path to the `diff` file Usage: ```bash snapshot-editor edit-memory rebase \\ --memory-path <MEMORY_PATH> \\ --diff-path <DIFF_PATH> ``` Example: ```bash snapshot-editor edit-memory rebase \\ --memory-path ./memory_file \\ --diff-path ./diff_file ``` This command is used to remove specified registers from vcpu states inside vmstate snapshot file. Arguments: `VMSTATE_PATH` - path to the `vmstate` file `OUTPUT_PATH` - path to the file where the output will be placed `[REGS]` - set of u32 values representing registers ids as they are defined in KVM. Can be both in decimal and in hex formats. Usage: ```bash snapshot-editor edit-vmstate remove-regs \\ --vmstate-path <VMSTATE_PATH> \\ --output-path <OUTPUT_PATH> \\ [REGS]... ``` Example: ```bash ./snapshot-editor edit-vmstate remove-regs \\ --vmstate-path ./vmstate_file \\ --output-path ./newvmstatefile \\ 0x1 0x2 ``` This command is used to print version of the provided vmstate file. Arguments: `VMSTATE_PATH` - path to the `vmstate` file Usage: ```bash snapshot-editor info-vmstate version --vmstate-path <VMSTATE_PATH> ``` Example: ```bash ./snapshot-editor info-vmstate version --vmstate-path ./vmstate_file ``` This command is used to print the vCPU states inside vmstate snapshot file. Arguments: `VMSTATE_PATH` - path to the `vmstate` file Usage: ```bash snapshot-editor info-vmstate vcpu-states --vmstate-path <VMSTATE_PATH> ``` Example: ```bash ./snapshot-editor info-vmstate vcpu-states --vmstate-path ./vmstate_file ``` This command is used to print the vmstate of snapshot file in readable format thus, making it easier to compare vmstate of 2 snapshots. Arguments: `VMSTATE_PATH` - path to the `vmstate` file Usage: ```bash snapshot-editor info-vmstate vm-state --vmstate-path <VMSTATE_PATH> ``` Example: ```bash ./snapshot-editor info-vmstate vm-state --vmstate-path ./vmstate_file ```" } ]
{ "category": "Runtime", "file_name": "snapshot-editor.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Spiderpool-controller needs TLS certificates to run webhook server. You can configure it in several ways. Use Helm's template function to generate TLS certificates. This is the simplest and most common way to configure: ```bash helm install spiderpool spiderpool/spiderpool --namespace kube-system \\ --set spiderpoolController.tls.method=auto ``` Note that the default value of parameter `spiderpoolController.tls.method` is `auto`. If you want to run spiderpool-controller with a self-signed certificate, `provided` would be a good choice. You can use OpenSSL to generate certificates, or run the following script: ```bash wget https://raw.githubusercontent.com/spidernet-io/spiderpool/main/tools/cert/generateCert.sh ``` Generate the certificates: ```bash chmod +x generateCert.sh && ./generateCert.sh \"/tmp/tls\" CA=`cat /tmp/tls/ca.crt | base64 -w0 | tr -d '\\n'` SERVER_CERT=`cat /tmp/tls/server.crt | base64 -w0 | tr -d '\\n'` SERVER_KEY=`cat /tmp/tls/server.key | base64 -w0 | tr -d '\\n'` ``` Then, deploy Spiderpool in the `provided` mode: ```bash helm install spiderpool spiderpool/spiderpool --namespace kube-system \\ --set spiderpoolController.tls.method=provided \\ --set spiderpoolController.tls.provided.tlsCa=${CA} \\ --set spiderpoolController.tls.provided.tlsCert=${SERVER_CERT} \\ --set spiderpoolController.tls.provided.tlsKey=${SERVER_KEY} ``` It is not recommended to use this mode directly, because the Spiderpool requires the TLS certificates provided by cert-manager, while the cert-manager requires the IP address provided by Spiderpool (cycle reference). Therefore, if possible, you must first using other IPAM CNI in the Kubernetes cluster, and then deploy Spiderpool. ```bash helm install spiderpool spiderpool/spiderpool --namespace kube-system \\ --set spiderpoolController.tls.method=certmanager \\ --set spiderpoolController.tls.certmanager.issuerName=${CERTMANAGERISSUER_NAME} ```" } ]
{ "category": "Runtime", "file_name": "certificate.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Extend Velero\" layout: docs Velero includes mechanisms for extending the core functionality to meet your individual backup/restore needs: allow you to specify commands to be executed within running pods during a backup. This is useful if you need to run a workload-specific command prior to taking a backup (for example, to flush disk buffers or to freeze a database). allow you to develop custom object/block storage back-ends or per-item backup/restore actions that can execute arbitrary logic, including modifying the items being backed up/restored. Plugins can be used by Velero without needing to be compiled into the core Velero binary." } ]
{ "category": "Runtime", "file_name": "extend.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Kilo provides a command line tool for inspecting and interacting with clusters: `kgctl`. This tool can be used to understand a mesh's topology, get the WireGuard configuration for a peer, or graph a cluster. `kgctl` requires a Kubernetes configuration file to be provided, either by setting the `KUBECONFIG` environment variable or by providing the `--kubeconfig` flag. The `kgctl` binary is automatically compiled for Linux, macOS, and Windows for every release of Kilo and can be downloaded from . Kilo is written in Golang and as a result the in order to build the `kgctl` binary. To download the Kilo source code and then build and install `kgctl` using the latest commit all with a single command, run: ```shell go install github.com/squat/kilo/cmd/kgctl@latest ``` Alternatively, `kgctl` can be built and installed based on specific version of the code by specifying a Git tag or hash, e.g.: ```shell go install github.com/squat/kilo/cmd/kgctl@0.2.0 ``` When working on Kilo locally, it can be helpful to build and test the `kgctl` binary as part of the development cycle. In order to build a binary from a local checkout of the Git repository, run: ```shell make ``` This will produce a `kgctl` binary at `./bin/<your-os>/<your-architecture>/kgctl`. Install `kgctl` from the Arch User Repository using an AUR helper like `paru` or `yay`: ```shell paru -S kgctl-bin ``` The CLI can be used to install `kgctl` on any OS and architecture: ```shell arkade get kgctl ``` |Command|Syntax|Description| |-|-|-| ||`kgctl connect <peer-name> [flags]`|Connect the host to the cluster, setting up the required interfaces, routes, and keys.| ||`kgctl graph [flags]`|Produce a graph in GraphViz format representing the topology of the cluster.| ||`kgctl showconf ( node \\| peer ) <name> [flags]`|Show the WireGuard configuration for a node or peer in the mesh.| The `connect` command configures the local host as a WireGuard Peer of the cluster and applies all of the necessary networking configuration to connect to the cluster. As long as the process is running, it will watch the cluster for changes and automatically manage the configuration for new or updated Peers and Nodes. If the given Peer name does not exist in the cluster, the command will register a new Peer and generate the necessary WireGuard keys. When the command exits, all of the configuration, including newly registered Peers, is cleaned up. Example: ```shell SERVICECIDR=10.43.0.0/16 kgctl connect --allowed-ips $SERVICECIDR ``` The local host is now connected to the cluster and all IPs from the cluster and any registered Peers are fully routable. By default, `kgctl` will use the local host's hostname as the Peer name in the mesh; this can be overridden by providing an additional argument for the preferred" }, { "data": "When combined with the `--clean-up false` flag, the configuration produced by the command is persistent and will remain in effect even after the process is stopped. With the service CIDR of the cluster routable from the local host, Kubernetes DNS names can now be resolved by the cluster DNS provider. For example, the following snippet could be used to resolve the clusterIP of the Kubernetes API: ```shell dig @$(kubectl get service -n kube-system kube-dns -o=jsonpath='{.spec.clusterIP}') kubernetes.default.svc.cluster.local +short ``` For convenience, the cluster DNS provider's IP address can be configured as the local host's DNS server, making Kubernetes DNS names easily resolvable. 
For example, if using `systemd-resolved`, the following snippet could be used: ```shell systemd-resolve --interface kilo0 --set-dns $(kubectl get service -n kube-system kube-dns -o=jsonpath='{.spec.clusterIP}') --set-domain cluster.local dig kubernetes.default.svc.cluster.local +short ``` Note: The `connect` command is currently only supported on Linux. Note: The `connect` command requires the `CAPNETADMIN` capability in order to configure the host's networking stack; unprivileged users will need to use `sudo` or similar tools. The `graph` command generates a graph in GraphViz format representing the Kilo mesh. This graph can be helpful in understanding or debugging the topology of a network. Example: ```shell kgctl graph ``` This will produce some output in the DOT graph description language, e.g.: ```dot digraph kilo { label=\"10.2.4.0/24\"; labelloc=t; outputorder=nodesfirst; overlap=false; \"ip-10-0-6-7\"->\"ip-10-0-6-146\"[ dir=both ]; \"ip-10-1-13-74\"->\"ip-10-1-20-76\"[ dir=both ]; \"ip-10-0-6-7\"->\"ip-10-1-13-74\"[ dir=both ]; \"ip-10-0-6-7\"->\"squat\"[ dir=both, style=dashed ]; \"ip-10-1-13-74\"->\"squat\"[ dir=both, style=dashed ]; } ; ``` To render the graph, use one of the GraphViz layout tools, e.g. `circo`: ```shell kgctl graph | circo -Tsvg > cluster.svg ``` This will generate an SVG like: <img src=\"./graphs/location.svg\" /> The `showconf` command outputs the WireGuard configuration for a node or peer in the cluster, i.e. the configuration that the node or peer would need to set on its local WireGuard interface in order to participate in the mesh. Example: ```shell NODE=master # the name of a node kgctl showconf node $NODE ``` This will produce some output in INI format, e.g. ```ini [Interface] ListenPort = 51820 [Peer] AllowedIPs = 10.2.0.0/24, 10.1.13.74/32, 10.2.4.0/24, 10.1.20.76/32, 10.4.0.2/32 Endpoint = 3.120.246.76:51820 PersistentKeepalive = 0 PublicKey = IgDTEvasUvxisSAmfBKh8ngFmc2leZBvkRwYBhkybUg= ``` The `--as-peer` flag modifies the behavior of the command so that it outputs the configuration that a different WireGuard interface would need in order to communicate with the specified node or peer. When further combined with the `--output yaml` flag, this command can be useful to register a node in one cluster as a peer of another cluster, e.g.: ```shell NODE=master # the name of a node kgctl --kubeconfig $KUBECONFIG1 showconf node $NODE --as-peer --output yaml | kubectl --kubeconfig $KUBECONFIG2 apply -f - ```" } ]
{ "category": "Runtime", "file_name": "kgctl.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "```json { \"clienttimeoutms\": \"Request timeout time\", \"bodybandwidthmbps\": \"Read body bandwidth, default is 1MBps\", \"bodybasetimeoutms\": \"Read body benchmark time, so the maximum time to read body is bodybasetimeoutms+size/bodybandwidthmbps(converted to ms)\", \"transport_config\": { \"...\": \"See the detailed configuration of the golang http library transport. In general, it can be ignored, and the default configuration is provided in the code\" } } ``` ::: tip Note This default configuration is supported starting from version v3.2.1. ::: The following default configuration is only enabled when all items in `transport_config` are default values. ```json { \"maxconnsper_host\": 10, \"maxidleconns\": 1000, \"maxidleconnsperhost\": 10, \"idleconntimeout_ms\": 10000 } ``` The Lb version mainly implements load balancing, failure node removal and reuse of multiple nodes. Its configuration is based on single-point configuration, with the following additional configuration items. ```json { \"hosts\": \"List of destination hosts for requests\", \"backup_hosts\": \"List of backup destination hosts. When all hosts are unavailable, they will be used\", \"hosttrytimes\": \"Number of retries for each node failure, used in conjunction with node removal. When a target host fails continuously for hosttrytimes times, if the failure removal mechanism is enabled, the node will be removed from the available list\", \"try_times\": \"Number of retries for each request failure\", \"failretryinterval_s\": \"Used in conjunction with node removal to implement the time interval for failed nodes to be reused. If this value is less than or equal to 0, no removal will be performed. The default value is -1\", \"MaxFailsPeriodS\": \"Time interval for recording consecutive failures. For example, if the current node has failed N times, when the time interval between the N+1th failure and the Nth failure is less than this value, the node will be recorded as the N+1th failure. Otherwise, it will be recorded as the first failure\" }" } ]
{ "category": "Runtime", "file_name": "rpc.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This document describes how Firecracker persists microVM state into Firecracker snapshots. It describes the snapshot format, encoding, compatibility and limitations. Firecracker uses the serde crate [1] along with the bincode [2] format to serialize its state into Firecracker snapshots. Firecracker snapshots have versions that are independent of Firecracker versions. Each Firecracker version declares support for a specific snapshot data format version. When creating a snapshot, Firecracker will use the supported snapshot format version. When loading a snapshot, Firecracker will check that format of the snapshot file is compatible with the snapshot version Firecracker supports. Firecracker persists the microVM state as 2 separate objects: a guest memory file a microVM state file. *The block devices attached to the microVM are not considered part of the state and need to be managed separately.* The guest memory file contains the microVM memory saved as a dump of all pages. In the VM state file, Firecracker stores the internal state of the VMM (device emulation, KVM and vCPUs) with 2 exceptions - serial emulation and vsock backend. While we continuously improve and extend Firecracker's features by adding new capabilities, devices or enhancements, the microVM state file may change both structurally and semantically with each new release. A Firecracker snapshot has the following format: | Field | Bits | Description | | -- | - | | | magicid | 64 | Firecracker snapshot and architecture (x8664/aarch64). | | version | M | The snapshot data format version (`MAJOR.MINOR.PATCH`) | | state | N | Bincode blob containing the microVM state. | | crc | 64 | Optional CRC64 sum of magic_id, version and state fields. | The snapshot format has its own version encoded in the snapshot file itself after the snapshot's `magic_id`. The snapshot format version is independent of the Firecracker version and it is of the form `MAJOR.MINOR.PATCH`. Currently, Firecracker uses the for serializing the microVM state. The encoding format that bincode uses does not allow backwards compatible changes in the state, so essentially every change in the microVM state description will result in bump of the format's `MAJOR` version. If the needs arises, we will look into alternative formats that allow more flexibility with regards to backwards compatibility. If/when this happens, we will define how changes in the snapshot format reflect to changes in its `MAJOR.MINOR.PATCH` version. During research and prototyping we considered multiple storage formats. The criteria used for comparing these are: performance, size, rust support, specification, versioning support, community and tooling. Performance, size and Rust support are hard requirements while all others can be the subject of trade offs. More info about this comparison can be found" }, { "data": "Key benefits of using bincode: Minimal snapshot size overhead Minimal CPU overhead Simple implementation The current implementation relies on the . The minimum kernel version required by Firecracker snapshots is 4.14. Snapshots can be saved and restored on the same kernel version without any issues. There might be issues when restoring snapshots created on different host kernel version even when using the same Firecracker version. SnapshotCreate and SnapshotLoad operations across different host kernels is considered unstable in Firecracker as the saved KVM state might have different semantics on different kernels. 
The current Firecracker devices are backwards compatible up to the version that introduces them. Ideally this property would be kept over time, but there are situations when a new version of a device exposes new features to the guest that do not exist in an older version. In such cases restoring a snapshot at an older version becomes impossible without breaking the guest workload. The microVM state file links some resources that are external to the snapshot: tap devices by device name, block devices by block file path, vsock backing Unix domain socket by socket name. To successfully restore a microVM one should check that: tap devices are available, their names match their original names since these are the values saved in the microVM state file, and they are accessible to the Firecracker process where the microVM is being restored, block devices are set up at their original relative or absolute paths with the proper permissions, as the Firecracker process with the restored microVM will attempt to access them exactly as they were accessed in the original Firecracker process, the vsock backing Unix domain socket is available, its name matches the original name, and it is accessible to the new Firecracker process. Firecracker microVMs snapshot functionality is available for Intel/AMD/ARM64 CPU models that support the hardware virtualizations extensions, more details are available . Snapshots are not compatible across CPU architectures and even across CPU models of the same architecture. They are only compatible if the CPU features exposed to the guest are an invariant when saving and restoring the snapshot. The trivial scenario is creating and restoring snapshots on hosts that have the same CPU model. Restoring from an Intel snapshot on AMD (or vice-versa) is not supported. It is important to note that guest workloads can still execute instructions that are being by CPUID and restoring and saving of such workloads will lead to undefined result. Firecracker retrieves the state of a discrete list of MSRs from KVM, more specifically, the MSRs corresponding to the guest exposed features. The microVM state file format is implemented in the in the Firecracker repository. All Firecracker devices implement the trait which exposes an interface that enables creating from and saving to the microVM state." } ]
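To tie the pieces above together, the on-disk layout described in the snapshot format table can be sketched roughly as follows. This is purely illustrative and is not the actual Firecracker source:

```rust
// Illustrative sketch of the snapshot layout described in the format table above.
struct SnapshotHeader {
    magic_id: u64,      // encodes "Firecracker snapshot" plus the architecture (x86_64/aarch64)
    version: String,    // snapshot data format version, "MAJOR.MINOR.PATCH"
}

struct Snapshot {
    header: SnapshotHeader,
    state: Vec<u8>,     // bincode blob containing the microVM state
    crc: Option<u64>,   // optional CRC64 over magic_id, version and state
}
```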
{ "category": "Runtime", "file_name": "versioning.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "We definitely welcome your patches and contributions to gRPC! Please read the gRPC organization's and before proceeding. If you are new to github, please start by reading In order to protect both you and ourselves, you will need to sign the . How to get your contributions merged smoothly and quickly. Create small PRs that are narrowly focused on addressing a single concern. We often times receive PRs that are trying to fix several things at a time, but only one fix is considered acceptable, nothing gets merged and both author's & review's time is wasted. Create more PRs to address different concerns and everyone will be happy. The grpc package should only depend on standard Go packages and a small number of exceptions. If your contribution introduces new dependencies which are NOT in the , you need a discussion with gRPC-Go authors and consultants. For speculative changes, consider opening an issue and discussing it first. If you are suggesting a behavioral or API change, consider starting with a [gRFC proposal](https://github.com/grpc/proposal). Provide a good PR description as a record of what change is being made and why it was made. Link to a github issue if it exists. Don't fix code style and formatting unless you are already changing that line to address an issue. PRs with irrelevant changes won't be merged. If you do want to fix formatting or style, do that in a separate PR. Unless your PR is trivial, you should expect there will be reviewer comments that you'll need to address before merging. We expect you to be reasonably responsive to those comments, otherwise the PR will be closed after 2-3 weeks of inactivity. Maintain clean commit history and use meaningful commit messages. PRs with messy commit history are difficult to review and won't be merged. Use `rebase -i upstream/master` to curate your commit history and/or to bring in latest changes from master (but avoid rebasing in the middle of a code review). Keep your PR up to date with upstream/master (if there are merge conflicts, we can't really merge your change). All tests need to be passing before your change can be merged. We recommend you run tests locally before creating your PR to catch breakages early on. `make all` to test everything, OR `make vet` to catch vet errors `make test` to run the tests `make testrace` to run tests in race mode optional `make testappengine` to run tests with appengine Exceptions to the rules can be made if there's a compelling reason for doing so." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "% Include content from ```{include} ../CONTRIBUTING.md :start-after: <!-- Include start contributing --> :end-before: <!-- Include end contributing --> ``` Follow the steps below to set up your development environment to get started working on new features for Incus. To build the dependencies, follow the instructions in {ref}`installingfromsource`. After setting up your build environment, add your GitHub fork as a remote: git remote add myfork git@github.com:<your_username>/incus.git git remote update Then switch to it: git checkout myfork/main Finally, you should be able to run `make` inside the repository and build your fork of the project. At this point, you most likely want to create a new branch for your changes on your fork: ```bash git checkout -b [nameofyournewbranch] git push myfork [nameofyournewbranch] ``` Persistent data is stored in the `INCUS_DIR` directory, which is generated by `incus admin init`. The `INCUS_DIR` defaults to `/var/lib/incus`. As you develop, you may want to change the `INCUS_DIR` for your fork of Incus so as to avoid version conflicts. Binaries compiled from your source will be generated in the `$(go env GOPATH)/bin` directory by default. You will need to explicitly invoke these binaries (not the global `incusd` you may have installed) when testing your changes. You may choose to create an alias in your `~/.bashrc` to call these binaries with the appropriate flags more conveniently. If you have a `systemd` service configured to run the Incus daemon from a previous installation of Incus, you may want to disable it to avoid version conflicts. We want Incus to be as easy and straight-forward to use as possible. Therefore, we aim to provide documentation that contains the information that users need to work with Incus, that covers all common use cases, and that answers typical questions. You can contribute to the documentation in various different ways. We appreciate your contributions! Typical ways to contribute are: Add or update documentation for new features or feature improvements that you contribute to the code. We'll review the documentation update and merge it together with your code. Add or update documentation that clarifies any doubts you had when working with the product. Such contributions can be done through a pull request or through a post in the section on the" }, { "data": "New tutorials will be considered for inclusion in the docs (through a link or by including the actual content). To request a fix to the documentation, open a documentation issue on . We'll evaluate the issue and update the documentation accordingly. Post a question or a suggestion on the . We'll monitor the posts and, if needed, update the documentation accordingly. Ask questions or provide suggestions in the `#lxc` channel on . Given the dynamic nature of IRC, we cannot guarantee answers or reactions to IRC posts, but we monitor the channel and try to improve our documentation based on the received feedback. % Include content from ```{include} README.md :start-after: <!-- Include start docs --> ``` When you open a pull request, a preview of the documentation output is built automatically. GitHub runs automatic checks on the documentation to verify the spelling, the validity of links, correct formatting of the Markdown files, and the use of inclusive language. You can (and should!) 
run these tests locally as well with the following commands: Check the spelling: `make doc-spellcheck` Check the validity of links: `make doc-linkcheck` Check the Markdown formatting: `make doc-lint` Check for inclusive language: `make doc-woke` ```{note} We are currently in the process of moving the documentation of configuration options to code comments. At the moment, not all configuration options follow this approach. ``` The documentation of configuration options is extracted from comments in the Go code. Look for comments that start with `gendoc:generate` in the code. When you add or change a configuration option, make sure to include the required documentation comment for it. Then run `make generate-config` to re-generate the `doc/config_options.txt` file. The updated file should be checked in. The documentation includes sections from the `doc/config_options.txt` to display a group of configuration options. For example, to include the core server options: ```` % Include content from ```{include} config_options.txt :start-after: <!-- config group server-core start --> :end-before: <!-- config group server-core end --> ``` ```` If you add a configuration option to an existing group, you don't need to do any updates to the documentation files. The new option will automatically be picked up. You only need to add an include to a documentation file if you are defining a new group." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|" } ]
{ "category": "Runtime", "file_name": "fuzzy_mode_convert_table.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "This documentation simply describes how to build the kubemark image used in ```bash cd $KUBERNETES_PATH git checkout v1.29.0 make WHAT=cmd/kubemark KUBEBUILDPLATFORMS=linux/amd64 cp ./_output/local/bin/linux/amd64/kubemark cluster/images/kubemark cd cluster/images/kubemark docker build -t antrea/kubemark:v1.29.0 . ```" } ]
{ "category": "Runtime", "file_name": "build-kubemark.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "name: Enhancement Request about: Suggest an idea for this project Is your feature request related to a problem?/Why is this needed <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Describe the solution you'd like in detail <!-- A clear and concise description of what you want to happen. --> Describe alternatives you've considered <!-- A clear and concise description of any alternative solutions or features you've considered. --> Additional context <!-- Add any other context or screenshots about the feature request here. -->" } ]
{ "category": "Runtime", "file_name": "enhancement.md", "project_name": "Container Storage Interface (CSI)", "subcategory": "Cloud Native Storage" }
[ { "data": "The official binary releases of containerd are available for the `amd64` (also known as `x86_64`) and `arm64` (also known as `aarch64`) architectures. Typically, you will have to install and from their official sites too. Download the `containerd-<VERSION>-<OS>-<ARCH>.tar.gz` archive from https://github.com/containerd/containerd/releases , verify its sha256sum, and extract it under `/usr/local`: ```console $ tar Cxzvf /usr/local containerd-1.6.2-linux-amd64.tar.gz bin/ bin/containerd-shim-runc-v2 bin/containerd-shim bin/ctr bin/containerd-shim-runc-v1 bin/containerd bin/containerd-stress ``` The `containerd` binary is built dynamically for glibc-based Linux distributions such as Ubuntu and Rocky Linux. This binary may not work on musl-based distributions such as Alpine Linux. Users of such distributions may have to install containerd from the source or a third party package. FAQ: For Kubernetes, do I need to download `cri-containerd-(cni-)<VERSION>-<OS-<ARCH>.tar.gz` too? Answer: No. As the Kubernetes CRI feature has been already included in `containerd-<VERSION>-<OS>-<ARCH>.tar.gz`, you do not need to download the `cri-containerd-....` archives to use CRI. The `cri-containerd-...` archives are , do not work on old Linux distributions, and will be removed in containerd 2.0. If you intend to start containerd via systemd, you should also download the `containerd.service` unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service into `/usr/local/lib/systemd/system/containerd.service`, and run the following commands: ```bash systemctl daemon-reload systemctl enable --now containerd ``` Download the `runc.<ARCH>` binary from https://github.com/opencontainers/runc/releases , verify its sha256sum, and install it as `/usr/local/sbin/runc`. ```console $ install -m 755 runc.amd64 /usr/local/sbin/runc ``` The binary is built statically and should work on any Linux distribution. Download the `cni-plugins-<OS>-<ARCH>-<VERSION>.tgz` archive from https://github.com/containernetworking/plugins/releases , verify its sha256sum, and extract it under `/opt/cni/bin`: ```console $ mkdir -p /opt/cni/bin $ tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz ./ ./macvlan ./static ./vlan ./portmap ./host-local ./vrf ./bridge ./tuning ./firewall ./host-device ./sbr ./loopback ./dhcp ./ptp ./ipvlan ./bandwidth ``` The binaries are built statically and should work on any Linux distribution. The `containerd.io` packages in DEB and RPM formats are distributed by Docker (not by the containerd project). See the Docker documentation for how to set up `apt-get` or `dnf` to install `containerd.io` packages: The `containerd.io` package contains runc too, but does not contain CNI plugins. To install containerd and its dependencies from the source, see . 
From an elevated PowerShell session (running as Admin) run the following commands: ```PowerShell Stop-Service containerd $Version=\"1.7.13\" # update to your preferred version $Arch = \"amd64\" # arm64 also available curl.exe -LO https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-$Arch.tar.gz tar.exe xvf .\\containerd-$Version-windows-$Arch.tar.gz Copy-Item -Path .\\bin -Destination $Env:ProgramFiles\\containerd -Recurse -Force $Path = [Environment]::GetEnvironmentVariable(\"PATH\", \"Machine\") + [IO.Path]::PathSeparator + \"$Env:ProgramFiles\\containerd\" $Env:Path = [System.Environment]::GetEnvironmentVariable(\"Path\",\"Machine\") + \";\" + [System.Environment]::GetEnvironmentVariable(\"Path\",\"User\") containerd.exe config default | Out-File $Env:ProgramFiles\\containerd\\config.toml -Encoding ascii Get-Content $Env:ProgramFiles\\containerd\\config.toml containerd.exe --register-service Start-Service containerd ``` Tip for Running `containerd` Service on Windows: `containerd` logs are not persisted when we start it as a service using Windows Service Manager. nssm can be used to configure logs to go into a cyclic buffer: ```powershell nssm.exe install containerd nssm.exe set containerd AppStdout \"\\containerd.log\" nssm.exe set containerd AppStderr \"\\containerd.err.log\" nssm.exe start containerd # to stop: nssm.exe stop containerd ``` There are several command line interface (CLI) projects for interacting with containerd: Name | Community | API | Target | Web site | --|--|--|--|--| `ctr` | containerd | Native | For debugging only | (None, see `ctr --help` to learn the usage) | `nerdctl` | containerd (non-core) | Native | General-purpose |" }, { "data": "| `crictl` | Kubernetes SIG-node | CRI | For debugging only | https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md | While the `ctr` tool is bundled together with containerd, it should be noted that the `ctr` tool is solely made for debugging containerd. The `nerdctl` tool provides a stable and human-friendly user experience. Example (`ctr`): ```bash ctr images pull docker.io/library/redis:alpine ctr run docker.io/library/redis:alpine redis ``` Example (`nerdctl`): ```bash nerdctl run --name redis redis:alpine ``` containerd has built-in support for Kubernetes Container Runtime Interface (CRI). To set up containerd nodes for managed Kubernetes services, see the service providers' documentations: For non-managed environments, see the following Kubernetes documentations: - - containerd uses a configuration file located in `/etc/containerd/config.toml` for specifying daemon level options. A sample configuration file can be found . The default configuration can be generated via `containerd config default > /etc/containerd/config.toml`. There are many different ways to use containerd. If you are a developer working on containerd you can use the `ctr` tool or the `nerdctl` tool to quickly test features and functionality without writing extra code. However, if you want to integrate containerd into your project we have an easy to use client package that allows you to work with containerd. In this guide we will pull and run a redis server with containerd using the client package. This project requires a recent version of Go. See the header of for the recommended Go version. We will start a new `main.go` file and import the containerd client package. 

```go package main import ( \"log\" containerd \"github.com/containerd/containerd/v2/client\" ) func main() { if err := redisExample(); err != nil { log.Fatal(err) } } func redisExample() error { client, err := containerd.New(\"/run/containerd/containerd.sock\") if err != nil { return err } defer client.Close() return nil } ``` This will create a new client with the default containerd socket path. Because we are working with a daemon over GRPC we need to create a `context` for use with calls to client methods. containerd is also namespaced for callers of the API. We should also set a namespace for our guide after creating the context. ```go ctx := namespaces.WithNamespace(context.Background(), \"example\") ``` Having a namespace for our usage ensures that containers, images, and other resources without containerd do not conflict with other users of a single daemon. Now that we have a client to work with we need to pull an image. We can use the redis image based on Alpine Linux from the DockerHub. ```go image, err := client.Pull(ctx, \"docker.io/library/redis:alpine\", containerd.WithPullUnpack) if err != nil { return err } ``` The containerd client uses the `Opts` pattern for many of the method calls. We use the `containerd.WithPullUnpack` so that we not only fetch and download the content into containerd's content store but also unpack it into a snapshotter for use as a root filesystem. Let's put the code together that will pull the redis image based on alpine linux from Dockerhub and then print the name of the image on the console's output. ```go package main import ( \"context\" \"log\" containerd \"github.com/containerd/containerd/v2/client\" \"github.com/containerd/containerd/v2/pkg/namespaces\" ) func main() { if err := redisExample(); err != nil { log.Fatal(err) } } func redisExample() error { client, err := containerd.New(\"/run/containerd/containerd.sock\") if err != nil { return err } defer client.Close() ctx := namespaces.WithNamespace(context.Background(), \"example\") image, err := client.Pull(ctx, \"docker.io/library/redis:alpine\"," }, { "data": "if err != nil { return err } log.Printf(\"Successfully pulled %s image\\n\", image.Name()) return nil } ``` ```bash go build main.go sudo ./main 2017/08/13 17:43:21 Successfully pulled docker.io/library/redis:alpine image ``` Now that we have an image to base our container off of, we need to generate an OCI runtime specification that the container can be based off of as well as the new container. containerd provides reasonable defaults for generating OCI runtime specs. There is also an `Opt` for modifying the default config based on the image that we pulled. The container will be based off of the image, and we will: allocate a new read-write snapshot so the container can store any persistent information. create a new spec for the container. ```go container, err := client.NewContainer( ctx, \"redis-server\", containerd.WithNewSnapshot(\"redis-server-snapshot\", image), containerd.WithNewSpec(oci.WithImageConfig(image)), ) if err != nil { return err } defer container.Delete(ctx, containerd.WithSnapshotCleanup) ``` If you have an existing OCI specification created you can use `containerd.WithSpec(spec)` to set it on the container. When creating a new snapshot for the container we need to provide a snapshot ID as well as the Image that the container will be based on. By providing a separate snapshot ID than the container ID we can easily reuse, existing snapshots across different containers. 
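As an aside, if the image defaults are not enough, `containerd.WithNewSpec` accepts additional `oci.SpecOpts` that are applied on top of `oci.WithImageConfig`. Below is a hedged sketch of the same `NewContainer` call with two extra options; the option names come from the `oci` package and the concrete values are purely illustrative, not part of this guide's example.

```go
// Stack additional SpecOpts on top of the image defaults. Options are applied
// in order, so the image configuration comes first and later options win.
container, err := client.NewContainer(
	ctx,
	"redis-server",
	containerd.WithNewSnapshot("redis-server-snapshot", image),
	containerd.WithNewSpec(
		oci.WithImageConfig(image),
		oci.WithEnv([]string{"REDIS_EXTRA=1"}),                 // add an environment variable (illustrative)
		oci.WithProcessArgs("redis-server", "--port", "6380"), // override the command line (illustrative)
	),
)
if err != nil {
	return err
}
```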
We also add a line to delete the container along with its snapshot after we are done with this example. Here is example code to pull the redis image based on alpine linux from Dockerhub, create an OCI spec, create a container based on the spec and finally delete the container. ```go package main import ( \"context\" \"log\" containerd \"github.com/containerd/containerd/v2/client\" \"github.com/containerd/containerd/v2/pkg/namespaces\" \"github.com/containerd/containerd/v2/pkg/oci\" ) func main() { if err := redisExample(); err != nil { log.Fatal(err) } } func redisExample() error { client, err := containerd.New(\"/run/containerd/containerd.sock\") if err != nil { return err } defer client.Close() ctx := namespaces.WithNamespace(context.Background(), \"example\") image, err := client.Pull(ctx, \"docker.io/library/redis:alpine\", containerd.WithPullUnpack) if err != nil { return err } log.Printf(\"Successfully pulled %s image\\n\", image.Name()) container, err := client.NewContainer( ctx, \"redis-server\", containerd.WithNewSnapshot(\"redis-server-snapshot\", image), containerd.WithNewSpec(oci.WithImageConfig(image)), ) if err != nil { return err } defer container.Delete(ctx, containerd.WithSnapshotCleanup) log.Printf(\"Successfully created container with ID %s and snapshot with ID redis-server-snapshot\", container.ID()) return nil } ``` Let's see it in action. ```bash go build main.go sudo ./main 2017/08/13 18:01:35 Successfully pulled docker.io/library/redis:alpine image 2017/08/13 18:01:35 Successfully created container with ID redis-server and snapshot with ID redis-server-snapshot ``` One thing that may be confusing at first for new containerd users is the separation between a `Container` and a `Task`. A container is a metadata object that resources are allocated and attached to. A task is a live, running process on the system. Tasks should be deleted after each run while a container can be used, updated, and queried multiple times. ```go task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio)) if err != nil { return err } defer task.Delete(ctx) ``` The new task that we just created is actually a running process on your system. We use `cio.WithStdio` so that all IO from the container is sent to our `main.go` process. This is a `cio.Opt` that configures the `Streams` used by `NewCreator` to return a `cio.IO` for the new" }, { "data": "If you are familiar with the OCI runtime actions, the task is currently in the \"created\" state. This means that the namespaces, root filesystem, and various container level settings have been initialized but the user defined process, in this example \"redis-server\", has not been started. This gives users a chance to setup network interfaces or attach different tools to monitor the container. containerd also takes this opportunity to monitor your container as well. Waiting on things like the container's exit status and cgroup metrics are setup at this point. If you are familiar with prometheus you can curl the containerd metrics endpoint (in the `config.toml` that we created) to see your container's metrics: ```bash curl 127.0.0.1:1338/v1/metrics ``` Pretty cool right? Now that we have a task in the created state we need to make sure that we wait on the task to exit. It is essential to wait for the task to finish so that we can close our example and cleanup the resources that we created. You always want to make sure you `Wait` before calling `Start` on a task. 
This makes sure that you do not encounter any races if the task has a simple program like `/bin/true` that exits promptly after calling start. ```go exitStatusC, err := task.Wait(ctx) if err != nil { return err } if err := task.Start(ctx); err != nil { return err } ``` Now we should see the `redis-server` logs in our terminal when we run the `main.go` file. Since we are running a long running server we will need to kill the task in order to exit out of our example. To do this we will simply call `Kill` on the task after waiting a couple of seconds so we have a chance to see the redis-server logs. ```go time.Sleep(3 * time.Second) if err := task.Kill(ctx, syscall.SIGTERM); err != nil { return err } status := <-exitStatusC code, exitedAt, err := status.Result() if err != nil { return err } fmt.Printf(\"redis-server exited with status: %d\\n\", code) ``` We wait on our exit status channel that we setup to ensure the task has fully exited and we get the exit status. If you have to reload containers or miss waiting on a task, `Delete` will also return the exit status when you finally delete the task. We got you covered. ```go status, err := task.Delete(ctx) ``` Here is the full example that we just put together. ```go package main import ( \"context\" \"fmt\" \"log\" \"syscall\" \"time\" \"github.com/containerd/containerd/v2/pkg/cio\" containerd \"github.com/containerd/containerd/v2/client\" \"github.com/containerd/containerd/v2/pkg/oci\" \"github.com/containerd/containerd/v2/pkg/namespaces\" ) func main() { if err := redisExample(); err != nil { log.Fatal(err) } } func redisExample() error { // create a new client connected to the default socket path for containerd client, err := containerd.New(\"/run/containerd/containerd.sock\") if err != nil { return err } defer client.Close() // create a new context with an \"example\" namespace ctx := namespaces.WithNamespace(context.Background(), \"example\") // pull the redis image from DockerHub image, err := client.Pull(ctx, \"docker.io/library/redis:alpine\", containerd.WithPullUnpack) if err != nil { return err } // create a container container, err := client.NewContainer( ctx, \"redis-server\", containerd.WithImage(image), containerd.WithNewSnapshot(\"redis-server-snapshot\", image), containerd.WithNewSpec(oci.WithImageConfig(image)), ) if err != nil { return err } defer container.Delete(ctx, containerd.WithSnapshotCleanup) // create a task from the container task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio)) if err != nil { return err } defer" }, { "data": "// make sure we wait before calling start exitStatusC, err := task.Wait(ctx) if err != nil { return err } // call start on the task to execute the redis server if err := task.Start(ctx); err != nil { return err } // sleep for a lil bit to see the logs time.Sleep(3 * time.Second) // kill the process and get the exit status if err := task.Kill(ctx, syscall.SIGTERM); err != nil { return err } // wait for the process to fully exit and print out the exit status status := <-exitStatusC code, _, err := status.Result() if err != nil { return err } fmt.Printf(\"redis-server exited with status: %d\\n\", code) return nil } ``` We can build this example and run it as follows to see our hard work come together. 
```bash go build main.go sudo ./main 1:C 04 Aug 20:41:37.682 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo 1:C 04 Aug 20:41:37.682 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=1, just started 1:C 04 Aug 20:41:37.682 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf 1:M 04 Aug 20:41:37.682 # You requested maxclients of 10000 requiring at least 10032 max file descriptors. 1:M 04 Aug 20:41:37.682 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted. 1:M 04 Aug 20:41:37.682 # Current maximum open files is 1024. maxclients has been reduced to 992 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'. 1:M 04 Aug 20:41:37.683 * Running mode=standalone, port=6379. 1:M 04 Aug 20:41:37.683 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 1:M 04 Aug 20:41:37.684 # Server initialized 1:M 04 Aug 20:41:37.684 # WARNING overcommitmemory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommitmemory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. 1:M 04 Aug 20:41:37.684 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. 1:M 04 Aug 20:41:37.684 * Ready to accept connections 1:signal-handler (1501879300) Received SIGTERM scheduling shutdown... 1:M 04 Aug 20:41:40.791 # User requested shutdown... 1:M 04 Aug 20:41:40.791 * Saving the final RDB snapshot before exiting. 1:M 04 Aug 20:41:40.794 * DB saved on disk 1:M 04 Aug 20:41:40.794 # Redis is now ready to exit, bye bye... redis-server exited with status: 0 ``` In the end, we really did not write that much code when you use the client package. - - We hope this guide helped to get you up and running with containerd. Feel free to join the `#containerd` and `#containerd-dev` slack channels on Cloud Native Computing Foundation's (CNCF) slack - `cloud-native.slack.com` if you have any questions and like all things, if you want to help contribute to containerd or this guide, submit a pull request." } ]
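One natural next step after the example above is running a one-off command inside the live redis task with `Exec`. The fragment below is a rough sketch meant to slot into the same function: it assumes the `container` and `task` variables from the guide are still running, that `redis-cli` is present in the image, and that the v2 client keeps this method shape. As with the main process, `Wait` is set up before `Start` so a fast-exiting command cannot race us.

```go
// Reuse the container's OCI spec and only change the process arguments.
spec, err := container.Spec(ctx)
if err != nil {
	return err
}
pspec := spec.Process
pspec.Args = []string{"redis-cli", "ping"} // one-off command inside the container (assumes redis-cli exists)

// Exec IDs must be unique per task.
process, err := task.Exec(ctx, "ping-once", pspec, cio.NewCreator(cio.WithStdio))
if err != nil {
	return err
}
defer process.Delete(ctx)

// Wait before Start, same as for the main task.
exitC, err := process.Wait(ctx)
if err != nil {
	return err
}
if err := process.Start(ctx); err != nil {
	return err
}
status := <-exitC
code, _, err := status.Result()
if err != nil {
	return err
}
fmt.Printf("exec finished with status: %d\n", code)
```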
{ "category": "Runtime", "file_name": "getting-started.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> List all metrics for the operator ``` cilium-operator metrics list [flags] ``` ``` -h, --help help for list -p, --match-pattern string Show only metrics whose names match matchpattern -o, --output string json| yaml| jsonpath='{}' -s, --server-address string Address of the operator API server (default \"localhost:9234\") ``` - Access metric status of the operator" } ]
{ "category": "Runtime", "file_name": "cilium-operator_metrics_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "](https://github.com/ligato/vpp-agent/compare/v3.3.0...master) - - - - - - - <! RELEASE CHANGELOG TEMPLATE: <a name=\"vX.Y.Z\"></a> --> <a name=\"v3.4.0\"></a> VPP 22.02 (default) VPP 21.06 VPP 21.01 VPP 20.09 VPP 20.05 fix: Fix generating jsonschema fix: Fix `agentctl report` command failing in StoneWork feat: Add support for VPP 22.02 Upgrade dependencies with security issues fix: remove unnecessary print statement Update base image to ubuntu:20.04 docs: Fix documentation comment formatting <a name=\"v3.3.0\"></a> VPP 21.06 (default) VPP 21.01 VPP 20.09 VPP 20.05 VPP 20.01 and older (no longer supported) DHCP proxy key should include the Rx-VRF ID Fix metadata Update for Linux interfaces Fix delete by resync failure of init file data by using agentctl Report VPP not responding state as unhealthy status Various JSON schema REST API export fix and enhancements Do not release mutex for printing log message bugs VPP-1968 and VPP-1970 were fixed in VPP 21.01 VPP interface requires IP address with network prefix Fix VPP config for VPP 20.09 Add mutex to configurator dump default linux vrf device mtu for latest linux kernels linux route metric make route scope relevant for IPv4 only route and netalloc plugin integration Add support for VPP interface with RDMA driver Add reporting capabilities to agentctl Agentctl support for yaml configuration of 3rd party configuration models Configurator Notify improvements Proxied models (remotely learned models used in DefaultRegistry) Add support for dummy linux interface and existing IPs Add support for existing Linux VRFs Add possibility to setup NB configuration from file at VPP-Agent start Allow multiple grpc-based data sources Support for cache DNS server in VPP + e2e test refactoring Add support for VPP 21.01 Added REST API for retrieving JSON schema for VPP-Agent configuration added REST API for validating yaml VPP-Agent configuration using validate methods on registered descriptors added REST API for getting and setting NB VPP-Agent configuration Add support for VPP 21.06 remove support for VPP 20.01 Migrate fully to protov2 Run VPP+VPPAgent in a container for e2e tests (support for multi-VPP tests) Add test workflow for integration/e2e <a name=\"v3.2.0\"></a> VPP 20.09 (compatible) VPP 20.05 (default) VPP 20.01 (backwards compatible) VPP 19.08 (backwards compatible) VPP 19.04 (no longer supported) Fixes and improvements for agentctl and models Fix creation of multiple ipip tunnels Fix IPSec tun protect + add IPSec e2e test Fix bridge domain dump for VPP 20.05 Fix IPSec SA add/del in VPP 20.05 Update expected output of agentctl status command vpp/ifplugin: Recognize interface name prefix \"tun\" as TAP Fix IPv4 link-local IP address handling maps caching prometheus gauges weren't really used Permit agent to run even when VPP stats are unavailable Fix grpc context timeout for agentctl import command Changed nat44 pool key to prevent possible key collisions Remove forced delay for linux interface notifications agentctl: Add config get/update commands agentctl: Support specific history seq num and improve layout agentctl: Add config.resync subcommand (with resync) IP Flow Information eXport (IPFIX) plugin Add IPIP & IPSec point-to-multipoint support Wireguard plugin support Add tunnel mode support for VPP TAP interfaces New REST endpoint for retrieving version of Agent Add support for IPv6 ND address autoconfig Add VRF field to proxy ARP range Switch to new proto v2" }, { "data": "ipsec: allow configuring salt for encryption algorithm 
gtpu: Add RemoteTeid to GTPU interface Added support for NAT44 static mapping twice-NAT pool IP address reference add IP protocol number to ACL model gtpu: Add support for arbitrary DecapNextNode configurator: Add support for waiting until config update is done telemetry: Add reading VPP threads to the telemetry plugin linux: Add support for Linux VRFs VRRP support perf: Performance enhancement for adding many rules to Linux IP Improve testing process for e2e/integration tests docs: Add example for developing agents with custom VPP plugins Delete unused REST handler for VPP commands separate model for IPSec Security Policies do not mark hostifname for AF_PACKET as deprecated Store interface internal name & dev type as metadata Check if HostIfName contains non-printable characters Fix error message for duplicate keys <a name=\"v3.1.0\"></a> Switch cn-infra dependency to using vanity import path To migrate, replace all cn-infra import paths (`github.com/ligato/cn-infra` -> `go.ligato.io/cn-infra/v2`) To update cn-infra dependency, run `go get -u go.ligato.io/cn-infra/v2@master`. Add missing models to ConfigData Fix watching VPP events Allow customizing polling from stats poller IPIP tunnel + IPSec tunnel protection support Add prometheus metrics to govppmux Add prometheus metrics to kvscheduler Improve performance testing suite <a name=\"v3.0.1\"></a> Add missing models to ConfigData (https://github.com/ligato/vpp-agent/pull/1625) <a name=\"v3.0.0\"></a> VPP 20.01 (default) VPP 19.08.1 (recommended) VPP 19.04.4 VPP L3 plugin: `IPScanNeighbor` was disabled for VPP 20.01 due to VPP API changes (will be implemented later using new model) VPP NAT plugin: `VirtualReassembly` in `Nat44Global` was disabled for VPP 20.01 due to VPP API changes (will be implemented later in VPP L3 plugin using new model) migrate from dep to Go modules for dependency management and remove vendor directory use vanity import path `go.ligato.io/vpp-agent/v3` in Go files move all .proto files into `proto/ligato` directory and add check for breaking changes check for duplicate Linux interface IP address VPP interface plugin: Allow AF-PACKET to reference target Linux interface via logical name VPP L3 plugin: add support for L3 cross-connects VPP L3 plugin: IP flow hash settings support VPP NAT plugin: NAT interface and AddressPool API changes VPP plugins: support disabling VPP plugins VPP client: add support for govpp proxy optimize getting model keys, with up to 20% faster transactions agentctl output formatting improvements (#1581, #1582, #1589) generated VPP binary API now imports common types from `*_types` packages development docker images now have smaller size (~400MB less) start using Github Workflows for CI/CD pipeline add gRPC reflection service <a name=\"v2.5.1\"></a> VPP 20.01-379 (`20.01-rc0~379-ga6b93eac5`) VPP 20.01-324 (`20.01-rc0~324-g66a332cf1`) VPP 19.08.1 (default) VPP 19.04 (backward compatible) cn-infra v2.2 Fix linux interface dump () Fix VRF for SR policy () <a name=\"v2.5.0\"></a> VPP 20.01-379 (`20.01-rc0~379-ga6b93eac5`) VPP 20.01-324 (`20.01-rc0~324-g66a332cf1`) VPP 19.08.1 (default) VPP 19.04 (backward compatible) cn-infra v2.2 SRv6 global config (encap source address) Support for Linux configuration dumping Update GoVPP with fix for stats conversion panic <a name=\"v2.4.0\"></a> VPP 20.01-379 (`20.01-rc0~379-ga6b93eac5`) VPP 20.01-324 (`20.01-rc0~324-g66a332cf1`) VPP 19.08.1 (default) VPP 19.04 (backward compatible) cn-infra v2.2 This release introduces compatibility with two different commits 
of the VPP 20.01. Previously compatible version was updated to commit `324-g66a332cf1`, and support for `379-ga6b93eac5` was added. Other previous versions remained. - Added `StatsPoller` service periodically retrieving VPP stats. <a name=\"v2.3.0\"></a> VPP 20.01 (`20.01-rc0~161-ge5948fb49~b3570`) VPP 19.08.1 (default) VPP 19.04 (backward compatible) cn-infra v2.2 VPP support for version 19.08 was updated to 19.08.1. Support for 19.01 was dropped in this release. Linux interfaces with 'EXISTING' type should be resynced properly. Resolved issue with SRv6 removal. AgentCTL dump command fixed. ACL ICMP rule is now properly configured and data can be obtained using the ACL dump. Missing dependency for SRv6 L2 steering fixed. Fixed issue with possible division by zero and missing interface MTU. Namespace plugin uses a Docker event listener instead of periodical polling. This should prevent cases where quickly started microservice container was not detected. - A new plugin called netalloc which allows disassociating topology from addressing in the network configuration. Interfaces, routes and other network objects' addresses can be symbolic references into the pool of allocated addresses known to netalloc plugin. See for more information. - Added support for GRE tunnel" }, { "data": "Choose the `GRE_TUNNEL` interface type with appropriate link data. - Many new features and enhancements added to the AgentCTL: version is defined as a parameter for root command instead of the separate command ETCD endpoints can be defined via the `ETCD_ENDPOINTS` environment variable sub-command `config` supports `get/put/del` commands `model` sub-commands improved added VPP command to manage VPP instance Additionally, starting with this release the AgentCTL is a VPP-Agent main control tool and the vpp-agent-ctl was definitely removed. Many end-to-end tests introduced, gradually increasing VPP-Agent stability. - IP addresses assigned by the DHCP are excluded from the interface address descriptor. VPP-Agent now processes status change notifications labeled by the VPP as UNKNOWN. - Dockerclient microservice polling replaced with an event listener. - SRv6 dynamic proxy routing now can be connected to a non-zero VRF table. <a name=\"v2.2.0\"></a> VPP 19.08 (rc1) VPP 19.04 (default) VPP 19.01 (backward compatible) cn-infra v2.2 CN-infra version updated to 2.2 contains a supervisor fix which should prevent the issue where the supervisor logging occasionally caused the agent to crash during large outputs. - Added option to configure SPAN records. Northbound data are formatted by the . - Clientv2 is now recognized as separate data source by the orchestrator plugin. This feature allows to use the localclient together with other data sources. Updated documentation comments in the protobuf API. <a name=\"v2.2.0-beta\"></a> VPP 19.08 (rc1) VPP 19.04 (default) VPP 19.01 (backward compatible) Fixed SRv6 localsid delete case for non-zero VRF tables. Fixed interface IPv6 detection in the descriptor. Various bugs fixed in KV scheduler TXN post-processing. Interface plugin config names fixed, no stats publishers are now used by default. Instead, datasync is used (by default ETCD, Redis and Consul). Rx-placement and rx-mode is now correctly dependent on interface link state. Fixed crash for iptables rulechain with default microservice. Punt dump fixed in all supported VPP versions. Removal of registered punt sockets fixed after a resync. Punt socket paths should no longer be unintentionally recreated. 
IP redirect is now correctly dependent on RX interface. Fixed IPSec security association configuration for tunnel mode. Fixed URL for VPP metrics in telemetry plugin Routes are now properly dependent on VRF. Defined new environment variable `DISABLEINTERFACESTATS` to generally disable interface plugin stats. Defined new environment variable `RESYNC_TIMEOU` to override default resync timeout. Added for more information. - GoVPPMux stats can be read with rest under path `/govppmux/stats`. Added disabling of interface stats via the environment variable `DISABLEINTERFACESTATS`. Added disabling of interface status publishing via environment variable `DISABLESTATUSPUBLISHING`. - Added some more performance improvements. The same key can be no more matched by multiple descriptors. - ABF plugin was added to config data model and is now initialized in configurator. - Interface rx-placement and rx-mode was enhanced and now allows per-queue configuration. Added for rx-placement and rx-mode. - NAT example updated for VPP 19.04 - Route keys were changed to prevent collisions with some types of configuration. Route with outgoing interface now contains the interface name in the key. Added support for DHCP proxy. A new descriptor allows calling CRUD operations to VPP DHCP proxy servers. - Added support for Punt exceptions. IP redirect dump was implemented for VPP 19.08. - Interface metrics added to telemetry plugin. Note that the URL for prometheus export was changed to `/metrics/vpp`. Plugin configuration file now has an option to skip certain metrics. - Added support for IPSec plugin Added support for punt plugin - We continuously update the new CTL tool. Various bugs were fixed some new features added. Added new command `import` which can import configuration from file. The supervisor was replaced with VPP-Agent init" }, { "data": "Images now use pre-built VPP images from <a name=\"v2.1.1\"></a> VPP 19.04 (`stable/1904`, recommended) VPP 19.01 (backward compatible) Fixed IPv6 detection for Linux interfaces . Fixed config file names for ifplugin in VPP & Linux . Fixed setting status publishers from env var: `VPPSTATUSPUBLISHERS`. The start/stop timeouts for agent can be configured using env vars: `STARTTIMEOUT=15s` and `STOPTIMEOUT=5s`, with values parsed as duration. ABF was added to the `ConfigData` message for VPP . Images now install all compiled .deb packages from VPP (including `vpp-plugin-dpdk`). <a name=\"v2.1.0\"></a> VPP 19.04 (`stable/1904`, recommended) VPP 19.01 (backward compatible) cn-infra v2.1 Go 1.11 The VPP 18.10 was deprecated and is no longer compatible. All non-zero VRF tables now must be explicitly created, providing a VRF proto-modeled data to the VPP-Agent. Otherwise, some configuration items will not be created as before (for example interface IP addresses). VPP ARP `retrieve` now also returns IPv6 entries. - The GoVPPMux plugin configuration file contains a new option `ConnectViaShm`, which when set to `true` forces connecting to the VPP via shared memory prefix. This is an alternative to environment variable `GOVPPMUX_NOSOCK`. - The configurator plugin now collects statistics which are available via the `GetStats()` function or via REST on URL `/stats/configurator`. - Added transaction statistics. - Added new plugin ABF - ACL-based forwarding, providing an option to configure routing based on matching ACL rules. An ABF entry configures interfaces which will be attached, list of forwarding paths and associated access control list. 
- Added support for Generic Segmentation Offload (GSO) for TAP interfaces. - A new model for VRF tables was introduced. Every VRF is defined by an index and an IP version, a new optional label was added. Configuration types using non-zero VRF now require it to be created, since the VRF is considered a dependency. VRFs with zero-index are present in the VPP by default and do not need to be configured (applies for both, IPv4 and IPv6). - This tool becomes obsolete and was completely replaced with a new implementation. Please note that the development of this tool is in the early stages, and functionality is quite limited now. New and improved functionality is planned for the next couple of releases since our goal is to have a single vpp-agent control utility. Because of this, we have also deprecated the vpp-agent-ctl tool which will be most likely removed in the next release. - The KV Scheduler received another performance improvements. - Attempt to configure a Bond interface with already existing ID returns a non-retriable error. - Before adding an IPv6 address to the Linux interface, the plugins will use `sysctl` to ensure the IPv6 is enabled in the target OS. Supervisord is started as a process with PID 1 The ligato.io webpage is finally available, check out it ! We have also released a with a lot of new or updated articles, guides, tutorials and many more. Most of the README.md files scattered across the code were removed or updated and moved to the site. <a name=\"v2.0.2\"></a> VPP 19.01 (updated to `v19.01.1-14-g0f36ef60d`) VPP 18.10 (backward compatible) cn-infra v2.0 Go 1.11 This minor release brought compatibility with updated version of the VPP 19.01. <a name=\"v2.0.1\"></a> VPP 19.01 (compatible by default, recommended) VPP 18.10 (backward compatible) cn-infra v2.0 Go 1.11 Fixed bug where Linux network namespace was not reverted in some cases. The VPP socketclient connection checks (and waits) for the socket file in the same manner as for the shared memory, giving the GoVPPMux more time to connect in case the VPP startup is delayed. Also errors occurred during the shm/socket file watch are now properly" }, { "data": "Fixed wrong dependency for SRv6 end functions referencing VRF tables (DT6,DT4,T). - Added option to adjust the number of connection attempts and time delay between them. Seek `retry-connect-count` and `retry-connect-timeout` fields in . Also keep in mind the total time in which plugins can be initialized when using these fields. - Default loopback MTU was set to 65536. - Plugin descriptor returns `ErrEscapedNetNs` if Linux namespace was changed but not reverted back before returned to scheduler. Supervisord process is now started with PID=1 <a name=\"v2.0.0\"></a> VPP 19.01 (compatible by default, recommended) VPP 18.10 (backward compatible) cn-infra v2.0 Go 1.11 All northbound models were re-written and simplified and most of them are no longer compatible with model data from v1. The `v1` label from all vpp-agent keys was updated to `v2`. Plugins using some kind of dependency on other VPP/Linux plugin (for example required interface) should be updated and handled by the KVScheduler. We expect a lot of known and unknown race-condition and plugin dependency related issues to be solved by the KV Scheduler. MTU is omitted for the sub-interface type. If linux plugin attempts to switch to non-existing namespace, it prints appropriate log message as warning, and continues with execution instead of interrupt it with error. 
Punt socket path string is cleaned from unwanted characters. Added VPE compatibility check for L3 plugin vppcalls. The MAC address assigned to an af-packet interface is used from the host only if not provided from the configuration. Fixed bug causing the agent to crash in an attempt to 'update' rx-placement with empty value. Switch interface from zero to non-zero VRF causes VPP issues - this limitation was now restricted only to unnumbered interfaces. IPSec tunnel dump now also retrieves integ/crypto keys. Errored operation should no more publish to the index mapping. Some obsolete Retval checks were removed. Error caused by missing DPDK interface is no longer retryable. Linux interface IP address without mask is now handled properly. Fixed bug causing agent to crash when some VPP plugin we support was not loaded. Fixed metrics retrieval in telemetry plugin. The bidirectional forwarding detection (aka BFD plugin) was removed. We plan to add it in one of the future releases. The L4 plugin (application namespaces) was removed. We experienced problems with the VPP with some messages while using socket client connection. The issue kind was that the reply message was not returned (GoVPP could not decode it). If you encounter similar error, please try to setup VPP connection using shared memory (see below). Performance The vpp-agent now supports connection via socket client (in addition to shared memory). The socket client connection provides higher performance and message throughput, thus it was set as default connection type. The shared memory is still available via the environment variable `GOVPPMUX_NOSOCK`. Many other changes, benchmarking and profiling was done to improve vpp-agent experience. Multi-VPP support The VPP-agent can connect to multiple versions of the VPP with the same binary file without any additional building or code changes. See compatibility part to know which versions are supported. The list will be extended in the future. Models All vpp-agent models were reviewed and cleaned up. Various changes were done, like simple renaming (in order to have more meaningful fields, avoid duplicated names in types, etc.), improved model convenience (interface type-specific fields are now defined as `oneof`, preventing to set multiple or incorrect data) and other. All models were also moved to the common folder. - Added new component called KVScheduler, as a reaction to various flaws and issues with race conditions between Vpp/Linux plugins, poor readability and poorly readable" }, { "data": "Also the system of notifications between plugins was unreliable and hard to debug or even understand. Based on this experience, a new framework offers improved generic mechanisms to handle dependencies between configuration items and creates clean and readable transaction-based logging. Since this component significantly changed the way how plugins are defined, we recommend to learn more about it on the . - The orchestrator is a new component which long-term added value will be a support for multiple northbound data sources (KVDB, GRPC, ...). The current implementation handles combination of GRPC + KVDB, which includes data changes and resync. In the future, any combination of sources will be supported. - Added `Ping()` method to the VPE vppcalls usable to test the VPP connection. - UDP encapsulation can be configured to an IPSec tunnel interface Support for new Bond-type interfaces. 
Support for L2 tag rewrite (currently present in the interface plugin because of the inconsistent VPP API) - Added support for session affinity in NAT44 static mapping with load balancer. - Support for Dynamic segment routing proxy with L2 segment routing unaware services. Added support for SRv6 end function End.DT4 and End.DT6. - Added support for new Linux interface type - loopback. Attempt to assign already existing IP address to the interface does not cause an error. - Added new linux IP tables plugin able to configure IP tables chain in the specified table, manage chain rules and set default chain policy. - Performance improvements related to memory management. - Need for config file was removed, GoVPP is now set with default values if the startup config is not provided. `DefaultReplyTimeout` is now configured globally, instead of set for every request separately. Tolerated default health check timeout is now set to 250ms (up from 100ms). The old value had not provide enough time in some cases. - Model moved to the - Model reviewed, updated and moved to the . Interface plugin now handles IPSec tunnel interfaces (previously done in IPSec plugin). NAT related configuration was moved to its own plugin. New interface stats (added in 1.8.1) use new GoVPP API, and publishing frequency was significantly decreased to handle creation of multiple interfaces in short period of time. - Model moved to the The IPSec interface is no longer processed by the IPSec plugin (moved to interface plugin). The ipsec link in interface model now uses the enum definitions from IPSec model. Also some missing crypto algorithms were added. - Model moved to the and split to three separate models for bridge domains, FIBs and cross connects. - Model moved to the and split to three separate models for ARPs, Proxy ARPs including IP neighbor and Routes. - Defined new plugin to handle NAT-related configuration and its own (before a part of interface plugin). - Model moved to the . Added retrieve support for punt socket. The current implementation is not final - plugin uses local cache (it will be enhanced when the appropriate VPP binary API call will be added). - Model moved to the . - Model reviewed, updated and moved to the . - Model moved to the and split to separate models for ARPs and Routes. Linux routes and ARPs have a new dependency - the target interface is required to contain an IP address. - New auxiliary plugin to handle linux namespaces and microservices (evolved from ns-handler). Also defines for generic linux namespace definition. Configuration file for GoVPP was removed, forcing to use default values (which are the same as they were in the file). Fixes for installing ARM64" }, { "data": "Kafka is no longer required in order to run vpp-agent from the image. Added documentation for the punt plugin, describing main features and usage of the punt plugin. Added documentation for the , describing main and usage of the IPSec plugin. Added documentation for the . The document is only available on . Description improved in various proto files. Added a lot of new documentation for the KVScheduler (examples, troubleshooting, debugging guides, diagrams, ...) Added tutorial for KV Scheduler. Added many new documentation articles to the . However, most of is there only temporary since we are preparing new ligato.io website with all the documentation and other information about the Ligato project. Also majority of readme files from the vpp-agent repository will be removed in the future. 
<a name=\"v1.8.1\"></a> Motive for this minor release was updated VPP with several fixed bugs from the previous version. The VPP version also introduced new interface statistics mechanism, thus the stats processing was updated in the interface plugin. v19.01-16~gd30202244 cn-infra v1.7 GO 1.11 VPP bug: fixed crash when attempting to run in kubernetes pod VPP bug: fixed crash in barrier sync when vlibworkerthreads is zero * Support for new VPP stats (the support for old ones were deprecated by the VPP, thus removed from the vpp-agent as well). <a name=\"v1.8.0\"></a> VPP v19.01-rc0~394-g6b4a32de cn-infra v1.7 Go 1.11 Pre-existing VETH-type interfaces are now read from the default OS namespace during resync if the Linux interfaces were dumped. The Linux interface dump method does not return an error if some interface namespace becomes suddenly unavailable at the read-time. Instead, this case is logged and all the other interfaces are returned as usual. The Linux localclient's delete case for Linux interfaces now works properly. The Linux interface dump now uses OS link name (instead of vpp-agent specific name) to read the interface attributes. This sometimes caused errors where an incorrect or even none interface was read. Fixed bug where the unsuccessful namespace switch left the namespace file opened. Fixed crash if the Linux plugin was disabled. Fixed occasional crash in vpp-agent interface notifications. Corrected interface counters for TX packets. Access list with created TCP/UDP/ICMP rule, which remained as empty struct no longer causes vpp-agent to crash * Rx-mode and Rx-placement now support dump via the respective binary API call vpp-rpc-plugin GRPC now supports also IPSec configuration. All currently supported configuration items can be also dumped/read via GRPC (similar to rest) GRPC now allows to automatically persist configuration to the data store. The desired DB has to be defined in the new GRPC config file (see for additional information). * Added simple new punt plugin. The plugin allows to register/unregister punt to host via Unix domain socket. The new was added for this configuration type. Since the VPP API is incomplete, the configuration does not support dump. * The VxLAN interface now support IPv4/IPv6 virtual routing and forwarding (VRF tables). Support for new interface type: VmxNet3. The VmxNet3 virtual network adapter has no physical counterpart since it is optimized for performance in a virtual machine. Because built-in drivers for this card are not provided by default in the OS, the user must install VMware Tools. The interface model was updated for the VmxNet3 specific configuration. * IPSec resync processing for security policy databases (SPD) and security associations (SA) was improved. Data are properly read from northbound and southbound, compared and partially configured/removed, instead of complete cleanup and re-configuration. This does not appeal to IPSec tunnel interfaces. 
IPSec tunnel can be now set as an unnumbered" }, { "data": "* In case of error, the output returns correct error code with cause (parsed from JSON) instead of an empty body <a name=\"v1.7.0\"></a> VPP 18.10-rc0~505-ge23edac cn-infra v1.6 Go 1.11 Corrected several cases where various errors were silently ignored GRPC registration is now done in Init() phase, ensuring that it finishes before GRPC server is started Removed occasional cases where Linux tap interface was not configured correctly Fixed FIB configuration failures caused by wrong updating of the metadata after several modifications No additional characters are added to NAT tag and can be now configured with the full length without index out of range errors Linux interface resync registers all VETH-type interfaces, despite the peer is not known Status publishing to ETCD/Consul now should work properly Fixed occasional failure caused by concurrent map access inside Linux plugin interface configurator VPP route dump now correctly recognizes route type * It is now possible to dump unnumbered interface data Rx-placement now uses specific binary API to configure instead of generic CLI API * Bridge domain ARP termination table can now be dumped * Linux interface watcher was reintroduced. Linux interfaces can be now dumped. * Linux ARP entries and routes can be dumped. * Improved error propagation in all the VPP plugins. Majority of errors now print the stack trace to the log output allowing better error tracing and debugging. Stopwatch was removed from all vppcalls * Improved error propagation in all Linux plugins (same way as for VPP) Stopwatch was removed from all linuxcalls * Tracer (introduced in cn-infra 1.6) added to VPP message processing, replacing stopwatch. The measurement should be more precise and logged for all binary API calls. Also the rest plugin now allows showing traced entries. The image can now be built on ARM64 platform <a name=\"v1.6.0\"></a> VPP 18.10-rc0~169-gb11f903a cn-infra v1.5 Flavors were replaced with new way of managing plugins. REST interface URLs were changed, see for complete list. if VPP routes are dumped, all paths are returned NAT load-balanced static mappings should be resynced correctly telemetry plugin now correctly parses parentheses for `show node counters` telemetry plugin will not hide an error caused by value loading if the config file is not present Linux plugin namespace handler now correctly handles namespace switching for interfaces with IPv6 addresses. Default IPv6 address (link local) will not be moved to the new namespace if there are no more IPv6 addresses configured within the interface. This should prevent failures in some cases where IPv6 is not enabled in the destination namespace. VxLAN with non-zero VRF can be successfully removed Lint is now working again VPP route resync works correctly if next hop IP address is not defined Deprecating flavors CN-infra 1.5 brought new replacement for flavors and it would be a shame not to implement it in the vpp-agent. The old flavors package was removed and replaced with this new concept, visible in app package vpp-agent. * All VPP configuration types are now supported to be dumped using REST. The output consists of two parts; data formatted as NB proto model, and metadata with VPP specific configuration (interface indexes, different counters, etc.). REST prefix was changed. The new URL now contains API version and purpose (dump, put). The list of all URLs can be found in the * Added support for NAT virtual reassembly for both, IPv4 and IPv6. 
See change in * Vpp-agent now knows about DROP-type routes. They can be configured and also dumped. VPP default routes, which are DROP-type is recognized and registered. Currently, resync does not remove or correlate such a route type automatically, so no default routes are unintentionally" }, { "data": "New configurator for L3 IP scan neighbor was added, allowing to set/unset IP scan neigh parameters to the VPP. * all vppcalls were unified under API defined for every configuration type (e.g. interfaces, l2, l3, ...). Configurators now use special handler object to access vppcalls. This should prevent duplicates and make vppcalls cleaner and more understandable. * VPP interface DHCP configuration can now be dumped and added to resync processing Interfaces and also L3 routes can be configured for non-zero VRF table if IPv6 is used. * All examples were reworked to use new flavors concept. The purpose was not changed. using Ubuntu 18.04 as the base image <a name=\"v1.5.2\"></a> VPP 18.07-rc0~358-ga5ee900 cn-infra v1.4.1 (minor version fixes bug in Consul) * Fixed bug where lack of config file could cause continuous polling. The interval now also cannot be changed to a value less than 5 seconds. Telemetry plugin is now closed properly <a name=\"v1.5.1\"></a> VPP 18.07-rc0~358-ga5ee900 cn-infra v1.4 * Default polling interval was raised to 30s. Added option to use telemetry config file to change polling interval, or turn the polling off, disabling the telemetry plugin. The change was added due to several reports where often polling is suspicious of interrupting VPP worker threads and causing packet drops and/or other negative impacts. More information how to use the config file can be found in the . <a name=\"v1.5.0\"></a> VPP 18.07-rc0~358-ga5ee900 cn-infra v1.4 The package `etcdv3` was renamed to `etcd`, along with its flag and configuration file. The package `defaultplugins` was renamed to `vpp` to make the purpose of the package clear Fixed a few issues with parsing VPP metrics from CLI for . Fixed bug in GoVPP occurring after some request timed out, causing the channel to receive replies from the previous request and always returning an error. Fixed issue which prevented setting interface to non-existing VRF. Fixed bug where removal of an af-packet interface caused attached Veth to go DOWN. Fixed NAT44 address pool resolution which was not correct in some cases. Fixed bug with adding SR policies causing incomplete configuration. * Is now optional and can be disabled via configuration file. * Added support for VxLAN multicast Rx-placement can be configured on VPP interfaces * IPsec UDP encapsulation can now be set (NAT traversal) Replace `STARTAGENT` with `OMITAGENT` to match `RETAIN_SUPERVISOR` and keep both unset by default. Refactored and cleaned up execute scripts and remove unused scripts. Fixed some issues with `RETAIN_SUPERVISOR` option. Location of supervisord pid file is now explicitly set to `/run/supervisord.pid` in supervisord.conf file. The vpp-agent is now started with single flag `--config-dir=/opt/vpp-agent/dev`, and will automatically load all configuration from that directory. <a name=\"v1.4.1\"></a> A minor release using newer VPP v18.04 version. VPP v18.04 (2302d0d) cn-infra v1.3 VPP submodule was removed from the project. It should prevent various problems with dependency resolution. Fixed known bug present in the previous version of the VPP, issued as . Current version contains appropriate fix. 
<a name=\"v1.4.0\"></a> VPP v18.04 (ac2b736) cn-infra v1.3 Fixed case where the creation of the Linux route with unreachable gateway threw an error. The route is now appropriately cached and created when possible. Fixed issue with GoVPP channels returning errors after a timeout. Fixed various issues related to caching and resync in L2 cross-connect Split horizon group is now correctly assigned if an interface is created after bridge domain Fixed issue where the creation of FIB while the interface was not a part of the bridge domain returned an error. VPP crash may occur if there is interface with non-default VRF (>0). There is an issue created with more details * Consul is now supported as a key-value store alternative to ETCD. More information in the" }, { "data": "* New plugin for collecting telemetry data about VPP metrics and serving them via HTTP server for Prometheus. More information in the . * Now supports tunnel interface for encrypting all the data passing through that interface. GRPC Vpp-agent itself can act as a GRPC server (no need for external executable) All configuration types are supported (incl. Linux interfaces, routes and ARP) Client can read VPP notifications via vpp-agent. * New plugin with support for Segment Routing. More information in the . * Added support for self-twice-NAT vpp-agent-grpc executable merged with command. * `configure reply timeout` can be configured. Support for VPP started with custom shared memory prefix. SHM may be configured via the GoVPP plugin config file. More info in the Overall redundancy cleanup and corrected naming for all proto models. Added more unit tests for increased coverage and code stability. now contains two examples, the old one demonstrating basic plugin functionality was moved to plugin package, and specialised example for was added. now contains two examples, the old one demonstrating interface usage was moved to package and new example for linux was added. <a name=\"v1.3.0\"></a> The vpp-agent is now using custom VPP branch . VPP v18.01-rc0~605-g954d437 cn-infra v1.2 Resync of ifplugin in both, VPP and Linux, was improved. Interfaces with the same configuration data are not recreated during resync. STN does not fail if IP address with a mask is provided. Fixed ingress/egress interface resolution in ACL. Linux routes now check network reachability for gateway address before configuration. It should prevent \"network unreachable\" errors during config. Corrected bridge domain crash in case non-bvi interface was added to another non-bvi interface. Fixed several bugs related to VETH and AF-PACKET configuration and resync. : New plugin for IPSec added. The IPSec is supported for VPP only with Linux set manually for now. IKEv2 is not yet supported. More information in the . * New namespace plugin added. The configurator handles common namespace and microservice processing and communication with other Linux plugins. * Added support for Network address translation. NAT plugin supports a configuration of NAT44 interfaces, address pools and DNAT. More information in the . DHCP can now be configured for the interface * Split-horizon group can be configured for bridge domain interface. * Added support for proxy ARP. For more information and configuration example, please see . * Support for automatic interface configuration (currently only TAP). * Removed configuration order of interfaces. The access list can be now configured even if interfaces do not exist yet, and add them later. 
vpp-agent-ctl The vpp-agent-ctl was refactored and command info was updated. VPP can be built and run in the release or debug mode. Read more information in the . Production image is now smaller by roughly 40% (229MB). <a name=\"v1.2.0\"></a> VPP v18.04-rc0~90-gd95c39e cn-infra v1.1 Fixed interface assignment in ACLs Fixed bridge domain BVI modification resolution vpp-agent-grpc (removed in 1.4 release, since then it is a part of the vpp-agent) now compiles properly together with other commands. VPP can occasionally cause a deadlock during checksum calculation (https://jira.fd.io/browse/VPP-1134) VPP-Agent might not properly handle initialization across plugins (this is not occurring currently, but needs to be tested more) * Improved resync of ACL entries. Every new ACL entry is correctly configured in the VPP and all obsolete entries are read and removed. * Improved resync of interfaces, BFD sessions, authentication keys, echo functions and STN. Better resolution of persistence config for interfaces. * Improved resync of bridge domains, FIB entries, and xConnect pairs. Resync now better correlates configuration present on the VPP with the NB" }, { "data": "* ARP does not need the interface to be present on the VPP. Configuration is cached and put to the VPP if requirements are fulfilled. Dependencies Migrated from glide to dep VPP compilation now skips building of Java/C++ APIs, this saves build time and final image size. Development image now runs VPP in debug mode with various debug options added in . <a name=\"v1.1.0\"></a> VPP version v18.04-rc0~33-gb59bd65 cn-infra v1.0.8 fixed skip-resync parameter if vpp-plugin.conf is not provided. corrected af_packet type interface behavior if veth interface is created/removed. several fixes related to the af_packet and veth interface type configuration. microservice and veth-interface related events are synchronized. VPP can occasionally cause a deadlock during checksum calculation (https://jira.fd.io/browse/VPP-1134) VPP-Agent might not properly handle initialization across plugins (this is not occurring currently, but needs to be tested more) - added support for un-numbered interfaces. The nterface can be marked as un-numbered with information about another interface containing required IP address. A un-numbered interface does not need to have IP address set. added support for virtio-based TAPv2 interfaces. interface status is no longer stored in the ETCD by default and it can be turned on using the appropriate setting in vpp-plugin.conf. See for more details. - bridge domain status is no longer stored in the ETCD by default and it can be turned on using the appropriate setting in vpp-plugin.conf. See for more details. - default MTU value was removed in order to be able to just pass empty MTU field. MTU now can be set only in interface configuration (preferred) or defined in vpp-plugin.conf. If none of them is set, MTU value will be empty. interface state data are stored in statuscheck readiness probe - removed strict configuration order for VPP ARP entries and routes. Both ARP entry or route can be configured without interface already present. l4plugin (removed in v2.0) removed strict configuration order for application namespaces. Application namespace can be configured without interface already present. localclient added API for ARP entries, L4 features, Application namespaces, and STN rules. logging consolidated and improved logging in vpp and Linux plugins. 
<a name=\"v1.0.8\"></a> VPP v18.01-rc0-309-g70bfcaf cn-infra v1.0.7 - ability to configure STN rules. See respective in interface plugin for more details. rx-mode settings can be set on interface. Ethernet-type interface can be set to POLLING mode, other types of interfaces supports also INTERRUPT and ADAPTIVE. Fields to set QueueID/QueueIDValid are also available added possibility to add interface to any VRF table. added defaultplugins API. API contains new Method `DisableResync(keyPrefix ...string)`. One or more ETCD key prefixes can be used as a parameter to disable resync for that specific key(s). l4plugin (removed in v2.0) added new l4 plugin to the VPP plugins. It can be used to enable/disable L4 features and configure application namespaces. See respective in L4 plugin for more details. support for VPP plugins/l3plugin ARP configuration. The configurator can perform the basic CRUD operation with ARP config. resync resync error propagation improved. If any resynced configuration fails, rest of the resync completes and will not be interrupted. All errors which appear during resync are logged after. - route configuration does not return an error if the required interface is missing. Instead, the route data are internally stored and configured when the interface appears. GoVPP delay flag removed from GoVPP plugin removed dead links from README files improved in multiple vpp-agent packages <a name=\"v1.0.7\"></a> VPP version v18.01-rc0~154-gfc1c612 cn-infra v1.0.6 - added resync strategies. Resync of VPP plugins can be set using defaultpluigns config file; Resync can be set to full (always resync everything) or dependent on VPP configuration (if there is none, skip resync). Resync can be also forced to skip using the" }, { "data": "- added support for basic CRUD operations with the static Address resolution protocol entries and static Routes. <a name=\"v1.0.6\"></a> cn-infra v1.0.5 - The configuration of vEth interfaces modified. Veth configuration defines two names: symbolic used internally and the one used in host OS. `HostIfName` field is optional. If it is not defined, the name in the host OS will be the same as the symbolic one - defined by `Name` field. <a name=\"v1.0.5\"></a> VPP version v17.10-rc0~334-gce41a5c cn-infra v1.0.4 - configuration file for govpp added Kafka Partitions Changes in offset handling, only automatically partitioned messages (hash, random) have their offset marked. Manually partitioned messages are not marked. Implemented post-init consumer (for manual partitioner only) which allows starting consuming after kafka-plugin Init() Minimalistic examples & documentation for Kafka API will be improved in a later release. <a name=\"v1.0.4\"></a> Kafka Partitions Implemented new methods that allow to specify partitions & offset parameters: publish: Mux.NewSyncPublisherToPartition() & Mux.NewAsyncPublisherToPartition() watch: ProtoWatcher.WatchPartition() Minimalistic examples & documentation for Kafka API will be improved in a later release. Flavors reduced to only local.FlavorVppLocal & vpp.Flavor GoVPP updated version waits until the VPP is ready to accept a new connection <a name=\"v1.0.3\"></a> VPP version v17.10-rc0~265-g809bc74 (upgraded because of VPP MEMIF fixes) Enabled support for wathing data store `OfDifferentAgent()` - see: examples/idxifacecache (removed in v2.0) examples/examples/idxbdcache (removed in v2.0) examples/idxvethcache (removed in v2.0) Preview of new Kafka client API methods that allows to fill also partition and offset argument. 
New methods implementation ignores these new parameters for now (fallback to existing implementation based on `github.com/bsm/sarama-cluster` and `github.com/Shopify/sarama`). <a name=\"v1.0.2\"></a> VPP version v17.10-rc0~203 A rarely occurring problem during startup with binary API connectivity. VPP rejects binary API connectivity when VPP Agent tries to connect too early (plan fix this behavior in next release). Algorithms for applying northbound configuration (stored in ETCD key-value data store) to VPP in the proper order of VPP binary API calls implemented in : network interfaces, especially: MEMIFs (optimized data plane network interface tailored for a container to container network connectivity) VETHs (standard Linux Virtual Ethernet network interface) AF_Packets (for accessing VETHs and similar type of interface) VXLANs, Physical Network Interfaces, loopbacks ... L2 BD & X-Connects L3 IP Routes & VRFs ACL (Access Control List) Support for Linux VETH northbound configuration implemented in applied in proper order with VPP AF_Packet configuration. Data Synchronization during startup for network interfaces & L2 BD (support for the situation when ETCD contain configuration before VPP Agent starts). Data replication and events: Updating operational data in ETCD (VPP indexes such as swifindex) and statistics (port counters). Updating statistics in Redis (optional once redis.conf available - see flags). Publishing links up/down events to Kafka message bus. Tools: that show state & configuration of VPP agents : container-based development environment for the VPP agent other features inherited from cn-infra: health: status check & k8s HTTP/REST probes logging: changing log level at runtime Ability to extend the behavior of the VPP Agent by creating new plugins on top of VPP Agent flavor (removed with CN-Infra v1.5). New plugins can access API for configured: VPP Network interfaces, Bridge domains and VETHs based on threadsafe map tailored for VPP data with advanced features (multiple watchers, secondary indexes). VPP Agent is embeddable in different software projects and with different systems by using Local Flavor (removed with CN-Infra v1.5) to reuse VPP Agent algorithms. For doing this there is VPP Agent client version 1 (removed in v2.0): local client - for embedded VPP Agent (communication inside one operating system process, VPP Agent effectively used as a library) remote client - for remote configuration of VPP Agent (while integrating for" } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "Ligato", "subcategory": "Cloud Native Network" }
[ { "data": "Short summary of the problem. Make the impact and severity as clear as possible. For example: An unsafe deserialization vulnerability allows any unauthenticated user to execute arbitrary code on the server. Provide the products or components that are affected by the vulnerability. Provide versions that were tested with the vulnerability Give all details on the vulnerability. References to the source code is very helpful for the maintainer. Complete instructions, including specific configuration details, to reproduce the vulnerability What kind of vulnerability is it? Who or what component(s) is impacted? Propose a remediation suggestion if you have one. Make it clear that this is just a suggestion, as the maintainer might have a better idea to fix the issue. List all researchers who contributed to this disclosure. If you found the vulnerability with a specific tool, you can also credit this tool. Provide contact information so we can contact you for any questions or requests for additional information." } ]
{ "category": "Runtime", "file_name": "REPORT_TEMPLATE.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "- https://github.com/heptio/ark/releases/tag/v0.7.1 Run the Ark server in its own namespace, separate from backups/schedules/restores/config (#322, @ncdc) https://github.com/heptio/ark/releases/tag/v0.7.0 Run the Ark server in any namespace (#272, @ncdc) Add ability to delete backups and their associated data (#252, @skriss) Support both pre and post backup hooks (#243, @ncdc) Switch from Update() to Patch() when updating Ark resources (#241, @skriss) Don't fail the backup if a PVC is not bound to a PV (#256, @skriss) Restore serviceaccounts prior to workload controllers (#258, @ncdc) Stop removing annotations from PVs when restoring them (#263, @skriss) Update GCP client libraries (#249, @skriss) Clarify backup and restore creation messages (#270, @nrb) Update S3 bucket creation docs for us-east-1 (#285, @lypht)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-0.7.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark backup\" layout: docs Work with backups Work with backups ``` -h, --help help for backup ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Create a backup - Delete a backup - Describe backups - Download a backup - Get backups - Get backup logs" } ]
{ "category": "Runtime", "file_name": "ark_backup.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> CLI CLI for interacting with the local Cilium Agent ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -h, --help help for cilium-dbg -H, --host string URI to server-side API ``` - Access to BGP control plane - Direct access to local BPF maps - Resolve all of the configuration sources that apply to this node - Cgroup metadata - Output shell completion code - Cilium configuration options - Request available debugging information from agent - Manage transparent encryption - Manage endpoints - Manage fqdn proxy - Manage security identities - Manage IP addresses and associated information - Direct access to the kvstore - Show load information - Manage local redirect policies - Access userspace cached content of BPF maps - Access metric status - Display BPF program events - Manage cluster nodes - List node IDs and associated information - Manage security policies - Remove system state installed by Cilium at runtime - Manage XDP CIDR filters - Cilium upgrade helper - Introspect or mangle pcap recorder - Manage services & loadbalancers - Inspect StateDB - Display status of daemon - Run troubleshooting utilities to check control-plane connectivity - Print version information" } ]
{ "category": "Runtime", "file_name": "cilium-dbg.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: HPE Storage link: https://github.com/hpe-storage/velero-plugin objectStorage: false volumesnapshotter: true HPE storage plugin for Velero. To take snapshots of HPE volumes through Velero you need to install and configure the HPE Snapshotter plugin." } ]
{ "category": "Runtime", "file_name": "05-hpe-storage.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!--Before submitting a pull request, make sure you read about our Contribution notice here: <https://spidernet-io.github.io/spiderpool/latest/develop/contributing/>--> <!-- Add one of the following kinds: Required labels: release/none release/bug release/feature Optional labels: kind/bug kind/feature kind/ci-bug kind/doc --> What this PR does / why we need it: Which issue(s) this PR fixes: <!-- *Automatically closes linked issue when PR is merged. Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`. --> Fixes # Special notes for your reviewer:" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "This document describes security vulnerabilities / CVEs that may impact the security of Sysbox containers. These may be vulnerabilities in Sysbox itself (which are fixed quickly), in CRI-O (when Sysbox is installed in Kubernetes clusters), or in the Linux kernel. | CVE | Date | Severity | Affects Sysbox | Details | | | -- | -- | -- | - | | 2024-21626 | 01/31/24 | High | No | | | 2022-0811 | 03/15/22 | High | Yes | | | 2022-0847 | 03/03/22 | High | Yes | | | 2022-0492 | 02/06/22 | Medium | No | | | 2022-0185 | 01/21/22 | High | Yes | | The sections below describe each of these in more detail. Date: 01/21/22 Severity: High Problem: is a vulnerability in the Linux kernel which permits a \"User Namespace\" escape (i.e., an unprivileged user inside a user-namespace may gain root access on the host). Effect on Sysbox: This vulnerability can negate the extra isolation of containers deployed with Sysbox as they always use the Linux user-namespace. Fix: The fix has been to the Linux kernel on 01/18/22 and picked up by several distros shortly after. For Ubuntu, the fix has been released and requires a . We recommend you upgrade your kernel (i.e., check if your kernel distro carries the fix and if so, apply it). Date: 02/06/22 Severity: Medium Problem: is a flaw in the Linux kernel's cgroups mechanism that under some circumstances allows the use of the cgroups v1 release_agent feature to escalate privileges. It affects containers in some cases, as described in this by Unit 42 at Palo Alto Networks. Effect on Sysbox: Sysbox is NOT vulnerable to the security flaw exposed by this CVE. The reason is that inside a Sysbox container the cgroups v1 release_agent can't be written to (by virtue of Sysbox setting up the container with the Linux user-namespace). Even if you create privileged containers inside a Sysbox container, they won't be vulnerable due to the Sysbox container's usage of the Linux user-namespace. Fix: on the latest Linux release. Even though this CVE does not affect Sysbox containers, it does affect regular containers under some scenarios. Therefore we recommend that you check when your Linux distro picks up the fix and apply it. Date: 03/03/22 Severity: High Problem: A flaw in the Linux pipes mechanism allows privilege" }, { "data": "Even a process whose user-ID is \"nobody\" can elevate its privileges. Effect on Sysbox: This vulnerability affects containers deployed with Sysbox as it voids the protection provided by the Linux user-namespace (where processes in the container run as \"nobody:nogroup\" at host level). Fix: The vulnerability first appeared in Linux kernel version 5.8, which was released in 08/2020. The vulnerability was fixed on 02/21/22 via and available in kernel versions 5.16.11, 5.15.25, and 5.10.102. We recommend you check when your Linux distro picks up the fix and apply it. Date: 03/15/22 Severity: High Problem: is a vulnerability that affects the CRI-O runtime. Since installing Sysbox on Kubernetes clusters , such clusters may be vulnerable. The vulnerability allows a user with rights to deploy pods on the Kubernetes cluster to achieve container escape and get root access to the underlying node, using a flaw in the way CRI-O parses the pod's `sysctl` securityContext. Refer to the CVE description for full details. Fix: The version of sysbox-deploy-k8s released after 04/12/22 carries a CRI-O binary that has been patched to fix this problem. This in CRI-O has the fix. 
To ensure you have the fix, check that your sysbox-deploy-k8s has a that points to image `registry.nestybox.com/nestybox/sysbox-deploy-k8s:v0.5.1` (or later). Alternatively, check that the version of Sysbox on your Kubernetes nodes is v0.5.1 or later (e.g., run `systemctl status sysbox` on the K8s node). If you have a prior version of Sysbox installed in your cluster, then your CRI-O is vulnerable. In this case we recommend upgrading the Sysbox version on your Kubernetes cluster, using the steps described .

CVE-2024-21626

Date: 01/31/24

Severity: High

Problem: is a vulnerability in the OCI runc runtime that allows a container escape that gives it access to the host filesystem. Details can be found in . The vulnerability impacts runc versions between v1.0.0-rc93 and 1.1.11 (inclusive), and has been fixed in runc 1.1.12.

Though Sysbox is a modified fork of the OCI runc runtime, it's NOT affected by the same vulnerability because:

- Sysbox does not leak host file descriptors into the container as the vulnerable runc versions do.
- Sysbox always enables the Linux user-namespace on containers; thus, even if a host file descriptor had been leaked allowing the container to escape into the host filesystem, the container process would be quite limited in the actions it can take on the host (e.g., it would not have permissions to modify root or user owned files, unless these files have permissions enabled for "others").
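For the sysbox-deploy-k8s check described above (CVE-2022-0811), the commands below are one way to inspect what is installed. The namespace (`kube-system`) and the DaemonSet name are assumptions based on a typical sysbox-deploy-k8s installation, so adjust them to match your cluster.

```bash
# Print the image used by the Sysbox installer DaemonSet (assumed name and namespace).
kubectl -n kube-system get daemonset sysbox-deploy-k8s \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# On a Kubernetes node, check the installed Sysbox version directly.
systemctl status sysbox
```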
{ "category": "Runtime", "file_name": "security-cve.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "(appc) is an open specification that defines several aspects of how to run applications in containers: an image format, runtime environment, and discovery protocol. rkt's native and are those defined by the . Note that as of late 2016, appc is . However, appc is fully functional and stable and will continue to be supported. Future versions of rkt might gain . The image format defined by appc and used in rkt is the , or ACI. An ACI is a simple tarball bundle of a rootfs (containing all the files needed to execute an application) and an Image Manifest, which defines things like default execution parameters and default resource constraints. ACIs can be built with tools like , , or . Docker images can be converted to ACI using , although rkt will . Most parameters defined in an image can be overridden at runtime by rkt. For example, the `rkt run` command allows users to supply custom exec arguments to an image. appc defines the as the basic unit of execution. A pod is a grouping of one or more app images (ACIs), with some additional metadata optionally applied to the pod as a whole - for example, a resource constraint can be applied at the pod level and then forms an \"outer bound\" for all the applications in the pod. The images in a pod execute with a shared context, including networking. A pod in rkt is conceptually identical to a pod . rkt implements the two runtime components of the appc specification: the and the . It also uses schema and code from the upstream repo to manipulate ACIs, work with image and pod manifests, and perform image discovery. To validate that `rkt` successfully implements the ACE part of the spec, use the App Container : ``` --mds-register \\ --volume=database,kind=host,source=/tmp \\ https://github.com/appc/spec/releases/download/v0.8.11/ace-validator-main.aci \\ https://github.com/appc/spec/releases/download/v0.8.11/ace-validator-sidekick.aci ```" } ]
{ "category": "Runtime", "file_name": "app-container.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Weave Net is licensed under the . Some vendored code is under different licenses though, all of them ship the entire license text they are under. https://github.com/weaveworks/go-checkpoint can be found in the ./vendor/ directory, is under MPL-2.0. Pulled in by dependencies are https://github.com/hashicorp/golang-lru (MPL-2.0) https://github.com/hashicorp/go-cleanhttp (MPL-2.0) https://github.com/opencontainers/go-digest (docs are under CC by-sa 4.0) is under LGPL-3, that's why we ship the license text in this repository." } ]
{ "category": "Runtime", "file_name": "VENDORED_CODE.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "(images-manage)= When working with images, you can inspect various information about the available images, view and edit their properties and configure aliases to refer to specific images. You can also export an image to a file, which can be useful to {ref}`copy or import it <images-copy>` on another machine. To list all images on a server, enter the following command: incus image list [<remote>:] If you do not specify a remote, the {ref}`default remote <images-remote-default>` is used. (images-manage-filter)= To filter the results that are displayed, specify a part of the alias or fingerprint after the command. For example, to show all Ubuntu 22.04 images, enter the following command: incus image list images: 22.04 You can specify several filters as well. For example, to show all Arm 64-bit Ubuntu 22.04 images, enter the following command: incus image list images: 22.04 arm64 To filter for properties other than alias or fingerprint, specify the filter in `<key>=<value>` format. For example: incus image list images: 22.04 architecture=x86_64 To view information about an image, enter the following command: incus image info <image_ID> As the image ID, you can specify either the image's alias or its fingerprint. For a remote image, remember to include the remote server (for example, `images:ubuntu/22.04`). To display only the image properties, enter the following command: incus image show <image_ID> You can also display a specific image property (located under the `properties` key) with the following command: incus image get-property <image_ID> <key> For example, to show the release name of the official Ubuntu 22.04 image, enter the following command: incus image get-property images:ubuntu/22.04 release (images-manage-edit)= To set a specific image property that is located under the `properties` key, enter the following command: incus image set-property <image_ID> <key> ```{note} These properties can be used to convey information about the image. They do not configure Incus' behavior in any way. ``` To edit the full image properties, including the top-level properties, enter the following command: incus image edit <image_ID> Configuring an alias for an image can be useful to make it easier to refer to an image, since remembering an alias is usually easier than remembering a fingerprint. Most importantly, however, you can change an alias to point to a different image, which allows creating an alias that always provides a current image (for example, the latest version of a release). You can see some of the existing aliases in the image list. To see the full list, enter the following command: incus image alias list You can directly assign an alias to an image when you {ref}`copy or import <images-copy>` or {ref}`publish <images-create-publish>` it. Alternatively, enter the following command: incus image alias create <aliasname> <imagefingerprint> You can also delete an alias: incus image alias delete <alias_name> To rename an alias, enter the following command: incus image alias rename <aliasname> <newalias_name> If you want to keep the alias name, but point the alias to a different image (for example, a newer version), you must delete the existing alias and then create a new one. (images-manage-export)= Images are located in the image store of your local server or a remote Incus server. You can export them to a file though. This method can be useful to back up image files or to transfer them to an air-gapped environment. 
To export a container image to a file, enter the following command:

    incus image export [<remote>:]<image> [<output_directory_path>]

To export a virtual machine image to a file, add the `--vm` flag:

    incus image export [<remote>:]<image> [<output_directory_path>] --vm

See {ref}`image-format` for a description of the file structure used for the image.
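To complete the copy workflow mentioned above, the exported file can be added to another server's image store with `incus image import`. The file name and alias below are placeholders, and depending on the image format the export may produce a separate metadata tarball and rootfs file that must both be passed to the import command.

```bash
# Import a previously exported image on the target machine and give it an alias
# (file name and alias are examples).
incus image import exported-image.tar.gz --alias my-restored-image
```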
{ "category": "Runtime", "file_name": "images_manage.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "is a programming framework enabling developers to build a distributed cloud system (Geo-Distributed Cloud System). YoMo's communication layer is made on top of the QUIC protocol, which brings high-speed data transmission. In addition, it has a built-in Streaming Serverless \"streaming function\", which significantly improves the development experience of distributed cloud systems. The distributed cloud system built by YoMo provides an ultra-high-speed communication mechanism between near-field computing power and terminals. It has a wide range of use cases in Metaverse, VR/AR, IoT, etc. YoMo is written in the Go language. For streaming Serverless, Golang plugins and shared libraries are used to load users' code dynamically, which also have certain limitations for developers. Coupled with Serverless architecture's rigid demand for isolation, this makes WebAssembly an excellent choice for running user-defined functions. For example, in the process of real-time AI inference in AR/VR devices or smart factories, the camera sends real-time unstructured data to the computing node in the near-field MEC (multi-access edge computing) device through YoMo. YoMo sends the AI computing result to the end device in real-time when the AI inference is completed. Thus, the hosted AI inference function will be automatically executed. However, a challenge for YoMo is to incorporate and manage handler functions written by multiple outside developers in an edge computing node. It requires runtime isolation for those functions without sacrificing performance. Traditional software container solutions, such as Docker, are not up to the task. They are too heavy and slow to handle real-time tasks. WebAssembly provides a lightweight and high-performance software container. It is ideally suited as a runtime for YoMos data processing handler functions. In this article, we will show you how to create a Rust function for Tensorflow-based image classification, compile it into WebAssembly, and then use YoMo to run it as a stream data handler. We use as our WebAssembly runtime because it offers the highest performance and flexibility compared with other WebAssembly runtimes. It is the only WebAssembly VM that reliably supports Tensorflow. YoMo manages WasmEdge VM instances and the contained WebAssembly bytecode apps through . Source code: <https://github.com/yomorun/yomo-wasmedge-tensorflow> Checkout Obviously, you will need to have , but I will assume you already did. Golang version should be newer than 1.15 for our example to work. You also need to install the YoMo CLI application. It orchestrates and coordinates data streaming and handler function invocations. ```bash $ go install github.com/yomorun/cli/yomo@latest $ yomo version YoMo CLI version: v0.1.3 ``` Next, please install the WasmEdge and its Tensorflow shared libraries. is a leading WebAssembly runtime hosted by the CNCF. We will use it to embed and run WebAssembly programs from YoMo. ```bash curl -sSf" }, { "data": "| bash ``` Finally, since our demo WebAssembly functions are written in Rust, you will also need a . For the rest of the demo, fork and clone the . ```bash git clone https://github.com/yomorun/yomo-wasmedge-tensorflow.git ``` The to process the YoMo image stream is written in Rust. It utilizes the WasmEdge Tensorflow API to process an input image. 
```rust pub fn infer(image_data: Vec<u8>) -> Result<Vec<u8>, String> { let start = Instant::now(); // Load the TFLite model and its meta data (the text label for each recognized object number) let modeldata: &[u8] = includebytes!(\"lite-modelaiyvisionclassifierfoodV11.tflite\"); let labels = includestr!(\"aiyfoodV1labelmap.txt\"); // Pre-process the image to a format that can be used by this model let flatimg = wasmedgetensorflowinterface::loadjpgimagetorgb8(&imagedata[..], 192, 192); println!(\"RUST: Loaded image in ... {:?}\", start.elapsed()); // Run the TFLite model using the WasmEdge Tensorflow API let mut session = wasmedgetensorflowinterface::Session::new(&modeldata, wasmedgetensorflow_interface::ModelType::TensorFlowLite); session.addinput(\"input\", &flatimg, &[1, 192, 192, 3]) .run(); let resvec: Vec<u8> = session.getoutput(\"MobilenetV1/Predictions/Softmax\"); // Find the object index in res_vec that has the greatest probability // Translate the probability into a confidence level // Translate the object index into a label from the model meta data food_name let mut i = 0; let mut max_index: i32 = -1; let mut max_value: u8 = 0; while i < res_vec.len() { let cur = res_vec[i]; if cur > max_value { max_value = cur; max_index = i as i32; } i += 1; } println!(\"RUST: index {}, prob {}\", maxindex, maxvalue); let confidence: String; if max_value > 200 { confidence = \"is very likely\".to_string(); } else if max_value > 125 { confidence = \"is likely\".to_string(); } else { confidence = \"could be\".to_string(); } let ret_str: String; if max_value > 50 { let mut label_lines = labels.lines(); for i in 0..maxindex { label_lines.next(); } let foodname = labellines.next().unwrap().to_string(); ret_str = format!( \"It {} a <a href='https://www.google.com/search?q={}'>{}</a> in the picture\", confidence, foodname, foodname ); } else { retstr = \"It does not appears to be a food item in the picture.\".tostring(); } println!( \"RUST: Finished post-processing in ... {:?}\", start.elapsed() ); return Ok(retstr.asbytes().to_vec()); } ``` You should add `wasm32-wasi` target to rust to compile this function into WebAssembly bytecode. ```bash rustup target add wasm32-wasi cd flow/rustmobilenetfood cargo build --target wasm32-wasi --release cp target/wasm32-wasi/release/rustmobilenetfood_lib.wasm ../ ``` To release the best performance of WasmEdge, you should enable the AOT mode by compiling the `.wasm` file to the `.so`. ```bash wasmedge compile rustmobilenetfoodlib.wasm rustmobilenetfoodlib.so ``` On the YoMo side, we use the WasmEdge Golang API to start and run WasmEdge VM for the image classification function. The file in the source code project is as follows. 
```go package main import ( \"crypto/sha1\" \"fmt\" \"log\" \"os\" \"sync/atomic\" \"github.com/second-state/WasmEdge-go/wasmedge\" bindgen \"github.com/second-state/wasmedge-bindgen/host/go\"" }, { "data": ") var ( counter uint64 ) const ImageDataKey = 0x10 func main() { // Connect to Zipper service sfn := yomo.NewStreamFunction(\"image-recognition\", yomo.WithZipperAddr(\"localhost:9900\")) defer sfn.Close() // set only monitoring data sfn.SetObserveDataID(ImageDataKey) // set handler sfn.SetHandler(Handler) // start err := sfn.Connect() if err != nil { log.Print(\" Connect to zipper failure: \", err) os.Exit(1) } select {} } // Handler process the data in the stream func Handler(img []byte) (byte, []byte) { // Initialize WasmEdge's VM vmConf, vm := initVM() bg := bindgen.Instantiate(vm) defer bg.Release() defer vm.Release() defer vmConf.Release() // recognize the image res, err := bg.Execute(\"infer\", img) if err == nil { fmt.Println(\"GO: Run bindgen -- infer:\", string(res)) } else { fmt.Println(\"GO: Run bindgen -- infer FAILED\") } // print logs hash := genSha1(img) log.Printf(\" received image-%d hash %v, img_size=%d \\n\", atomic.AddUint64(&counter, 1), hash, len(img)) return 0x11, nil } // genSha1 generate the hash value of the image func genSha1(buf []byte) string { h := sha1.New() h.Write(buf) return fmt.Sprintf(\"%x\", h.Sum(nil)) } // initVM initialize WasmEdge's VM func initVM() (wasmedge.Configure, wasmedge.VM) { wasmedge.SetLogErrorLevel() // Set Tensorflow not to print debug info os.Setenv(\"TFCPPMINLOGLEVEL\", \"3\") os.Setenv(\"TFCPPMINVLOGLEVEL\", \"3\") // Create configure vmConf := wasmedge.NewConfigure(wasmedge.WASI) // Create VM with configure vm := wasmedge.NewVMWithConfig(vmConf) // Init WASI var wasi = vm.GetImportObject(wasmedge.WASI) wasi.InitWasi( os.Args[1:], // The args os.Environ(), // The envs []string{\".:.\"}, // The mapping directories ) // Register WasmEdge-tensorflow and WasmEdge-image var tfobj = wasmedge.NewTensorflowImportObject() var tfliteobj = wasmedge.NewTensorflowLiteImportObject() vm.RegisterImport(tfobj) vm.RegisterImport(tfliteobj) var imgobj = wasmedge.NewImageImportObject() vm.RegisterImport(imgobj) // Instantiate wasm vm.LoadWasmFile(\"rustmobilenetfood_lib.so\") vm.Validate() return vmConf, vm } ``` Finally, we can start YoMo and see the entire data processing pipeline in action. Start the YoMo CLI application from the project folder. The defines port YoMo should listen on and the workflow handler to trigger for incoming data. Note that the flow name `image-recognition` matches the name in the aforementioned data handler . ```bash yomo serve -c ./zipper/workflow.yaml ``` Start the handler function by running the aforementioned program. ```bash cd flow go run --tags \"tensorflow image\" app.go ``` by sending a video to YoMo. The video is a series of image frames. The WasmEdge function in will be invoked against every image frame in the video. ```bash wget -P source 'https://github.com/yomorun/yomo-wasmedge-tensorflow/releases/download/v0.1.0/hot-dog.mp4' go run ./source/main.go ./source/hot-dog.mp4 ``` You can see the output from the WasmEdge handler function in the console. It prints the names of the objects detected in each image frame in the video. In this article, we have seen how to use the WasmEdge Tensorflow API and Golang SDK in YoMo framework to process an image stream in near real-time. In collaboration with YoMo, we will soon deploy WasmEdge in production in smart factories for a variety of assembly line tasks. 
WasmEdge is the software runtime for edge computing!" } ]
{ "category": "Runtime", "file_name": "yomo.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "Name | Type | Description | Notes | - | - | - ThreadsPerCore | Pointer to int32 | | [optional] CoresPerDie | Pointer to int32 | | [optional] DiesPerPackage | Pointer to int32 | | [optional] Packages | Pointer to int32 | | [optional] `func NewCpuTopology() *CpuTopology` NewCpuTopology instantiates a new CpuTopology object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewCpuTopologyWithDefaults() *CpuTopology` NewCpuTopologyWithDefaults instantiates a new CpuTopology object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *CpuTopology) GetThreadsPerCore() int32` GetThreadsPerCore returns the ThreadsPerCore field if non-nil, zero value otherwise. `func (o CpuTopology) GetThreadsPerCoreOk() (int32, bool)` GetThreadsPerCoreOk returns a tuple with the ThreadsPerCore field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpuTopology) SetThreadsPerCore(v int32)` SetThreadsPerCore sets ThreadsPerCore field to given value. `func (o *CpuTopology) HasThreadsPerCore() bool` HasThreadsPerCore returns a boolean if a field has been set. `func (o *CpuTopology) GetCoresPerDie() int32` GetCoresPerDie returns the CoresPerDie field if non-nil, zero value otherwise. `func (o CpuTopology) GetCoresPerDieOk() (int32, bool)` GetCoresPerDieOk returns a tuple with the CoresPerDie field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpuTopology) SetCoresPerDie(v int32)` SetCoresPerDie sets CoresPerDie field to given value. `func (o *CpuTopology) HasCoresPerDie() bool` HasCoresPerDie returns a boolean if a field has been set. `func (o *CpuTopology) GetDiesPerPackage() int32` GetDiesPerPackage returns the DiesPerPackage field if non-nil, zero value otherwise. `func (o CpuTopology) GetDiesPerPackageOk() (int32, bool)` GetDiesPerPackageOk returns a tuple with the DiesPerPackage field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpuTopology) SetDiesPerPackage(v int32)` SetDiesPerPackage sets DiesPerPackage field to given value. `func (o *CpuTopology) HasDiesPerPackage() bool` HasDiesPerPackage returns a boolean if a field has been set. `func (o *CpuTopology) GetPackages() int32` GetPackages returns the Packages field if non-nil, zero value otherwise. `func (o CpuTopology) GetPackagesOk() (int32, bool)` GetPackagesOk returns a tuple with the Packages field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpuTopology) SetPackages(v int32)` SetPackages sets Packages field to given value. `func (o *CpuTopology) HasPackages() bool` HasPackages returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "CpuTopology.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "This week we worked on refactoring the storage and runtime aspects of containerd, much of this work won't be completed until next week. @stevvooe started work on a `dist` tool for fetching images and other distribution aspects. One of the problems we wanted to solve with containerd is the ability to decouple the fetch of image contents, the unpacking of the contents, and the storage of the contents. Separating these lets us download content as non-root on machines and only elevate privileges when the contents need to be unpacked on disk. https://github.com/containerd/containerd/pull/452 A large part of my week I have been working on the shim. It is a very core aspect of containerd and allows containers to run without being tied to the lifecycle of the containerd daemon. With this work we will end up with one shim per container that manages all additional processes in the container. This saves resources on the system as well as makes interacting with the shim much easier. We are placing the shim behind a GRPC API instead of the pipe based protocol that we have today. I don't have a PR open at this time but expect something next week. We worked on the build process this week as well as improvements across the codebase. There were 31 commits merged this week from various contributors. Overall, there were no large features hitting master this week. Many of those are still in the works but a lot was done to simplify existing code and reduce complexity where we can." } ]
{ "category": "Runtime", "file_name": "2017-01-20.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "Fork the repository into your account, referred in the below instructions as $user. ```bash working_dir=$GOPATH/src/github.com/openebs mkdir -p $working_dir cd $working_dir ``` Set `user` to match your Github profile name: for setting global configurations use - git config --global user.name \"your usernae here\" git config --global user.email \"your email used in github account\" and for configuring locally use the above commands just remove the --global flag ```bash user={your Github profile name} ``` ```bash git clone https://github.com/$user/openebs.git cd openebs/ git remote add upstream https://github.com/openebs/openebs.git git remote set-url --push upstream no_push git remote -v ``` ```bash git checkout master git fetch upstream master git rebase upstream/master git status git push origin master ``` ```bash git branch <branch_name> git checkout <branch_name> git push --set-upstream origin <branch_name> ``` ```bash git checkout <branch-name> git fetch upstream master git rebase upstream/master git status git push ``` ```bash git add -A git commit -m \"creating changes in local repository\" git push ``` ```bash git push origin --delete <branch_name> git branch -d <branch_name> ``` Though it is important to write unit tests, do not try to achieve 100% code coverage if it complicates writing these tests. If a unit test is simple to write & understand, most probably it will be extended when new code gets added. However, the reverse will lead to its removal on the whole. In other words, complicated unit tests will lead to decrease in the overall coverage in the long run. OpenEBS being an OpenSource project will always try to experiment with new ideas and concepts. Hence, writing unit tests will provide the necessary checklist that reduces the scope for errors. Go back to ." } ]
{ "category": "Runtime", "file_name": "git-cheatsheet.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Antrea supports running on Windows worker Nodes. On Windows Nodes, Antrea sets up an overlay network to forward packets between Nodes and implements NetworkPolicies. On Windows, the Host Networking Service (HNS) is a necessary component to support container networking. For Antrea on Windows, \"Transparent\" mode is chosen for the HNS network. In this mode, containers will be directly connected to the physical network through an external Hyper-V switch. OVS is working as a forwarding extension for the external Hyper-V switch which was created by HNS. Hence, the packets that are sent from/to the containers can be processed by OVS. The network adapter used in the HNS Network is also added to the OVS bridge as the uplink interface. An internal interface for the OVS bridge is created, and the original networking configuration (e.g., IP, MAC and routing entries) on the host network adapter is moved to this new interface. Some extra OpenFlow entries are needed to ensure the host traffic can be forwarded correctly. <img src=\"../assets/hns_integration.svg\" width=\"600\" alt=\"HNS Integration\"> Windows NetNat is configured to make sure the Pods can access external addresses. The packet from a Pod to an external address is firstly output to antrea-gw0, and then SNAT is performed on the Windows host. The SNATed packet enters OVS from the OVS bridge interface and leaves the Windows host from the uplink interface directly. Antrea implements the Kubernetes ClusterIP Service leveraging OVS. Pod-to-ClusterIP-Service traffic is load-balanced and forwarded directly inside the OVS pipeline. And kube-proxy is running on each Windows Node to implement Kubernetes NodePort Service. Kube-proxy captures NodePort Service traffic and sets up a connection to a backend Pod to forwards the request using this connection. The forwarded request enters the OVS pipeline through \"antrea-gw0\" and is then forwarded to the Pod. To be compatible with OVS, kube-proxy on Windows must be configured to run in userspace mode, and a specific network adapter is required, on which Service IP addresses will be configured by kube-proxy. HNS Network is created during the Antrea Agent initialization phase, and it should be created before the OVS bridge is created. This is because OVS is working as the Hyper-V Switch Extension, and the ovs-vswitchd process cannot work correctly until the OVS Extension is enabled on the newly created Hyper-V Switch. When creating the HNS Network, the local subnet CIDR and the uplink network adapter are required. Antrea Agent finds the network adapter from the Windows host using the Node's internal IP as a filter, and retrieves the local Subnet CIDR from the Node spec. After the HNS Network is created, OVS extension should be enabled at once on the Hyper-V Switch. plugin is used to provide IPAM for containers, and the address is allocated from the subnet CIDR configured on the HNS Network. Windows HNS Endpoint is leveraged as the vNIC for each container. A single HNS Endpoint with the IP allocated by the IPAM plugin is created for each Pod. The HNS Endpoint should be attached to all containers in the same Pod to ensure that the network configuration can be correctly accessed (this operation is to make sure the DNS configuration is readable from all containers). One OVS internal port with the same name as the HNS Endpoint is also needed, in order to handle container traffic with OpenFlow rules. 
OpenFlow entries are installed to implement Pod-to-Pod, Pod-to-external and Pod-to-ClusterIP-Service" }, { "data": "CNIAdd request might be called multiple times for a given Pod. This is because kubelet on Windows assumes CNIAdd is an idempotent event, and it uses this event to query the Pod networking status. Antrea needs to identify the container type (sandbox or workload) from the CNIAdd request: we create the HNS Endpoint only when the request is for the sandbox container we attach the HNS Endpoint no matter whether it is a sandbox container or a workload container. The gateway port is created during the Antrea Agent initialization phase, and the address of the interface should be the first IP in the subnet. The port is an OVS internal port and its default name is \"antrea-gw0\". The gateway port is used to help implement L3 connectivity for the containers, including Pod-to-external, and Node-to-Pod. For the Pod-to-external case, OpenFlow entries are installed in order to output these packets to the host on the gateway port. To ensure the packet is forwarded correctly on the host, the IP-Forwarding feature should be enabled on the network adapter of the gateway port. A routing entry for traffic from the Node to the local Pod subnet is needed on the Windows host to ensure that the packet can enter the OVS pipeline on the gateway port. This routing entry is added when \"antrea-gw0\" is created. Every time a new Node joins the cluster, a host routing entry on the gateway port is required, and the remote subnet CIDR should be routed with the remote gateway address as the nexthop. Tunnel port configuration should be similar to Antrea on Linux: tunnel port is added after OVS bridge is created; a flow-based tunnel with the appropriate remote address is created for each Node in the cluster with OpenFlow. The only difference with Antrea on Linux is that the tunnel local address is required when creating the tunnel port (provided with `local_ip` option). This local address is the one configured on the OVS bridge. Since OVS is also responsible for taking charge of the network of the host, an interface for the OVS bridge is required on which the host network settings are configured. The virtual network adapter which is created when creating the HNS Network is used as the OVS bridge interface. The virtual network adapter is renamed as the expected OVS bridge name, then the OVS bridge port is created. Hence, OVS can find the virtual network adapter with the name and attach it directly. Windows host has configured the virtual network adapter with IP, MAC and route entries which were originally on the uplink interface when creating the HNSNetwork, as a result, no extra manual IP/MAC/Route configurations on the OVS bridge are needed. The packets that are sent to/from the Windows host should be forwarded on this interface. So the OVS bridge is also a valid entry point into the OVS pipeline. A special ofport number 65534 (named as LOCAL) for the OVS bridge is used in OpenFlow spec. In the OVS `Classifier` table, new OpenFlow entries are needed to match the packets from this interface. The packet entering OVS from this interface is output to the uplink interface directly. After the OVS bridge is created, the original physical adapter is added to the OVS bridge as the uplink interface. The uplink interface is used to support traffic from Pods accessing the world outside current" }, { "data": "We should differentiate the traffic if it is entering OVS from the uplink interface in OVS `Classifier` table. 
In encap mode, the packets entering OVS from the uplink interface is output to the bridge interface directly. In noEncap mode, there are three kinds of packets entering OVS from the uplink interface: 1) Traffic that is sent to local Pods from Pod on a different Node 2) Traffic that is sent to local Pods from a different Node according to the routing configuration 3) Traffic on the host network For 1 and 2, the packet enters the OVS pipeline, and the `macRewriteMark` is set to ensure the destination MAC can be modified. For 3, the packet is output to the OVS bridge interface directly. The packet is always output to the uplink interface if it is entering OVS from the bridge interface. We also output the Pod traffic to the uplink interface in noEncap mode, if the destination is a Pod on a different Node, or if it is a reply packet to the request which is sent from a different Node. Then we can reduce the cost that the packet enters OVS twice (OVS -> Windows host -> OVS). Following are the OpenFlow entries for uplink interface in encap mode. ```text Classifier Table: 0 table=0, priority=200, in_port=$uplink actions=LOCAL table=0, priority=200, in_port=LOCAL actions=output:$uplink ``` Following is an example for the OpenFlow entries related with uplink interface in noEncap mode. ```text Classifier Table: 0 table=0, priority=210, ip, inport=$uplink, nwdst=$localPodSubnet, actions=load:0x4->NXMNXREG0[0..15],load:0x 1->NXMNXREG0[19],resubmit(,29) table=0, priority=200, in_port=$uplink actions=LOCAL table=0, priority=200, in_port=LOCAL actions=output:$uplink L3Forwarding Table: 70 // Rewrite the destination MAC with the Node's MAC on which target Pod is located. table=70, priority=200,ip,nwdst=$peerPodSubnet actions=moddl_dst:$peerNodeMAC,resubmit(,80) // Rewrite the destination MAC with the Node's MAC if it is a reply for the access from the Node. table=70, priority=200,ctstate=+rpl+trk,ip,nwdst=$peerNodeIP actions=moddldst:$peerNodeMAC,resubmit(,80) L2ForwardingCalcTable: 80 table=80, priority=200,dldst=$peerNodeMAC actions=load:$uplink->NXMNXREG1[],setfield:0x10000/0x10000->reg0,resubmit(,105) ``` SNAT is an important feature of the Antrea Agent on Windows Nodes, required to support Pods accessing external addresses. It is implemented using the NAT capability of the Windows host. To support this feature, we configure NetNat on the Windows host for the Pod subnet: ```text New-NetNat -Name antrea-nat -InternalIPInterfaceAddressPrefix $localPodSubnet ``` The packet that is sent from local Pod to an external address leaves OVS from `antrea-gw0` and enters Windows host, and SNAT action is performed. The SNATed address is chosen by Windows host according to the routing configuration. As for the reply packet of the Pod-to-external traffic, it enters Windows host and performs de-SNAT first, and then the packet enters OVS from `antrea-gw0` and is forwarded to the Pod finally. Named pipe is used for local connections on Windows Nodes instead of Unix Domain Socket (UDS). It is used in these scenarios: OVSDB connection OpenFlow connection The connection between CNI plugin and CNI server While we provide different installation methods for Windows, the recommended one is to use the `antrea-windows-with-ovs.yml` manifest. With this method, the antrea-agent process and the OVS daemons (ovsdb-server and ovs-vswitchd) run as a Pod on Windows worker Nodes, and are managed by a DaemonSet. This installation method relies on support. 
The intra-Node Pod-to-Pod traffic and inter-Node Pod-to-Pod traffic are the same as Antrea on Linux. It is processed and forwarded by OVS, and controlled with OpenFlow entries. Kube-proxy userspace mode is configured to provide NodePort Service" }, { "data": "A specific Network adapter named \"HNS Internal NIC\" is provided to kube-proxy to configure Service addresses. The OpenFlow entries for the NodePort Service traffic on Windows are the same as those on Linux. AntreaProxy implements the ClusterIP Service function. Antrea Agent installs routes to send ClusterIP Service traffic from host network to the OVS bridge. For each Service, it adds a route that routes the traffic via a virtual IP (169.254.0.253), and it also adds a route to indicate that the virtual IP is reachable via antrea-gw0. The reason to add a virtual IP, rather than routing the traffic directly to antrea-gw0, is that then only one neighbor cache entry needs to be added, which resolves the virtual IP to a virtual MAC. When a Service's endpoints are in hostNetwork or external network, a request packet will have its destination IP DNAT'd to the selected endpoint IP and its source IP will be SNAT'd to the virtual IP (169.254.0.253). Such SNAT is needed for sending the reply packets back to the OVS pipeline from the host network, whose destination IP was the Node IP before d-SNATed to the virtual IP. Check the packet forwarding path described below. For a request packet from host, it will enter OVS pipeline via antrea-gw0 and exit via antrea-gw0 as well to host network. On Windows host, with the help of NetNat, the request packet's source IP will be SNAT'd again to Node IP. The reply packets are the reverse for both situations regardless of whether the endpoint is in ClusterCIDR or not. The following path is an example of host accessing a Service whose endpoint is a hostNetwork Pod on another Node. The request packet is like: ```text host -> antrea-gw0 -> OVS pipeline -> antrea-gw0 -> host NetNat -> br-int -> OVS pipeline -> peer Node | | DNAT(peer Node IP) SNAT(Node IP) SNAT(virtual IP) ``` The forwarding path of a reply packet is like: ```text peer Node -> OVS pipeline -> br-int -> host NetNat -> antrea-gw0 -> OVS pipeline -> antrea-gw0 -> host | | d-SNAT(virtual IP) d-SNAT(antrea-gw0 IP) d-DNAT(Service IP) ``` The Pod-to-external traffic leaves the OVS pipeline from the gateway interface, and then is SNATed on the Windows host. If the packet should leave Windows host from OVS uplink interface according to the routing configuration on the Windows host, it is forwarded to OVS bridge first on which the host IP is configured, and then output to the uplink interface by OVS pipeline. The corresponding reply traffic will enter OVS from the uplink interface first, and then enter the host from the OVS bridge interface. It is de-SNATed on the host and then back to OVS from `antre-gw0` and forwarded to the Pod finally. <img src=\"../assets/windowsexternaltraffic.svg\" width=\"600\" alt=\"Traffic to external\"> In \"Transparent\" mode, the Antrea Agent should also support the host traffic when necessary, which includes packets sent from the host to external addresses, and the ones sent from external addresses to the host. The host traffic enters OVS bridge and output to the uplink interface if the destination is reachable from the network adapter which is plugged on OVS as uplink. For the reverse path, the packet enters OVS from the uplink interface first, and then directly output to the bridge interface and enters Windows host. 
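As an illustration of the Service route and neighbor entries described above (not Antrea's actual implementation, which programs these entries from the agent code), the equivalent manual PowerShell configuration would look roughly as follows. The Service CIDR and the virtual MAC address are placeholders.

```powershell
# Illustrative only: route the Service CIDR via the virtual IP, reachable on antrea-gw0.
New-NetRoute -DestinationPrefix "10.96.0.0/12" -InterfaceAlias "antrea-gw0" -NextHop "169.254.0.253"

# A single neighbor cache entry resolving the virtual IP to a virtual MAC (placeholder value).
New-NetNeighbor -IPAddress "169.254.0.253" -LinkLayerAddress "aa-bb-cc-dd-ee-ff" -InterfaceAlias "antrea-gw0" -State Permanent
```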
For the traffic that is connected to the Windows network adapters other than the OVS uplink interface, it is managed by Windows host." } ]
{ "category": "Runtime", "file_name": "windows-design.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "(incus-alias)= The Incus command-line client supports adding aliases for commands that you use frequently. You can use aliases as shortcuts for longer commands, or to automatically add flags to existing commands. To manage command aliases, you use the command. For example, to always ask for confirmation when deleting an instance, create an alias for `incus delete` that always runs `incus delete -i`: incus alias add delete \"delete -i\" To see all configured aliases, run . Run to see all available subcommands." } ]
{ "category": "Runtime", "file_name": "incus_alias.md", "project_name": "lxd", "subcategory": "Container Runtime" }