{ "category": "Provisioning", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Trivy", "subcategory": "Security & Compliance" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT operator." }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific language, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` To search for JavaScript files within a src directory, you can use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` In contrast, the following query: ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
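As an illustration of what the path: regular expression above matches, here is a small Go sketch. It is only a demonstration under stated assumptions, not how GitHub evaluates queries; also note that GitHub code search applies regular expressions case-insensitively, whereas Go's regexp is case-sensitive by default.
```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The path: regular expression from the example above, written as a Go
	// regexp: (^|/)README\.md$ matches README.md at the start of the path or
	// immediately after a directory separator, and nothing may follow ".md".
	re := regexp.MustCompile(`(^|/)README\.md$`)

	paths := []string{
		"README.md",          // match: at the start of the path
		"docs/README.md",     // match: preceded by "/"
		"docs/README.md.txt", // no match: extra suffix after .md
		"OLD_README.md",      // no match: README is not preceded by "/" or the start
	}
	for _, p := range paths {
		fmt.Printf("%-22s %v\n", p, re.MatchString(p))
	}
}
```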
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, given the following query: ``` printf(\"hello world\\n\"); ``` code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
{ "category": "Runtime", "file_name": "v2.0.0.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Antrea is a Kubernetes networking solution intended to be Kubernetes native. It operates at Layer 3/4 to provide networking and security services for a Kubernetes cluster, leveraging Open vSwitch as the networking data plane. Open vSwitch is a widely adopted high-performance programmable virtual switch; Antrea leverages it to implement Pod networking and security features. For instance, Open vSwitch enables Antrea to implement Kubernetes Network Policies in a very efficient manner. Antrea has been tested with Kubernetes clusters running version 1.19 or later. Getting started with Antrea is very simple, and takes only a few minutes. See how its done in the Getting started document. The Antrea community welcomes new contributors. We are waiting for your PRs! Code of Conduct. Also check out @ProjectAntrea on Twitter! Nephe implements security policies for VMs across clouds, leveraging Antrea-native policies. To explore more Antrea features and their usage, check the Getting started document and user guides in the Antrea documentation folder. Refer to the Changelogs for a detailed list of features introduced for each version release. For a list of Antrea Adopters, please refer to ADOPTERS.md. We are adding features very quickly to Antrea. Check out the list of features we are considering on our Roadmap page. Feel free to throw your ideas in! Antrea is licensed under the Apache License, version 2.0 To help you get started, see the documentation. 2024 The Linux Foundation. All Rights Reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of the Linux Foundation, please see our Trademarks Usage page." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Container Network Interface (CNI)", "subcategory": "Cloud Native Network" }
[ { "data": "CNI Documentation: This is CNI spec version 1.0.0. Note that this is independent from the version of the CNI library and plugins in this repository (e.g. the versions of releases ). Released versions of the spec are available as Git tags. | tag | spec permalink | major changes | |:|:--|:| | spec-v1.0.0 | spec at v1.0.0 | Removed non-list configurations; removed version field of interfaces array | | spec-v0.4.0 | spec at v0.4.0 | Introduce the CHECK command and passing prevResult on DEL | | spec-v0.3.1 | spec at v0.3.1 | none (typo fix only) | | spec-v0.3.0 | spec at v0.3.0 | rich result type, plugin chaining | | spec-v0.2.0 | spec at v0.2.0 | VERSION command | | spec-v0.1.0 | spec at v0.1.0 | initial version | Do not rely on these tags being stable. In the future, we may change our mind about which particular commit is the right marker for a given historical spec version. This document proposes a generic plugin-based networking solution for application containers on Linux, the Container Networking Interface, or CNI. For the purposes of this proposal, we define three terms very specifically: This document aims to specify the interface between runtimes and plugins. The key words must, must not, required, shall, shall not, should, should not, recommended, may and optional are used as specified in RFC 2119 . The CNI specification defines: CNI defines a network configuration format for administrators. It contains directives for both the container runtime as well as the plugins to consume. At plugin execution time, this configuration format is interpreted by the runtime and transformed in to a form to be passed to the plugins. In general, the network configuration is intended to be static. It can conceptually be thought of as being on disk, though the CNI specification does not actually require this. A network configuration consists of a JSON object with the following keys: Plugin configuration objects may contain additional fields than the ones defined here. The runtime MUST pass through these fields, unchanged, to the plugin, as defined in section 3. Required keys: Optional keys, used by the protocol: Reserved keys, used by the protocol: These keys are generated by the runtime at execution time, and thus should not be used in configuration. Optional keys, well-known: These keys are not used by the protocol, but have a standard meaning to plugins. Plugins that consume any of these configuration keys should respect their intended semantics. Other keys: Plugins may define additional fields that they accept and may generate an error if called with unknown fields. Runtimes must preserve unknown fields in plugin configuration objects when transforming for execution. ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"plugins\": [ { \"type\": \"bridge\", // plugin specific parameters \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", // ipam specific \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\", \"routes\": [ {\"dst\": \"0.0.0.0/0\"} ] }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } }, { \"type\": \"tuning\", \"capabilities\": { \"mac\": true }, \"sysctl\": { \"net.core.somaxconn\": \"500\" } }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true} } ] } ``` The CNI protocol is based on execution of binaries invoked by the container runtime. CNI defines the protocol between the plugin binary and the runtime. 
A CNI plugin is responsible for configuring a container's network interface in some manner. Plugins fall into two broad categories: The runtime passes parameters to the plugin via environment variables and configuration. It supplies configuration via stdin." }, { "data": "The plugin returns a result on stdout on success, or an error on stderr if the operation fails. Configuration and results are encoded in JSON. Parameters define invocation-specific settings, whereas configuration is, with some exceptions, the same for any given network. The runtime must execute the plugin in the runtime's networking domain. (For most cases, this means the root network namespace / dom0). Protocol parameters are passed to the plugins via OS environment variables. A plugin must exit with a return code of 0 on success, and non-zero on failure. If the plugin encounters an error, it should output an error result structure (see below). CNI defines 4 operations: ADD, DEL, CHECK, and VERSION. These are passed to the plugin via the CNI_COMMAND environment variable. A CNI plugin, upon receiving an ADD command, should either If the CNI plugin is successful, it must output a result structure (see below) on standard out. If the plugin was supplied a prevResult as part of its input configuration, it MUST handle prevResult by either passing it through, or modifying it appropriately. If an interface of the requested name already exists in the container, the CNI plugin MUST return with an error. A runtime should not call ADD twice (without an intervening DEL) for the same (CNI_CONTAINERID, CNI_IFNAME) tuple. This implies that a given container ID may be added to a specific network more than once only if each addition is done with a different interface name. Input: The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in. Required environment parameters: Optional environment parameters: A CNI plugin, upon receiving a DEL command, should either Plugins should generally complete a DEL action without error even if some resources are missing. For example, an IPAM plugin should generally release an IP allocation and return success even if the container network namespace no longer exists, unless that network namespace is critical for IPAM management. While DHCP may usually send a release message on the container network interface, since DHCP leases have a lifetime this release action would not be considered critical and no error should be returned if this action fails. For another example, the bridge plugin should delegate the DEL action to the IPAM plugin and clean up its own resources even if the container network namespace and/or container network interface no longer exist. Plugins MUST accept multiple DEL calls for the same (CNI_CONTAINERID, CNI_IFNAME) pair, and return success if the interface in question, or any modifications added, are missing. Input: The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in. Required environment parameters: Optional environment parameters: CHECK is a way for a runtime to probe the status of an existing container. Plugin considerations: Runtime considerations: Input: The runtime will provide a json-serialized plugin configuration object (defined below) on standard in. Required environment parameters: Optional environment parameters: All parameters, with the exception of CNI_PATH, must be the same as the corresponding ADD for this container. 
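To make the command/stdin/stdout/exit-code flow described above concrete, here is a minimal, hedged sketch of a plugin entry point in Go. It deliberately avoids the official skel helpers from the containernetworking project; the stub results, and the choice to print the structured error result on stdout while reserving stderr for unstructured logs, are assumptions for illustration only.
```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// errorResult mirrors the error result object described later in this spec.
type errorResult struct {
	CNIVersion string `json:"cniVersion"`
	Code       int    `json:"code"`
	Msg        string `json:"msg"`
	Details    string `json:"details,omitempty"`
}

// die emits a structured error result and exits non-zero, as required on failure.
func die(code int, msg string) {
	_ = json.NewEncoder(os.Stdout).Encode(errorResult{CNIVersion: "1.0.0", Code: code, Msg: msg})
	os.Exit(1)
}

func main() {
	cmd := os.Getenv("CNI_COMMAND")

	// VERSION carries no attachment-specific parameters.
	if cmd == "VERSION" {
		fmt.Println(`{"cniVersion": "1.0.0", "supportedVersions": ["1.0.0"]}`)
		return
	}

	// ADD, DEL and CHECK receive the execution configuration on stdin and
	// per-attachment parameters in environment variables.
	stdinConf, err := io.ReadAll(os.Stdin)
	if err != nil {
		die(5, "failed to read network config from stdin") // code 5: I/O failure
	}
	containerID := os.Getenv("CNI_CONTAINERID")
	ifName := os.Getenv("CNI_IFNAME")
	if containerID == "" || ifName == "" {
		die(4, "CNI_CONTAINERID and CNI_IFNAME are required") // code 4: invalid env vars
	}

	switch cmd {
	case "ADD":
		// A real plugin would create and configure the interface here, then
		// print a full Success result (ips, routes, interfaces, dns) on stdout.
		fmt.Printf(`{"cniVersion": "1.0.0", "interfaces": [{"name": %q, "sandbox": %q}], "ips": []}`+"\n",
			ifName, os.Getenv("CNI_NETNS"))
	case "DEL":
		// Tear down; must succeed even if the resources are already gone.
	case "CHECK":
		// Verify the attachment described by prevResult; no output on success.
		_ = stdinConf
	default:
		die(4, "unknown CNI_COMMAND "+cmd)
	}
}
```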
The plugin should output via standard-out a json-serialized version result object (see below). Input: A json-serialized object, with the following key: Required environment parameters: This section describes how a container runtime interprets a network configuration (as defined in section 1) and executes plugins accordingly. A runtime may wish to add, delete, or check a network configuration in a container. This results in a series of plugin ADD, DELETE, or CHECK executions," }, { "data": "This section also defines how a network configuration is transformed and provided to the plugin. The operation of a network configuration on a container is called an attachment. An attachment may be uniquely identified by the (CNICONTAINERID, CNIIFNAME) tuple. While a network configuration should not change between attachments, there are certain parameters supplied by the container runtime that are per-attachment. They are: Furthermore, the runtime must be provided a list of paths to search for CNI plugins. This must also be provided to plugins during execution via the CNI_PATH environment variable. For every configuration defined in the plugins key of the network configuration, The runtime must store the result returned by the final plugin persistently, as it is required for check and delete operations. Deleting a network attachment is much the same as adding, with a few key differences: For every plugin defined in the plugins key of the network configuration, in reverse order, If all plugins return success, return success to the caller. The runtime may also ask every plugin to confirm that a given attachment is still functional. The runtime must use the same attachment parameters as it did for the add operation. Checking is similar to add with two exceptions: For every plugin defined in the plugins key of the network configuration, If all plugins return success, return success to the caller. The network configuration format (which is a list of plugin configurations to execute) must be transformed to a format understood by the plugin (which is a single plugin configuration). This section describes that transformation. The execution configuration for a single plugin invocation is also JSON. It consists of the plugin configuration, primarily unchanged except for the specified additions and removals. The following fields must be inserted into the execution configuration by the runtime: The following fields must be removed by the runtime: All other fields should be passed through unaltered. Whereas CNI_ARGS are provided to all plugins, with no indication if they are going to be consumed, Capability arguments need to be declared explicitly in configuration. The runtime, thus, can determine if a given network configuration supports a specific capability. Capabilities are not defined by the specification - rather, they are documented conventions . As defined in section 1, the plugin configuration includes an optional key, capabilities. This example shows a plugin that supports the portMapping capability: ``` { \"type\": \"myPlugin\", \"capabilities\": { \"portMappings\": true } } ``` The runtimeConfig parameter is derived from the capabilities in the network configuration and the capability arguments generated by the runtime. Specifically, any capability supported by the plugin configuration and provided by the runtime should be inserted in the runtimeConfig. 
Thus, the above example could result in the following being passed to the plugin as part of the execution configuration: ``` { \"type\": \"myPlugin\", \"runtimeConfig\": { \"portMappings\": [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] } ... } ``` There are some operations that, for whatever reason, cannot reasonably be implemented as a discrete chained plugin. Rather, a CNI plugin may wish to delegate some functionality to another plugin. One common example of this is IP address management. As part of its operation, a CNI plugin is expected to assign (and maintain) an IP address to the interface and install any necessary routes relevant for that interface. This gives the CNI plugin great flexibility but also places a large burden on it. Many CNI plugins would need to have the same code to support several IP management schemes that users may desire" }, { "data": "dhcp, host-local). A CNI plugin may choose to delegate IP management to another plugin. To lessen the burden and make IP management strategy be orthogonal to the type of CNI plugin, we define a third type of plugin IP Address Management Plugin (IPAM plugin), as well as a protocol for plugins to delegate functionality to other plugins. It is however the responsibility of the CNI plugin, rather than the runtime, to invoke the IPAM plugin at the proper moment in its execution. The IPAM plugin must determine the interface IP/subnet, Gateway and Routes and return this information to the main plugin to apply. The IPAM plugin may obtain the information via a protocol (e.g. dhcp), data stored on a local filesystem, the ipam section of the Network Configuration file, etc. Like CNI plugins, delegated plugins are invoked by running an executable. The executable is searched for in a predefined list of paths, indicated to the CNI plugin via CNI_PATH. The delegated plugin must receive all the same environment variables that were passed in to the CNI plugin. Just like the CNI plugin, delegated plugins receive the network configuration via stdin and output results via stdout. Delegated plugins are provided the complete network configuration passed to the upper plugin. In other words, in the IPAM case, not just the ipam section of the configuration. Success is indicated by a zero return code and a Success result type output to stdout. When a plugin executes a delegated plugin, it should: If a plugin is executed with CNI_COMMAND=CHECK or DEL, it must also execute any delegated plugins. If any of the delegated plugins return error, error should be returned by the upper plugin. If, on ADD, a delegated plugin fails, the upper plugin should execute again with DEL before returning failure. Plugins can return one of three result types: Plugins provided a prevResult key as part of their request configuration must output it as their result, with any possible modifications made by that plugin included. If a plugin makes no changes that would be reflected in the Success result type, then it must output a result equivalent to the provided prevResult. Plugins must output a JSON object with the following keys upon a successful ADD operation: Delegated plugins may omit irrelevant sections. Delegated IPAM plugins must return an abbreviated Success object. Specifically, it is missing the interfaces array, as well as the interface entry in ips. 
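The following Go sketch illustrates how an upper plugin might invoke a delegated IPAM plugin under the rules above: same environment, the complete network configuration on stdin, the result on stdout, and the binary located via CNI_PATH. It is a simplified assumption for illustration, not the libcni invoke package, and the error handling is abbreviated.
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// delegateIPAM executes a delegated IPAM plugin such as "host-local": the
// binary is looked up in CNI_PATH, inherits the same environment (and thus the
// same CNI_COMMAND, CNI_CONTAINERID, ...), receives the full network
// configuration on stdin, and returns its result on stdout.
func delegateIPAM(ipamType string, netConf []byte) (map[string]interface{}, error) {
	bin, err := findInCNIPath(ipamType)
	if err != nil {
		return nil, err
	}

	cmd := exec.Command(bin)
	cmd.Env = os.Environ() // pass through all CNI_* variables unchanged
	cmd.Stdin = bytes.NewReader(netConf)

	var stdout bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = os.Stderr // unstructured logs

	if err := cmd.Run(); err != nil {
		// A real plugin would decode the structured error result from stdout and
		// propagate it; on ADD it would also run the delegate with DEL before
		// returning failure, as described above.
		return nil, fmt.Errorf("delegated plugin %s failed: %w", ipamType, err)
	}

	var result map[string]interface{}
	if err := json.Unmarshal(stdout.Bytes(), &result); err != nil {
		return nil, err
	}
	return result, nil
}

// findInCNIPath searches the directories listed in CNI_PATH for the plugin binary.
func findInCNIPath(name string) (string, error) {
	for _, dir := range filepath.SplitList(os.Getenv("CNI_PATH")) {
		candidate := filepath.Join(dir, name)
		if _, err := os.Stat(candidate); err == nil {
			return candidate, nil
		}
	}
	return "", fmt.Errorf("%s not found in CNI_PATH", name)
}

func main() {
	conf := []byte(`{"cniVersion":"1.0.0","name":"dbnet","type":"bridge","ipam":{"type":"host-local","subnet":"10.1.0.0/16"}}`)
	res, err := delegateIPAM("host-local", conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("delegated IPAM result:", res)
}
```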
Plugins should output a JSON object with the following keys if they encounter an error: Example: ``` { \"cniVersion\": \"1.0.0\", \"code\": 7, \"msg\": \"Invalid Configuration\", \"details\": \"Network 192.168.0.0/31 too small to allocate from.\" } ``` Error codes 0-99 are reserved for well-known errors. Values of 100+ can be freely used for plugin specific errors. | Error Code | Error Description | |-:|:--| | 1 | Incompatible CNI version | | 2 | Unsupported field in network configuration. The error message must contain the key and value of the unsupported field. | | 3 | Container unknown or does not exist. This error implies the runtime does not need to perform any container network cleanup (for example, calling the DEL action on the container). | | 4 | Invalid necessary environment variables, like CNICOMMAND, CNICONTAINERID, etc. The error message must contain the names of invalid variables. | | 5 | I/O failure. For example, failed to read network config bytes from stdin. | | 6 | Failed to decode" }, { "data": "For example, failed to unmarshal network config from bytes or failed to decode version info from string. | | 7 | Invalid network config. If some validations on network configs do not pass, this error will be raised. | | 11 | Try again later. If the plugin detects some transient condition that should clear up, it can use this code to notify the runtime it should re-try the operation later. | In addition, stderr can be used for unstructured output such as logs. Plugins must output a JSON object with the following keys upon a VERSION operation: Example: ``` { \"cniVersion\": \"1.0.0\", \"supportedVersions\": [ \"0.1.0\", \"0.2.0\", \"0.3.0\", \"0.3.1\", \"0.4.0\", \"1.0.0\" ] } ``` We assume the network configuration shown above in section 1. For this attachment, the runtime produces portmap and mac capability args, along with the generic argument argA=foo. The examples uses CNI_IFNAME=eth0. The container runtime would perform the following steps for the add operation. ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` The bridge plugin, as it delegates IPAM to the host-local plugin, would execute the host-local binary with the exact same input, CNI_COMMAND=ADD. 
The host-local plugin returns the following result: ``` { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\" } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` The bridge plugin returns the following result, configuring the interface according to the delegated IPAM configuration: ``` { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"99:88:77:66:55:44\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"99:88:77:66:55:44\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The plugin returns the following result. Note that the mac has changed. ``` { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The portmap plugin outputs the exact same result as that returned by bridge, as the plugin has not modified anything that would change the result (i.e. it only created iptables rules). 
Given the previous Add, the container runtime would perform the following steps for the Check action: ``` { \"cniVersion\":" }, { "data": "\"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The bridge plugin, as it delegates IPAM, calls host-local, CNI_COMMAND=CHECK. It returns no error. Assuming the bridge plugin is satisfied, it produces no output on standard out and exits with a 0 return code. ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` Likewise, the tuning plugin exits indicating success. ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` Given the same network configuration JSON list, the container runtime would perform the following steps for the Delete action. Note that plugins are executed in reverse order from the Add and Check actions. 
``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` ``` { \"cniVersion\": \"1.0.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The bridge plugin executes the host-local delegated plugin with CNI_COMMAND=DEL before returning. 2024 The CNI Authors | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "DANM", "subcategory": "Cloud Native Network" }
[ { "data": "Kubernetes 1.30 supports Container Network Interface (CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your cluster and that suits your needs. Different plugins are available (both open- and closed- source) in the wider Kubernetes ecosystem. A CNI plugin is required to implement the Kubernetes network model. You must use a CNI plugin that is compatible with the v0.4.0 or later releases of the CNI specification. The Kubernetes project recommends using a plugin that is compatible with the v1.0.0 CNI specification (plugins can be compatible with multiple spec versions). A Container Runtime, in the networking context, is a daemon on a node configured to provide CRI Services for kubelet. In particular, the Container Runtime must be configured to load the CNI plugins required to implement the Kubernetes network model. Prior to Kubernetes 1.24, the CNI plugins could also be managed by the kubelet using the cni-bin-dir and network-plugin command-line parameters. These command-line parameters were removed in Kubernetes 1.24, with management of the CNI no longer in scope for kubelet. See Troubleshooting CNI plugin-related errors if you are facing issues following the removal of dockershim. For specific information about how a Container Runtime manages the CNI plugins, see the documentation for that Container Runtime, for example: For specific information about how to install and manage a CNI plugin, see the documentation for that plugin or networking provider. In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network model, Kubernetes also requires the container runtimes to provide a loopback interface lo, which is used for each sandbox (pod sandboxes, vm sandboxes, ...). Implementing the loopback interface can be accomplished by re-using the CNI loopback plugin. or by developing your own code to achieve this (see this example from CRI-O). The CNI networking plugin supports hostPort. You can use the official portmap plugin offered by the CNI plugin team or use your own plugin with portMapping functionality. If you want to enable hostPort support, you must specify portMappings capability in your cni-conf-dir. For example: ``` { \"name\": \"k8s-pod-network\", \"cniVersion\": \"0.4.0\", \"plugins\": [ { \"type\": \"calico\", \"log_level\": \"info\", \"datastore_type\": \"kubernetes\", \"nodename\": \"127.0.0.1\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"usePodCidr\" }, \"policy\": { \"type\": \"k8s\" }, \"kubernetes\": { \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\" } }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true}, \"externalSetMarkChain\": \"KUBE-MARK-MASQ\" } ] } ``` Experimental Feature The CNI networking plugin also supports pod ingress and egress traffic shaping. You can use the official bandwidth plugin offered by the CNI plugin team or use your own plugin with bandwidth control functionality. If you want to enable traffic shaping support, you must add the bandwidth plugin to your CNI configuration file (default /etc/cni/net.d) and ensure that the binary is included in your CNI bin dir (default /opt/cni/bin). 
``` { \"name\": \"k8s-pod-network\", \"cniVersion\": \"0.4.0\", \"plugins\": [ { \"type\": \"calico\", \"log_level\": \"info\", \"datastore_type\": \"kubernetes\", \"nodename\": \"127.0.0.1\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"usePodCidr\" }, \"policy\": { \"type\": \"k8s\" }, \"kubernetes\": { \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\" } }, { \"type\": \"bandwidth\", \"capabilities\": {\"bandwidth\": true} } ] } ``` Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example: ``` apiVersion: v1 kind: Pod metadata: annotations: kubernetes.io/ingress-bandwidth: 1M kubernetes.io/egress-bandwidth: 1M ... ``` Was this page helpful? Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on Stack Overflow. Open an issue in the GitHub Repository if you want to report a problem or suggest an improvement." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Quickly get started and learn how to install, deploy and configure it Discover Cilium Tetragon and its capabilities Learn how to quickly install and start using Tetragon. Tetragon installation and configuration options The concepts section helps you understand various Tetragon abstractions and mechanisms. Library of Tetragon Policies This section presents various use cases on process, files, network and security monitoring and enforcement. How to contribute to the project Low level reference documentation for Tetragon Learn how to troubleshoot Tetragon Additional resources to learn about Tetragon 2024 The Tetragon Authors. All rights reserved The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page. Linux is a registered trademark of Linus Torvalds. Privacy Policy and Terms of Use." } ]
{ "category": "Runtime", "file_name": "github-privacy-statement.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "github-privacy-statement.md", "project_name": "DANM", "subcategory": "Cloud Native Network" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Runtime", "file_name": "understanding-github-code-search-syntax.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
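As a sketch that combines the pieces above, qualifiers, glob paths, and boolean operators can appear together in a single query. The organization and path fragments below are illustrative only, reusing names from the earlier examples:

```
org:github language:go path:*_test.go NOT path:vendor
```

This reads as: files in repositories owned by the github organization, written in Go, whose path matches the *_test.go glob, excluding anything with vendor in its path.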
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
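If the fallback guess is not what you want, the quoting and escaping rules above can make the printf example explicit. A hedged sketch, scoping the exact string to a single language:

```
language:c "printf(\"hello world\\n\");"
```

Here the inner quotation marks are escaped as \" and the literal backslash in \n is doubled to \\, following the escape rules described earlier.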
{ "category": "Runtime", "file_name": "installing.md", "project_name": "FD.io", "subcategory": "Cloud Native Network" }
[ { "data": "If you want to use VPP it can be convenient to install the binaries from existing packages. This guide describes how to pull, install, and run the VPP packages. FD.io VPP packages are stored in Packagecloud.io package repositories. There is a package repository for the latest VPP release packages, as well as a package repository associated with each branch in the VPP git repository. The VPP merge jobs which run on Jenkins (https://jenkins.fd.io) for each actively supported git branch upload packages to Packagecloud that are built from the VPP code in that branch. The following are instructions on how to install VPP on Ubuntu, along with a brief description of the packages installed with VPP. Copyright 2018-2022, Linux Foundation." } ]
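The guide above references the Packagecloud repositories; as a rough sketch only, assuming the fdio/release repository and current package names (both vary by release and branch), pulling and installing the packages on Ubuntu typically looks like this:

```
# Add the FD.io release repository (assumption: fdio/release on Packagecloud)
curl -s https://packagecloud.io/install/repositories/fdio/release/script.deb.sh | sudo bash

# Install the core VPP packages (package names may differ between releases)
sudo apt-get update
sudo apt-get install vpp vpp-plugin-core vpp-plugin-dpdk

# Verify that the vpp service is running and the CLI is reachable
sudo systemctl status vpp
sudo vppctl show version
```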
{ "category": "Runtime", "file_name": "advanced.html.md", "project_name": "FD.io", "subcategory": "Cloud Native Network" }
[ { "data": "Introduction Latest version Installation Management Applications API flexiEdge UI Troubleshooting Guides Videos Open Source The following section covers advanced troubleshooting techniques. flexiWAN relies on number of underlying components, from Ubuntu Server as OS and all its functionalites such as netplan, to VPP and FRR components. flexiWAN utilizes VPP, so packet capture using tcpdump will not show all traffic. Instead use the following commands to capture packets. This can also be done from flexiManage Send Command tab. Baremetal: vppctl pcap dispatch trace on max 1000 file vrouter.pcap buffer-trace dpdk-input 1000 wait 10-15 seconds vppctl pcap dispatch trace off VMware: vppctl pcap dispatch trace on max 10000 file vrouter.pcap buffer-trace vmxnet3-input 1000 wait 10-15 seconds vppctl pcap dispatch trace off After executing the above commands, download the .pcap file from /tmp. In case the file doesnt appear, simply re-run the packet capture commands again. Inspect the pcap using Wireshark. With the VPP pcap each packet appears multiple times showing the path of the packet through the VPP nodes. This is useful to troubleshoot network issues and tunnel connectivity issues. flexiWAN uses VPP network stack. VPP CLI and vppctl commands are available from flexiEdge shell. Learn more about VPP CLI and full list of supported commands here. Most commonly used vppctl commnands: vppctl show int vppctl show hard vppctl show ip addr vppctl show ip fib vppctl arp vppctl show ip arp flexiWAN offers FRR for routing, and its component OSPF to share routing information between sites. The way the system works is that OSPF learns possible shortest paths routes. In this section we cover OSPF troubleshooting basics. Commands can be executed from shell or from the Send Command tab per device. Check if ospf process is running: ps -ef | grep ospf Capture OSPF packets using tcpdump on flexiEdge: tcpdump -n -v -s0 -c 10 -i <Linux i/f>:nnnp proto ospf - capture 10 ospf packets To troubleshoot FRR use the vtysh: Show current FRR configuration: vtysh -c \"show running-config\" Show learned neighbours: vtysh -c \"show ip ospf neighbor\" Show interfaces: vtysh -c \"show ip ospf interface\" Show routes: vtysh -c \"show ip ospf route\" Warning Manually editing OSPF is not supported. Please make all changes through flexiManage instead. flexiWAN supports BGP for routing. Several status commands are supported via vtysh. The following commands can be executed via Command tab from flexiMaange. Summary of BGP neighbor status: vtysh -c \"show bgp summary\" BGP nexthop table: vtysh -c \"show bgp nexthop\" Display number of prefixes for all afi/safi: vtysh -c \"statistics-all\" Show BGP VRFs: vtysh -c \"vrfs\" Global BGP memory statistics: vtysh -c \"show bgp memory\" flexiWAN uses netplan.io for network interfaces configuration. Through Netplan YAML files each interface can be configured. Learn more about Netplan here. During Ubuntu installation user is prompted to select a network interface for internet access. The interface which is selected during setup will be automatically defined in the default Netplan YAML file. This file is used when the vRouter is not running. /etc/netplan/50-cloud-init.yaml Interfaces configuration through flexiManage is saved in netplan files once the vRouter is started. 
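For reference, a minimal sketch of what /etc/netplan/50-cloud-init.yaml typically looks like on a fresh Ubuntu install; the interface name eth0 and DHCP addressing are placeholders, and interfaces assigned in flexiManage are rewritten automatically once the vRouter starts, so manual edits are normally unnecessary:

```
network:
  version: 2
  ethernets:
    eth0:          # placeholder name of the interface chosen during Ubuntu setup
      dhcp4: true  # DHCP on the internet-facing interface
```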
flexiManage does not change unassigned" }, { "data": "Checking tunnel network connectivity with UDP test In case of packet captures showing flexiWAN tunnel traffic is not arriving, run the following test to see if UDP port 4789 is open and not filtered. The test consist of running UDP server script on site A and sending UDP traffic to it from site B. Use the public IP and ports in the case of NAT as seen in the flexiManage public IP on the Device -> Interfaces page. The test in question uses phython3 so make sure to first install pip3: apt install python3-pip Then follow the steps below on each of the flexiEdge devices. Install the tool pip3 install udp-test stop the vRouter from flexiManage devices page on both routers in question. On site A start the server: udp-test server -p 4789 Connect to server on site A from site B udp-test client -h server_IP -p 4789 -l 4789 Where server_IP is IP of the remote site. Try sending messages and confirm its passing. This section will explain how to check the policies in Flexiwan Edge/Router using basic VPP commands Once you configure the policies on the Path Selection page you need to install into the respective Flexiwan Edge/Router, Once you install the policies FlexiManage will push the configuration to the respective Flexiwan Edge/Router. To check whether the policies are pushed we can use the following VPP CLI commands VPP CLI Commands to check Policy To check if LAN outgoing traffic is matching the policy, run the following command: vppctl show fwabf policy The output should be: The above command shows few important counters: matched - Number of packets matched the policy defined applied - Number of packets for which the policy is applied fallback - Number of packets for which the policy is not matched and following fallback action dropped - Number of packets dropped by applying the policy. acl: - Displays acl index number, note it for for the next command. vppctl show acl-plugin acl index <acl index number> - replace acl index number from the output of above command. The above command will show the access control list of a specific ACL index number. flexiWAN offers several commands to display information about firewall and NAT. vppctl show nat44 sessions - view current WAN NAT sessions vppctl show acl-plugin interface - list interfaces with ACL index values. To be used with the next command. vppctl show acl-plugin acl <index> - view allowed / blocked per ACL rule. Under index put the interface from the above command. To view status of IKEv2 connections enter the following command from the device Command tab or using shell: vppctl show ikev2 sa details Advanced logging may be set running the following commands via Command tab or shell: vppctl ikev2 set logging level 5 vppctl event-logger clear vppctl show event-logger After entering the above commands, IKEv2/IPsec logging will be outputed to the device syslog. Syslog can be fetched from flexiManage, by navigating to device Logs tab. To view the DNS servers used by PPPoE conection enter the following command via shell only. systemd-resolve --status The command will output DNS IPs set for all interfaces, search for the one with PPP in name, for example ppp-eth0. flexiWAN includes several commands to troubleshoot and manipulate LTE settings from the command" }, { "data": "First find out the device names by entering the following command: ls -l /dev/cdc-* Output will show device name /dev/cdc-wdm0 or in case of multiple modems a list where each will follow the numbering cdc-wdm0, cdc-wdm1 etc. 
Get mobile provider name: mbimcli --device /dev/cdc-wdm0 -p --query-subscriber-ready-status mbimcli --device /dev/cdc-wdm0 -p --query-registration-state Query IP address: mbimcli --device /dev/cdc-wdm0 -p --query-ip-configuration Show SIM card status: qmicli --device=/dev/cdc-wdm0 -p --uim-get-card-status Output for Card state should be present. Switching SIM slots or when changing SIM card in any slot, run the following commands: qmicli --device=/dev/cdc-wdm0 -p --uim-sim-power-off=1 qmicli --device=/dev/cdc-wdm0 -p --uim-sim-power-on=1 Disconnect from provider network: mbimcli --device /dev/cdc-wdm0 -p --disconnect=0 Connect to the provider network: mbimcli --device /dev/cdc-wdm0 -p --query-subscriber-ready-status mbimcli --device /dev/cdc-wdm0 -p --query-registration-state mbimcli --device /dev/cdc-wdm0 -p --attach-packet-service mbimcli --device /dev/cdc-wdm0 -p --connect=apn=APNNAME,username=USERNAME,password=PASS,auth=PAP For step 4. replace APNNAME, USERNAME, PASS with provider info. Supported auth methods are: PAP, CHAP and MSCHAPV2. Note, use the commands above only for troubleshooting, in case when the device shows LTE as not connected. In most cases, LTE should automatically connect. List all other available: qmicli --help-all Note, not all other commands may work or are supported. Certain Quectel modules such as Quectel RM500Q-AE may experience issue when SIM card isnt detected. A specific modem configuration is required before registering the device to flexiManage. Note, if the device is already registered, simply delete it it from flexiManage and re-register after completing the steps below. First step is to install minicom to access the modem using its serial port. apt install minicom After installtion run minicom -s to start minicom. Then select Serial port setup. On the next screen type A on the keyboard to select Serial Device and change it to /dev/ttyUSB2. Press enter twice to confirm. Finally, select Save setup as dfl and select Exit from Minicom. To verify sim card can now be detected, enter minicom -s again and run the following commands: AT - to make sure modem is responding to commands. AT+QUIMSLOT? - the expected output is 1. If the value is 2. follow the steps 3. and 4. AT+QUIMSLOT=1 AT+QPOWD=0 - the following command restarts the modem. After modem restart re-try step 2. flexiWAN can switch supported LTE modems from QMI to MBIM and vice versa from the command line. Please add to our docs how to switch modem from QMI to MBIM, and from MBIM to QMI. Sierra Wireless qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-swi-set-usb-composition=8 qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=offline qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=reset Quectel Use minicom, run: AT+QCFG=\"usbnet\",2 AT+QPOWD=0 Sierra Wireless qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-swi-set-usb-composition=6 qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=offline qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=reset Quectel Use minicom, run: AT+QCFG=\"usbnet\",0 AT+QPOWD=0 flexiWAN 6.2.1 introduced new Host OS which upgrade should be painless and fast. However, in some rare cases device may not be bootable due kernel issue. If an error in boot is as following, please follow the steps bellow to resolution for a quick fix. 
end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block Reboot the device again. In the bootloader (GRUB) menu, select the previous boot entry, the one with the second-latest Linux kernel version available. Once the device has booted up properly, log in to the device with the admin user and run the following commands to recover from that situation: sudo update-initramfs -u -k all sudo update-grub Grab the output of the above commands. Reboot the device again by executing reboot. flexiWAN should now boot normally. Copyright 2019-2024, flexiWAN" } ]
{ "category": "Runtime", "file_name": "github-terms-of-service.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "NE-platform-architecture.htm.md", "project_name": "FD.io", "subcategory": "Cloud Native Network" }
[ { "data": "This topic focuses on the guiding principles and standards of the architecture. The software stack and major components of the OSS/BSS, as well as the packet walk from virtual devices to the cloud and other destinations are covered. The Network Edge architecture includes a fully capable stack of hardware, software, and design principles that derive from multiple standards bodies and vendors. Equinix has built a full stack platform for the Network Edge based on the standards set forth by ETSI, the European Telecommunications Standards Institute. Specifically, the ETSI established an NFV Industry Specification Group that has defined much of the landscape for network functions virtualization. The ETSI NFV framework consists of three major components: Network Functions Virtualization Infrastructure (NFVI): A subsystem that consists of all the hardware (servers, storage, and networking) and software components on which Virtual Network Functions (VNFs) are deployed. This includes the compute, storage, and networking resources, and the associated virtualization layer (hypervisor). Management and Orchestration (MANO): A subsystem that includes the Network Functions Virtualization Orchestrator (NFVO), the virtualized infrastructure manager (VIM), and the Virtual Network Functions Manager (VNFM). Virtual Network Functions (VNFs): The software implementation of network functions that are instantiated as one or more virtual machines (VMs) on the NFVI. Overlaid on this framework is legacy, current, and new operational and business support systems that Equinix has procured or built over the years, resulting in a standardized architecture: Within each component are multiple systems, some of which are described in more detail below. The core concept behind NFV is to implement these network functions as pure software that runs over the NFVI. A VNF is a virtualized version of a traditional network function, such as a router or firewall but it could also be a discrete action such as NAT or BGP. This concept is radically different from the traditional hardware deployment implementation in many ways. Decoupling of the software from the hardware allows the lifecycle and development of each of these network functions in separate cycles. This decoupling allows for a model where the hardware/infrastructure resources can be shared across many software network functions. A VNF implementation (like a virtual router or virtual switch) doesnt usually change the essential functional behavior and the external operational interfaces of a traditional Physical Network Function (PNF), like a traditional router or a switch. The VNF can be implemented as a single virtual machine, multiple VMs, or a function implemented within a single shared VM with other functions. Within the NFVi component of the architecture resides most of the hardware deployment. Equinix deploys a full complement of compute nodes, management devices, top of rack aggregation switches, border routers to other services, storage, and other aspects that enable the full" }, { "data": "The depth and size of each deployment can vary depending on market, projections, capacity and other factors. We refer to this full suite as a Point of Deployment, or POD. Each POD is independent of every other POD, even if more than one POD is deployed in the same metro. A full POD also includes redundant top of rack aggregation switches and management switches for internal use such as operations/support, monitoring or ongoing orchestration of new assets. 
Within the POD Equinix hosts virtual machines that run the software images of each VNF. Our VMs are KVM-based and the infrastructure is on an Openstack platform. Each virtual device is logically connected to the aggregation switches and interconnection platforms above it using VXLAN technology, and a VPP orchestrates the networking between them and in and out of the POD: The VPP is the vector packet processing software that makes intelligent decisions about switching and routing of packets. The VPP passes traffic back and forth to Equinix Fabric and EC (Internet) interconnection platforms and maintains full redundancy in case of failures at the POD level. For information about redundant and resilient architecture, see Architecting Resiliency. The NFV Management and Orchestration suite has several key software components that facilitate the platform. This portion of the reference architecture is often referred to as management and orchestration (MANO) Virtual Infrastructure Management (VIM) Handles the instantiation, configuration, reservation, and other functions of the compute, storage, and other traditional infrastructure elements. Virtual Network Functions manager (VNFM) Handles lifecycle, monitoring, and other activities of active virtual devices. Runs the workflow of deploying a device, change management, and ultimately teardown/deletion of devices. Network Functions Virtualization Orchestrator (NFVO) ensures that the right configs are loaded to software images, inventory is fetched and reserved (such as IP addresses and VXLANs), and other features where coordination with other systems and OSS/BSS are required. Equinix maintains redundant orchestrators in each region. When a request is made through the portal or API, it reaches into the relevant orchestrator to begin the process of reserving assets, inventory, and selecting an appropriate configuration and image for the requested device or service. Here is an example of the flow and interaction between the various systems in a specific region: When needed, the Network Edge orchestrator interacts with the Equinix Fabric orchestrator to coordinate activities of activating a connection from the interface of a VNF to the cloud or other destination of choice. Each activity is regularly checking inventory to see what is available and reserve bandwidth, IP addressing, VLANs or other logical resources so that they are not taken by any other device in the" }, { "data": "Equinix also includes a host of internal management and monitoring tools, some parts which you might see. Our suite includes: Monitoring Health and performance of physical and logical assets, such as CPU and RAM utilization POD-level views into physical and virtual active components and objects Analysis and Reporting Service impact analysis to determine the relationships between different components and the effect each has on the other when changes or events occur POD capacity forecasting let's our engineers know in advance when augments to compute, network, or other assets is going to be needed Automation Auto-discovery when capacity is augmented and added to the POD or uplinks to other platforms, and quickly becomes usable Reports on and reacts to POD-level health and changes Fully integrated with the VIM Network Edge uses EVPN/VXLAN for the control and data plane functions. The main purpose for the Layer 2 control plane and MAC learning is to establish Layer 2 reachability between the VNF and the respective CSP router. 
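For readers less familiar with VXLAN itself, the short sketch below is an illustrative C rendering of the 8-byte VXLAN header defined in RFC 7348, whose 24-bit VNI is what keeps each virtual device's Layer 2 segment isolated between VTEPs. This is not Equinix code; the type and helper names (vxlan_header_t, vxlan_vni) are placeholders chosen here for illustration. ```
#include <stdint.h>

/* Illustrative only: the 8-byte VXLAN header from RFC 7348. The inner
 * Ethernet frame (VNF or CSP MAC) rides after this header inside a
 * UDP/IP packet exchanged between the two VTEPs. */
typedef struct {
  uint8_t flags;        /* 0x08 when a valid VNI is present */
  uint8_t reserved1[3];
  uint8_t vni[3];       /* 24-bit VXLAN Network Identifier */
  uint8_t reserved2;
} vxlan_header_t;

/* Hypothetical helper: read the 24-bit VNI as a host integer. */
static inline uint32_t vxlan_vni (const vxlan_header_t *h)
{
  return ((uint32_t) h->vni[0] << 16) | ((uint32_t) h->vni[1] << 8) | (uint32_t) h->vni[2];
}
```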
Once Layer 2 connectivity is established, Layer 3 peering can be established between the VNF and its respective peer. Therefore, only two MAC addresses are learned across a single VNI because that is all that is needed for connectivity, while many MAC addresses are learned across the VTEP (two for each VNI). The following shows the data flow from the middle working outward to establish route peering. The infrastructure control plane consists of EVPN between the compute VTEP and Equinix Fabric VTEP to enable dynamic MAC address learning while VXLAN is used as the data plane between the compute and Equinix Fabric nodes. Additionally, the VNI is mapped to the VPP vSwitch and the MAC address gets encapsulated on ingress to the VPP and tagged in the example below with the VNI of 10. Before an overlay control plane session between the private cloud and VNF can be established, one more leg of the Layer 2 control plane between the Equinix Fabric and the respective private cloud must exist. During the provisioning process when connecting to a CSP a VLAN gets dynamically instantiated and connected to the Equinix Fabric switch, typically over a .1q connection. The MAC address of the CSP is then learned over this .1q trunk port as shown below. In the example below, MAC:01B from the CSP is learned at the physical port on the Equinix Fabric switch through VLAN 462. The last connection needed to complete the Layer 2 control plane is done through routing instances (RI) on the Equinix Fabric switch which forms an internal link for the EVPN session. Once the last leg of the Layer 2 control plane has been completed, the overlay Layer 3 control plane for BGP peering can be established. The entire solution looks like this end to end:" } ]
{ "category": "Runtime", "file_name": "VPP_Implementation.md", "project_name": "FD.io", "subcategory": "Cloud Native Network" }
[ { "data": "This document describes Terragraph's datapath implementation using VPP. Terragraph uses VPP (Vector Packet Processing framework), along with DPDK and a custom wil6210 PMD, to implement its datapath. Terragraph uses the NXP fork of standard VPP, with a large number of Terragraph-specific patches applied on top. The current version is NXP release 21.08-LSDK, corresponding to DPDK 20.11.2 and VPP 21.01. VPP is installed via meta-qoriq/recipes-extended/vpp/vpp_21.08-lsdk.bb. There are also some scripts related to VPP service management and startup configuration installed via meta-qoriq/recipes-facebook/tg-vpp/tg-vpp_0.1.bb. Terragraph builds additional features on top of VPP, summarized below: | Name | Type | Source Location | Description | |:--|:|:-|:| | vpp-tgcfg | plugin | src/vpp_plugins/tgcfg/ | Configures the Terragraph-specific slow datapath. | | vpp-chaperone | application | src/vpp_plugins/vpp-chaperone/ | Applies user configs to VPP (e.g. POP, CPE, SR) over the shared memory API. | | openr-fib-vpp | application | src/vpp_plugins/openr-fib-vpp/ | The Open/R FIB (forwarding information base) implementation for VPP. | | vpp-ptptc | plugin | src/vpp_plugins/ptptc/ | Configures PTP-TC (Precision Timing Protocol Transparent Clock) timestamping in VPP. | | vpp-esmc | plugin | src/vpp_plugins/esmc/ | Configures SyncE (Synchronous Ethernet) ESMC (Ethernet Synchronization Messaging Channel) operation in VPP. | The following VPP services are managed by runit: vpp, fib_vpp, and vpp_chaperone. Logs for each service are written to /var/log/<service>/current by svlogd. Additionally, DPDK/PMD logs captured by VPP are extracted from syslog and written to /var/log/vpp/vnet.log. For more details, see Node Services. The lifetimes of certain services are tied together: Startup configuration for VPP is generated at boot time by src/terragraph-e2e/lua/updatevppstartup_conf.lua using parameters read from the node configuration and node info files. A separate coop service monitors the Linux loopback prefix for changes and reconfigures the VPP loopback prefix if necessary by running vpp_chaperone. Some VPP stats are scraped by the stats_agent process and published on a ZMQ port (see Stats, Events, Logs for details). The flowchart below demonstrates the path that data packets take through VPP and special hooks established in the VPP packet processing graph by the vpp-tgcfg plugin. This is explained in detail in subsequent sections. All data packets received from hardware enter VPP through the dpdk-input node. This node normally passes data to the ethernet-input node, which then classifies packets as IPv4, IPv6, WPA, and so on. This document will focus on IPv6 packets, which are the primary data packet type being dealt with. Each IPv6 packet ends up in the ip6-input node, which is mostly responsible for determining if the packet is supposed to transit out of the node. This involves somewhat complex logic that is covered by the \"Local?\" decision node, without going into details on how this is actually implemented. For example, all unicast packets get their destination address looked up in the IPv6 FIB. All link-local addresses will hit an entry in the FIB that will redirect them to the ip6-link-local node, which in turn will perform a lookup in a separate FIB to find out if the destination address matches one of the interfaces that VPP recognizes. If it does, the packet gets forwarded into the ip6-local node to be handled on the node. 
Otherwise, the packet is for an invalid address and gets" }, { "data": "The FIB entry to redirect all link-local packets to ip6-link-local looks like this: ``` fe80::/10unicast-ip6-chain [@15]: ip6-link-local``` The following trace fragment demonstrates the handling of link-local packets in VPP by this path: ``` 00:53:14:979091: ip6-lookup fib 0 dpo-idx 0 flow hash: 0x00000000 ICMP6: fe80::6ce:14ff:feff:4281 -> fe80::6ce:14ff:feff:428b tos 0x00, flow label 0xb271e, hop limit 64, payload length 64 ICMP echorequest checksum 0xfd8700:53:14:979102: ip6-link-local swifindex:14 fibindex:1000:53:14:979106: ip6-lookup fib 10 dpo-idx 14 flow hash: 0x00000000 ICMP6: fe80::6ce:14ff:feff:4281 -> fe80::6ce:14ff:feff:428b tos 0x00, flow label 0xb271e, hop limit 64, payload length 64 ICMP echorequest checksum 0xfd8700:53:14:979110: ip6-local ICMP6: fe80::6ce:14ff:feff:4281 -> fe80::6ce:14ff:feff:428b tos 0x00, flow label 0xb271e, hop limit 64, payload length 64 ICMP echorequest checksum 0xfd87``` Similarly, all multicast addresses are looked up in the ip6-mfib FIB table and either get dropped or get forwarded to ip6-local to be handled on the node: ``` 00:52:58:405106: ethernet-input IP6: 04:ce:14:ff:42:81 -> 04:ce:14:ff:42:8b00:52:58:405118: ip6-input UDP: fe80::6ce:14ff:feff:4281 -> ff02::1 tos 0xc0, flow label 0x5d69e, hop limit 255, payload length 171 UDP: 6666 -> 6666 length 171, checksum 0x63e700:52:58:405128: ip6-mfib-forward-lookup fib 0 entry 400:52:58:405136: ip6-mfib-forward-rpf entry 4 itf 14 flags Accept,00:52:58:405144: ip6-replicate replicate: 2 via [@1]: dpo-receive00:52:58:405153: ip6-local UDP: fe80::6ce:14ff:feff:4281 -> ff02::1 tos 0xc0, flow label 0x5d69e, hop limit 255, payload length 171 UDP: 6666 -> 6666 length 171, checksum 0x63e7``` All known unicast addresses for existing interfaces in VPP will get an entry in the IPv6 FIB, similar to this one, which will redirect matching packets to the ip6-local node as well: ``` 71::3/128 unicast-ip6-chain [@0]: dpo-load-balance: [proto:ip6 index:66 buckets:1 uRPF:71 to:[3:312]] [0] [@2]: dpo-receive: 71::3 on loop1``` A sample trace fragment is given below: ``` 01:21:15:353214: ethernet-input IP6: 04:ce:14:ff:42:81 -> 04:ce:14:ff:42:8b01:21:15:353224: ip6-input ICMP6: 2801:b8:fb:fb9c::1 -> 71::3 tos 0x00, flow label 0x27108, hop limit 63, payload length 64 ICMP echorequest checksum 0xbd0201:21:15:353233: ip6-lookup fib 0 dpo-idx 5 flow hash: 0x00000000 ICMP6: 2801:b8:fb:fb9c::1 -> 71::3 tos 0x00, flow label 0x27108, hop limit 63, payload length 64 ICMP echorequest checksum 0xbd0201:21:15:353243: ip6-local ICMP6: 2801:b8:fb:fb9c::1 -> 71::3 tos 0x00, flow label 0x27108, hop limit 63, payload length 64 ICMP echo_request checksum 0xbd0201:21:15:353252: tg-slowpath-terra-rx``` The important takeaway from the above examples is as follows: any packet that is to be handled on the node itself eventually goes though a FIB lookup that returns a dpo-receive object. This tells VPP that the packet should be handled by the ip6-local node as the next step. The DPO (\"data-path object\") also encodes the interface for which the packet is received, so upon entry ip6-local knows the interface that has received the packet and the interface for which the packet was sent. All other packets either match a valid entry in the FIB and are forwarded to the appropriate interface to be sent out, or they are dropped as invalid. The vpp-tgcfg plugin installs several hooks into the VPP packet graph and configures certain interfaces and their properties on startup. 
The primary goal for these is to handle the Terragraph-specific slow datapath, where some traffic is handled entirely in VPP (fast path) and some gets forwarded to the Linux kernel to be handled there (slow path). The slow path is enabled by" }, { "data": "It can be switched off by adding the \"slowpath\" directive in the \"terragraph\" section of VPP's startup configuration: ``` terragraph { slowpath off}``` VPP maintains several \"split-level\" interfaces: The VPP plugin identifies all Wigig interfaces at startup time, establishes a side-channel API connection to corresponding PMD and consequently dpdk-dhd.ko running in the kernel automatically. This auto-probe can be switched off by adding the \"auto-probe off\" directive in the \"terragraph\" section of VPP's startup configuration: ``` terragraph { auto-probe off}``` VPP creation of the vpp-vnet0/vnet0 tap interface pair is enabled by the following directive in the startup configuration: ``` terragraph { host interface vnet0}``` The vpp-nic0/nic0 tap interface pair is created by the following directive in the startup configuration: ``` terragraph { interface TenGigabitEthernet0 { tap nic0 }}``` The directive example above will create nic0 on the kernel side and vpp-nic0 in VPP, and arrange for slow path processing similar to processing done for Wigig/vpp-terra0/terra0. The MAC address of the nic0 interface in the kernel will match the MAC address of the corresponding TenGigabitEthernet0 interface in VPP. The following section lists the graph nodes that vpp-tgcfg implements, their functions, and where they are connected into the packet processing graph. This node intercepts all packets received from WigigX/Y/Z/0 interfaces and examines packet metadata to find out what vpp-terraX interface should be marked as the receiving interface for the packet. The packet metadata is stored in the rte_mbuf structure using a custom dynamic field (or dynfield). It then forwards the packets to the normal VPP data processing path, namely the ethernet-input node. As far as VPP is concerned, the packets are received by proper vpp-terraX interfaces from now on. The node is inserted into the graph by a call in the tginterfaceenable() function: ``` / Let incoming traffic to be assigned to correct link interface / vnetfeatureenabledisable (\"device-input\", \"tg-link-input\", swifindex, enabledisable, 0, 0);``` These two nodes intercept all packets from vpp-terraX and slow path-enabled wired ports hitting the ip6-local node, before any processing has been done for them. These nodes examine the interface addressed by the packet and if it happens to belong to any slow path-enabled interface, the packet is forwarded to tg-link-local-tx or tg-wired-local-tx nodes for packets received over Wigig or over wired ports, respectively. The packet will then be forwarded to the kernel and inserted into the Linux stack as appropriate. Packets addressed to any other interface are processed by VPP locally. Packets addressing loop0 fall into this category. Instantiated as features on the ip6-local arc in the tgwiredinterfaceenable() and tginterface_enable() functions. This node accepts packets and calls into the PMD/dpdk-dhd to get them delivered to the kernel. The kernel module then injects packets onto the Linux networking stack using netif_rx on behalf of the proper terraX interface. The node is scheduled as the next node explicitly by tg-slowpath-terra-rx when necessary. This node accepts packets that came from a slow path-enabled wired port (e.g. 
TenGigabitEthernet0), and forwards them to the interface output of the corresponding vpp-nicX interface. This in turn will make tapcli-tx send packets over to the Linux side and be inserted into the Linux kernel stack on behalf of the matching nicX interface. The node is scheduled as the next node explicitly by tg-slowpath-wired-rx when necessary. This node accepts all packets that VPP wants to be sent over the air using one of the vpp-terraX interfaces as the output. Since vpp-terraX are virtual interfaces representing the link, the node performs a function logically opposite to that of the tg-link-input node: it uses vpp-terraX information to mark the packet with the desired link to be used and then forwards the packet to WigigX/Y/Z/0 for actual transmission. This is not an actual node, but rather a TX function tg_link_interface_tx() registered as the output handler of the vpp-terraX interface class. VPP creates vpp-terraX-tx nodes with this function as the handler automatically. ``` VNET_DEVICE_CLASS (tg_link_interface_device_class) = { .name = \"TGLink\", .format_device_name = format_tg_link_name, .format_tx_trace = format_tg_link_tx_trace, .tx_function = tg_link_interface_tx, .admin_up_down_function = tg_link_interface_admin_up_down,};``` This node gets scheduled every time the kernel wants to send packets over any terraX interface. The dpdk-dhd.ko will relay them into the AF_PACKET queue and make the queue socket FD readable, and that eventually results in VPP invoking the node's tg_link_local_rx() function. It will fetch all packets from the kernel, convert them into VPP buffers, mark them with the proper link interface, and invoke TX on the real Wigig interface. ``` clib_file_main_t *fm = &file_main; clib_file_t template = {0}; template.read_function = tg_link_local_rx_fd_read_ready; template.file_descriptor = wi.data_fd; template.description = format (0, \"%s\", \"wigig-local-rx\"); template.private_data = vec_len (tm->wigig_devs); wdev.clib_file_index = clib_file_add (fm, &template);``` This node intercepts packets received from the nicX tap interface and sends them directly over the corresponding wired port. It is similar in function to tg-link-local-rx, but for wired interfaces. Instantiated in the tg_wired_interface_enable() function in the plugin. Terragraph uses a custom Hierarchical Quality-of-Service (HQoS) scheduler on traffic transmitted out of Wigig interfaces for managing prioritization of different traffic types. The implementation is originally based on VPP's now-deprecated HQoS functionality that used the DPDK QoS framework implemented in DPDK's librte_sched library. The HQoS implementation exists as a separate module within VPP's DPDK plugin. It is included as a series of patches in meta-qoriq/recipes-extended/vpp/vpp_19.09-lsdk.bb, with code added into the directory vpp/src/plugins/dpdk/tghqos. The HQoS scheduler has 4 hierarchical levels, consisting of (1) port, (2) pipe, (3) traffic class, (4) queue. A port is a Wigig device, and a pipe is a Wigig peer. Packet IPv6 DSCP headers are used to classify packets into different traffic classes with 3 different colors. Currently Terragraph uses 4 traffic classes with 1 queue per traffic class. The color is coded as the drop precedence of the packet in accordance with RFC 2597. To avoid congestion, packets arriving at the scheduler go through a dropper that supports Random Early Detection (RED), Weighted Random Early Detection (WRED), and tail drop algorithms.
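As a rough illustration of the classification step just described, the sketch below maps an IPv6 DSCP value to a traffic class and color in the spirit of the RFC 2597 AF classes. It is a hedged example rather than the shipped code: the function name, the assumption that TC0 is the highest-priority class, and the exact DSCP-to-class assignment are ours, while Terragraph's real table lives in the tghqos configuration. ```
#include <stdint.h>

enum tg_color { TG_GREEN = 0, TG_YELLOW = 1, TG_RED = 2 };

/* Hypothetical DSCP -> (traffic class, color) mapping sketch. For AF
 * codepoints (AF11..AF43) the class bits select one of the 4 traffic
 * classes and the drop-precedence bits select the color; anything else
 * falls back to the lowest-priority class, colored green. */
static inline void
tg_classify_dscp (uint8_t dscp, uint32_t *tc, enum tg_color *color)
{
  uint8_t af_class = dscp >> 3;          /* 1..4 for AF1x..AF4x */
  uint8_t drop_prec = (dscp >> 1) & 0x3; /* 1..3 = low..high drop precedence */

  if (af_class >= 1 && af_class <= 4)
    {
      *tc = 4 - af_class;                /* assumes TC0 is served first */
      *color = (drop_prec >= 1) ? (enum tg_color) (drop_prec - 1) : TG_GREEN;
    }
  else
    {
      *tc = 3;
      *color = TG_GREEN;
    }
}
```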
The RED dropper is the same as the original one in librte_sched, ported into VPP to decouple the HQoS implementation from any specific DPDK libraries. The HQoS scheduler has many associated CLI commands for configuration, testing, and stat collection purposes. The HQoS-specific CLI commands are provided and documented in tghqos_cli.c, including example command invocations and outputs. The HQoS scheduler is logically part of the VPP Wigig output interface. When a packet is directed to a Wigig interface for tx (WigigX/Y/Z/0-tx), it will first go through the HQoS scheduler before being passed to the driver for transmission. The HQoS scheduler performs classification, enqueue, and dequeue operations on each packet. For classification, the scheduler reads the mbuf's link id metadata stored in a dynfield of the mbuf to determine the pipe to which the packet is assigned, and it maps the DSCP value to the appropriate traffic class, queue, and color. This information gets stored in the mbuf's sched field. The enqueue operation uses this information to store the packet in the appropriate queue, dropping the packet as the RED dropper algorithm deems necessary or if the queue is full. The dequeue operation currently uses strict priority scheduling by default, where packets are dequeued from the HQoS scheduler in strict priority order according to traffic class. Since the Wigig driver has separate transmit rings for each peer, the HQoS scheduler uses feedback from the driver about each transmit ring's occupancy to determine how many packets to dequeue for each pipe. Note that the traffic class prioritization is thus applied within each pipe independently, and not across all pipes at once. After packets are dequeued from the HQoS scheduler, they are passed to the driver for transmission. Weighted round robin scheduling with priority levels is also provided as an option. Traffic classes are assigned a priority level and a weight. Traffic classes at a higher priority level are served before traffic classes at a lower priority level. Traffic classes at the same priority level are served in accordance with their assigned weight. Each traffic class must have a non-zero weight in one priority level. Besides that requirement, weights can be arbitrary non-negative integers; for each traffic class the scheduler will use the proportion of its weight to the sum of weights in that priority level to determine the proportion of packet segments that traffic class can send during each transmission period. By default, the HQoS operations for a Wigig device run in the VPP node graph on the same worker thread used for its rx processing, but it can also be configured to use separate threads (see Interface Thread Assignment). The Wigig device software has an MTU of 4000 bytes, so any packets of length greater than 4000 are considered jumbo frames that must be segmented before being transmitted across a Wigig link. Packets are represented as DPDK rte_mbuf structs, and each rte_mbuf has 2176 bytes of data space in VPP. Packets of total packet length greater than 2176 are represented as a chain of rte_mbuf segments. Multi-segment packets are also chained with vlib_buffer_t linkage for processing in VPP.
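For reference, such a chained packet can be inspected with ordinary rte_mbuf fields. The minimal sketch below is plain DPDK rather than Terragraph-specific code; it walks a chain and tallies segments and bytes, so a frame near the 4000-byte Wigig MTU held in 2176-byte buffers reports two segments. ```
#include <stdint.h>
#include <rte_mbuf.h>

/* Walk an rte_mbuf chain and report how many segments carry the packet
 * and how many payload bytes they hold in total. For a well-formed
 * chain the results match m->nb_segs and m->pkt_len. */
static inline uint16_t
count_mbuf_segments (const struct rte_mbuf *m, uint32_t *total_bytes)
{
  uint16_t segs = 0;
  uint32_t bytes = 0;
  const struct rte_mbuf *seg;

  for (seg = m; seg != NULL; seg = seg->next)
    {
      segs++;
      bytes += seg->data_len; /* bytes held in this segment only */
    }
  if (total_bytes != NULL)
    *total_bytes = bytes;
  return segs;
}
```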
After VPP receives packets via dpdk-input, the vlib buffer's current_data offset pointer, which is used as the start of data to be processed in VPP, is set to mbuf->data_off - 128 (RTE_PKTMBUF_HEADROOM = VLIB_BUFFER_PRE_DATA_SIZE = 128):

```
b[0]->current_data = mb[0]->data_off - RTE_PKTMBUF_HEADROOM;
```

This refers to the same address as mbuf->buf_addr + mbuf->data_off. Jumbo frames can be easily segmented by cutting the mbuf linkage along the mbuf chain. After inserting a custom header, these segments can be enqueued for transmission as individual packets.

Segmented Terragraph jumbo frame packets are identified by the custom Ethernet type 0xFF71 (TG_JUMBO). Since segmented Terragraph packets exist only on a Wigig segment between two TG nodes, the value of the Ethernet type itself is not important. The combination of the custom fields packet_id, seg_index, and last_seg is used in segmentation and reassembly. A protocol_version field is included for future use. The current maximum supported jumbo frame packet size in VPP is about 18KB. This is determined by the number of segments that can be stored according to TG_JUMBO_FRAME_SEG_ARRAY_LEN.

Segmentation is done prior to the input of the HQoS transmit queue in the DPDK plugin. Jumbo frames are identified by packet length and are segmented by breaking the mbuf chain. Custom segmentation headers are prepended to each segment's packet data. There is no guarantee that each segment has sufficient headroom to fit the segmentation header, so packet data must be shifted to allow enough space for the header. Each rte_mbuf would typically be allocated RTE_PKTMBUF_HEADROOM (128) bytes of headroom, but drivers may choose to leave 0 bytes of headroom for non-initial segments (e.g. the DPAA2 driver). Accordingly, segments that require data shifting will likely not have enough tailroom, and will require the tail end of their packet data to be copied to the next segment. After cutting segment chains, shifting and copying data, and inserting headers, segments are enqueued to the HQoS transmit queue as individual packets. No further VPP node processing will occur, so there is no need to address vlib_buffer_t linkage.

Reassembly is done as soon as packets are received from the driver in the DPDK plugin, prior to injecting the packet into the rest of the VPP node graph. Only one fragmented packet can be in flight at a time, per flow. Since the Wigig has no concept of traffic class, there is a maximum of one fragmented packet per peer per sector. After an array of mbufs is received from the driver, jumbo frame segment packets are identified by their Ethernet type and removed from the mbuf processing array. They are stored in an array per device according to the link id found in a dynfield of the mbuf and its seg_index field. After a segment with the last_seg field set is received, reassembly is attempted by first checking that there is a segment for every prior index and verifying that all stored segments have the same packet_id. Then the linkage for the packet is created by chaining the mbufs, and each mbuf segment's data_off field is incremented to skip over the custom segmentation header. The reassembled single packet is then passed on to dpdk_process_rx_burst for further processing, including creating vlib_buffer_t linkage.
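For illustration, a per-segment header carrying the fields named above might be laid out as in the sketch below. The struct name, field widths, and field ordering are assumptions made for the example; only the 0xFF71 ethertype and the field names (protocol_version, packet_id, seg_index, last_seg) come from the text, and the real TG_JUMBO header in the plugin may differ.

```c
#include <stdint.h>

#define ETH_TYPE_TG_JUMBO 0xFF71  /* custom ethertype used between TG nodes */

/* Hypothetical on-wire layout of the segmentation header that is
 * prepended to each segment's packet data. Packed so the compiler
 * inserts no padding between fields. */
struct tg_jumbo_hdr {
  uint8_t  protocol_version;  /* reserved for future use */
  uint8_t  last_seg;          /* non-zero on the final segment */
  uint16_t seg_index;         /* position of this segment in the frame */
  uint32_t packet_id;         /* identical on every segment of one frame */
} __attribute__ ((packed));

/* Reassembly can complete once a segment with last_seg set has arrived
 * and segments 0 .. seg_index are all present with a matching packet_id. */
```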
The table below lists all VPP and PMD threads along with their CPU core allocation on Puma hardware (4 cores).

| Thread Name | Core # | Purpose |
|:-|:-|:-|
| vpp_main | 1 | Handles CLI, Wigig, and network events, etc. |
| vpp_wk_X | 2-3 | Datapath workers (one per CPU core) |
| eal-intr-thread | 0 | Primary PCI interrupt dispatcher |
| wil6210_wmiX | 1 | Interrupt handlers (one per Wigig card) |
| poll-dhdX | 1 | ioctl dispatcher (one per Wigig card) |
| wil-fw-log-poll | 1 | Firmware/ucode log polling (one per Wigig card) |
| nl60g-poll | 1 | Communicates with the host_manager_11ad utility (one per Wigig card) |

The core allocation is set by update_vpp_startup_conf.lua using the VPP configuration options cpu skip-cores, cpu workers, and dpdk dev <pci_id> workers. Threads created by the PMD are set to use the master lcore ("logical core") for DPDK, i.e. the core used for vpp_main. On Puma, CPU cores 2 and 3 are reserved for the datapath, enforced by passing isolcpus=2,3 in the Linux kernel configuration. Most user processes are expected to run on CPU core 1, configured by runit via taskset (refer to the runit scripts for more details). However, the remaining core is not used for the datapath, as several firmware crashes have been observed when reserving three datapath cores.

When an interface is assigned to a worker thread, its rx processing begins on that thread, and packets received on that interface continue through the VPP node graph on that thread. When these packets are directed to tx interfaces without HQoS enabled, they are sent to the driver of the tx interface on the same thread on which they were received. When these packets are directed to tx interfaces with HQoS enabled, packets are not immediately sent to the driver, and the next immediate step is determined by whether separate HQoS threads are enabled. The current default configuration has HQoS enabled on Wigig devices and no HQoS threads enabled. HQoS threads can be enabled by using the cpu corelist-tghqos-threads option in VPP's startup configuration and moving all interface rx processing onto one worker thread. Enabling HQoS threads may be beneficial for certain types of traffic, particularly with small packet sizes on nodes with 1 or 2 Wigig sectors, but performance suffers when passing traffic at maximum throughput on 3 or 4 sectors.

This section describes the flow of firmware ioctl calls (initiated by user-space programs) and firmware events (initiated by firmware) within VPP. For a system-level view of these message paths, refer to Driver Stack.

Firmware ioctl calls are initiated from user-space programs (e.g. e2e_minion) and require a completion notification from firmware. The path is as follows:

| Sequence | Entity | Actions |
|--:|:-|:-|
| 1 | poll-dhdX | Wake on dhdX network interface; wmi_send(); write ioctl message to PCI |
| 2 | Wigig firmware | Process ioctl and generate completion event |
| 3 | eal-intr-thread | Wake on VFIO_MSI interrupt; read firmware message from PCI; dispatch to per-interface work thread |
| 4 | wil6210_wmiX | Wake on Linux futex (pthread_cond_wait()); read firmware message from work queue; process completion event |
| 5 | poll-dhdX | Wake on Linux futex (pthread_cond_wait()); send completion event to driver |

Firmware events are initiated by firmware and do not require a completion notification. The path is as follows:

| Sequence | Entity | Actions |
|--:|:-|:-|
| 1 | Wigig firmware | Generate event |
| 2 | eal-intr-thread | Wake on VFIO_MSI interrupt; read firmware message from PCI; dispatch to per-interface work thread |
| 3 | wil6210_wmiX | Wake on Linux futex (pthread_cond_wait()); read firmware message from work queue; send event to driver |
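The per-interface work threads in the tables above block on a condition variable until the interrupt dispatcher hands them a message. The sketch below shows that generic producer/consumer wake-up pattern in plain pthreads; the mailbox type and function names are invented for the example and are not the actual wil6210 or dpdk-dhd code.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical one-slot mailbox fed by the interrupt dispatcher
 * (eal-intr-thread role) and drained by a wil6210_wmiX-style worker.
 * A real driver uses a proper message queue; this only illustrates the
 * pthread_cond_wait() wake-up named in the tables above. */
struct fw_mailbox {
  pthread_mutex_t lock;
  pthread_cond_t  ready;
  const char     *pending;   /* NULL when empty */
  bool            stop;
};

/* Consumer: wait for a message, process it, repeat until asked to stop. */
static void *
wmi_worker (void *arg)
{
  struct fw_mailbox *mb = arg;

  pthread_mutex_lock (&mb->lock);
  while (!mb->stop)
    {
      while (mb->pending == NULL && !mb->stop)
        pthread_cond_wait (&mb->ready, &mb->lock);  /* the futex wait */

      if (mb->pending != NULL)
        {
          const char *msg = mb->pending;
          mb->pending = NULL;
          pthread_mutex_unlock (&mb->lock);
          printf ("processing firmware message: %s\n", msg);
          pthread_mutex_lock (&mb->lock);
        }
    }
  pthread_mutex_unlock (&mb->lock);
  return NULL;
}

/* Producer: how the dispatcher might post a message after reading it
 * from PCI. */
static void
fw_mailbox_post (struct fw_mailbox *mb, const char *msg)
{
  pthread_mutex_lock (&mb->lock);
  mb->pending = msg;
  pthread_cond_signal (&mb->ready);
  pthread_mutex_unlock (&mb->lock);
}
```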
This section briefly describes possible debugging options for VPP.

Recompile VPP and any plugins (e.g. vpp-tgcfg) with debug information by adding these lines to conf/local.conf:

```
DEBUG_BUILD_pn-vpp = "1"
DEBUG_BUILD_pn-vpp-tgcfg = "1"
```

VPP core dumps can be enabled via the node configuration field envParams.VPP_COREDUMP_ENABLED (enabled by default). Core dumps are written to /var/volatile/cores/ and can be loaded into gdb.

VPP allows collecting packet traces at different points of the datapath using the "trace" commands (e.g. trace add <node>, show trace, clear trace). Some important graph nodes include:
{ "category": "Runtime", "file_name": "master.md", "project_name": "FD.io", "subcategory": "Cloud Native Network" }
The following section covers advanced troubleshooting techniques. flexiWAN relies on a number of underlying components, from Ubuntu Server as the OS and all its functionalities such as netplan, to the VPP and FRR components.

flexiWAN utilizes VPP, so packet capture using tcpdump will not show all traffic. Instead, use the following commands to capture packets. This can also be done from the flexiManage Send Command tab.

Baremetal:
vppctl pcap dispatch trace on max 1000 file vrouter.pcap buffer-trace dpdk-input 1000
wait 10-15 seconds
vppctl pcap dispatch trace off

VMware:
vppctl pcap dispatch trace on max 10000 file vrouter.pcap buffer-trace vmxnet3-input 1000
wait 10-15 seconds
vppctl pcap dispatch trace off

After executing the above commands, download the .pcap file from /tmp. In case the file doesn't appear, simply re-run the packet capture commands. Inspect the pcap using Wireshark. With the VPP pcap, each packet appears multiple times, showing the path of the packet through the VPP nodes. This is useful for troubleshooting network issues and tunnel connectivity issues.

flexiWAN uses the VPP network stack. The VPP CLI and vppctl commands are available from the flexiEdge shell. Learn more about the VPP CLI and the full list of supported commands here. Most commonly used vppctl commands:
vppctl show int
vppctl show hard
vppctl show ip addr
vppctl show ip fib
vppctl arp
vppctl show ip arp

flexiWAN offers FRR for routing, and its component OSPF to share routing information between sites. The way the system works is that OSPF learns possible shortest path routes. In this section we cover OSPF troubleshooting basics. Commands can be executed from the shell or from the Send Command tab per device.

Check if the ospf process is running: ps -ef | grep ospf
Capture OSPF packets using tcpdump on flexiEdge: tcpdump -n -v -s0 -c 10 -i <Linux i/f> proto ospf (captures 10 OSPF packets)

To troubleshoot FRR use vtysh:
Show current FRR configuration: vtysh -c "show running-config"
Show learned neighbours: vtysh -c "show ip ospf neighbor"
Show interfaces: vtysh -c "show ip ospf interface"
Show routes: vtysh -c "show ip ospf route"

Warning: Manually editing OSPF is not supported. Please make all changes through flexiManage instead.

flexiWAN supports BGP for routing. Several status commands are supported via vtysh. The following commands can be executed via the Command tab from flexiManage.
Summary of BGP neighbor status: vtysh -c "show bgp summary"
BGP nexthop table: vtysh -c "show bgp nexthop"
Display number of prefixes for all afi/safi: vtysh -c "statistics-all"
Show BGP VRFs: vtysh -c "vrfs"
Global BGP memory statistics: vtysh -c "show bgp memory"

flexiWAN uses netplan.io for network interface configuration. Through Netplan YAML files each interface can be configured. Learn more about Netplan here. During Ubuntu installation the user is prompted to select a network interface for internet access. The interface selected during setup will be automatically defined in the default Netplan YAML file, /etc/netplan/50-cloud-init.yaml. This file is used when the vRouter is not running. Interface configuration made through flexiManage is saved in netplan files once the vRouter is started.
flexiManage does not change unassigned interfaces.

Checking tunnel network connectivity with a UDP test: in case packet captures show that flexiWAN tunnel traffic is not arriving, run the following test to see if UDP port 4789 is open and not filtered. The test consists of running a UDP server script on site A and sending UDP traffic to it from site B. Use the public IP and ports in the case of NAT, as seen in the flexiManage public IP on the Device -> Interfaces page. The test uses Python 3, so make sure to first install pip3: apt install python3-pip. Then follow the steps below on each of the flexiEdge devices.

Install the tool: pip3 install udp-test
Stop the vRouter from the flexiManage devices page on both routers in question.
On site A start the server: udp-test server -p 4789
Connect to the server on site A from site B: udp-test client -h server_IP -p 4789 -l 4789
Where server_IP is the IP of the remote site. Try sending messages and confirm they are passing.

This section explains how to check the policies in a flexiWAN Edge/Router using basic VPP commands. Once you configure the policies on the Path Selection page, you need to install them into the respective flexiWAN Edge/Router. Once you install the policies, flexiManage will push the configuration to the respective flexiWAN Edge/Router. To check whether the policies are pushed, we can use the following VPP CLI commands.

VPP CLI commands to check a policy. To check if LAN outgoing traffic is matching the policy, run the following command: vppctl show fwabf policy
The output should be:
The above command shows a few important counters:
matched - Number of packets that matched the defined policy
applied - Number of packets for which the policy is applied
fallback - Number of packets for which the policy is not matched and the fallback action is followed
dropped - Number of packets dropped by applying the policy
acl: - Displays the ACL index number; note it for the next command.
vppctl show acl-plugin acl index <acl index number> - replace the ACL index number with the one from the output of the above command. This command will show the access control list of a specific ACL index number.

flexiWAN offers several commands to display information about the firewall and NAT.
vppctl show nat44 sessions - view current WAN NAT sessions
vppctl show acl-plugin interface - list interfaces with ACL index values. To be used with the next command.
vppctl show acl-plugin acl <index> - view allowed / blocked packets per ACL rule. Under index, put the interface from the above command.

To view the status of IKEv2 connections, enter the following command from the device Command tab or using the shell: vppctl show ikev2 sa details
Advanced logging may be set by running the following commands via the Command tab or shell:
vppctl ikev2 set logging level 5
vppctl event-logger clear
vppctl show event-logger
After entering the above commands, IKEv2/IPsec logging will be output to the device syslog. The syslog can be fetched from flexiManage by navigating to the device Logs tab.

To view the DNS servers used by a PPPoE connection, enter the following command via the shell only: systemd-resolve --status
The command will output the DNS IPs set for all interfaces; search for the one with PPP in the name, for example ppp-eth0.

flexiWAN includes several commands to troubleshoot and manipulate LTE settings from the command line. First find out the device names by entering the following command: ls -l /dev/cdc-*
The output will show the device name /dev/cdc-wdm0 or, in case of multiple modems, a list where each will follow the numbering cdc-wdm0, cdc-wdm1, etc.
Get mobile provider name:
mbimcli --device /dev/cdc-wdm0 -p --query-subscriber-ready-status
mbimcli --device /dev/cdc-wdm0 -p --query-registration-state
Query IP address: mbimcli --device /dev/cdc-wdm0 -p --query-ip-configuration
Show SIM card status: qmicli --device=/dev/cdc-wdm0 -p --uim-get-card-status
The output for Card state should be present.
When switching SIM slots or changing the SIM card in any slot, run the following commands:
qmicli --device=/dev/cdc-wdm0 -p --uim-sim-power-off=1
qmicli --device=/dev/cdc-wdm0 -p --uim-sim-power-on=1
Disconnect from the provider network: mbimcli --device /dev/cdc-wdm0 -p --disconnect=0
Connect to the provider network:
mbimcli --device /dev/cdc-wdm0 -p --query-subscriber-ready-status
mbimcli --device /dev/cdc-wdm0 -p --query-registration-state
mbimcli --device /dev/cdc-wdm0 -p --attach-packet-service
mbimcli --device /dev/cdc-wdm0 -p --connect=apn=APNNAME,username=USERNAME,password=PASS,auth=PAP
For step 4, replace APNNAME, USERNAME, and PASS with the provider info. Supported auth methods are: PAP, CHAP and MSCHAPV2. Note: use the commands above only for troubleshooting, in case the device shows LTE as not connected. In most cases, LTE should connect automatically.
List all other available commands: qmicli --help-all
Note: not all other commands may work or are supported.

Certain Quectel modules such as the Quectel RM500Q-AE may experience an issue where the SIM card isn't detected. A specific modem configuration is required before registering the device to flexiManage. Note: if the device is already registered, simply delete it from flexiManage and re-register after completing the steps below.
The first step is to install minicom to access the modem using its serial port: apt install minicom
After installation, run minicom -s to start minicom. Then select Serial port setup. On the next screen, type A on the keyboard to select Serial Device and change it to /dev/ttyUSB2. Press enter twice to confirm. Finally, select Save setup as dfl and select Exit from Minicom.
To verify the SIM card can now be detected, enter minicom -s again and run the following commands:
AT - to make sure the modem is responding to commands.
AT+QUIMSLOT? - the expected output is 1. If the value is 2, follow steps 3 and 4.
AT+QUIMSLOT=1
AT+QPOWD=0 - this command restarts the modem. After the modem restarts, re-try step 2.

flexiWAN can switch supported LTE modems from QMI to MBIM and vice versa from the command line. Below are the commands to switch a modem from QMI to MBIM, and from MBIM to QMI.
Sierra Wireless:
qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-swi-set-usb-composition=8
qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=offline
qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=reset
Quectel: Use minicom, run:
AT+QCFG="usbnet",2
AT+QPOWD=0
Sierra Wireless:
qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-swi-set-usb-composition=6
qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=offline
qmicli --device=/dev/cdc-wdm0 --device-open-proxy --dms-set-operating-mode=reset
Quectel: Use minicom, run:
AT+QCFG="usbnet",0
AT+QPOWD=0

flexiWAN 6.2.1 introduced a new Host OS; the upgrade should be painless and fast. However, in some rare cases the device may not be bootable due to a kernel issue. If the boot error looks like the following, please follow the steps below for a quick fix.
end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block

Reboot the device again.
On the bootloader (GRUB) menu, select the previous boot entry, the one with the second latest Linux kernel version available.
Once the device has booted up properly, log in to the device with the admin user and run the following commands to recover from that situation:
sudo update-initramfs -u -k all
sudo update-grub
Grab the output of the above commands.
Reboot the device again by executing reboot. flexiWAN should now boot normally.

Copyright 2019-2024, flexiWAN
{ "category": "Runtime", "file_name": "github-terms-of-service.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "docs.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
Flannel Helm Repository
{ "category": "Runtime", "file_name": "github-privacy-statement.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "multi-cluster-services.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes. Kilo connects nodes in a cluster by providing an encrypted layer 3 network that can span across data centers and public clouds. The Pod network created by Kilo is always fully connected, even when the nodes are in different networks or behind NAT. By allowing pools of nodes in different locations to communicate securely, Kilo enables the operation of multi-cloud clusters. Kilo's design allows clients to VPN to a cluster in order to securely access services running on the cluster. In addition to creating multi-cloud clusters, Kilo enables the creation of multi-cluster services, i.e. services that span across different Kubernetes clusters. An introductory video about Kilo from KubeCon EU 2019 can be found on youtube. Kilo uses WireGuard, a performant and secure VPN, to create a mesh between the different nodes in a cluster. The Kilo agent, kg, runs on every node in the cluster, setting up the public and private keys for the VPN as well as the necessary rules to route packets between locations. Kilo can operate both as a complete, independent networking provider as well as an add-on complimenting the cluster-networking solution currently installed on a cluster. This means that if a cluster uses, for example, Flannel for networking, Kilo can be installed on top to enable pools of nodes in different locations to join; Kilo will take care of the network between locations, while Flannel will take care of the network within locations. Kilo can be installed on any Kubernetes cluster either pre- or post-bring-up. Kilo requires the WireGuard kernel module to be loaded on all nodes in the cluster. Starting at Linux 5.6, the kernel includes WireGuard in-tree; Linux distributions with older kernels will need to install WireGuard. For most Linux distributions, this can be done using the system package manager. See the WireGuard website for up-to-date instructions for installing WireGuard. Clusters with nodes on which the WireGuard kernel module cannot be installed can use Kilo by leveraging a userspace WireGuard implementation. The nodes in the mesh will require an open UDP port in order to communicate. By default, Kilo uses UDP port 51820. By default, Kilo creates a mesh between the different logical locations in the cluster, e.g. data-centers, cloud providers, etc. For this, Kilo needs to know which groups of nodes are in each location. If the cluster does not automatically set the topology.kubernetes.io/region node label, then the kilo.squat.ai/location annotation can be used. For example, the following snippet could be used to annotate all nodes with GCP in the name: ``` for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node" }, { "data": "done``` Kilo allows the topology of the encrypted network to be completely customized. See the topology docs for more details. At least one node in each location must have an IP address that is routable from the other locations. If the locations are in different clouds or private networks, then this must be a public IP address. If this IP address is not automatically configured on the node's Ethernet device, it can be manually specified using the kilo.squat.ai/force-endpoint annotation. Kilo can be installed by deploying a DaemonSet to the cluster. 
To run Kilo on kubeadm: ``` kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/crds.yamlkubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-kubeadm.yaml``` To run Kilo on bootkube: ``` kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/crds.yamlkubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-bootkube.yaml``` To run Kilo on Typhoon: ``` kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/crds.yamlkubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon.yaml``` To run Kilo on k3s: ``` kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/crds.yamlkubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s.yaml``` Administrators of existing clusters who do not want to swap out the existing networking solution can run Kilo in add-on mode. In this mode, Kilo will add advanced features to the cluster, such as VPN and multi-cluster services, while delegating CNI management and local networking to the cluster's current networking provider. Kilo currently supports running on top of Flannel. For example, to run Kilo on a Typhoon cluster running Flannel: ``` kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/crds.yamlkubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon-flannel.yaml``` See the manifests directory for more examples. Kilo also enables peers outside of a Kubernetes cluster to connect to the VPN, allowing cluster applications to securely access external services and permitting developers and support to securely debug cluster resources. In order to declare a peer, start by defining a Kilo Peer resource: ``` cat <<'EOF' | kubectl apply -f -apiVersion: kilo.squat.ai/v1alpha1kind: Peermetadata: name: squatspec: allowedIPs: - 10.5.0.1/32 publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg= persistentKeepalive: 10EOF``` This configuration can then be applied to a local WireGuard interface, e.g. wg0, to give it access to the cluster with the help of the kgctl tool: ``` kgctl showconf peer squat > peer.inisudo wg setconf wg0 peer.ini``` See the VPN docs for more details. A logical application of Kilo's VPN is to connect two different Kubernetes clusters. This allows workloads running in one cluster to access services running in another. For example, if cluster1 is running a Kubernetes Service that we need to access from Pods running in cluster2, we could do the following: ``` Now, important-service can be used on cluster2 just like any other Kubernetes Service. See the multi-cluster services docs for more details. The topology and configuration of a Kilo network can be analyzed using the kgctl command line tool. For example, the graph command can be used to generate a graph of the network in Graphviz format: ``` kgctl graph | circo -Tsvg > cluster.svg```" } ]
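Once the chosen manifest is applied and the DaemonSet is running, a quick way to sanity-check the mesh from any node is to inspect the WireGuard interface directly. A minimal sketch, assuming Kilo's default interface name of kilo0 (adjust if the interface was configured differently):

```
# Show peers, endpoints, and handshakes for Kilo's WireGuard interface.
sudo wg show kilo0
# Or list every WireGuard interface on the node.
sudo wg show
```
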
{ "category": "Runtime", "file_name": "vpn.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "Kilo enables peers outside of a Kubernetes cluster to connect to the created WireGuard network. This enables several use cases, for example: In order to declare a peer, start by defining a Kilo Peer resource. See the following peer.yaml, where the publicKey field holds a generated WireGuard public key: ``` apiVersion: kilo.squat.ai/v1alpha1kind: Peermetadata: name: squatspec: allowedIPs: - 10.5.0.1/32 # Example IP address on the peer's interface. publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg= persistentKeepalive: 10``` Then, apply the resource to the cluster: ``` kubectl apply -f peer.yaml``` Now, the kgctl tool can be used to generate the WireGuard configuration for the newly defined peer: ``` PEER=squatkgctl showconf peer $PEER``` This will produce some output like: ``` [Peer]PublicKey = 2/xU029dz/WtvMZAbnSzmhicl8U1/Y3NYmunRr8EJ0Q=AllowedIPs = 10.4.0.2/32, 10.2.3.0/24, 10.1.0.3/32Endpoint = 108.61.142.123:51820``` The configuration can then be applied to a local WireGuard interface, e.g. wg0: ``` IFACE=wg0kgctl showconf peer $PEER > peer.inisudo wg setconf $IFACE peer.ini``` Finally, in order to access the cluster, the client will need appropriate routes for the new configuration. For example, on a Linux machine, the creation of these routes could be automated by running: ``` for ip in $(kgctl showconf peer $PEER | grep AllowedIPs | cut -f 3- -d ' ' | tr -d ','); do sudo ip route add $ip dev $IFACEdone``` Once the routes are in place, the connection to the cluster can be tested. For example, try connecting to the API server: ``` curl -k https://$(kubectl get endpoints kubernetes | tail -n +2 | tr , \\\\t | awk '{print $2}')``` Likewise, the cluster now also has layer 3 access to the newly added peer. From any node or Pod on the cluster, one can now ping the peer: ``` ping 10.5.0.1``` If the peer exposes a layer 4 service, for example an HTTP server listening on TCP port 80, then one could also make requests against that endpoint from the cluster: ``` curl http://10.5.0.1``` Kubernetes Services can be created to provide better discoverability to cluster workloads for services exposed by peers, for example: ``` cat <<'EOF' | kubectl apply -f -apiVersion: v1kind: Servicemetadata: name: important-servicespec: ports: - port: 80apiVersion: v1kind: Endpointsmetadata: name: important-servicesubsets: - addresses: - ip: 10.5.0.1 ports: - port: 80EOF``` See the multi-cluster services docs for more details on connecting clusters to external services. Although it is not a primary goal of the project, the VPN created by Kilo can also be used by peers as a gateway to the Internet; for more details, see the VPN server docs." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "The best way to get started is to deploy Kubernetes with Kube-router is with a cluster installer. Please see the steps to deploy Kubernetes cluster with Kube-router using Kops Please see the steps to deploy Kubernetes cluster with Kube-router using Kubeadm k0s by default uses kube-router as a CNI option. Please see the steps to deploy Kubernetes cluster with Kube-router using k0s k3s by default uses kube-router's network policy controller implementation for its NetworkPolicy enforcement. Please see the steps to deploy kube-router on manually installed clusters When running in an AWS environment that requires an explicit proxy you need to inject the proxy server as a environment variable in your kube-router deployment Example: ``` env: name: HTTP_PROXY value: \"http://proxy.example.com:80\" ``` Azure does not support IPIP packet encapsulation which is the default packet encapsulation that kube-router uses. If you need to use an overlay network in an Azure environment with kube-router, please ensure that you set --overlay-encap=fou. See kube-router Tunnel Documentation for more information. Depending on what functionality of kube-router you want to use, multiple deployment options are possible. You can use the flags --run-firewall, --run-router, --run-service-proxy, --run-loadbalancer to selectively enable only required functionality of kube-router. Also you can choose to run kube-router as agent running on each cluster node. Alternativley you can run kube-router as pod on each node through daemonset. ``` Usage of kube-router: --advertise-cluster-ip Add Cluster IP of the service to the RIB so that it gets advertises to the BGP peers. --advertise-external-ip Add External IP of service to the RIB so that it gets advertised to the BGP peers. --advertise-loadbalancer-ip Add LoadbBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers. --advertise-pod-cidr Add Node's POD cidr to the RIB so that it gets advertised to the BGP peers. (default true) --auto-mtu Auto detect and set the largest possible MTU for kube-bridge and pod interfaces (also accounts for IPIP overlay network when enabled). (default true) --bgp-graceful-restart Enables the BGP Graceful Restart capability so that routes are preserved on unexpected restarts --bgp-graceful-restart-deferral-time duration BGP Graceful restart deferral time according to RFC4724 4.1, maximum 18h. (default 6m0s) --bgp-graceful-restart-time duration BGP Graceful restart time according to RFC4724 3, maximum 4095s. (default 1m30s) --bgp-holdtime duration This parameter is mainly used to modify the holdtime declared to BGP peer. When Kube-router goes down abnormally, the local saving time of BGP route will be affected. Holdtime must be in the range 3s to 18h12m16s. (default 1m30s) --bgp-port uint32 The port open for incoming BGP connections and to use for connecting with other BGP peers. (default 179) --cache-sync-timeout duration The timeout for cache synchronization (e.g. '5s', '1m'). Must be greater than 0. (default 1m0s) --cleanup-config Cleanup iptables rules, ipvs, ipset configuration and exit. --cluster-asn uint ASN number under which cluster nodes will run iBGP. --disable-source-dest-check Disable the source-dest-check attribute for AWS EC2 instances. When this option is false, it must be set some other way. (default true) --enable-cni Enable CNI plugin. Disable if you want to use kube-router features alongside another CNI plugin. 
(default true) --enable-ibgp Enables peering with nodes with the same ASN, if disabled will only peer with external BGP peers (default true) --enable-ipv4 Enables IPv4 support (default true) --enable-ipv6 Enables IPv6 support --enable-overlay When enable-overlay is set to true, IP-in-IP tunneling is used for pod-to-pod networking across nodes in different" }, { "data": "When set to false no tunneling is used and routing infrastructure is expected to route traffic for pod-to-pod networking across nodes in different subnets (default true) --enable-pod-egress SNAT traffic from Pods to destinations outside the cluster. (default true) --enable-pprof Enables pprof for debugging performance and memory leak issues. --excluded-cidrs strings Excluded CIDRs are used to exclude IPVS rules from deletion. --hairpin-mode Add iptables rules for every Service Endpoint to support hairpin traffic. --health-port uint16 Health check port, 0 = Disabled (default 20244) -h, --help Print usage information. --hostname-override string Overrides the NodeName of the node. Set this if kube-router is unable to determine your NodeName automatically. --injected-routes-sync-period duration The delay between route table synchronizations (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 1m0s) --iptables-sync-period duration The delay between iptables rule synchronizations (e.g. '5s', '1m'). Must be greater than 0. (default 5m0s) --ipvs-graceful-period duration The graceful period before removing destinations from IPVS services (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 30s) --ipvs-graceful-termination Enables the experimental IPVS graceful terminaton capability --ipvs-permit-all Enables rule to accept all incoming traffic to service VIP's on the node. (default true) --ipvs-sync-period duration The delay between ipvs config synchronizations (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 5m0s) --kubeconfig string Path to kubeconfig file with authorization information (the master location is set by the master flag). --loadbalancer-default-class Handle loadbalancer services without a class (default true) --loadbalancer-ip-range strings CIDR values from which loadbalancer services addresses are assigned (can be specified multiple times) --loadbalancer-sync-period duration The delay between checking for missed services (e.g. '5s', '1m'). Must be greater than 0. (default 1m0s) --masquerade-all SNAT all traffic to cluster IP/node port. --master string The address of the Kubernetes API server (overrides any value in kubeconfig). --metrics-addr string Prometheus metrics address to listen on, (Default: all interfaces) --metrics-path string Prometheus metrics path (default \"/metrics\") --metrics-port uint16 Prometheus metrics port, (Default 0, Disabled) --nodeport-bindon-all-ip For service of NodePort type create IPVS service that listens on all IP's of the node. --nodes-full-mesh Each node in the cluster will setup BGP peering with rest of the nodes. (default true) --overlay-encap string Valid encapsulation types are \"ipip\" or \"fou\" (if set to \"fou\", the udp port can be specified via \"overlay-encap-port\") (default \"ipip\") --overlay-encap-port uint16 Overlay tunnel encapsulation port (only used for \"fou\" encapsulation) (default 5555) --overlay-type string Possible values: subnet,full - When set to \"subnet\", the default, default \"--enable-overlay=true\" behavior is used. 
When set to \"full\", it changes \"--enable-overlay=true\" default behavior so that IP-in-IP tunneling is used for pod-to-pod networking across nodes regardless of the subnet the nodes are in. (default \"subnet\") --override-nexthop Override the next-hop in bgp routes sent to peers with the local ip. --peer-router-asns uints ASN numbers of the BGP peer to which cluster nodes will advertise cluster ip and node's pod cidr. (default []) --peer-router-ips ipSlice The ip address of the external router to which all nodes will peer and advertise the cluster ip and pod cidr's. (default []) --peer-router-multihop-ttl uint8 Enable eBGP multihop supports -- sets multihop-ttl. (Relevant only if ttl >= 2) --peer-router-passwords strings Password for authenticating against the BGP peer defined with \"--peer-router-ips\". --peer-router-passwords-file string Path to file containing password for authenticating against the BGP peer defined with \"--peer-router-ips\". --peer-router-passwords will be preferred if both are set. --peer-router-ports uints The remote port of the external BGP to which all nodes will" }, { "data": "If not set, default BGP port (179) will be used. (default []) --router-id string BGP router-id. Must be specified in a ipv6 only cluster, \"generate\" can be specified to generate the router id. --routes-sync-period duration The delay between route updates and advertisements (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 5m0s) --run-firewall Enables Network Policy -- sets up iptables to provide ingress firewall for pods. (default true) --run-loadbalancer Enable loadbalancer address allocator --run-router Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP. (default true) --run-service-proxy Enables Service Proxy -- sets up IPVS for Kubernetes Services. (default true) --runtime-endpoint string Path to CRI compatible container runtime socket (used for DSR mode). Currently known working with containerd. --service-cluster-ip-range strings CIDR values from which service cluster IPs are assigned (can be specified up to 2 times) (default [10.96.0.0/12]) --service-external-ip-range strings Specify external IP CIDRs that are used for inter-cluster communication (can be specified multiple times) --service-node-port-range string NodePort range specified with either a hyphen or colon (default \"30000-32767\") --service-tcp-timeout duration Specify TCP timeout for IPVS services in standard duration syntax (e.g. '5s', '1m'), default 0s preserves default system value (default: 0s) --service-tcpfin-timeout duration Specify TCP FIN timeout for IPVS services in standard duration syntax (e.g. '5s', '1m'), default 0s preserves default system value (default: 0s) --service-udp-timeout duration Specify UDP timeout for IPVS services in standard duration syntax (e.g. '5s', '1m'), default 0s preserves default system value (default: 0s) -v, --v string log level for V logs (default \"0\") -V, --version Print version information. ``` ``` kube-router --master=http://192.168.1.99:8080/` or `kube-router --kubeconfig=<path to kubeconfig file> ``` If you run kube-router as agent on the node, ipset package must be installed on each of the nodes (when run as daemonset, container image is prepackaged with ipset) If you choose to use kube-router for pod-to-pod network connectivity then Kubernetes controller manager need to be configured to allocate pod CIDRs by passing --allocate-node-cidrs=true flag and providing a cluster-cidr (i.e. by passing --cluster-cidr=10.1.0.0/16 for e.g.) 
If you choose to run kube-router as a daemonset in Kubernetes versions below v1.15, both kube-apiserver and kubelet must be run with the --allow-privileged=true option. In later Kubernetes versions, only kube-apiserver must be run with the --allow-privileged=true option, and if the PodSecurityPolicy admission controller is enabled, you should create a PodSecurityPolicy allowing privileged kube-router pods. Additionally, when run in daemonset mode, it is highly recommended that you keep netfilter related userspace host tooling like iptables, ipset, and ipvsadm in sync with the versions that are distributed by Alpine inside the kube-router container. This will help avoid conflicts that can potentially arise when both the host's userspace and kube-router's userspace tooling modify netfilter kernel definitions. See this kube-router issue for more information. If you choose to use kube-router for pod-to-pod network connectivity, then the Kubernetes cluster must be configured to use CNI network plugins. On each node the CNI conf file is expected to be present as /etc/cni/net.d/10-kuberouter.conf. The bridge CNI plugin and host-local for IPAM should be used. A sample conf file can be downloaded as: ``` wget -O /etc/cni/net.d/10-kuberouter.conf https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/cni/10-kuberouter.conf ``` This is the quickest way to deploy kube-router in Kubernetes (don't forget to ensure the requirements above). Just run: ``` kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kube-router-all-service-daemonset.yaml ``` The above will run kube-router as a pod on each node automatically. You can change the arguments in the daemonset definition as required to suit your" }, { "data": "needs. Some sample deployment configurations can be found in our daemonset examples, with different arguments used to select the set of services kube-router should run. You can choose to run kube-router as an agent running on each node. For example, if you just want kube-router to provide an ingress firewall for the pods, then you can start kube-router as: ``` kube-router --master=http://192.168.1.99:8080/ --run-firewall=true --run-service-proxy=false --run-router=false ``` Please delete the kube-router daemonset and then clean up all the configuration done by kube-router on the node (to ipvs, iptables, ipset, ip routes, etc.) by running the command below. ``` docker run --privileged --net=host \\ --mount type=bind,source=/lib/modules,target=/lib/modules,readonly \\ --mount type=bind,source=/run/xtables.lock,target=/run/xtables.lock,bind-propagation=rshared \\ cloudnativelabs/kube-router /usr/local/bin/kube-router --cleanup-config ``` ``` $ ctr image pull docker.io/cloudnativelabs/kube-router:latest $ ctr run --privileged -t --net-host \\ --mount type=bind,src=/lib/modules,dst=/lib/modules,options=rbind:ro \\ --mount type=bind,src=/run/xtables.lock,dst=/run/xtables.lock,options=rbind:rw \\ docker.io/cloudnativelabs/kube-router:latest kube-router-cleanup /usr/local/bin/kube-router --cleanup-config ``` If you have kube-proxy in use and want to try kube-router just for the service proxy, you can run ``` kube-proxy --cleanup-iptables ``` followed by ``` kube-router --master=http://192.168.1.99:8080/ --run-service-proxy=true --run-firewall=false --run-router=false ``` and if you want to move back to kube-proxy, then clean up the configuration done by kube-router by running ``` kube-router --cleanup-config ``` and run kube-proxy with the configuration you have. 
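For reference, the bridge + host-local CNI conf mentioned above (the downloaded /etc/cni/net.d/10-kuberouter.conf) generally has a shape like the following minimal sketch; treat the field values as illustrative and prefer the file shipped by the project:

```
# A hedged sketch: write a minimal bridge + host-local conf mirroring the project's sample file.
cat <<'EOF' | sudo tee /etc/cni/net.d/10-kuberouter.conf
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "kube-bridge",
  "isDefaultGateway": true,
  "ipam": { "type": "host-local" }
}
EOF
```
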
kube-router can advertise Cluster, External and LoadBalancer IPs to BGP peers. It does this by: To set the default for all services use the --advertise-cluster-ip, --advertise-external-ip and --advertise-loadbalancer-ip flags. To selectively enable or disable this feature per-service use the kube-router.io/service.advertise.clusterip, kube-router.io/service.advertise.externalip and kube-router.io/service.advertise.loadbalancerip annotations. e.g.: $ kubectl annotate service my-advertised-service \"kube-router.io/service.advertise.clusterip=true\" $ kubectl annotate service my-advertised-service \"kube-router.io/service.advertise.externalip=true\" $ kubectl annotate service my-advertised-service \"kube-router.io/service.advertise.loadbalancerip=true\" $ kubectl annotate service my-non-advertised-service \"kube-router.io/service.advertise.clusterip=false\" $ kubectl annotate service my-non-advertised-service \"kube-router.io/service.advertise.externalip=false\" $ kubectl annotate service my-non-advertised-service \"kube-router.io/service.advertise.loadbalancerip=false\" By combining the flags with the per-service annotations you can choose either a opt-in or opt-out strategy for advertising IPs. Advertising LoadBalancer IPs works by inspecting the services status.loadBalancer.ingress IPs that are set by external LoadBalancers like for example MetalLb. This has been successfully tested together with MetalLB in ARP mode. Service availability both externally and locally (within the cluster) can be controlled via the Kubernetes standard Traffic Policies and via the custom kube-router service annotation: kube-router.io/service.local: true. Refer to the previously linked upstream Kubernetes documentation for more information on spec.internalTrafficPolicy and spec.externalTrafficPolicy. In order to keep backwards compatibility the kube-router.io/service.local: true annotation effectively overrides spec.internalTrafficPolicy and spec.externalTrafficPolicy and forces kube-router to behave as if both were set to Local. Communication from a Pod that is behind a Service to its own ClusterIP:Port is not supported by default. However, it can be enabled per-service by adding the kube-router.io/service.hairpin= annotation, or for all Services in a cluster by passing the flag --hairpin-mode=true to kube-router. Additionally, the hairpin_mode sysctl option must be set to 1 for all veth interfaces on each node. This can be done by adding the \"hairpinMode\": true option to your CNI configuration and rebooting all cluster nodes if they are already running kubernetes. Hairpin traffic will be seen by the pod it originated from as coming from the Service ClusterIP if it is logging the source IP. 10-kuberouter.conf ``` { \"name\":\"mynet\", \"type\":\"bridge\", \"bridge\":\"kube-bridge\", \"isDefaultGateway\":true, \"hairpinMode\":true, \"ipam\": { \"type\":\"host-local\" } } ``` To enable hairpin traffic for Service my-service: ``` kubectl annotate service my-service \"kube-router.io/service.hairpin=\" ``` If you want to also hairpin externalIPs declared for Service my-service (note, you must also either enable global hairpin or service hairpin (see above ^^^) for this to have an effect): ``` kubectl annotate service my-service" }, { "data": "``` By default, as traffic ingresses into the cluster, kube-router will source nat the traffic to ensure symmetric routing if it needs to proxy that traffic to ensure it gets to a node that has a service pod that is capable of servicing the traffic. 
This has the potential to cause issues when network policies are applied to that service, since the traffic will now appear to be coming from a node in your cluster instead of the traffic originator. This is an issue that is common to all proxies and to Kubernetes service proxies in general. You can read more information about this issue at: Source IP for Services In addition to the fix mentioned in the linked upstream documentation (using service.spec.externalTrafficPolicy), kube-router also provides DSR, which by its nature preserves the source IP, to solve this problem. For more information see the section above. Kube-router uses LVS for its service proxy. LVS supports a rich set of scheduling algorithms. You can annotate the service to choose one of the scheduling algorithms. When a service is not annotated, the round-robin scheduler is selected by default. ``` $ kubectl annotate service my-service \"kube-router.io/service.scheduler=lc\" $ kubectl annotate service my-service \"kube-router.io/service.scheduler=rr\" $ kubectl annotate service my-service \"kube-router.io/service.scheduler=sh\" $ kubectl annotate service my-service \"kube-router.io/service.scheduler=dh\" ``` If you would like to use HostPort functionality, the changes below are required in the manifest. By default, kube-router assumes the CNI conf file to be /etc/cni/net.d/10-kuberouter.conf. Add an environment variable KUBE_ROUTER_CNI_CONF_FILE to the kube-router manifest and set it to /etc/cni/net.d/10-kuberouter.conflist. Modify the kube-router-cfg ConfigMap with a CNI config that supports portmap as an additional plug-in: ``` { \"cniVersion\":\"0.3.0\", \"name\":\"mynet\", \"plugins\":[ { \"name\":\"kubernetes\", \"type\":\"bridge\", \"bridge\":\"kube-bridge\", \"isDefaultGateway\":true, \"ipam\":{ \"type\":\"host-local\" } }, { \"type\":\"portmap\", \"capabilities\":{ \"snat\":true, \"portMappings\":true } } ] } ``` For an example manifest, please look at the manifest with the necessary changes required for HostPort functionality. As of 0.2.6 we support experimental graceful termination of IPVS destinations. When possible, the pod's TerminationGracePeriodSeconds is used; if it cannot be retrieved for some reason, the fallback period is 30 seconds and can be adjusted with the --ipvs-graceful-period cli-opt. Graceful termination works in such a way that when kube-router receives a delete endpoint notification for a service, its weight is adjusted to 0 before getting deleted after the termination grace period has passed or the Active & Inactive connections go down to 0. The maximum transmission unit (MTU) determines the largest packet size that can be transmitted through your network. MTU for the pod interfaces should be set appropriately to prevent fragmentation and packet drops, thereby achieving maximum performance. If auto-mtu is set to true (auto-mtu is set to true by default as of kube-router 1.1), kube-router will determine the right MTU for both kube-bridge and pod interfaces. If you set auto-mtu to false, kube-router will not attempt to configure the MTU. However, you can choose the right MTU and set it in the cni-conf.json section of the 10-kuberouter.conflist in the kube-router daemonsets. For example: 
``` cni-conf.json: | { \"cniVersion\":\"0.3.0\", \"name\":\"mynet\", \"plugins\":[ { \"name\":\"kubernetes\", \"type\":\"bridge\", \"mtu\": 1400, \"bridge\":\"kube-bridge\", \"isDefaultGateway\":true, \"ipam\":{ \"type\":\"host-local\" } } ] } ``` If you set the MTU yourself via the CNI config, you'll also need to set the MTU of kube-bridge manually to the right value to avoid packet fragmentation on existing nodes where kube-bridge has already been created. On node reboot, or when new nodes join the cluster, both the pod's interface and kube-bridge will be set up with the specified MTU value. Configuring BGP Peers Configure metrics gathering" } ]
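To complement the MTU note above: on nodes where kube-bridge already exists, the bridge's MTU can be aligned manually with standard iproute2 tooling. A minimal sketch, assuming the 1400-byte MTU from the example above:

```
# Align the existing bridge with the MTU configured in the CNI conf (illustrative value).
sudo ip link set dev kube-bridge mtu 1400
```
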
{ "category": "Runtime", "file_name": ".md", "project_name": "Ligato", "subcategory": "Cloud Native Network" }
[ { "data": "INTRODUCTION USER GUIDE TUTORIALS PLUGINS DEVELOPER GUIDE API Troubleshooting TESTING This section describes the REST APIs exposed by the KV Scheduler. Reference: KV Scheduler plugin rest.go file Note Ligato REST APIs use 9191 as the default port number. You can change this value using one of the REST plugin configuration options. OpenAPI (swagger) definitions provide additional details for describing, producing and consuming KV Scheduler REST APIs. Description: GET list of the descriptors registered with the KV Scheduler, list of the key prefixes under watch in the NB direction, and view options from the perspective of the KV Scheduler ``` curl -X GET http://localhost:9191/scheduler/dump``` Sample response: ``` { \"Descriptors\": [ \"bond-interface\", \"dhcp-proxy\", \"linux-interface-address\", \"linux-interface-watcher\", \"microservice\", \"netalloc-ip-address\", \"linux-interface\", \"linux-arp\", \"linux-ipt-rulechain-descriptor\", \"linux-route\", \"vpp-acl-to-interface\", \"vpp-bd-interface\", \"vpp-interface\", \"vpp-acl\", \"vpp-arp\", \"vpp-bridge-domain\", \"vpp-dhcp\", \"vpp-interface-address\", \"vpp-interface-has-address\", \"vpp-interface-link-state\", \"vpp-interface-rx-mode\", \"vpp-interface-rx-placement\", \"vpp-interface-vrf\", \"vpp-ip-scan-neighbor\", \"vpp-l2-fib\", \"vpp-l3xc\", \"vpp-nat44-dnat\", \"vpp-nat44-global\", \"vpp-nat44-address-pool\", \"vpp-nat44-global-address\", \"vpp-nat44-global-interface\", \"vpp-nat44-interface\", \"vpp-proxy-arp\", \"vpp-proxy-arp-interface\", \"vpp-punt-exception\", \"vpp-punt-ipredirect\", \"vpp-punt-to-host\", \"vpp-span\", \"vpp-sr-localsid\", \"vpp-sr-policy\", \"vpp-sr-steering\", \"vpp-srv6-global\", \"vpp-stn-rules\", \"vpp-unnumbered-interface\", \"vpp-vrf-table\", \"vpp-route\", \"vpp-xconnect\" ], \"KeyPrefixes\": [ \"\", \"config/vpp/v2/dhcp-proxy/\", \"\", \"\", \"\", \"config/netalloc/v1/ip/\", \"config/linux/interfaces/v2/interface/\", \"config/linux/l3/v2/arp/\", \"config/linux/iptables/v2/rulechain/\", \"config/linux/l3/v2/route/\", \"\", \"\", \"config/vpp/v2/interfaces/\", \"config/vpp/acls/v2/acl/\", \"config/vpp/v2/arp/\", \"config/vpp/l2/v2/bridge-domain/\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"config/vpp/v2/ipscanneigh-global\", \"config/vpp/l2/v2/fib/\", \"config/vpp/v2/l3xconnect/\", \"config/vpp/nat/v2/dnat44/\", \"config/vpp/nat/v2/nat44-global\", \"config/vpp/nat/v2/nat44-pool/\", \"\", \"\", \"config/vpp/nat/v2/nat44-interface/\", \"config/vpp/v2/proxyarp-global\", \"\", \"config/vpp/v2/exception/\", \"config/vpp/v2/ipredirect/\", \"config/vpp/v2/tohost/\", \"config/vpp/v2/span/\", \"config/vpp/srv6/v2/localsid/\", \"config/vpp/srv6/v2/policy/\", \"config/vpp/srv6/v2/steering/\", \"config/vpp/srv6/v2/srv6-global\", \"config/vpp/stn/v2/rule/\", \"\", \"config/vpp/v2/vrf-table/\", \"config/vpp/v2/route/\", \"config/vpp/l2/v2/xconnect/\" ], \"Views\": [\"SB\", \"NB\", \"cached\"] }``` Description: GET key-value data by view for a specific key prefix. 
The parameters are: This example dumps the system state in the SB direction for the key prefix of config/vpp/v2/interfaces/: ``` curl -X GET \"http://localhost:9191/scheduler/dump?view=SB&key-prefix=config/vpp/v2/interfaces/\"``` Sample response: ``` [ { \"Key\": \"config/vpp/v2/interfaces/mem01\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 memif:<id:2 socket_filename:\\\"/run/vpp/memif.sock\\\" > \" }, \"Metadata\": { \"SwIfIndex\": 2, \"Vrf\": 0, \"IPAddresses\": [ \"100.100.100.102/24\" ], \"TAPHostIfName\": \"\" }, \"Origin\": 1 }, { \"Key\": \"config/vpp/v2/interfaces/UNTAGGED-local0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"UNTAGGED-local0\\\" type:SOFTWARELOOPBACK physaddress:\\\"00:00:00:00:00:00\\\" \" }, \"Metadata\": { \"SwIfIndex\": 0, \"Vrf\": 0, \"IPAddresses\": null, \"TAPHostIfName\": \"\" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/interfaces/loop1\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" type:SOFTWARELOOPBACK enabled:true physaddress:\\\"de:ad:00:00:00:00\\\" ip_addresses:\\\"192.168.1.1/24\\\" \" }, \"Metadata\": { \"SwIfIndex\": 1, \"Vrf\": 0, \"IPAddresses\": [ \"192.168.1.1/24\" ], \"TAPHostIfName\": \"\" }, \"Origin\": 1 } ]``` Description: GET a complete history of planned and executed transactions. In addition, the following parameters can be included to scope the response: To GET the complete transaction history, use: ``` curl -X GET http://localhost:9191/scheduler/txn-history``` Sample response: ``` [ { \"Start\": \"2020-05-28T16:50:52.752025508Z\", \"Stop\": \"2020-05-28T16:50:52.769882769Z\", \"SeqNum\": 0, \"TxnType\": \"NBTransaction\", \"ResyncType\": \"FullResync\", \"Values\": [ { \"Key\": \"config/vpp/ipfix/v2/ipfix\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.ipfix.IPFIX\", \"ProtoMsgData\": \"collector:<address:\\\"0.0.0.0\\\" > sourceaddress:\\\"0.0.0.0\\\" vrfid:4294967295 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/nat/v2/nat44-global\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.nat.Nat44Global\", \"ProtoMsgData\": \"\" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/interfaces/UNTAGGED-local0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"UNTAGGED-local0\\\" type:SOFTWARELOOPBACK physaddress:\\\"00:00:00:00:00:00\\\" \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/proxyarp-global\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.ProxyARP\", \"ProtoMsgData\": \"\" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/0/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"0.0.0.0/0\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/32/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"0.0.0.0/32\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/route/vrf/0/dst/224.0.0.0/4/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"224.0.0.0/4\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/route/vrf/0/dst/240.0.0.0/4/gw/0.0.0.0\", \"Value\": { 
\"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"240.0.0.0/4\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/route/vrf/0/dst/255.255.255.255/32/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"255.255.255.255/32\\\"" }, { "data": "weight:1 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/route/vrf/0/dst/::/0/gw/::\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"::/0\\\" nexthop_addr:\\\"::\\\" weight:1 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/route/vrf/0/dst/fe80::/10/gw/::\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"dstnetwork:\\\"fe80::/10\\\" nexthop_addr:\\\"::\\\" weight:1 \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/vrf-table/id/0/protocol/IPV4\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.VrfTable\", \"ProtoMsgData\": \"label:\\\"ipv4-VRF:0\\\" \" }, \"Origin\": 2 }, { \"Key\": \"config/vpp/v2/vrf-table/id/0/protocol/IPV6\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.VrfTable\", \"ProtoMsgData\": \"protocol:IPV6 label:\\\"ipv6-VRF:0\\\" \" }, \"Origin\": 2 }, { \"Key\": \"linux/interface/host-name/eth0\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Origin\": 2 }, { \"Key\": \"linux/interface/host-name/lo\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Origin\": 2 }, { \"Key\": \"vpp/interface/UNTAGGED-local0/link-state/DOWN\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Origin\": 2 } ] }, { \"Start\": \"2020-05-28T17:05:48.16322751Z\", \"Stop\": \"2020-05-28T17:05:48.166554253Z\", \"SeqNum\": 1, \"TxnType\": \"NBTransaction\", \"Values\": [ { \"Key\": \"config/vpp/v2/interfaces/loop1\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" type:SOFTWARELOOPBACK enabled:true ipaddresses:\\\"192.168.1.1/24\\\" \" }, \"Origin\": 1 } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/loop1\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" type:SOFTWARELOOPBACK enabled:true ipaddresses:\\\"192.168.1.1/24\\\" \" } }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/loop1/has-IP-address\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true } ] }, { \"Start\": \"2020-05-28T17:05:48.166962782Z\", \"Stop\": \"2020-05-28T17:05:48.16711382Z\", \"SeqNum\": 2, \"TxnType\": \"SBNotification\", \"Values\": [ { \"Key\": \"vpp/interface/loop1/link-state/UP\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Origin\": 2 } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/loop1/link-state/UP\", \"NewState\": \"OBTAINED\", \"NewValue\": { 
\"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" } } ] }, { \"Start\": \"2020-05-28T18:53:19.211937573Z\", \"Stop\": \"2020-05-28T18:53:19.214134123Z\", \"SeqNum\": 3, \"TxnType\": \"NBTransaction\", \"Values\": [ { \"Key\": \"config/vpp/v2/interfaces/mem01\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"Origin\": 1 } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"RETRYING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"NewErr\": {} } ] }, { \"WithSimulation\": true, \"Start\": \"2020-05-28T18:53:20.230484533Z\", \"Stop\": \"2020-05-28T18:53:20.233056755Z\", \"SeqNum\": 4, \"TxnType\": \"RetryFailedOps\", \"RetryForTxn\": 3, \"RetryAttempt\": 1, \"Values\": [ { \"Key\": \"config/vpp/v2/interfaces/mem01\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"Origin\": 1 } ], \"Planned\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevState\": \"RETRYING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevErr\": {}, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/rx-modes\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF rx_modes:<mode:ADAPTIVE > \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/has-IP-address\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": 
true, \"IsRetry\": true } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"RETRYING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"NewErr\": {}, \"PrevState\": \"RETRYING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2" }, { "data": "> \" }, \"PrevErr\": {}, \"IsRetry\": true } ] }, { \"WithSimulation\": true, \"Start\": \"2020-05-28T18:53:22.246068802Z\", \"Stop\": \"2020-05-28T18:53:22.248180185Z\", \"SeqNum\": 5, \"TxnType\": \"RetryFailedOps\", \"RetryForTxn\": 3, \"RetryAttempt\": 2, \"Values\": [ { \"Key\": \"config/vpp/v2/interfaces/mem01\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"Origin\": 1 } ], \"Planned\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevState\": \"RETRYING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevErr\": {}, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/rx-modes\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF rx_modes:<mode:ADAPTIVE > \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/has-IP-address\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"RETRYING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": 
\"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"NewErr\": {}, \"PrevState\": \"RETRYING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevErr\": {}, \"IsRetry\": true } ] }, { \"WithSimulation\": true, \"Start\": \"2020-05-28T18:53:26.267778173Z\", \"Stop\": \"2020-05-28T18:53:26.27082962Z\", \"SeqNum\": 6, \"TxnType\": \"RetryFailedOps\", \"RetryForTxn\": 3, \"RetryAttempt\": 3, \"Values\": [ { \"Key\": \"config/vpp/v2/interfaces/mem01\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"Origin\": 1 } ], \"Planned\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevState\": \"RETRYING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevErr\": {}, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/rx-modes\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF rx_modes:<mode:ADAPTIVE > \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/has-IP-address\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true, \"IsRetry\": true } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"FAILED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 
socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"NewErr\": {}, \"PrevState\": \"RETRYING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevErr\": {}, \"IsRetry\": true } ] }, { \"Start\": \"2020-05-28T19:10:44.973288991Z\", \"Stop\": \"2020-05-28T19:10:44.979247779Z\", \"SeqNum\": 7, \"TxnType\": \"NBTransaction\", \"Values\": [ { \"Key\": \"config/vpp/v2/interfaces/mem01\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rx_modes:<mode:ADAPTIVE > memif:<id:2 > \" }, \"Origin\": 1 } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rx_modes:<mode:ADAPTIVE > memif:<id:2 > \" }, \"PrevState\": \"FAILED\", \"PrevValue\": { \"ProtoMsgName\":" }, { "data": "\"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rxmodes:<mode:ADAPTIVE > memif:<id:2 socketfilename:\\\"/var/run/contiv/memif_ts1-host4.sock\\\" > \" }, \"PrevErr\": {} }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/rx-modes\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF rx_modes:<mode:ADAPTIVE > \" }, \"NOOP\": true, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/has-IP-address\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true } ] }, { \"Start\": \"2020-05-28T19:10:44.980035739Z\", \"Stop\": \"2020-05-28T19:10:44.980288903Z\", \"SeqNum\": 8, \"TxnType\": \"SBNotification\", \"Values\": [ { \"Key\": \"vpp/interface/mem01/link-state/DOWN\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Origin\": 2 } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/mem01/link-state/DOWN\", \"NewState\": \"OBTAINED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" } } ] } ]``` To GET the transaction history for a sequence number = 1, use: ``` curl -X GET http://localhost:9191/scheduler/txn-history?seq-num=1``` Sample response: ``` { \"Start\": \"2020-05-28T17:05:48.16322751Z\", \"Stop\": \"2020-05-28T17:05:48.166554253Z\", \"SeqNum\": 1, \"TxnType\": \"NBTransaction\", \"Values\": [ { \"Key\": 
\"config/vpp/v2/interfaces/loop1\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" type:SOFTWARELOOPBACK enabled:true ipaddresses:\\\"192.168.1.1/24\\\" \" }, \"Origin\": 1 } ], \"Executed\": [ { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/v2/interfaces/loop1\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" type:SOFTWARELOOPBACK enabled:true ipaddresses:\\\"192.168.1.1/24\\\" \" } }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/interface/loop1/has-IP-address\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"IsDerived\": true } ] }``` To GET the transaction history for a window in time, where the start time = 1591031902 and end time = 1591031903, use: ``` curl -X GET \"http://localhost:9191/scheduler/txn-history?since=1591031902&until=1591031903\"``` Description: GET the timeline of value changes for a specific key. To GET the timeline of value changes for key=config/vpp/v2/interfaces/loop1, use: ``` curl -X GET \"http://localhost:9191/scheduler/key-timeline?key=config/vpp/v2/interfaces/loop1\"``` Sample response: ``` [ { \"Since\": \"2020-05-28T17:05:48.166536476Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/interfaces/loop1\", \"Label\": \"loop1\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" type:SOFTWARELOOPBACK enabled:true ipaddresses:\\\"192.168.1.1/24\\\" \" }, \"Flags\": { \"last-update\": \"TXN-1\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface\" }, \"MetadataFields\": { \"index\": [ \"1\" ], \"ip_addresses\": [ \"192.168.1.1/24\" ] }, \"Targets\": [ { \"Relation\": \"derives\", \"Label\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"ExpectedKey\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"MatchingKeys\": [ \"vpp/interface/loop1/address/static/192.168.1.1/24\" ] }, { \"Relation\": \"derives\", \"Label\": \"vpp/interface/loop1/has-IP-address\", \"ExpectedKey\": \"vpp/interface/loop1/has-IP-address\", \"MatchingKeys\": [ \"vpp/interface/loop1/has-IP-address\" ] }, { \"Relation\": \"derives\", \"Label\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"ExpectedKey\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"MatchingKeys\": [ \"vpp/interface/loop1/vrf/0/ip-version/v4\" ] } ], \"TargetUpdateOnly\": false } ]``` Description: A graph-based representation of the system state, used internally by the KV Scheduler, can be displayed using any modern web browser supporting SVG. Reference: How to visualize the graph section of the Developer Guide. Description: GET a snapshot of the KV Scheduler internal graph at a point in time. If there is no parameter passed in the request, then the current state is returned. 
Use the following parameter to specify a snapshot point in time: To GET the current graph snapshot, use: ``` curl -X GET http://localhost:9191/scheduler/graph-snapshot``` Sample response: ``` [ { \"Since\": \"2020-05-28T16:50:52.769880703Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/32/gw/0.0.0.0\", \"Label\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/32/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"0.0.0.0/32\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-route\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769881793Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/UNTAGGED-local0/link-state/DOWN\", \"Label\": \"vpp/interface/UNTAGGED-local0/link-state/DOWN\", \"Value\": { \"ProtoMsgName\":" }, { "data": "\"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-interface-link-state\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T17:05:48.166546952Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"Label\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-1\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface-vrf\", \"derived\": \"config/vpp/v2/interfaces/loop1\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T19:10:44.979182721Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/interfaces/mem01\", \"Label\": \"mem01\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF enabled:true physaddress:\\\"02:00:00:00:00:03\\\" ipaddresses:\\\"100.100.100.102/24\\\" mtu:1500 rx_modes:<mode:ADAPTIVE > memif:<id:2 > \" }, \"Flags\": { \"last-update\": \"TXN-7\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface\" }, \"MetadataFields\": { \"index\": [ \"2\" ], \"ip_addresses\": [ \"100.100.100.102/24\" ] }, \"Targets\": [ { \"Relation\": \"derives\", \"Label\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"ExpectedKey\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"MatchingKeys\": [ \"vpp/interface/mem01/address/static/100.100.100.102/24\" ] }, { \"Relation\": \"derives\", \"Label\": \"vpp/interface/mem01/has-IP-address\", \"ExpectedKey\": \"vpp/interface/mem01/has-IP-address\", \"MatchingKeys\": [ \"vpp/interface/mem01/has-IP-address\" ] }, { \"Relation\": \"derives\", \"Label\": \"vpp/interface/mem01/rx-modes\", \"ExpectedKey\": \"vpp/interface/mem01/rx-modes\", \"MatchingKeys\": [ \"vpp/interface/mem01/rx-modes\" ] }, { \"Relation\": \"derives\", \"Label\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"ExpectedKey\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"MatchingKeys\": [ \"vpp/interface/mem01/vrf/0/ip-version/v4\" ] } ], \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769826989Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/route/vrf/0/dst/::/0/gw/::\", \"Label\": \"config/vpp/v2/route/vrf/0/dst/::/0/gw/::\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": 
\"type:DROP dstnetwork:\\\"::/0\\\" nexthop_addr:\\\"::\\\" weight:1 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-route\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T19:10:44.979238506Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/mem01/has-IP-address\", \"Label\": \"vpp/interface/mem01/has-IP-address\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-7\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface-has-address\", \"derived\": \"config/vpp/v2/interfaces/mem01\" }, \"MetadataFields\": null, \"Targets\": [ { \"Relation\": \"depends-on\", \"Label\": \"interface-has-IP\", \"ExpectedKey\": \"vpp/interface/mem01/address/*\", \"MatchingKeys\": [ \"vpp/interface/mem01/address/static/100.100.100.102/24\" ] } ], \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769827446Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/route/vrf/0/dst/fe80::/10/gw/::\", \"Label\": \"config/vpp/v2/route/vrf/0/dst/fe80::/10/gw/::\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"dstnetwork:\\\"fe80::/10\\\" nexthop_addr:\\\"::\\\" weight:1 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-route\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769867279Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/route/vrf/0/dst/240.0.0.0/4/gw/0.0.0.0\", \"Label\": \"config/vpp/v2/route/vrf/0/dst/240.0.0.0/4/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"240.0.0.0/4\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-route\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T19:10:44.979234834Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"Label\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-7\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface-address\", \"derived\": \"config/vpp/v2/interfaces/mem01\" }, \"MetadataFields\": null, \"Targets\": [ { \"Relation\": \"depends-on\", \"Label\": \"interface-assigned-to-vrf-table\", \"ExpectedKey\": \"vpp/interface/mem01/vrf/*\", \"MatchingKeys\": [ \"vpp/interface/mem01/vrf/0/ip-version/v4\" ] } ], \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.76982628Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/route/vrf/0/dst/224.0.0.0/4/gw/0.0.0.0\", \"Label\": \"config/vpp/v2/route/vrf/0/dst/224.0.0.0/4/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"224.0.0.0/4\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-route\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T17:05:48.166546244Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/loop1/has-IP-address\", \"Label\": 
\"vpp/interface/loop1/has-IP-address\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-1\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface-has-address\", \"derived\": \"config/vpp/v2/interfaces/loop1\" }, \"MetadataFields\": null, \"Targets\": [ { \"Relation\": \"depends-on\", \"Label\": \"interface-has-IP\", \"ExpectedKey\": \"vpp/interface/loop1/address/*\", \"MatchingKeys\": [ \"vpp/interface/loop1/address/static/192.168.1.1/24\" ] } ], \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769871655Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"linux/interface/host-name/lo\", \"Label\": \"linux/interface/host-name/lo\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"linux-interface-watcher\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769877741Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/ipfix/v2/ipfix\", \"Label\": \"\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.ipfix.IPFIX\", \"ProtoMsgData\": \"collector:<address:\\\"0.0.0.0\\\" > sourceaddress:\\\"0.0.0.0\\\" vrfid:4294967295 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-ipfix\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769878645Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/nat/v2/nat44-global\", \"Label\": \"config/vpp/nat/v2/nat44-global\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.nat.Nat44Global\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-nat44-global\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\":" }, { "data": "\"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/vrf-table/id/0/protocol/IPV6\", \"Label\": \"id/0/protocol/IPV6\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.VrfTable\", \"ProtoMsgData\": \"protocol:IPV6 label:\\\"ipv6-VRF:0\\\" \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-vrf-table\" }, \"MetadataFields\": { \"index\": [ \"0\" ] }, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769827916Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/vrf-table/id/0/protocol/IPV4\", \"Label\": \"id/0/protocol/IPV4\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.VrfTable\", \"ProtoMsgData\": \"label:\\\"ipv4-VRF:0\\\" \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-vrf-table\" }, \"MetadataFields\": { \"index\": [ \"0\" ] }, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769866017Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/proxyarp-global\", \"Label\": \"config/vpp/v2/proxyarp-global\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.ProxyARP\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-proxy-arp\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T19:10:44.979177611Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"Label\": 
\"vpp/interface/mem01/vrf/0/ip-version/v4\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-7\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface-vrf\", \"derived\": \"config/vpp/v2/interfaces/mem01\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T17:05:48.166544965Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"Label\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-1\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface-address\", \"derived\": \"config/vpp/v2/interfaces/loop1\" }, \"MetadataFields\": null, \"Targets\": [ { \"Relation\": \"depends-on\", \"Label\": \"interface-assigned-to-vrf-table\", \"ExpectedKey\": \"vpp/interface/loop1/vrf/*\", \"MatchingKeys\": [ \"vpp/interface/loop1/vrf/0/ip-version/v4\" ] } ], \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T19:10:44.980286279Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/mem01/link-state/DOWN\", \"Label\": \"vpp/interface/mem01/link-state/DOWN\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-8\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-interface-link-state\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769823066Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/interfaces/UNTAGGED-local0\", \"Label\": \"UNTAGGED-local0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"UNTAGGED-local0\\\" type:SOFTWARELOOPBACK physaddress:\\\"00:00:00:00:00:00\\\" \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-interface\" }, \"MetadataFields\": { \"index\": [ \"0\" ] }, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.76987092Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"linux/interface/host-name/eth0\", \"Label\": \"linux/interface/host-name/eth0\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"linux-interface-watcher\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T17:05:48.166536476Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/interfaces/loop1\", \"Label\": \"loop1\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" type:SOFTWARELOOPBACK enabled:true ipaddresses:\\\"192.168.1.1/24\\\" \" }, \"Flags\": { \"last-update\": \"TXN-1\", \"value-state\": \"CONFIGURED\", \"descriptor\": \"vpp-interface\" }, \"MetadataFields\": { \"index\": [ \"1\" ], \"ip_addresses\": [ \"192.168.1.1/24\" ] }, \"Targets\": [ { \"Relation\": \"derives\", \"Label\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"ExpectedKey\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"MatchingKeys\": [ \"vpp/interface/loop1/address/static/192.168.1.1/24\" ] }, { \"Relation\": \"derives\", \"Label\": \"vpp/interface/loop1/has-IP-address\", \"ExpectedKey\": \"vpp/interface/loop1/has-IP-address\", \"MatchingKeys\": [ 
\"vpp/interface/loop1/has-IP-address\" ] }, { \"Relation\": \"derives\", \"Label\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"ExpectedKey\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"MatchingKeys\": [ \"vpp/interface/loop1/vrf/0/ip-version/v4\" ] } ], \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.76987227Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/0/gw/0.0.0.0\", \"Label\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/0/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"0.0.0.0/0\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-route\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T16:50:52.769881236Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"config/vpp/v2/route/vrf/0/dst/255.255.255.255/32/gw/0.0.0.0\", \"Label\": \"config/vpp/v2/route/vrf/0/dst/255.255.255.255/32/gw/0.0.0.0\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.l3.Route\", \"ProtoMsgData\": \"type:DROP dstnetwork:\\\"255.255.255.255/32\\\" nexthop_addr:\\\"0.0.0.0\\\" weight:1 \" }, \"Flags\": { \"last-update\": \"TXN-0\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-route\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T17:05:48.167112121Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/loop1/link-state/UP\", \"Label\": \"vpp/interface/loop1/link-state/UP\", \"Value\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"Flags\": { \"last-update\": \"TXN-2\", \"value-state\": \"OBTAINED\", \"descriptor\": \"vpp-interface-link-state\" }, \"MetadataFields\": null, \"Targets\": null, \"TargetUpdateOnly\": false }, { \"Since\": \"2020-05-28T19:10:44.979173377Z\", \"Until\": \"0001-01-01T00:00:00Z\", \"Key\": \"vpp/interface/mem01/rx-modes\", \"Label\": \"vpp/interface/mem01/rx-modes\", \"Value\": { \"ProtoMsgName\": \"ligato.vpp.interfaces.Interface\", \"ProtoMsgData\": \"name:\\\"mem01\\\" type:MEMIF rx_modes:<mode:ADAPTIVE > \" }, \"Flags\": { \"last-update\": \"TXN-7\", \"value-state\": \"PENDING\", \"unavailable\": \"\", \"descriptor\": \"vpp-interface-rx-mode\", \"derived\": \"config/vpp/v2/interfaces/mem01\" }, \"MetadataFields\": null, \"Targets\": [ { \"Relation\": \"depends-on\", \"Label\": \"interface-link-is-UP\", \"ExpectedKey\": \"vpp/interface/mem01/link-state/UP\", \"MatchingKeys\": [] } ], \"TargetUpdateOnly\": false } ]``` Description: GET value state by descriptor, by key, or" }, { "data": "The parameters are: To GET all value states, use: ``` curl -X GET http://localhost:9191/scheduler/status``` Sample response: ``` [ { \"value\": { \"key\": \"config/vpp/ipfix/v2/ipfix\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/nat/v2/nat44-global\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/interfaces/UNTAGGED-local0\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/interfaces/loop1\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, \"derived_values\": [ { \"key\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, { \"key\": \"vpp/interface/loop1/has-IP-address\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, { \"key\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", 
\"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" } ] }, { \"value\": { \"key\": \"config/vpp/v2/interfaces/mem01\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, \"derived_values\": [ { \"key\": \"vpp/interface/mem01/address/static/100.100.100.102/24\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, { \"key\": \"vpp/interface/mem01/has-IP-address\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, { \"key\": \"vpp/interface/mem01/rx-modes\", \"state\": \"PENDING\", \"lastOperation\": \"CREATE\", \"details\": [ \"interface-link-is-UP\" ] }, { \"key\": \"vpp/interface/mem01/vrf/0/ip-version/v4\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" } ] }, { \"value\": { \"key\": \"config/vpp/v2/proxyarp-global\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/0/gw/0.0.0.0\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/route/vrf/0/dst/0.0.0.0/32/gw/0.0.0.0\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/route/vrf/0/dst/224.0.0.0/4/gw/0.0.0.0\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/route/vrf/0/dst/240.0.0.0/4/gw/0.0.0.0\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/route/vrf/0/dst/255.255.255.255/32/gw/0.0.0.0\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/route/vrf/0/dst/::/0/gw/::\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/route/vrf/0/dst/fe80::/10/gw/::\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/vrf-table/id/0/protocol/IPV4\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"config/vpp/v2/vrf-table/id/0/protocol/IPV6\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"linux/interface/host-name/eth0\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"linux/interface/host-name/lo\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"vpp/interface/UNTAGGED-local0/link-state/DOWN\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"vpp/interface/loop1/link-state/UP\", \"state\": \"OBTAINED\" } }, { \"value\": { \"key\": \"vpp/interface/mem01/link-state/DOWN\", \"state\": \"OBTAINED\" } } ]``` To GET the value state for the config/vpp/v2/interfaces/loop1 key, use: ``` curl -X GET http://localhost:9191/scheduler/status?key=config/vpp/v2/interfaces/loop1``` Sample response: ``` { \"value\": { \"key\": \"config/vpp/v2/interfaces/loop1\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, \"derived_values\": [ { \"key\": \"vpp/interface/loop1/address/static/192.168.1.1/24\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, { \"key\": \"vpp/interface/loop1/has-IP-address\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" }, { \"key\": \"vpp/interface/loop1/vrf/0/ip-version/v4\", \"state\": \"CONFIGURED\", \"lastOperation\": \"CREATE\" } ] }``` Description: GET total and per-value counts by value flag. 
The following parameters are used to specify a value flag: To GET the flag-stats by descriptor flag, use: ``` curl -X GET http://localhost:9191/scheduler/flag-stats?flag=descriptor``` Sample response: ``` { \"TotalCount\": 31, \"PerValueCount\": { \"linux-interface-watcher\": 2, \"vpp-interface\": 7, \"vpp-interface-address\": 2, \"vpp-interface-has-address\": 2, \"vpp-interface-link-state\": 3, \"vpp-interface-rx-mode\": 1, \"vpp-interface-vrf\": 2, \"vpp-ipfix\": 1, \"vpp-nat44-global\": 1, \"vpp-proxy-arp\": 1, \"vpp-route\": 7, \"vpp-vrf-table\": 2 } }``` To GET the flag-stats by last-update flag, use: ``` curl -X GET http://localhost:9191/scheduler/flag-stats?flag=last-update``` Sample response: ``` { \"TotalCount\": 31, \"PerValueCount\": { \"TXN-0\": 16, \"TXN-1\": 4, \"TXN-2\": 1, \"TXN-3\": 1, \"TXN-4\": 1, \"TXN-5\": 1, \"TXN-6\": 1, \"TXN-7\": 5, \"TXN-8\": 1 } }``` To GET the flag-stats by value-state flag, use: ``` curl -X GET http://localhost:9191/scheduler/flag-stats?flag=value-state``` Sample response: ``` { \"TotalCount\": 31, \"PerValueCount\": { \"CONFIGURED\": 8, \"FAILED\": 1, \"OBTAINED\": 18, \"PENDING\": 1, \"RETRYING\": 3 } }``` To GET the flag-stats by derived flag, use: ``` curl -X GET http://localhost:9191/scheduler/flag-stats?flag=derived``` To GET the flag-stats by unavailable flag, use: ``` curl -X GET http://localhost:9191/scheduler/flag-stats?flag=unavailable``` To GET the flag-stats by error flag, use: ``` curl -X GET http://localhost:9191/scheduler/flag-stats?flag=error``` Description: Triggers a downstream resync ``` curl -X POST http://localhost:9191/scheduler/downstream-resync``` Sample response: ``` { \"Start\": \"2020-05-29T22:22:32.79908837Z\", \"Stop\": \"2020-05-29T22:22:32.829282034Z\", \"SeqNum\": 9, \"TxnType\": \"NBTransaction\", \"ResyncType\": \"DownstreamResync\", \"Executed\": [ { \"Operation\": \"DELETE\", \"Key\": \"vpp/spd/1/interface/tap1\", \"NewState\": \"REMOVED\", \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\":" }, { "data": "\"ProtoMsgData\": \"name:\\\"tap1\\\" \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRecreate\": true }, { \"Operation\": \"DELETE\", \"Key\": \"vpp/spd/1/sa/10\", \"NewState\": \"REMOVED\", \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.ipsec.SecurityPolicyDatabase.PolicyEntry\", \"ProtoMsgData\": \"saindex:10 priority:10 isoutbound:true remoteaddrstart:\\\"10.0.0.1\\\" remoteaddrstop:\\\"10.0.0.1\\\" localaddrstart:\\\"10.0.0.2\\\" localaddrstop:\\\"10.0.0.2\\\" action:PROTECT \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRecreate\": true }, { \"Operation\": \"DELETE\", \"Key\": \"vpp/spd/1/sa/20\", \"NewState\": \"REMOVED\", \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.ipsec.SecurityPolicyDatabase.PolicyEntry\", \"ProtoMsgData\": \"saindex:20 priority:10 remoteaddrstart:\\\"10.0.0.1\\\" remoteaddrstop:\\\"10.0.0.1\\\" localaddrstart:\\\"10.0.0.2\\\" localaddr_stop:\\\"10.0.0.2\\\" action:PROTECT \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRecreate\": true }, { \"Operation\": \"DELETE\", \"Key\": \"config/vpp/ipsec/v2/spd/1\", \"NewState\": \"REMOVED\", \"PrevState\": \"CONFIGURED\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.ipsec.SecurityPolicyDatabase\", \"ProtoMsgData\": \"index:1 \" }, \"IsRecreate\": true }, { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/ipsec/v2/spd/1\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.ipsec.SecurityPolicyDatabase\", \"ProtoMsgData\": \"index:1 
interfaces:<name:\\\"tap1\\\" > policyentries:<saindex:20 priority:10 remoteaddrstart:\\\"10.0.0.1\\\" remoteaddrstop:\\\"10.0.0.1\\\" localaddrstart:\\\"10.0.0.2\\\" localaddrstop:\\\"10.0.0.2\\\" action:PROTECT > policyentries:<saindex:10 priority:10 isoutbound:true remoteaddrstart:\\\"10.0.0.1\\\" remoteaddrstop:\\\"10.0.0.1\\\" localaddrstart:\\\"10.0.0.2\\\" localaddr_stop:\\\"10.0.0.2\\\" action:PROTECT > \" }, \"PrevState\": \"REMOVED\", \"IsRecreate\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/spd/1/interface/tap1\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.ipsec.SecurityPolicyDatabase.Interface\", \"ProtoMsgData\": \"name:\\\"tap1\\\" \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRecreate\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/spd/1/sa/10\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.ipsec.SecurityPolicyDatabase.PolicyEntry\", \"ProtoMsgData\": \"saindex:10 priority:10 isoutbound:true remoteaddrstart:\\\"10.0.0.1\\\" remoteaddrstop:\\\"10.0.0.1\\\" localaddrstart:\\\"10.0.0.2\\\" localaddrstop:\\\"10.0.0.2\\\" action:PROTECT \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRecreate\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/spd/1/sa/20\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.ipsec.SecurityPolicyDatabase.PolicyEntry\", \"ProtoMsgData\": \"saindex:20 priority:10 remoteaddrstart:\\\"10.0.0.1\\\" remoteaddrstop:\\\"10.0.0.1\\\" localaddrstart:\\\"10.0.0.2\\\" localaddr_stop:\\\"10.0.0.2\\\" action:PROTECT \" }, \"NOOP\": true, \"IsDerived\": true, \"IsRecreate\": true }, { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/abfs/v2/abf/1\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.abf.ABF\", \"ProtoMsgData\": \"index:1 aclname:\\\"aclip1\\\" attachedinterfaces:<inputinterface:\\\"tap1\\\" priority:40 > attachedinterfaces:<inputinterface:\\\"memif1\\\" priority:60 > forwardingpaths:<nexthopip:\\\"10.0.0.10\\\" interface_name:\\\"loop1\\\" weight:20 preference:25 > \" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.abf.ABF\", \"ProtoMsgData\": \"index:1 aclname:\\\"aclip1\\\" attachedinterfaces:<inputinterface:\\\"tap1\\\" priority:40 > attachedinterfaces:<inputinterface:\\\"memif1\\\" priority:60 > forwardingpaths:<nexthopip:\\\"10.0.0.10\\\" interface_name:\\\"loop1\\\" weight:20 preference:25 > \" }, \"NOOP\": true }, { \"Operation\": \"CREATE\", \"Key\": \"config/vpp/l2/v2/xconnect/if1\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.XConnectPair\", \"ProtoMsgData\": \"receiveinterface:\\\"if1\\\" transmitinterface:\\\"if2\\\" \" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.XConnectPair\", \"ProtoMsgData\": \"receiveinterface:\\\"if1\\\" transmitinterface:\\\"if2\\\" \" }, \"NOOP\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/acl/acl1/interface/egress/tap1\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"NOOP\": true, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/acl/acl1/interface/egress/tap2\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", 
\"ProtoMsgData\": \"\" }, \"NOOP\": true, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/acl/acl1/interface/ingress/tap3\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"NOOP\": true, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/acl/acl1/interface/ingress/tap4\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"google.protobuf.Empty\", \"ProtoMsgData\": \"\" }, \"NOOP\": true, \"IsDerived\": true }, { \"Operation\": \"UPDATE\", \"Key\": \"config/vpp/l2/v2/bridge-domain/bd1\", \"NewState\": \"CONFIGURED\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain\", \"ProtoMsgData\": \"name:\\\"bd1\\\" flood:true unknownunicastflood:true forward:true learn:true arptermination:true interfaces:<name:\\\"if1\\\" bridgedvirtualinterface:true > interfaces:<name:\\\"if2\\\" > interfaces:<name:\\\"if2\\\" > arpterminationtable:<ipaddress:\\\"192.168.10.10\\\" physaddress:\\\"a7:65:f1:b5:dc:f6\\\" > arpterminationtable:<ipaddress:\\\"10.10.0.1\\\" phys_address:\\\"59:6C:45:59:8E:BC\\\" > \" }, \"PrevState\": \"CONFIGURED\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain\", \"ProtoMsgData\": \"name:\\\"bd1\\\" flood:true unknownunicastflood:true forward:true learn:true arptermination:true arpterminationtable:<ipaddress:\\\"192.168.10.10\\\" physaddress:\\\"a7:65:9d:c8:c7:7f\\\" > arpterminationtable:<ipaddress:\\\"10.10.0.1\\\" phys_address:\\\"59:6c:9d:c8:c7:7f\\\" > \" } }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/bd/bd1/interface/if1\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain.Interface\", \"ProtoMsgData\": \"name:\\\"if1\\\" bridgedvirtualinterface:true \" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain.Interface\", \"ProtoMsgData\": \"name:\\\"if1\\\" bridgedvirtualinterface:true \" }, \"NOOP\": true, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/bd/bd1/interface/if2\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain.Interface\", \"ProtoMsgData\": \"name:\\\"if2\\\" \" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain.Interface\", \"ProtoMsgData\": \"name:\\\"if2\\\" \" }, \"NOOP\": true, \"IsDerived\": true }, { \"Operation\": \"CREATE\", \"Key\": \"vpp/bd/bd2/interface/loop1\", \"NewState\": \"PENDING\", \"NewValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" \" }, \"PrevState\": \"PENDING\", \"PrevValue\": { \"ProtoMsgName\": \"ligato.vpp.l2.BridgeDomain.Interface\", \"ProtoMsgData\": \"name:\\\"loop1\\\" \" }, \"NOOP\": true, \"IsDerived\": true } ] }```" } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "kube-vip", "subcategory": "Cloud Native Network" }
[ { "data": "kube-vip is an open-source project that aims to simplify providing load balancing services for Kubernetes clusters. The original purpose of kube-vip was to simplify the building of highly-available (HA) Kubernetes clusters, which at the time involved a few components and configurations that all needed to be managed. This was blogged about in detail by thebsdbox here. Since the project has evolved, it can now use those same technologies to provide load balancing capabilities for Kubernetes Service resources of type LoadBalancer. 2024 The Linux Foundation. All Rights Reserved." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Network Service Mesh", "subcategory": "Cloud Native Network" }
[ { "data": "NSM Concepts for Enterprise Users Table of contents How do you enable workloads collaborating together to produce an App communicate independent of where those workloads are running? Historically, workloads have been run in some sort of Runtime Domain: A Runtime Domain is a system on which workloads are run. Its fundamentally a compute domain. Each of those Runtime Domains has brought along exactly one Connectivity Domain: Each workload has a single option of what connectivity domain to be connected to, and only workloads in a given runtime domain could be part of its connectivity domain. In short: Connectivity Domains are Strongly Coupled to Runtime Domains. A central tenant of Cloud Native is Loose Coupling. In a Loosely Coupled system, the ability for each workload to receive service from alternative providers is preserved. What Runtime Domain a workload is running in is a non-sequitur to its communications needs. Workloads that are part of the same App need Connectivity between each other no matter where they are running. One example of this problem is connectivity between workloads running in multiple K8s Clusters in a multi-cloud/hybrid cloud environment: How do workloads communicate independently of where they are running? Its not just a problem of cluster to cluster communication. In the diagram below: Network Service Mesh allows individual workloads, where ever they are running to connect securely to Network Service(s) that are independent of where they run: The Kubernetes CNI of your choice provides intra-cluster networking continues for every Pod. Network Service Mesh does not require you to replace your CNI, nor does it interfere in any way with what you are accustomed to getting from your CNI for intra-cluster networking. NSM is architecturally independent of the Runtime Domain. While it supports K8s, it is not limited to it: Network Service Mesh is complementary to traditional Service Meshes like Linkerd, Istio, Kuma, and" }, { "data": "Traditional Service Meshes predominantly focus on L7 payloads like HTTPS. If a workload sends an HTTPS message, they only guarantee that the HTTPS message itself gets to the other side and the HTTPS response gets back to the workload. In the intervening process the ethernet headers, IP headers, and even the TCP connection may have been stripped away and replaced. The payload being transported across the Mesh truly is the L7 HTTPS message. Network Service Mesh provides a similar service for transporting payloads that are IP Packets. This can be particularly effective for certain kinds of legacy workloads, like DBs, that are using bespoke protocols for replication over IP: A Traditional Service Mesh itself can be viewed as a Network Service. Network Service Mesh can be used to connect a workload to multiple Service Mesh Network Services that are not associated with the local cluster running the workload. Because Network Service Mesh decouples Network Services from the underlying Runtime Domain, it is possible to workloads from multiple companies connected to a single shared Network Service Mesh to allow collaboration between specific workloads from those companies without having to expose the entire Runtime domain in which those workloads run: The recent White House Executive Order on Cyber Security says of Zero Trust: In essence, a Zero Trust Architecture allows users full access but only to the bare minimum they need to perform their jobs. If a device is compromised, zero trust can ensure that the damage is contained. 
This is the heart and soul of Network Service Mesh. Workloads can be connected to small, highly granular Network Services that only involve their immediate collaborators for a particular purpose (like DB replication). Because Network Service Mesh authentication uses the same SPIFFE ID that the workloads themselves use to communicate at L7, the auditability of the system based on a cryptographic identity extends from L3 to L7." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Open vSwitch", "subcategory": "Cloud Native Network" }
[ { "data": "Mailing Lists Reporting Bugs Patchwork Release Process Security Process Authors Submitting Patches Backporting patches Inclusive Language Coding Style Windows Datapath Coding Style The Linux Foundation Open vSwitch Project Charter Committers Expectations for Developers with Open vSwitch Repo Access OVS Committer Grant/Revocation Policy Emeritus Status for OVS Committers Getting Started Tutorials How-to Guides Deep Dive Reference Guide Open vSwitch Internals Open vSwitch Documentation FAQ Looking for specific information? Full Table of Contents Index Reach out to us here. 2016-2023 A Linux Foundation Collaborative Project. All Rights Reserved. Linux Foundation is a registered trademark of The Linux Foundation. Linux is a registered trademark of Linus Torvalds. Open vSwitch and OvS are trademarks of The Linux Foundation. Please see our privacy policy and terms of use." } ]
{ "category": "Runtime", "file_name": "dist-docs.md", "project_name": "Open vSwitch", "subcategory": "Cloud Native Network" }
[ { "data": "| 0 | 1 | |:|:-| | ovs-actions(7) | PDF, HTML, plain text | | ovs-appctl(8) | PDF, HTML, plain text | | ovs-bugtool(8) | PDF, HTML, plain text | | ovs-ctl(8) | PDF, HTML, plain text | | ovsdb(5) | PDF, HTML, plain text | | ovsdb(7) | PDF, HTML, plain text | | ovsdb-client(1) | PDF, HTML, plain text | | ovsdb.local-config(5) | PDF, HTML, plain text | | ovsdb-server(1) | PDF, HTML, plain text | | ovsdb-server(5) | PDF, HTML, plain text | | ovsdb-server(7) | PDF, HTML, plain text | | ovsdb-tool(1) | PDF, HTML, plain text | | ovs-dpctl(8) | PDF, HTML, plain text | | ovs-dpctl-top(8) | PDF, HTML, plain text | | ovs-fields(7) | PDF, HTML, plain text | | ovs-kmod-ctl(8) | PDF, HTML, plain text | | ovs-l3ping(8) | PDF, HTML, plain text | | ovs-ofctl(8) | PDF, HTML, plain text | | ovs-parse-backtrace(8) | PDF, HTML, plain text | | ovs-pcap(1) | PDF, HTML, plain text | | ovs-pki(8) | PDF, HTML, plain text | | ovs-tcpdump(8) | PDF, HTML, plain text | | ovs-tcpundump(1) | PDF, HTML, plain text | | ovs-test(8) | PDF, HTML, plain text | | ovs-testcontroller(8) | PDF, HTML, plain text | | ovs-vlan-test(8) | PDF, HTML, plain text | | ovs-vsctl(8) | PDF, HTML, plain text | | ovs-vswitchd(8) | PDF, HTML, plain text | | ovs-vswitchd.conf.db(5) | PDF, HTML, plain text | | vtep(5) | PDF, HTML, plain text | | vtep-ctl(8) | PDF, HTML, plain text |" } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "Alluxio v2.9.4 (stable) Alluxio is worlds first open source data orchestration technology for analytics and AI for the cloud. It bridges the gap between data driven applications and storage systems, bringing data from the storage tier closer to the data driven applications and makes it easily accessible enabling applications to connect to numerous storage systems through a common interface. Alluxios memory-first tiered architecture enables data access at speeds orders of magnitude faster than existing solutions. In the data ecosystem, Alluxio lies between data driven applications, such as Apache Spark, Presto, Tensorflow, Apache HBase, Apache Hive, or Apache Flink, and various persistent storage systems, such as Amazon S3, Google Cloud Storage, OpenStack Swift, HDFS, IBM Cleversafe, EMC ECS, Ceph, NFS, Minio, and Alibaba OSS. Alluxio unifies the data stored in these different storage systems, presenting unified client APIs and a global namespace to its upper layer data driven applications. The Alluxio project originated from the UC Berkeley AMPLab (see papers) as the data access layer of the Berkeley Data Analytics Stack (BDAS). It is open source under Apache License 2.0. Alluxio is one of the fastest growing open source projects that has attracted more than 1000 contributors from over 300 institutions including Alibaba, Alluxio, Baidu, CMU, Google, IBM, Intel, NJU, Red Hat, Tencent, UC Berkeley, and Yahoo. Today, Alluxio is deployed in production by hundreds of organizations with the largest deployment exceeding 1,500 nodes. Alluxio helps overcome the obstacles in extracting insight from data by simplifying how applications access their data, regardless of format or location. The benefits of Alluxio include: Memory-Speed I/O: Alluxio can be used as a distributed shared caching service so compute applications talking to Alluxio can transparently cache frequently accessed data, especially from remote locations, to provide in-memory I/O throughput. In addition, Alluxios tiered storage which can leverage both memory and disk (SSD/HDD) makes elastically scaling data-driven applications cost effective. Simplified Cloud and Object Storage Adoption: Cloud and object storage systems use different semantics that have performance implications compared to traditional file systems. Common file system operations such as directory listing and renaming often incur significant performance overhead. When accessing data in cloud storage, applications have no node-level locality or cross-application caching. Deploying Alluxio with cloud or object storage mitigates these problems by serving data from Alluxio instead of the underlying cloud or object storage. Simplified Data Management: Alluxio provides a single point of access to multiple data" }, { "data": "In addition to connecting data sources of different types, Alluxio also enables users to simultaneously connect to different versions of the same storage system, such as multiple versions of HDFS, without complex system configuration and management. Easy Application Deployment: Alluxio manages communication between applications and file or object storages, translating data access requests from applications into underlying storage interfaces. Alluxio is Hadoop compatible. Existing data analytics applications, such as Spark and MapReduce programs, can run on top of Alluxio without any code changes. Alluxio brings three key areas of innovation together to provide a unique set of capabilities. 
Global Namespace: Alluxio serves as a single point of access to multiple independent storage systems regardless of physical location. This provides a unified view of all data sources and a standard interface for applications. See Namespace Management for more details. Intelligent Multi-tiering Caching: Alluxio clusters act as a read and write cache for data in connected storage systems. Configurable policies automatically optimize data placement for performance and reliability across both memory and disk (SSD/HDD). Caching is transparent to the user and uses buffering to maintain consistency with persistent storage. See Alluxio Storage Management for more details. Server-Side API Translation: Alluxio supports industry-standard APIs, such as the HDFS API, S3 API, FUSE API, and REST API. It transparently converts from a standard client-side interface to any storage interface. Alluxio manages communication between applications and file or object storage, eliminating the need for complex system configuration and management. File data can look like object data and vice versa. To understand more details on Alluxio internals, please read Alluxio architecture and data flow. To quickly get Alluxio up and running, take a look at our Getting Started page, which explains how to deploy Alluxio and run examples in a local environment. Also try our getting started tutorial for Presto & Alluxio. Released versions of Alluxio are available from the Project Downloads Page. Each release comes with prebuilt binaries compatible with various Hadoop versions. The Building From Master Branch documentation explains how to build the project from source code. Questions can be directed to our User Mailing List or our Community Slack Channel." } ]
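The Hadoop compatibility described above means a framework such as Spark can typically read through Alluxio just by pointing at an alluxio:// URI. The sketch below is illustrative only and is not taken from the Alluxio documentation: the master address alluxio://localhost:19998, the data path /data/sample.txt, and the client-jar location are all assumptions to adapt for a real deployment.

```
# Minimal PySpark sketch (assumptions: an Alluxio master reachable at
# alluxio://localhost:19998 and the Alluxio client jar available locally;
# both the jar path and the data path below are hypothetical).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("alluxio-read-sketch")
    # Put the Alluxio client jar on the driver and executor classpaths so the
    # alluxio:// filesystem scheme can be resolved (jar path is an assumption).
    .config("spark.driver.extraClassPath", "/opt/alluxio/client/alluxio-client.jar")
    .config("spark.executor.extraClassPath", "/opt/alluxio/client/alluxio-client.jar")
    .getOrCreate()
)

# Because Alluxio exposes a Hadoop-compatible filesystem, the only change at the
# application level is the URI scheme: alluxio:// instead of hdfs:// or s3a://.
df = spark.read.text("alluxio://localhost:19998/data/sample.txt")
df.show(5, truncate=False)
```

If the same job previously read an hdfs:// or s3a:// path, swapping in the alluxio:// URI is usually the only source-level change, and even that disappears if Alluxio is configured as the cluster's default filesystem, which is the sense in which the overview says applications run without code changes.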
{ "category": "Runtime", "file_name": "Overview.html.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "Alluxio v2.9.4 (stable) Alluxio is worlds first open source data orchestration technology for analytics and AI for the cloud. It bridges the gap between data driven applications and storage systems, bringing data from the storage tier closer to the data driven applications and makes it easily accessible enabling applications to connect to numerous storage systems through a common interface. Alluxios memory-first tiered architecture enables data access at speeds orders of magnitude faster than existing solutions. In the data ecosystem, Alluxio lies between data driven applications, such as Apache Spark, Presto, Tensorflow, Apache HBase, Apache Hive, or Apache Flink, and various persistent storage systems, such as Amazon S3, Google Cloud Storage, OpenStack Swift, HDFS, IBM Cleversafe, EMC ECS, Ceph, NFS, Minio, and Alibaba OSS. Alluxio unifies the data stored in these different storage systems, presenting unified client APIs and a global namespace to its upper layer data driven applications. The Alluxio project originated from the UC Berkeley AMPLab (see papers) as the data access layer of the Berkeley Data Analytics Stack (BDAS). It is open source under Apache License 2.0. Alluxio is one of the fastest growing open source projects that has attracted more than 1000 contributors from over 300 institutions including Alibaba, Alluxio, Baidu, CMU, Google, IBM, Intel, NJU, Red Hat, Tencent, UC Berkeley, and Yahoo. Today, Alluxio is deployed in production by hundreds of organizations with the largest deployment exceeding 1,500 nodes. Alluxio helps overcome the obstacles in extracting insight from data by simplifying how applications access their data, regardless of format or location. The benefits of Alluxio include: Memory-Speed I/O: Alluxio can be used as a distributed shared caching service so compute applications talking to Alluxio can transparently cache frequently accessed data, especially from remote locations, to provide in-memory I/O throughput. In addition, Alluxios tiered storage which can leverage both memory and disk (SSD/HDD) makes elastically scaling data-driven applications cost effective. Simplified Cloud and Object Storage Adoption: Cloud and object storage systems use different semantics that have performance implications compared to traditional file systems. Common file system operations such as directory listing and renaming often incur significant performance overhead. When accessing data in cloud storage, applications have no node-level locality or cross-application caching. Deploying Alluxio with cloud or object storage mitigates these problems by serving data from Alluxio instead of the underlying cloud or object storage. Simplified Data Management: Alluxio provides a single point of access to multiple data" }, { "data": "In addition to connecting data sources of different types, Alluxio also enables users to simultaneously connect to different versions of the same storage system, such as multiple versions of HDFS, without complex system configuration and management. Easy Application Deployment: Alluxio manages communication between applications and file or object storages, translating data access requests from applications into underlying storage interfaces. Alluxio is Hadoop compatible. Existing data analytics applications, such as Spark and MapReduce programs, can run on top of Alluxio without any code changes. Alluxio brings three key areas of innovation together to provide a unique set of capabilities. 
Global Namespace: Alluxio serves as a single point of access to multiple independent storage systems regardless of physical location. This provides a unified view of all data sources and a standard interface for applications. See Namespace Management for more details. Intelligent Multi-tiering Caching: Alluxio clusters act as a read and write cache for data in connected storage systems. Configurable policies automatically optimize data placement for performance and reliability across both memory and disk (SSD/HDD). Caching is transparent to the user and uses buffering to maintain consistency with persistent storage. See Alluxio Storage Management for more details. Server-Side API Translation: Alluxio supports industry common APIs, such as HDFS API, S3 API, FUSE API, REST API. It transparently converts from a standard client-side interface to any storage interface. Alluxio manages communication between applications and file or object storage, eliminating the need for complex system configuration and management. File data can look like object data and vice versa. To understand more details on Alluxio internals, please read Alluxio architecture and data flow. To quickly get Alluxio up and running, take a look at our Getting Started page, which explains how to deploy Alluxio and run examples in a local environment. Also try our getting started tutorial for Presto & Alluxio via: Released versions of Alluxio are available from the Project Downloads Page. Each release comes with prebuilt binaries compatible with various Hadoop versions. Building From Master Branch Documentation explains how to build the project from source code. Questions can be directed to our User Mailing List or our Community Slack Channel. Downloads | User Guide | Developer Guide | Meetup Group | Issue Tracking | Community Slack Channel | User Mailing List | Videos | Github | Releases Copyright 2024 Alluxio, Inc. All rights reserved. Alluxio is a trademark of Alluxio, Inc. Terms of Service | PrivacyPolicy" } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Arrikto", "subcategory": "Cloud Native Storage" }
[ { "data": "Open Source (OSS) Kubeflow enables you to operationalize much of an ML workflow on top of Kubernetes. It comprises a number of ML components and services; SDKs and APIs; integrated development environments (IDEs); and libraries for data science. The Arrikto Enterprise Kubeflow (EKF) distribution introduces important additional features to address gaps in OSS Kubeflow and commonly expressed needs of MLOps engineers and data scientists. The easiest way to start with EKF is to follow one or more of the tutorials below! You can find Roks End-User License Agreement here. The third party legal notices accompanying Rok can be found here." } ]
{ "category": "Runtime", "file_name": "understanding-github-code-search-syntax.md", "project_name": "Container Storage Interface (CSI)", "subcategory": "Cloud Native Storage" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT operator." }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unit_tests/my_test.py and src/docs/unit_tests.md since they both contain unit_tests somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` To search for JavaScript files within a src directory, you can use: ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that * doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use **. For example: ``` path:/src/**/*.js ``` You can also use the ? glob character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` To search for paths containing the literal string file?, you can quote it: ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?.
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class name)." }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expression features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for.
For example, for the following query: ``` printf(\"hello world\\n\"); ``` code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for /tHiS/) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS." } ]
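Because all of these qualifiers and operators compose into a single query string, it can help to see one assembled programmatically. The following sketch is not from the GitHub documentation: the repository and search terms in the sample query are made up, and the q and type=code URL parameters are assumed to mirror the public github.com/search web UI.

```
# Illustrative sketch: URL-encode a combined code search query and build a link
# to the GitHub search web UI. The query below is hypothetical; verify the
# q/type=code parameters against the search page you actually use.
from urllib.parse import quote_plus

query = 'repo:github-linguist/linguist language:ruby NOT is:archived "sparse index"'
url = f"https://github.com/search?q={quote_plus(query)}&type=code"
print(url)
# Opening the printed URL in a browser should run the combined query.
```

Pasting the printed URL into a browser is often the quickest way to sanity-check operator precedence and quoting before wiring a query into anything else.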
{ "category": "Runtime", "file_name": "index.html.md", "project_name": "CloudCasa by Catalogic Software", "subcategory": "Cloud Native Storage" }
[ { "data": "Contents CloudCasa by Catalogic is a data protection, recovery, and data mobility solution for Kubernetes, cloud databases, and cloud native applications. As a Software-as-a-Service (SaaS) solution, CloudCasa eliminates the complexity of managing traditional backup infrastructure, while providing the same level of application consistent data protection that more traditional backup solutions provide for server-based applications. Further, it provides disaster recovery, security, workload migration, and replication features that traditional backup solutions dont. The CloudCasa users guide is logically divided into nine chapters. The Overview chapter provides introductory sections on the CloudCasa service and its major features and available service plans. The Dashboard, Clusters Tab, Databases Tab, and Configuration Tab chapters contain sections detailing the pages or sets of pages available under those tabs in the CloudCasa web console. These sections are used to provide context-sensitive help from within CloudCasa. The Quick Guides, Reference, Knowledge Base, and Release Notes chapters contain procedural documentation, technical reference documents, knowledge base articles, and all past release notes. We recommend taking a few minutes to familiarize yourself with the layout of the documentation. Contents For additional information about CloudCasa, see the Frequently Asked Questions (FAQ) page, or contact Catalogic Support for assistance. Catalogic is a registered trademark of Catalogic Software Inc. CloudCasa is a trademark of Catalogic Software Inc. All other company and product names used herein may be the trademarks of their respective companies." } ]
{ "category": "Runtime", "file_name": "github-terms-of-service.md", "project_name": "Container Storage Interface (CSI)", "subcategory": "Cloud Native Storage" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "docs.github.com.md", "project_name": "Container Storage Interface (CSI)", "subcategory": "Cloud Native Storage" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) was known to you before we disclose it to you; (c) is independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) is disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan: For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage: Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details. Invoicing: For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "expense. Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Runtime", "file_name": "article.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "CubeFS is a new generation cloud-native storage that supports access protocols such as S3, HDFS, and POSIX. It is widely applicable in various scenarios such as big data, AI/LLMs, container platforms, separation of storage and computing for databases and middleware, data sharing and protection, etc. Get Started Download Compatible with various access protocols such as S3, POSIX, HDFS, etc., and the access between protocols can be interoperable Support replicas and erasure coding engines, users can choose flexibly according to business scenarios Easy to build a PB or EB-scale distributed storage service, and each module can be expanded horizontally Supports multi-tenant management and provides fine-grained tenant isolation policies Supports multi-level caching, multiple high-performance replication protocols, and optimizes specific performance for small files Easy to use CubeFS in Kubernetes via CSI Driver CubeFS has developed a CSI plugin based on the Container Storage Interface (CSI) interface specification to support cloud storage in Kubernetes clusters. CubeFS is a cloud-native storage infrastructure that is widely used in a variety of scenarios, including big data storage, machine learning platforms, large-scale container platforms, as well as database and middleware storage and computing separation. It is also used for data sharing and protection. As data continues to grow, businesses face greater cost challenges. In order to alleviate the storage cost pressure caused by multi-copy mode, CubeFS has introduced an erasure code subsystem (also known as BlobStore), which is a highly reliable, highly available, low-cost, and EB-scale independent key-value storage system. With the growth of the BIGO machine learning platform's models and data, more demands are placed on the underlying file system, such as multi-tenancy, high concurrency, high availability across multiple datacenters, high performance, and cloud-native features. CubeFS effectively meets these requirements, and BIGO is attempting to use CubeFS to address these challenges. OPPO has been storing its large-scale data in HDFS clusters with a total storage capacity of EB-level. However, due to the rapidly growing business, the company faced several operational issues, including lack of machine resources, high operational costs, and high redundancy in storing cold and hot data. In order to solve these problems, OPPO implemented CubeFS. The article discusses OPPO's machine learning platform which supports over 100 AI businesses with tens of thousands of daily training tasks. In 2021, AI pioneers Oppo built a large-scale, end-to-end machine learning platform, leveraging CubeFS, to support AI training for more than 100 Oppo businesses running tens of thousands of training tasks every day for business fields including voice assistant, commercial advertisement, OPPO game, ctag multi-mode, NLP and more. 2024-04-22 By default, data in CubeFS is kept permanently, and in order to store data cost-effectively throughout its lifecycle, CubeFS supports an AWS S3-compatible object storage lifecycle management strategy. CubeFS currently supports lifecycle expiration deletion policies, which can effectively clean up expired data and release storage resources. This article will guide you to understand how to use CubeFS object storage lifecycle management policies. 
2024-01-09 CubeFS is happy to announce the completion of its third-party security audit. The audit was conducted by Ada Logics in collaboration with the CubeFS maintainers, OSTIF and the CNCF. 2023-12-22 The given text describes the CubeFS erasure coding services metadata management module (Blobstore/ClusterMgr), Raft algorithm practice, and daily operation recommendations. CubeFS is a Cloud Native Computing Foundation Incubating Project. For more details, please refer to the SIGMOD 2019 paper." } ]
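The lifecycle note above describes an AWS S3-compatible expiration policy. As a rough sketch of what applying such a rule could look like with a standard S3 SDK pointed at a CubeFS object gateway, consider the following; the endpoint URL, credentials, bucket name, and prefix are illustrative assumptions, not values taken from the article.

```python
import boto3

# Hypothetical CubeFS S3-compatible gateway endpoint and credentials (placeholders).
s3 = boto3.client(
    "s3",
    endpoint_url="http://cubefs-object-gateway.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Expire objects under the logs/ prefix after 30 days, mirroring the
# expiration-style lifecycle rule described in the post.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```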
{ "category": "Runtime", "file_name": "introduction.html.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Overview Quick Start Deployment User Guide Cli Guide Design Feature Test and Evaluation Ops Config Manage Monitor Security Ecology Dev Guide Manage API FAQ Community Version Overview Quick Start Deployment User Guide Cli Guide Design Feature Test and Evaluation Ops Config Manage Monitor Security Ecology Dev Guide Manage API FAQ Community Version CubeFS is a next-generation cloud-native storage product that is currently an incubating open-source project hosted by the Cloud Native Computing Foundationopen in new window (CNCF). It is compatible with various data access protocols such as S3, POSIX, and HDFS, and supports two storage engines - multiple replicas and erasure coding. It provides users with multiple features such as multi-tenancy, multi-AZ deployment, and cross-regional replication, and is widely used in scenarios such as big data, AI, container platforms, databases, middleware storage and computing separation, data sharing and data protection. Compatible with various access protocols such as S3, POSIX, and HDFS, and access between protocols is interoperable. POSIX compatible: Compatible with the POSIX interface, making application development extremely simple for upper-layer applications, just as convenient as using a local file system. In addition, CubeFS relaxed the consistency requirements of POSIX semantics during implementation to balance the performance of file and metadata operations. S3 compatible: Compatible with the AWS S3 object storage protocol, users can use the native Amazon S3 SDK to manage resources in CubeFS. HDFS compatible: Compatible with the Hadoop FileSystem interface protocol, users can use CubeFS to replace the Hadoop file system (HDFS) without affecting upper-layer business. Supporting two engines: multi-replicas and erasure coding, users can flexibly choose according to their business scenarios. Supporting multi-tenant management and providing fine-grained tenant isolation policies. It can easily build distributed storage services with PB or EB level scale, and each module can be horizontally scaled. CubeFS supports multi-level caching to optimize small file access and supports multiple high-performance replication protocols. Based on the CSI plugin, CubeFS can be quickly used on Kubernetes. As a cloud-native distributed storage platform, CubeFS provides multiple access protocols, so it has a wide range of use cases. Below are some typical cases: Compatible with the HDFS protocol, CubeFS provides a unified storage foundation for the Hadoop ecosystem (such as Spark and Hive), providing unlimited storage space and high-bandwidth data storage capabilities for computing engines. As a distributed parallel file system, CubeFS supports AI training, model storage and distribution, IO acceleration and other requirements. The container cluster can store the configuration files or initialization loading data of container images on CubeFS, and read them in real-time when batch loading containers. Multiple PODs can share persistent data through CubeFS, and quick fault switching can be performed in case of POD failure. Provides high-concurrency, low-latency cloud disk services for database applications such as MySQL, ElasticSearch, and ClickHouse, achieving complete separation of storage and computing. Provides high-reliability, low-cost object storage services for online businesses (such as advertising, click-streams, and search) or end-users' graphics, text, audio, and video content. 
Replace traditional local storage and NAS offline and help IT business move to the cloud. Architecture" } ]
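Since the introduction states that the native Amazon S3 SDK can be used to manage resources in CubeFS, a minimal sketch with boto3 might look like the following; the gateway endpoint, credentials, and bucket name are placeholder assumptions rather than documented defaults.

```python
import boto3

# Placeholder endpoint and credentials for a CubeFS S3-compatible gateway.
s3 = boto3.client(
    "s3",
    endpoint_url="http://cubefs-object-gateway.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Basic object operations through the S3-compatible interface.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from the S3 interface")
response = s3.list_objects_v2(Bucket="demo-bucket")
print([obj["Key"] for obj in response.get("Contents", [])])
```

Because access between protocols is interoperable, data written this way can also be reached through the POSIX or HDFS interfaces described above.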
{ "category": "Runtime", "file_name": "docs.gluster.org.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. GlusterFS is free and open source software and can utilize common off-the-shelf hardware. To learn more, please see the Gluster project home page. Get Started: Quick Start/Installation Guide Since Gluster can be used in different ways and for different tasks, it would be difficult to explain everything at once. We recommend that you follow the Quick Start Guide first. By utilizing a number of virtual machines, you will create a functional test setup to learn the basic concepts. You will then be much better equipped to read the more detailed Install Guide. Quick Start Guide - Start here if you are new to Gluster! Installation Guides describes the prerequisites and provides step-by-step instructions to install GlusterFS on various operating systems. Presentations related to Gluster from Conferences and summits. More Documentation Administration Guide - describes the configuration and management of GlusterFS. GlusterFS Developer Guide - describes how you can contribute to this open source project; built through the efforts of its dedicated, passionate community. Upgrade Guide - if you need to upgrade from an older version of GlusterFS. Release Notes - Glusterfs Release Notes provides high-level insight into the improvements and additions that have been implemented in various Glusterfs releases. GlusterFS Tools - Guides for GlusterFS tools. Troubleshooting Guide - Guide for troubleshooting. How to Contribute? The Gluster documentation has its home on GitHub, and the easiest way to contribute is to use the \"Edit on GitHub\" link on the top right corner of each page. If you already have a GitHub account, you can simply edit the document in your browser, use the preview tab, and submit your changes for review in a pull request. If you want to help more with Gluster documentation, please subscribe to the Gluster Users and Gluster Developers mailing lists, and share your ideas with the Gluster developer community." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "Managing a Cluster Setting Up Storage Setting Up Clients Volumes Configuring NFS-Ganesha Features Data Access With Other Interfaces GlusterFS Service Logs and Locations Monitoring Workload Securing GlusterFS Communication using SSL Puppet Gluster RDMA Transport GlusterFS iSCSI Linux Kernel Tuning Export and Netgroup Authentication Thin Arbiter volumes Trash for GlusterFS Split brain and ways to deal with it Arbiter volumes and quorum options Mandatory Locks GlusterFS coreutilities Events APIs Building QEMU With gfapi For Debian Based Systems Appendices" } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Google Persistent Disk", "subcategory": "Cloud Native Storage" }
[ { "data": "The Transcoder API allows you to convert video files and package them for optimized delivery to web, mobile and connected TVs. Learn more Transcoder API is a service covered by Google's obligations set forth in the Cloud Data Processing Addendum. Quickstart: Transcode a video with the Transcoder API Creating and managing jobs Creating and managing job templates REST API RPC API Client libraries Pricing Quotas and limits Release notes" } ]
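The entry above points at the Transcoder API quickstart and client libraries. As a non-authoritative sketch, creating a transcoding job from a built-in preset with the Python client library looks roughly like this; the project ID, location, and Cloud Storage URIs are placeholder assumptions.

```python
from google.cloud.video import transcoder_v1

# Placeholder project, location, and Cloud Storage URIs.
project_id = "my-project"
location = "us-central1"

client = transcoder_v1.TranscoderServiceClient()

# Build a job that reads a source video and writes packaged outputs,
# using a built-in preset rather than a custom configuration.
job = transcoder_v1.types.Job()
job.input_uri = "gs://my-input-bucket/example.mp4"
job.output_uri = "gs://my-output-bucket/outputs/"
job.template_id = "preset/web-hd"

response = client.create_job(
    parent=f"projects/{project_id}/locations/{location}", job=job
)
print("Created job:", response.name)
```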
{ "category": "Runtime", "file_name": "add-persistent-disk#source-disk.md", "project_name": "Google Persistent Disk", "subcategory": "Cloud Native Storage" }
[ { "data": "If your workloads need high performance, low latency temporary storage, consider using Local solid-state drive (Local SSD) disks when you create your virtual machine (VM). Local SSD disks are always-encrypted solid-state storage for Compute Engine VMs. Local SSD disks are ideal when you need storage for any of the following use cases: Local SSD disks offer superior I/O operations per second (IOPS), and very low latency compared to Persistent Disk and Google Cloud Hyperdisk. This is because Local SSD disks are physically attached to the server that hosts your VM. For this same reason, Local SSD disks can only provide temporary storage. Because Local SSD is suitable only for temporary storage, you must store data that is not temporary or ephemeral in nature on one of our durable storage options. Each Local SSD disk comes in a fixed size, and you can attach multiple Local SSD disks to a single VM when you create it. The number of Local SSD disks that you can attach to a VM depends on the VM's machine type. For more information, see Choose a valid number of Local SSD disks. If Local SSD disks don't meet your redundancy or flexibility requirements, you can use Local SSD disks in combination with other storage options. Local SSD performance depends on several factors, including the number of attached Local SSD disks, the selected disk interface (NVMe or SCSI), and the VM's machine type. The available performance increases as you attach more Local SSD disks to your VM. The following tables list the maximum IOPS and throughput for NVMe- and SCSI-attached Local SSD disks. The metrics are listed by the total capacity of Local SSD disks attached to the VM. | ('# of attachedLocal SSD disks', 'Unnamed: 0level1') | ('Total storage space (GiB)', 'Unnamed: 1level1') | ('Capacity per disk (GiB)', 'Unnamed: 2level1') | ('IOPS', 'Read') | ('IOPS', 'Write') | ('Throughput (MiBps)', 'Read') | ('Throughput (MiBps)', 'Write') | |:|:|-:|-:|--:|:|-:| | 1 | 375 | 375 | 170000 | 90000 | 660 | 350 | | 2 | 750 | 375 | 340000 | 180000 | 1320 | 700 | | 3 | 1125 | 375 | 510000 | 270000 | 1980 | 1050 | | 4 | 1500 | 375 | 680000 | 360000 | 2650 | 1400 | | 5 | 1875 | 375 | 680000 | 360000 | 2650 | 1400 | | 6 | 2250 | 375 | 680000 | 360000 | 2650 | 1400 | | 7 | 2625 | 375 | 680000 | 360000 | 2650 | 1400 | | 8 | 3000 | 375 | 680000 | 360000 | 2650 | 1400 | | 16 | 6000 | 375 | 1.6e+06 | 800000 | 6240 | 3120 | | 24 | 9000 | 375 | 2.4e+06 | 1.2e+06 | 9360 | 4680 | | 32 | 12000 | 375 | 3.2e+06 | 1.6e+06 | 12480 | 6240 | | Z3 machine series | nan | nan | nan | nan | nan | nan | | 12 | 36000 | 3000 | 6e+06 | 6e+06 | 36000 | 30000 |" }, { "data": "('# of combinedLocal SSD disks', 'Unnamed: 0level1') | ('Storage space (GiB)', 'Unnamed: 1level1') | ('IOPS', 'Read') | ('IOPS', 'Write') | ('Throughput (MiBps)', 'Read') | ('Throughput (MiBps)', 'Write') | |:|:|-:|--:|:|-:| | 1 | 375 | 100000 | 70000 | 390 | 270 | | 2 | 750 | 200000 | 140000 | 780 | 550 | | 3 | 1125 | 300000 | 210000 | 1170 | 820 | | 4 | 1500 | 400000 | 280000 | 1560 | 1090 | | 5 | 1875 | 400000 | 280000 | 1560 | 1090 | | 6 | 2250 | 400000 | 280000 | 1560 | 1090 | | 7 | 2625 | 400000 | 280000 | 1560 | 1090 | | 8 | 3000 | 400000 | 280000 | 1560 | 1090 | | 16 | 6000 | 900000 | 800000 | 6240 | 3120 | | 24 | 9000 | 900000 | 800000 | 9360 | 4680 | To reach the stated performance levels, you must configure your VM as follows: Attach the Local SSD disks with the NVMe interface. Disks attached with the SCSI interface have lower performance. 
The following machine types also require a minimum number of vCPUs to reach these maximums: If your VM uses a custom Linux image, the image must use version 4.14.68 or later of the Linux kernel. If you use the public images provided by Compute Engine, you don't have to take any further action. For additional VM and disk configuration settings that can improve Local SSD performance, see Optimizing local SSD performance. For more information about selecting a disk interface, see Choose a disk interface. Compute Engine preserves the data on Local SSD disks in certain scenarios, and in other cases, Compute Engine does not guarantee Local SSD data persistence. The following information describes these scenarios and applies to each Local SSD disk attached to a VM. Data on Local SSD disks persist only through the following events: Data on Local SSD disks might be lost if a host error occurs on the VM and Compute Engine can't reconnect the VM to the Local SSD disk within a specified time. You can control how much time, if any, is spent attempting to recover the data with the Local SSD recovery timeout. If Compute Engine can't reconnect to the disk before the timeout expires, the VM is restarted. When the VM restarts, the Local SSD data is unrecoverable. Compute Engine attaches a blank Local SSD disk to the restarted VM. The Local SSD recovery timeout is part of a VM's host maintenance policy. For more information, see Local SSD recovery timeout. Data on Local SSD disks does not persist through the following events: If Compute Engine was unable to recover a VM's Local SSD data, Compute Engine restarts the VM with a mounted and attached Local SSD disk for each previously attached Local SSD disk. Compute Engine automatically encrypts your data when it is written to Local SSD storage space. You can't use customer-supplied encryption keys with Local SSD disk. Since you can't back up Local SSD data with disk images, standard snapshots, or disk clones, Google recommends that you always store valuable data on a durable storage option. If you need to preserve the data on a Local SSD disk, attach a Persistent Disk or Google Cloud Hyperdisk to the VM. After you mount the Persistent Disk or Hyperdisk copy the data from the Local SSD disk to the newly attached disk. To achieve the highest Local SSD performance, you must attach your disks to the VM with the NVMe interface. Performance is lower if you use the SCSI interface. The disk interface you choose also depends on the machine type and OS that your VM" }, { "data": "Some of the available machine types in Compute Engine allow you to choose between NVMe and SCSI interfaces, while others support either only NVMe or only SCSI. Similarly, some of the public OS images provided by Compute Engine might support both NVMe and SCSI, or only one of the two. The following pages provide more information about available machine types and supported public images, as well as performance details. Supported interfaces by machine types: See Machine series comparison. In the Choose VM properties to compare list, select Disk interface type. OS image: For a list of which public OS images provided by Compute Engine support SCSI or NVMe, see the Interfaces tab for each table in the operating system details documentation. If your VM uses a custom Linux image, you must use version 4.14.68 or later of the Linux kernel for optimal NVMe performance. 
If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface. If you are using a custom image that you imported, see Enable multi-queue SCSI. Most machine types available on Compute Engine support Local SSD disks. Some machine types always include a fixed number of Local SSD disks by default, while others allow you to add specific numbers of disks. You can only add Local SSD disks when you create the VM. You can't add Local SSD disks to a VM after you create it. For VMs created based on the Z3 machine series, each attached disk has 3,000GiB of capacity. For all other machine series, each disk you attach has 375GiB of capacity. The following table lists the machine types that include Local SSD disks by default, and shows how many disks are attached when you create the VM. | Machine type | Number of Local SSD disksautomatically attached per VM | |:-|:-| | C3 machine types | C3 machine types | | Only the -lssd variants of the C3 machine types support Local SSD. | Only the -lssd variants of the C3 machine types support Local SSD. | | c3-standard-4-lssd | 1 | | c3-standard-8-lssd | 2 | | c3-standard-22-lssd | 4 | | c3-standard-44-lssd | 8 | | c3-standard-88-lssd | 16 | | c3-standard-176-lssd | 32 | | C3D machine types | C3D machine types | | Only the -lssd variants of the C3D machine types support Local SSD. | Only the -lssd variants of the C3D machine types support Local SSD. | | c3d-standard-8-lssd | 1 | | c3d-standard-16-lssd | 1 | | c3d-standard-30-lssd | 2 | | c3d-standard-60-lssd | 4 | | c3d-standard-90-lssd | 8 | | c3d-standard-180-lssd | 16 | | c3d-standard-360-lssd | 32 | | c3d-highmem-8-lssd | 1 | | c3d-highmem-16-lssd | 1 | | c3d-highmem-30-lssd | 2 | | c3d-highmem-60-lssd | 4 | | c3d-highmem-90-lssd | 8 | | c3d-highmem-180-lssd | 16 | | c3d-highmem-360-lssd | 32 | | A3 machine types | A3 machine types | | a3-megagpu-8g | 16 | | a3-highgpu-8g | 16 | | A2 Ultra machine types | A2 Ultra machine types | | a2-ultragpu-1g | 1 | | a2-ultragpu-2g | 2 | | a2-ultragpu-4g | 4 | | a2-ultragpu-8g | 8 | | Z3 machine types | Z3 machine types | | Each disk is 3 TiB in size. | Each disk is 3 TiB in size. 
| | z3-standard-88-lssd | 12 | | z3-standard-176-lssd | 12 | The machine types listed in the following table don't automatically attach Local SSD disks to a newly created" }, { "data": "Because you can't add Local SSD disks to a VM after you create it, use the information in this section to determine how many Local SSD disks to attach when you create a" }, { "data": "| N1 machine types | Number of Local SSD disks allowed per VM | Unnamed: 2 | |:--|:|:| | All N1 machine types | 1 to 8, 16, or 24 | nan | | N2 machine types | N2 machine types | nan | | Machine types with 2 to 10 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 | nan | | Machine types with 12 to 20 vCPUs, inclusive | 2, 4, 8, 16, or 24 | nan | | Machine types with 22 to 40 vCPUs, inclusive | 4, 8, 16, or 24 | nan | | Machine types with 42 to 80 vCPUs, inclusive | 8, 16, or 24 | nan | | Machine types with 82 to 128 vCPUs, inclusive | 16 or 24 | nan | | N2D machine types | N2D machine types | nan | | Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 | nan | | Machine types with 32 or 48 vCPUs | 2, 4, 8, 16, or 24 | nan | | Machine types with 64 or 80 vCPUs | 4, 8, 16, or 24 | nan | | Machine types with 96 to 224 vCPUs, inclusive | 8, 16, or 24 | nan | | C2 machine types | C2 machine types | nan | | Machine types with 4 or 8 vCPUs | 1, 2, 4, or 8 | nan | | Machine types with 16 vCPUs | 2, 4, or 8 | nan | | Machine types with 30 vCPUs | 4 or 8 | nan | | Machine types with 60 vCPUs | 8 | nan | | C2D machine types | C2D machine types | nan | | Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, 8 | nan | | Machine types with 32 vCPUs | 2, 4, 8 | nan | | Machine types with 56 vCPUs | 4, 8 | nan | | Machine types with 112 vCPUs | 8 | nan | | A2 Standard machine types | A2 Standard machine types | nan | | a2-highgpu-1g | 1, 2, 4, or 8 | nan | | a2-highgpu-2g | 2, 4, or 8 | nan | | a2-highgpu-4g | 4 or 8 | nan | | a2-highgpu-8g or a2-megagpu-16g | 8 | nan | | G2 machine types | G2 machine types | nan | | g2-standard-4 | 1 | nan | | g2-standard-8 | 1 | nan | | g2-standard-12 | 1 | nan | | g2-standard-16 | 1 | nan | | g2-standard-24 | 2 | nan | | g2-standard-32 | 1 | nan | | g2-standard-48 | 4 | nan | | g2-standard-96 | 8 | nan | | M1 machine types | M1 machine types | M1 machine types | | m1-ultramem-40 | Not available | nan | | m1-ultramem-80 | Not available | nan | | m1-megamem-96 | 1 to 8 | nan | | m1-ultramem-160 | Not available | nan | | M3 machine types | M3 machine types | M3 machine types | | m3-ultramem-32 | 4, 8 | nan | | m3-megamem-64 | 4, 8 | nan | | m3-ultramem-64 | 4, 8 | nan | | m3-megamem-128 | 8 | nan | | m3-ultramem-128 | 8 | nan | | N4, E2, Tau T2D, Tau T2A, and M2 machine types | These machine types don't support Local SSD disks. | These machine types don't support Local SSD disks. | For each Local SSD disk you create," } ]
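For machine types that include Local SSD automatically, such as the -lssd variants listed above, creation looks like the following hedged sketch; the instance name and zone are placeholders.
```
# Sketch only: c3-standard-4-lssd includes its Local SSD disk automatically,
# so no --local-ssd flags are passed. Instance name and zone are placeholders.
gcloud compute instances create example-c3-vm \
    --zone=us-central1-a \
    --machine-type=c3-standard-4-lssd
```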
{ "category": "Runtime", "file_name": "disks#disk-types.md", "project_name": "Google Persistent Disk", "subcategory": "Cloud Native Storage" }
[ { "data": "Compute Engine is a computing and hosting service that lets you create and run virtual machines on Google infrastructure. Compute Engine offers scale, performance, and value that lets you easily launch large compute clusters on Google's infrastructure. There are no upfront investments, and you can run thousands of virtual CPUs on a system that offers quick, consistent performance. Learn more Machine families resource and comparison guide Choose a deployment strategy Create and start a VM About SSH connections Choose a migration path Data protection options VM instance lifecycle gcloud compute command-line tool Client libraries REST API Pricing Quotas and limits Release notes Support Samples & videos VM migration hands-on lab Get hands-on practice with Google's current solution set for VM assessment, planning, migration, and modernization. Networking in Google Cloud hands-on lab Set up a load balanced application on Google Cloud. Interactive walkthrough: Create a Linux VM in Compute Engine Create a Linux VM by using a tutorial that walks you through the steps directly in the Cloud console. Hosting a web app on Compute Engine hands-on lab Learn how to deploy and scale a sample application. VM migration Migrate for Compute Engine allows you to easily migrate VMs from your on-premises data center, AWS, or Azure into Compute Engine. You validate, run, and migrate applications into Google Cloud without rewriting them, modifying the image, or changing management processes. Patterns for scalable and resilient apps Learn patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. A well-designed app scales up and down as demand increases and decreases, and is resilient enough to withstand service disruptions. Strategies to migrate IBM Db2 to Compute Engine Learn best practices for a homogeneous Db2 migration to Compute Engine. This document is intended for database admins, system admins and software, database, and ops engineers who are migrating Db2 environments to Google Cloud. C# samples A set of .NET Cloud Client Library samples for Compute Engine. Go samples A set of Go Cloud Client Library samples for Compute Engine. Java samples A set of Java Cloud Client Library samples for Compute Engine. Node.js samples A set of Node.js Cloud Client Library samples for Compute Engine. PHP samples A set of PHP Cloud Client Library samples for Compute Engine. Python samples A set of Python Cloud Client Library samples for Compute Engine. Ruby samples A set of Ruby Cloud Client Library samples for Compute Engine. Terraform samples A set of Terraform samples for Compute Engine. All samples Browse all samples for Compute Engine. Create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
{ "category": "Runtime", "file_name": "disks#repds.md", "project_name": "Google Persistent Disk", "subcategory": "Cloud Native Storage" }
[ { "data": "If your workloads need high performance, low latency temporary storage, consider using Local solid-state drive (Local SSD) disks when you create your virtual machine (VM). Local SSD disks are always-encrypted solid-state storage for Compute Engine VMs. Local SSD disks are ideal when you need storage for any of the following use cases: Local SSD disks offer superior I/O operations per second (IOPS), and very low latency compared to Persistent Disk and Google Cloud Hyperdisk. This is because Local SSD disks are physically attached to the server that hosts your VM. For this same reason, Local SSD disks can only provide temporary storage. Because Local SSD is suitable only for temporary storage, you must store data that is not temporary or ephemeral in nature on one of our durable storage options. Each Local SSD disk comes in a fixed size, and you can attach multiple Local SSD disks to a single VM when you create it. The number of Local SSD disks that you can attach to a VM depends on the VM's machine type. For more information, see Choose a valid number of Local SSD disks. If Local SSD disks don't meet your redundancy or flexibility requirements, you can use Local SSD disks in combination with other storage options. Local SSD performance depends on several factors, including the number of attached Local SSD disks, the selected disk interface (NVMe or SCSI), and the VM's machine type. The available performance increases as you attach more Local SSD disks to your VM. The following tables list the maximum IOPS and throughput for NVMe- and SCSI-attached Local SSD disks. The metrics are listed by the total capacity of Local SSD disks attached to the VM. | ('# of attachedLocal SSD disks', 'Unnamed: 0level1') | ('Total storage space (GiB)', 'Unnamed: 1level1') | ('Capacity per disk (GiB)', 'Unnamed: 2level1') | ('IOPS', 'Read') | ('IOPS', 'Write') | ('Throughput (MiBps)', 'Read') | ('Throughput (MiBps)', 'Write') | |:|:|-:|-:|--:|:|-:| | 1 | 375 | 375 | 170000 | 90000 | 660 | 350 | | 2 | 750 | 375 | 340000 | 180000 | 1320 | 700 | | 3 | 1125 | 375 | 510000 | 270000 | 1980 | 1050 | | 4 | 1500 | 375 | 680000 | 360000 | 2650 | 1400 | | 5 | 1875 | 375 | 680000 | 360000 | 2650 | 1400 | | 6 | 2250 | 375 | 680000 | 360000 | 2650 | 1400 | | 7 | 2625 | 375 | 680000 | 360000 | 2650 | 1400 | | 8 | 3000 | 375 | 680000 | 360000 | 2650 | 1400 | | 16 | 6000 | 375 | 1.6e+06 | 800000 | 6240 | 3120 | | 24 | 9000 | 375 | 2.4e+06 | 1.2e+06 | 9360 | 4680 | | 32 | 12000 | 375 | 3.2e+06 | 1.6e+06 | 12480 | 6240 | | Z3 machine series | nan | nan | nan | nan | nan | nan | | 12 | 36000 | 3000 | 6e+06 | 6e+06 | 36000 | 30000 |" }, { "data": "('# of combinedLocal SSD disks', 'Unnamed: 0level1') | ('Storage space (GiB)', 'Unnamed: 1level1') | ('IOPS', 'Read') | ('IOPS', 'Write') | ('Throughput (MiBps)', 'Read') | ('Throughput (MiBps)', 'Write') | |:|:|-:|--:|:|-:| | 1 | 375 | 100000 | 70000 | 390 | 270 | | 2 | 750 | 200000 | 140000 | 780 | 550 | | 3 | 1125 | 300000 | 210000 | 1170 | 820 | | 4 | 1500 | 400000 | 280000 | 1560 | 1090 | | 5 | 1875 | 400000 | 280000 | 1560 | 1090 | | 6 | 2250 | 400000 | 280000 | 1560 | 1090 | | 7 | 2625 | 400000 | 280000 | 1560 | 1090 | | 8 | 3000 | 400000 | 280000 | 1560 | 1090 | | 16 | 6000 | 900000 | 800000 | 6240 | 3120 | | 24 | 9000 | 900000 | 800000 | 9360 | 4680 | To reach the stated performance levels, you must configure your VM as follows: Attach the Local SSD disks with the NVMe interface. Disks attached with the SCSI interface have lower performance. 
The following machine types also require a minimum number of vCPUs to reach these maximums: If your VM uses a custom Linux image, the image must use version 4.14.68 or later of the Linux kernel. If you use the public images provided by Compute Engine, you don't have to take any further action. For additional VM and disk configuration settings that can improve Local SSD performance, see Optimizing local SSD performance. For more information about selecting a disk interface, see Choose a disk interface. Compute Engine preserves the data on Local SSD disks in certain scenarios, and in other cases, Compute Engine does not guarantee Local SSD data persistence. The following information describes these scenarios and applies to each Local SSD disk attached to a VM. Data on Local SSD disks persist only through the following events: Data on Local SSD disks might be lost if a host error occurs on the VM and Compute Engine can't reconnect the VM to the Local SSD disk within a specified time. You can control how much time, if any, is spent attempting to recover the data with the Local SSD recovery timeout. If Compute Engine can't reconnect to the disk before the timeout expires, the VM is restarted. When the VM restarts, the Local SSD data is unrecoverable. Compute Engine attaches a blank Local SSD disk to the restarted VM. The Local SSD recovery timeout is part of a VM's host maintenance policy. For more information, see Local SSD recovery timeout. Data on Local SSD disks does not persist through the following events: If Compute Engine was unable to recover a VM's Local SSD data, Compute Engine restarts the VM with a mounted and attached Local SSD disk for each previously attached Local SSD disk. Compute Engine automatically encrypts your data when it is written to Local SSD storage space. You can't use customer-supplied encryption keys with Local SSD disk. Since you can't back up Local SSD data with disk images, standard snapshots, or disk clones, Google recommends that you always store valuable data on a durable storage option. If you need to preserve the data on a Local SSD disk, attach a Persistent Disk or Google Cloud Hyperdisk to the VM. After you mount the Persistent Disk or Hyperdisk copy the data from the Local SSD disk to the newly attached disk. To achieve the highest Local SSD performance, you must attach your disks to the VM with the NVMe interface. Performance is lower if you use the SCSI interface. The disk interface you choose also depends on the machine type and OS that your VM" }, { "data": "Some of the available machine types in Compute Engine allow you to choose between NVMe and SCSI interfaces, while others support either only NVMe or only SCSI. Similarly, some of the public OS images provided by Compute Engine might support both NVMe and SCSI, or only one of the two. The following pages provide more information about available machine types and supported public images, as well as performance details. Supported interfaces by machine types: See Machine series comparison. In the Choose VM properties to compare list, select Disk interface type. OS image: For a list of which public OS images provided by Compute Engine support SCSI or NVMe, see the Interfaces tab for each table in the operating system details documentation. If your VM uses a custom Linux image, you must use version 4.14.68 or later of the Linux kernel for optimal NVMe performance. 
If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface. If you are using a custom image that you imported, see Enable multi-queue SCSI. Most machine types available on Compute Engine support Local SSD disks. Some machine types always include a fixed number of Local SSD disks by default, while others allow you to add specific numbers of disks. You can only add Local SSD disks when you create the VM. You can't add Local SSD disks to a VM after you create it. For VMs created based on the Z3 machine series, each attached disk has 3,000GiB of capacity. For all other machine series, each disk you attach has 375GiB of capacity. The following table lists the machine types that include Local SSD disks by default, and shows how many disks are attached when you create the VM. | Machine type | Number of Local SSD disksautomatically attached per VM | |:-|:-| | C3 machine types | C3 machine types | | Only the -lssd variants of the C3 machine types support Local SSD. | Only the -lssd variants of the C3 machine types support Local SSD. | | c3-standard-4-lssd | 1 | | c3-standard-8-lssd | 2 | | c3-standard-22-lssd | 4 | | c3-standard-44-lssd | 8 | | c3-standard-88-lssd | 16 | | c3-standard-176-lssd | 32 | | C3D machine types | C3D machine types | | Only the -lssd variants of the C3D machine types support Local SSD. | Only the -lssd variants of the C3D machine types support Local SSD. | | c3d-standard-8-lssd | 1 | | c3d-standard-16-lssd | 1 | | c3d-standard-30-lssd | 2 | | c3d-standard-60-lssd | 4 | | c3d-standard-90-lssd | 8 | | c3d-standard-180-lssd | 16 | | c3d-standard-360-lssd | 32 | | c3d-highmem-8-lssd | 1 | | c3d-highmem-16-lssd | 1 | | c3d-highmem-30-lssd | 2 | | c3d-highmem-60-lssd | 4 | | c3d-highmem-90-lssd | 8 | | c3d-highmem-180-lssd | 16 | | c3d-highmem-360-lssd | 32 | | A3 standard machine types | A3 standard machine types | | a3-highgpu-8g | 16 | | A2 ultra machine types | A2 ultra machine types | | a2-ultragpu-1g | 1 | | a2-ultragpu-2g | 2 | | a2-ultragpu-4g | 4 | | a2-ultragpu-8g | 8 | | Z3 machine types | Z3 machine types | | Each disk is 3 TiB in size. | Each disk is 3 TiB in size. 
| | z3-standard-88-lssd | 12 | | z3-standard-176-lssd | 12 | The machine types listed in the following table don't automatically attach Local SSD disks to a newly created" }, { "data": "Because you can't add Local SSD disks to a VM after you create it, use the information in this section to determine how many Local SSD disks to attach when you create a" }, { "data": "| N1 machine types | Number of Local SSD disks allowed per VM | Unnamed: 2 | |:--|:|:| | All N1 machine types | 1 to 8, 16, or 24 | nan | | N2 machine types | N2 machine types | nan | | Machine types with 2 to 10 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 | nan | | Machine types with 12 to 20 vCPUs, inclusive | 2, 4, 8, 16, or 24 | nan | | Machine types with 22 to 40 vCPUs, inclusive | 4, 8, 16, or 24 | nan | | Machine types with 42 to 80 vCPUs, inclusive | 8, 16, or 24 | nan | | Machine types with 82 to 128 vCPUs, inclusive | 16 or 24 | nan | | N2D machine types | N2D machine types | nan | | Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 | nan | | Machine types with 32 or 48 vCPUs | 2, 4, 8, 16, or 24 | nan | | Machine types with 64 or 80 vCPUs | 4, 8, 16, or 24 | nan | | Machine types with 96 to 224 vCPUs, inclusive | 8, 16, or 24 | nan | | C2 machine types | C2 machine types | nan | | Machine types with 4 or 8 vCPUs | 1, 2, 4, or 8 | nan | | Machine types with 16 vCPUs | 2, 4, or 8 | nan | | Machine types with 30 vCPUs | 4 or 8 | nan | | Machine types with 60 vCPUs | 8 | nan | | C2D machine types | C2D machine types | nan | | Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, 8 | nan | | Machine types with 32 vCPUs | 2, 4, 8 | nan | | Machine types with 56 vCPUs | 4, 8 | nan | | Machine types with 112 vCPUs | 8 | nan | | A2 standard machine types | A2 standard machine types | nan | | a2-highgpu-1g | 1, 2, 4, or 8 | nan | | a2-highgpu-2g | 2, 4, or 8 | nan | | a2-highgpu-4g | 4 or 8 | nan | | a2-highgpu-8g or a2-megagpu-16g | 8 | nan | | G2 machine types | G2 machine types | nan | | g2-standard-4 | 1 | nan | | g2-standard-8 | 1 | nan | | g2-standard-12 | 1 | nan | | g2-standard-16 | 1 | nan | | g2-standard-24 | 2 | nan | | g2-standard-32 | 1 | nan | | g2-standard-48 | 4 | nan | | g2-standard-96 | 8 | nan | | M1 machine types | M1 machine types | M1 machine types | | m1-ultramem-40 | Not available | nan | | m1-ultramem-80 | Not available | nan | | m1-megamem-96 | 1 to 8 | nan | | m1-ultramem-160 | Not available | nan | | M3 machine types | M3 machine types | M3 machine types | | m3-ultramem-32 | 4, 8 | nan | | m3-megamem-64 | 4, 8 | nan | | m3-ultramem-64 | 4, 8 | nan | | m3-megamem-128 | 8 | nan | | m3-ultramem-128 | 8 | nan | | N4, E2, Tau T2D, Tau T2A, and M2 machine types | These machine types don't support Local SSD disks. | These machine types don't support Local SSD disks. | For each Local SSD disk you create," } ]
{ "category": "Runtime", "file_name": "tutorials_doctype=quickstart.md", "project_name": "Google Persistent Disk", "subcategory": "Cloud Native Storage" }
[ { "data": "Get started using Google Cloud by trying one of our product quickstarts, tutorials, or interactive walkthroughs. Get started with Google Cloud quickstarts Whether you're looking to deploy a web app, set up a database, or run big data workloads, it can be challenging to get started. Luckily, Google Cloud quickstarts offer step-by-step tutorials that cover basic use cases, operating the Google Cloud console, and how to use the Google command-line tools." } ]
{ "category": "Runtime", "file_name": "installation.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "In this section, we will introduce the installation procedure: Kubernetes You can use hwameistor-operator to deploy and manage HwameiStor system. This page takes 3-node kubernetes cluster as an example to perform post-check after installing HwameiStor. To ensure data security, it is strongly recommended not to uninstall the HwameiStor system in a production environment." } ]
{ "category": "Runtime", "file_name": "intro.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "HwameiStor is an HA local storage system for cloud-native stateful workloads. HwameiStor creates a local storage resource pool for centrally managing all disks such as HDD, SSD, and NVMe. It uses the CSI architecture to provide distributed services with local volumes, and provides data persistence capabilities for stateful cloud-native workloads or components. HwameiStor is an open source, lightweight, and cost-efficient local storage system that can replace expensive traditional SAN storage. The system architecture of HwameiStor is as follows. By using the CAS pattern, users can achieve the benefits of higher performance, better cost-efficiency, and easier management of their container storage. It can be deployed by helm charts or directly use the independent installation. You can easily enable high-performance local storage across the entire cluster with one click and automatically identify disks. HwameiStor is easy to deploy and ready to go. Automated Maintenance Disks can be automatically discovered, identified, managed, and allocated. Smart scheduling of applications and data based on affinity. Automatically monitor disk status and give early warning. High Availability Use cross-node replicas to synchronize data for high availability. When a problem occurs, the application will be automatically scheduled to the high-availability data node to ensure the continuity of the application. Full-Range support of Storage Medium Aggregate HDD, SSD, and NVMe disks to provide low-latency, high-throughput data services. Agile Linear Scalability Dynamically expand the cluster according to flexibly meet the data persistence requirements of the application." } ]
{ "category": "Runtime", "file_name": "quotas.md", "project_name": "Google Persistent Disk", "subcategory": "Cloud Native Storage" }
[ { "data": "This topic provides information about how to encrypt disks with customer-supplied encryption keys. For information about disk encryption, see About disk encryption. For information bout encrypting disks with customer-managed encryption keys (CMEK), see Protect resources by using Cloud KMS keys. Using CSEKs means you provide your own encryption keys and Compute Engine uses your keys to protect the Google-generated keys used to encrypt and decrypt your data. Only users who can provide the correct key can use resources protected by a customer-supplied encryption key (CSEK). Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key. When you delete a persistent disk, Google discards the cipher keys, rendering the data irretrievable. This process is irreversible. Select the tab for how you plan to use the samples on this page: When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication. Install the Google Cloud CLI, then initialize it by running the following command: ``` gcloud init``` To use the Python samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials. To initialize the gcloud CLI, run the following command: ``` gcloud init``` Create local authentication credentials for your Google Account: ``` gcloud auth application-default login``` For more information, see Set up authentication for a local development environment. To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI. Install the Google Cloud CLI, then initialize it by running the following command: ``` gcloud init``` For more information, see Authenticate for using REST in the Google Cloud authentication documentation. For CSEK, the following restrictions apply: Availability of customer-supplied encryption keys depends on the location of your billing account, not the location of the resource. Customer-supplied encryption keys are not available for billing accounts that are in the following countries: You can only encrypt new persistent disks with your own key. You cannot encrypt existing persistent disks with your own key. You can't use your own keys with Local SSD disks as the keys are managed by Google infrastructure and deleted when the VM is terminated. Compute Engine does not store encryption keys with instance templates, so you need to store your own keys in KMS to encrypt disks in a managed instance group. You can't suspend instances that have CSEK-protected disks attached. This section describes the encryption specification and the format of CSEK. Compute Engine uses your encryption key to protect Google's encryption keys with AES-256 encryption. It is up to you to generate and manage your key. You must provide a key that is a 256-bit string encoded in RFC 4648 standard base64 to Compute Engine. 
The following is an example of a base64 encoded key, generated with the string \"Hello from Google Cloud Platform\" ``` SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0= ``` It can be generated using the following script: ``` read -sp \"String:\" ; \\" }, { "data": "${#REPLY} == 32 ]] && \\ echo \"$(echo -n \"$REPLY\" | base64)\" || \\ (>&2 echo -e \"\\nERROR:Wrong Size\"; false) ``` Beta This product or feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available \"as is\" and might have limited support. For more information, see the launch stage descriptions. In addition to encoding your key in base64, you can optionally wrap your key using an RSA public key certificate provided by Google, encode the key in base64, and then use that key in your requests. RSA wrapping is a process in which you use a public key to encrypt your data. After that data has been encrypted with the public key, it can only be decrypted by the respective private key. In this case, the private key is known only to Google Cloud services. By wrapping your key using the RSA certificate, you ensure that only Google Cloud services can unwrap your key and use it to protect your data. For more information, see RSA encryption. To create an RSA-wrapped key for Compute Engine, do the following: Download the public certificate maintained by Compute Engine from: ``` https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem ``` There are many ways of generate and RSA-wrap your key; use a method that is familiar to you. The following are two examples of RSA-wrapping your key that you could use. Example 1 The following instructions use the openssl command-line utility to RSA-wrap and encode a key. Optional: Generate a 256-bit (32-byte) random key. If you already have a key you want to use, you can skip this step. There are many ways you can generate a key. For example: ``` $ head -c 32 /dev/urandom | LC_CTYPE=C tr '\\n' = > mykey.txt ``` Download the public key certificate: ``` $ curl -s -O -L https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem``` Extract the public key from the certificate: ``` $ openssl x509 -pubkey -noout -in google-cloud-csek-ingress.pem > pubkey.pem ``` RSA-wrap your key, making sure to replace mykey.txt with your own key file. ``` $ openssl rsautl -oaep -encrypt -pubin -inkey pubkey.pem -in mykey.txt -out rsawrappedkey.txt ``` Encode your RSA-wrapped key in base64. ``` $ openssl enc -base64 -in rsawrappedkey.txt | tr -d '\\n' | sed -e '$a\\' > rsawrapencodedkey.txt ``` Example 2 The following is sample Python script that generates a 256-bit (32-byte) random string and creates a base64 encoded RSA-wrapped key using the cryptography library: ``` import argparse import base64 import os from typing import Optional from cryptography import x509 from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives.asymmetric import padding from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicKey import requests GOOGLEPUBLICCERT_URL = ( \"https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem\" ) def getgooglepubliccertkey() -> RSAPublicKey: \"\"\" Downloads the Google public certificate. Returns: RSAPublicKey object with the Google public certificate. 
\"\"\" r = requests.get(GOOGLEPUBLICCERT_URL) r.raiseforstatus() certificate = x509.loadpemx509certificate(r.content, defaultbackend()) publickey = certificate.publickey() return public_key def wraprsakey(publickey: RSAPublicKey, privatekey_bytes: bytes) -> bytes: \"\"\" Use the Google public key to encrypt the customer private key. This means that only the Google private key is capable of decrypting the customer private key. Args: public_key: The public key to use for encrypting. privatekeybytes: The private key to be encrypted. Returns: privatekeybytes encrypted using the public_key. Encoded using base64. \"\"\" wrappedkey = publickey.encrypt( privatekeybytes, padding.OAEP( mgf=padding.MGF1(algorithm=hashes.SHA1()), algorithm=hashes.SHA1(), label=None, ), ) encodedwrappedkey =" }, { "data": "return encodedwrappedkey def main(key_file: Optional[str]) -> None: \"\"\" This script will encrypt a private key with Google public key. Args: key_file: path to a file containing your private key. If not provided, a new key will be generated (256 bit). \"\"\" if not key_file: customerkeybytes = os.urandom(32) else: with open(key_file, \"rb\") as f: customerkeybytes = f.read() googlepublickey = getgooglepubliccertkey() wrappedrsakey = wraprsakey(googlepublickey, customerkeybytes) b64key = base64.b64encode(customerkey_bytes).decode(\"utf-8\") print(f\"Base-64 encoded private key: {b64_key}\") print(f\"Wrapped RSA key: {wrappedrsakey.decode('utf-8')}\") if name == \"main\": parser = argparse.ArgumentParser( description=doc, formatter_class=argparse.RawDescriptionHelpFormatter ) parser.addargument(\"--keyfile\", help=\"File containing your binary private key.\") args = parser.parse_args() main(args.key_file)``` Your key is now ready to use! Using the Google Cloud CLI, you can provide a regular key and a RSA-wrapped key in the same way. In the API, use the sha256 property instead of rawKey if you want to use a RSA-wrapped key instead. Encryption keys can be used through the Google Cloud CLI. Download and install gcloud. When you use the gcloud compute command-line tool to set your keys, you provide encoded keys using a key file that contains your encoded keys as a JSON list. A key file can contain multiple keys, letting you manage many keys in a single place. Alternatively, you can create single key files to handle each key separately. A key file is only usable with the gcloud CLI. When using REST, you must supply the key directly in your request. Each entry in your key file must provide: When you use the key file in your requests, the tool looks for matching resources and uses the respective keys. If no matching resources are found, the request fails. An example key file looks like this: ``` [ { \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/disks/example-disk\", \"key\": \"acXTX3rxrKAFTF0tYVLvydU1riRZTvUNC4g5I11NY+c=\", \"key-type\": \"raw\" }, { \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/global/snapshots/my-private-snapshot\", \"key\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\", \"key-type\": \"rsa-encrypted\" } ] ``` If you use a key file, restrict access to your file to only those who need it. 
Make sure to set appropriate permissions on these files and consider encrypting these files using additional tools: You can encrypt a new persistent disk by supplying a key during VM or disk creation. Go to the Disks page. Go to Disks Click Create disk and enter the properties for the new disk. Under Encryption, select Customer-supplied key. Provide the encryption key for the disk in the text box and select Wrapped key if the key has been wrapped with the public RSA key. In the gcloud compute tool, encrypt a disk using the --csek-key-file flag during VM creation. If you are using an RSA-wrapped key, use the gcloud beta component: ``` gcloud (beta) compute instances create example-instance --csek-key-file example-file.json ``` To encrypt a standalone persistent disk: ``` gcloud (beta) compute disks create example-disk --csek-key-file example-file.json ``` You can encrypt a disk by using the diskEncryptionKey property and making a request to the v1 API for a raw (non-RSA wrapped) key, or to the Beta API for a RSA-wrapped key. Provide one of the following properties in your request: For example, to encrypt a new disk during VM creation with an RSA-wrapped key: ``` POST https://compute.googleapis.com/compute/beta/projects/myproject/zones/us-central1-a/instances { \"machineType\": \"zones/us-central1-a/machineTypes/e2-standard-2\", \"disks\": [ { \"type\": \"PERSISTENT\", \"diskEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" }, \"initializeParams\": { \"sourceImage\": \"projects/debian-cloud/global/images/debian-9-stretch-v20170619\" }, \"boot\": true } ], ... } ``` Similarly, you can also use REST to create a new standalone persistent disk and encrypt it with your own key: ``` POST https://compute.googleapis.com/compute/beta/projects/myproject/zones/" }, { "data": "alpha%2Fprojects%2Fdebian-cloud%2Fglobal%2Fimages%2Fdebian-9-stretch-v20170619 { \"name\": \"new-encrypted-disk-key\", \"diskEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" }, \"type\": \"zones/us-central1-a/diskTypes/pd-standard\" } ``` If you create a snapshot from an encrypted disk, the snapshot must also be encrypted. You must specify a key to encrypt the snapshot. You cannot convert encrypted disks or encrypted snapshots to use Compute Engine default encryption unless you create a new disk image and a new persistent disk. Snapshots of disks encrypted with CSEK are always full snapshots. This differs from snapshots of disks encrypted with customer-managed encryption keys (CMEK), which are incremental. Snapshots are priced based on the total size of the snapshot, so a full snapshot might cost more than an incremental snapshot. To create a persistent disk snapshot from an encrypted disk, your snapshot creation request must provide the encryption key that you used to encrypt the persistent disk. 
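For the command-line path, a hedged gcloud sketch of this snapshot step looks like the following; the disk, snapshot, zone, and key file names are placeholders, and the key file must contain an entry for the source disk (and optionally one for the new snapshot).
```
# Sketch only: snapshot a CSEK-protected disk. Disk name, snapshot name, zone,
# and key file path are placeholder values.
gcloud compute disks snapshot example-disk \
    --zone=us-central1-a \
    --snapshot-names=snapshot-encrypted-disk \
    --csek-key-file=example-file.json
```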
Review the Best practices for persistent disk snapshots before creating your snapshot. Go to the Snapshots page. Go to Snapshots Click Create snapshot. Under Source disk, choose the encrypted disk you want to create a snapshot of. Provide the encryption key for the disk in the text box and select Wrapped key if the key has been wrapped with the public RSA key. Encrypt the new snapshot by supplying an additional encryption key under the Encryption section. To make the request, provide the sourceDiskEncryptionKey property to access the source persistent disk. You must encrypt the new snapshot using the snapshotEncryptionKey property. Make a request to the v1 API for a raw (non-RSA wrapped) key, or to the Beta API for a RSA-wrapped key. ``` POST https://compute.googleapis.com/compute/beta/projects/myproject/zones/us-central1-a/disks/example-disk/createSnapshot { \"snapshotEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" }, \"sourceDiskEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" }, \"name\": \"snapshot-encrypted-disk\" } ``` The sourceDiskEncryptionKey property must match the key used to encrypt the persistent disk. Otherwise, the request fails. The snapshotEncryptionKey lets you supply a key to encrypt the snapshot so that if the snapshot is used to create new persistent disks, a matching key must be provided. This key must follow the preceding key format. You can also choose to leave this property undefined and the snapshot can be used to create new persistent disks without requiring a key. You can create custom images from encrypted persistent disks or copy encrypted images. You cannot use the console to copy images. Use the Google Cloud CLI or REST to copy images. Go to the Images page. Go to Images Click Create image. Under Source disk, choose the encrypted disk you want to create an image of. Under Encryption, select an encryption key management solution. If the key has been wrapped with the public RSA key, select Wrapped key. Follow the instructions to create an image, and add the --csek-key-file flag with a path to the encryption key file for the encrypted source object. Use the gcloud beta component if you are using an RSA-wrapped key: ``` gcloud (beta) compute images create .... --csek-key-file example-file.json ``` If you want to encrypt the new image with your key, add the key to the key file: ``` [ { \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/disks/source-disk\", \"key\": \"acX3RqzxrKAFTF0tYVLvydU1riRZTvUNC4g5I11NY-c=\", \"key-type\": \"raw\" }, { \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/global/snapshots/the-new-image\", \"key\": \"TF0t-cSfl7CT7xRF1LTbAgi7U6XXUNC4zU_dNgx0nQc=\", \"key-type\": \"raw\" } ] ``` Your API creation request must contain the encryption key property for your source object. 
For example, include one of the following properties depending on the source object type: Also include the rawKey or rsaEncryptedKey properties depending on key" }, { "data": "Make a request to the v1 API for a raw (non-RSA wrapped) key, or to the Beta API for a RSA-wrapped key. The following example converts an encrypted and RSA-wrapped persistent disk to an image that uses the same encryption key. ``` POST https://compute.googleapis.com/compute/beta/projects/myproject/global/images { \"name\": \"image-encrypted-disk\", \"sourceDiskEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" } \"imageEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" }, \"sourceDisk\": \"projects/myproject/zones/us-central1-a/disks/source-disks\" } ``` The optional imageEncryptionKey property lets you supply a key to encrypt the image so when the image is used to create new persistent disks, a matching key must be provided. This key must follow the same key format described above. You can also choose to leave this property undefined and the image can be used to create new persistent disks without requiring a key. You can encrypt a new image when you manually import a custom image to Compute Engine. Before you can import an image, you must create and compress a disk image file and upload that compressed file to Cloud Storage. Import the custom Compute Engine image that you want to encrypt. Specify the URI to the compressed file and also specify a path to your encryption key file. Go to the Images page. Go to Images Click Create image. Under Source, choose Cloud Storage file. Under Cloud Storage file, enter the Cloud Storage URI. Under Encryption, choose Customer-supplied key and provide the encryption key to encrypt the image in the text box. Use the compute images create command to create a new image, and specify the --csek-key-file flag with an encryption key file. If you are using an RSA-wrapped key, use the gcloud beta component: ``` gcloud (beta) compute images create [IMAGE_NAME] \\ --source-uri gs://[BUCKETNAME]/[COMPRESSEDFILE] \\ --csek-key-file [KEY_FILE] ``` Replace the following: To encrypt a new image created from a RAW file, add the new imageEncryptionKey property to the image creation request, followed by either rawKey or rsaEncryptedKey. Make a request to the v1 API for a raw (non-RSA wrapped) key, or to the Beta API for a RSA-wrapped key. 
``` POST https://compute.googleapis.com/compute/beta/projects/myproject/global/images { \"rawDisk\": { \"source\": \"http://storage.googleapis.com/example-image/example-image.tar.gz\" }, \"name\": \"new-encrypted-image\", \"sourceType\": \"RAW\", \"imageEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" } } ``` Go to the Disks page. Go to Disks Click Create disk. Under Source type, select Snapshot. Under Encryption, select an encryption key management solution. If the key has been wrapped with the public RSA key, select Wrapped key. In the gcloud compute tool, provide the encryption key for the snapshot using the --csek-key-file flag when you create the disk. If you are using an RSA-wrapped key, use the gcloud beta component: ``` gcloud (beta) compute disks create ... --source-snapshot example-snapshot --csek-key-file example-file.json ``` To use an encrypted snapshot, supply the sourceSnapshotEncryptionKey in your request, followed by rawKey or rsaEncryptedKey. Make a request to the v1 API for a raw (non-RSA wrapped) key, or to the Beta API for a RSA-wrapped key. For example, to a new standalone persistent disk using an encrypted snapshot: ``` POST https://compute.googleapis.com/compute/beta/projects/myproject/zones/us-central1-a/disks { \"name\": \"disk-from-encrypted-snapshot\", \"sourceSnapshot\": \"global/snapshots/encrypted-snapshot\", \"sourceSnapshotEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" } } ``` Go to the Disks page. Go to Disks Click Create disk. Under Source type, select Image. Under Encryption, select an encryption key management solution. If the key has been wrapped with the public RSA key, select Wrapped" }, { "data": "In the gcloud compute tool, provide the encryption key for the image using the --csek-key-file flag when you create the disk. If you are using an RSA-wrapped key, use the gcloud beta component: ``` gcloud (beta) compute disks create ... --image example-image --csek-key-file example-file.json ``` To use an encrypted image, provide the sourceImageEncryptionKey, followed by either rawKey or rsaEncryptedKey. Make a request to the v1 API for a raw (non-RSA wrapped) key, or to the Beta API for a RSA-wrapped key. ``` POST https://compute.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/disks { \"name\": \"disk-from-encrypted-image\", \"sourceImageEncryptionKey\": { \"rsaEncryptedKey\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\" }, \"sourceImage\": \"global/images/encrypted-image\" } ``` Go to the Create an instance page. 
Go to Create an instance In the Boot disk section, click Change, and do the following: Enter the encryption key in the text box and select Wrapped key if the key has been wrapped with the public RSA key. Click Select. Continue with the VM creation process. To create a VM and attach an encrypted disk, create a key file and provide the key using the --csek-key-file flag when you create the VM. If you are using an RSA-wrapped key, use the gcloud beta component: ``` gcloud (beta) compute instances create example-instance \\ --disk name=example-disk,boot=yes \\ --csek-key-file example-file.json ``` Create a VM using the Compute Engine API and provide either the rawKey or rsaEncryptedKey with the disk specification. Make a request to the v1 API for a raw (non-RSA wrapped) key, or to the Beta API for a RSA-wrapped key. Here is a snippet of an example disk specification: ``` \"disks\": [ { \"deviceName\": \"encrypted-disk\", \"source\": \"projects/myproject/zones/us-central1-f/disks/encrypted-disk\", \"diskEncryptionKey\": { \"rawKey\": \"SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=\" } } ] ``` For details on stopping or starting a VM that has encrypted disks, read Restarting a VM with an encrypted disk. If you want to create a mix of customer-encrypted and standard-encrypted resources in a single request with the Google Cloud CLI, you can use the --csek-key-file flag with a key file and the --no-require-csek-key-create flag in your request. By providing both flags, gcloud CLI creates any customer-encrypted resources that are explicitly define in your key file and also creates any standard resources you specify. For example, assume a key file contains the following: ``` [ { \"uri\": \"https://www.googleapis.com/compute/beta/projects/myproject/zones/us-central1-a/disks/example-disk\", \"key\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\", \"key-type\": \"rsa-encrypted\" } ] ``` If you wanted to create a VM with a customer-encrypted disk using the key file and simultaneously create a VM with a standard-encrypted disk in the same request, you can do so as follows: ``` gcloud beta compute instances create example-disk example-disk-2 \\ --csek-key-file mykeyfile.json --no-require-csek-key-create ``` Normally, it would not be possible to create example-disk-2 if you specified the --csek-key-file flag because the disk is not explicitly defined in the key file. By adding the --no-require-csek-key-create, both disks are created, one encrypted using the key file, and the other encrypted using Google encryption. You can decrypt the contents of a customer-encrypted disk and create a new disk that uses Compute Engine default encryption instead. After you create the new persistent disk, it uses Compute Engine default encryption to protect the disk contents. Any snapshots that you create from that disk must also use default encryption. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
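As a compact recap of the key file format described above, the following shell sketch generates a random 256-bit key and writes a single-entry key file; the project, zone, and disk name in the URI are placeholders.
```
# Sketch only: generate a 256-bit key and write a raw-key key file for one disk.
# The URI below uses placeholder project, zone, and disk names.
KEY=$(head -c 32 /dev/urandom | base64)
cat > example-file.json <<EOF
[
  {
    "uri": "https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/disks/example-disk",
    "key": "${KEY}",
    "key-type": "raw"
  }
]
EOF
chmod 600 example-file.json
```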
{ "category": "Runtime", "file_name": "introduction.md", "project_name": "IOMesh", "subcategory": "Cloud Native Storage" }
[ { "data": "IOMesh is a Kubernetes-native storage system that manages storage resources within a Kubernetes cluster, automates operations and maintenance, and provides persistent storage, data protection and migration capabilities for data applications such as MySQL, Cassandra, MongoDB and middleware running on Kubernetes. Kubernetes Native IOMesh is fully built on the capabilities of Kubernetes and implements storage as code through declarative APIs, allowing for managing infrastructure and deployment environments through code to better support DevOps. High Performance IOMesh enables I/O-intensive databases and applications to run efficiently in the container environment. Leveraging the high-performance I/O link, IOMesh achieves high IOPS while maintaining low latency to ensure stable operation of data applications. No Kernel Dependencies IOMesh runs in user space rather than kernel space, isolated from other applications. This means if IOMesh fails, other applications on the same node can continue delivering services as usual without affecting the entire system. Since it is kernel independent, there is no need to install kernel modules or worry about compatibility issues. Tiered Storage IOMesh facilitates cost-effective, hybrid deployment of SSDs & HDDs, maximizing storage performance and capacity for different media while reducing storage costs from the outset. Data Protection & Security A system with multiple levels of data protection makes sure that data is always secure and available. IOMesh does this by placing multiple replicas on different nodes, allowing PV-level snapshots for easy recovery in case of trouble, while also isolating abnormal disks to minimize impact on system performance and reduce operational burden. Authentication is also provided for specific PVs to ensure secure access. Fully Integrated into Kubernetes Ecosystem IOMesh flexibly provides storage for stateful applications via CSI even when they are migrated. It also works seamlessly with the Kubernetes toolchain, easily deploying IOMesh using Helm Chart and integrating with Prometheus and Grafana to provide standardized, visualized monitoring and alerting service." } ]
{ "category": "Runtime", "file_name": "1.1.3.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "The Longhorn Documentation Cloud native distributed block storage for Kubernetes Longhorn is a lightweight, reliable, and powerful distributed block storage system for Kubernetes. Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. 2019-2024 Longhorn Authors | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Runtime", "file_name": "install-iomesh.md", "project_name": "IOMesh", "subcategory": "Cloud Native Storage" }
[ { "data": "IOMesh can be installed on all Kubernetes platforms using various methods. Choose the installation method based on your environment. If the Kubernetes cluster network cannot connect to the public network, opt for custom offline installation. Prerequisite Limitations Procedure Access a master node. Run the following command to install IOMesh. Make sure to replace 10.234.1.0/24 with your actual CIDR. After executing the following command, wait for a few minutes. NOTE: One-click online installation utilizes Helm, which is included in the following command and will be installed automatically if it is not found. ``` export IOMESHDATACIDR=10.234.1.0/24; curl -sSL https://iomesh.run/install_iomesh.sh | bash - ``` Verify that all pods are in Running state. If so, then IOMesh has been successfully installed. ``` watch kubectl get --namespace iomesh-system pods ``` NOTE: IOMesh resources left by running the above commands will be saved for troubleshooting if any error occurs during installation. You can run the command curl -sSL https://iomesh.run/uninstall_iomesh.sh | sh - to remove all IOMesh resources from the Kubernetes cluster. NOTE: After installing IOMesh, the prepare-csi pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure open-iscsi. If the installation of open-iscsi is successful on all nodes, the system will automatically clean up all prepare-csi pods. However, if the installation of open-iscsi fails on any node, manual configuration of open-iscsi is required to determine the cause of the installation failure. NOTE: If open-iscsi is manually deleted after installing IOMesh, the prepare-csi pod will not automatically start to install open-iscsi when reinstalling IOMesh. In this case, manual configuration of open-iscsi is necessary. Prerequisite Make sure the CPU architecture of your Kubernetes cluster is Intel x8664, Hygon x8664, or Kunpeng AArch64. Procedure Access a master node in the Kubernetes cluster. Install Helm. Skip this step if Helm is already installed. ``` curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 chmod 700 get_helm.sh ./get_helm.sh ``` For more details, refer to Installing Helm. Add the IOMesh Helm repository. ``` helm repo add iomesh http://iomesh.com/charts ``` Export the IOMesh default configuration file iomesh.yaml. ``` helm show values iomesh/iomesh > iomesh.yaml ``` Configure iomesh.yaml. Set dataCIDR to the CIDR you previously configured in Prerequisites. ``` iomesh: chunk: dataCIDR: \"\" # Fill in the dataCIDR you configured in Prerequisites. ``` Set diskDeploymentMode according to your disk configurations. The system has a default value of hybridFlash. If your disk configuration is all-flash mode, change the value to allFlash. ``` diskDeploymentMode: \"hybridFlash\" # Set the disk deployment mode. ``` Specify the CPU architecture. If you have a hygonx8664 Kubernetes cluster, enter hygonx8664, or else leave the field blank. ``` platform: \"\" ``` Specify the IOMesh edition. The field is blank by default, and if left unspecified, the system will install the Community edition automatically. If you have purchased the Enterprise edition, set the value of edition to enterprise. For details, refer to IOMesh Specifications. ``` edition: \"\" # If left blank, Community Edition will be" }, { "data": "``` An optional step. The number of IOMesh chunk pods is three by default. If you install IOMesh Enterprise Edition, you can deploy more than three chunk pods. 
``` iomesh: chunk: replicaCount: 3 # Enter the number of chunk pods. ``` An optional step. If you want IOMesh to only use the disks of specific Kubernetes nodes, configure the label of the corresponding node in the chunk.podPolicy.affinity field. ``` iomesh: chunk: podPolicy: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: kubernetes.io/hostname operator: In values: iomesh-worker-0 # Specify the values of the node label. iomesh-worker-1 ``` It is recommended that you only configure values. For more configurations, refer to Pod Affinity. An optional step. Configure the podDeletePolicy field to determine whether the system should automatically delete the Pod and rebuild it on another healthy node when the Kubernetes node that hosts the Pod fails. This configuration applies only to the Pod with an IOMesh-created PVC mounted and the access mode set to ReadWriteOnly. If left unspecified, the value of this field will be set to no-delete-pod by default, indicating that the system won't automatically delete and rebuild the Pod in case of node failure. ``` csi-driver: driver: controller: driver: podDeletePolicy: \"no-delete-pod\" # Supports \"no-delete-pod\", \"delete-deployment-pod\", \"delete-statefulset-pod\", or \"delete-both-statefulset-and-deployment-pod\". ``` Back on the master node, run the following commands to deploy the IOMesh cluster. ``` helm install iomesh iomesh/iomesh \\ --create-namespace \\ --namespace iomesh-system \\ --values iomesh.yaml \\ --wait ``` If successful, you should see output like this: ``` NAME: iomesh LAST DEPLOYED: Wed Jun 30 16:00:32 2021 NAMESPACE: iomesh-system STATUS: deployed REVISION: 1 TEST SUITE: None ``` Verify that all pods are in Running state. If so, then IOMesh has been installed" }, { "data": "``` kubectl --namespace iomesh-system get pods ``` If successful, you should see output like this: ``` NAME READY STATUS RESTARTS AGE iomesh-blockdevice-monitor-76ddc8cf85-82d4h 1/1 Running 0 3m23s iomesh-blockdevice-monitor-prober-kk2qf 1/1 Running 0 3m23s iomesh-blockdevice-monitor-prober-w6g5q 1/1 Running 0 3m23s iomesh-blockdevice-monitor-prober-z6b7f 1/1 Running 0 3m23s iomesh-chunk-0 3/3 Running 2 2m17s iomesh-chunk-1 3/3 Running 0 2m8s iomesh-chunk-2 3/3 Running 0 113s iomesh-csi-driver-controller-plugin-856565b79d-brt2j 6/6 Running 0 3m23s iomesh-csi-driver-controller-plugin-856565b79d-g6rnd 6/6 Running 0 3m23s iomesh-csi-driver-controller-plugin-856565b79d-kp9ct 6/6 Running 0 3m23s iomesh-csi-driver-node-plugin-6pbpp 3/3 Running 4 3m23s iomesh-csi-driver-node-plugin-bpr7x 3/3 Running 4 3m23s iomesh-csi-driver-node-plugin-krjts 3/3 Running 4 3m23s iomesh-hostpath-provisioner-6ffbh 1/1 Running 0 3m23s iomesh-hostpath-provisioner-bqrjp 1/1 Running 0 3m23s iomesh-hostpath-provisioner-rm8ms 1/1 Running 0 3m23s iomesh-iscsi-redirector-2pc26 2/2 Running 1 2m19s iomesh-iscsi-redirector-7msvs 2/2 Running 1 2m19s iomesh-iscsi-redirector-nnbb2 2/2 Running 1 2m19s iomesh-localpv-manager-6flpl 4/4 Running 0 3m23s iomesh-localpv-manager-m8qgq 4/4 Running 0 3m23s iomesh-localpv-manager-p88x7 4/4 Running 0 3m23s iomesh-meta-0 2/2 Running 0 2m17s iomesh-meta-1 2/2 Running 0 2m17s iomesh-meta-2 2/2 Running 0 2m17s iomesh-openebs-ndm-9chdk 1/1 Running 0 3m23s iomesh-openebs-ndm-cluster-exporter-68c757948-2lgvr 1/1 Running 0 3m23s iomesh-openebs-ndm-f6qkg 1/1 Running 0 3m23s iomesh-openebs-ndm-ffbqv 1/1 Running 0 3m23s iomesh-openebs-ndm-node-exporter-pnc8h 1/1 Running 0 3m23s 
iomesh-openebs-ndm-node-exporter-scd6q 1/1 Running 0 3m23s iomesh-openebs-ndm-node-exporter-tksjh 1/1 Running 0 3m23s iomesh-openebs-ndm-operator-bd4b94fd6-zrpw7 1/1 Running 0 3m23s iomesh-zookeeper-0 1/1 Running 0 3m17s iomesh-zookeeper-1 1/1 Running 0 2m56s iomesh-zookeeper-2 1/1 Running 0 2m21s iomesh-zookeeper-operator-58f4df8d54-2wvgj 1/1 Running 0 3m23s operator-87bb89877-fkbvd 1/1 Running 0 3m23s operator-87bb89877-kfs9d 1/1 Running 0 3m23s operator-87bb89877-z9tfr 1/1 Running 0 3m23s ``` NOTE: After installing IOMesh, the prepare-csi pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure open-iscsi. If the installation of open-iscsi is successful on all nodes, the system will automatically clean up all prepare-csi pods. However, if the installation of open-iscsi fails on any node, manual configuration of open-iscsi is required to determine the cause of the installation failure. NOTE: If open-iscsi is manually deleted after installing IOMesh, the prepare-csi pod will not automatically start to install open-iscsi when reinstalling IOMesh. In this case, manual configuration of open-iscsi is necessary. Prerequisite Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon x86_64, or Kunpeng AArch64. Procedure Download the IOMesh Offline Installation Package that matches your CPU architecture on the master node and each worker node. Unpack the installation package on the master node and each worker node. Make sure to replace <VERSION> with v1.0.1 and <ARCH> based on your CPU architecture. ``` tar -xf iomesh-offline-<VERSION>-<ARCH>.tgz && cd iomesh-offline ``` Load the IOMesh images on the master node and each worker node, then run the command that matches your container runtime (Docker, containerd, or Podman). ``` docker load --input ./images/iomesh-offline-images.tar``` ``` ctr --namespace k8s.io image import ./images/iomesh-offline-images.tar``` ``` podman load --input ./images/iomesh-offline-images.tar``` On the master node, run the following command to export the IOMesh default configuration file iomesh.yaml. ``` ./helm show values charts/iomesh > iomesh.yaml ``` Configure iomesh.yaml. Set dataCIDR to the data CIDR you previously configured in Prerequisites. ``` iomesh: chunk: dataCIDR: \"\" # Fill in the dataCIDR you configured previously in Prerequisites. ``` Set diskDeploymentMode according to your disk configuration. The default value is hybridFlash. If your disks are deployed in all-flash mode, change the value to allFlash. ``` diskDeploymentMode: \"hybridFlash\" # Set the disk deployment mode. ``` Specify the CPU architecture. If you have a hygon_x86_64 Kubernetes cluster, enter hygon_x86_64; otherwise leave the field blank. ``` platform: \"\" ``` Specify the IOMesh edition. The field is blank by default, and if left unspecified, the system will install the Community edition automatically. If you have purchased the Enterprise edition, set the value of edition to enterprise. For details, refer to IOMesh Specifications. ``` edition: \"\" # If left blank, Community Edition will be installed. ``` An optional step. The number of IOMesh chunk pods is 3 by default. If you install IOMesh Enterprise Edition, you can deploy more than 3 chunk pods. ``` iomesh: chunk: replicaCount: 3 # Specify the number of chunk pods. ``` An optional step. If you want IOMesh to only use the disks of specific Kubernetes nodes, configure the values of the node label.
``` iomesh: chunk: podPolicy: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: kubernetes.io/hostname operator: In values: iomesh-worker-0 # Specify the values of the node" }, { "data": "iomesh-worker-1 ``` It is recommended that you only configure values. For more configurations, refer to Pod Affinity. An optional step. Configure the podDeletePolicy field to determine whether the system should automatically delete the pod and rebuild it on another healthy node when the Kubernetes node that hosts the pod fails. This configuration applies only to the pod with an IOMesh-created PVC mounted and the access mode set to ReadWriteOnly. If left unspecified, the value of this field will be set to no-delete-pod by default, indicating that the system won't automatically delete and rebuild the pod in case of node failure. ``` csi-driver: driver: controller: driver: podDeletePolicy: \"no-delete-pod\" # Supports \"no-delete-pod\", \"delete-deployment-pod\", \"delete-statefulset-pod\", or \"delete-both-statefulset-and-deployment-pod\". ``` Back on the master node, run the following command to deploy the IOMesh cluster. ``` ./helm install iomesh ./charts/iomesh \\ --create-namespace \\ --namespace iomesh-system \\ --values iomesh.yaml \\ --wait ``` If successful, you should see output like this: ``` NAME: iomesh LAST DEPLOYED: Wed Jun 30 16:00:32 2021 NAMESPACE: iomesh-system STATUS: deployed REVISION: 1 TEST SUITE: None ``` Verify that all pods are in Running state. If so, then IOMesh has been installed successfully. ``` kubectl --namespace iomesh-system get pods ``` If successful, you should see output like this: ``` NAME READY STATUS RESTARTS AGE csi-driver-controller-plugin-89b55d6b5-8r2fc 6/6 Running 10 2m8s csi-driver-controller-plugin-89b55d6b5-d4rbr 6/6 Running 10 2m8s csi-driver-controller-plugin-89b55d6b5-n5s48 6/6 Running 10 2m8s csi-driver-node-plugin-9wccv 3/3 Running 2 2m8s csi-driver-node-plugin-mbpnk 3/3 Running 2 2m8s csi-driver-node-plugin-x6qrk 3/3 Running 2 2m8s iomesh-chunk-0 3/3 Running 0 52s iomesh-chunk-1 3/3 Running 0 47s iomesh-chunk-2 3/3 Running 0 43s iomesh-hostpath-provisioner-8fzvj 1/1 Running 0 2m8s iomesh-hostpath-provisioner-gfl9k 1/1 Running 0 2m8s iomesh-hostpath-provisioner-htzx9 1/1 Running 0 2m8s iomesh-iscsi-redirector-96672 2/2 Running 1 55s iomesh-iscsi-redirector-c2pwm 2/2 Running 1 55s iomesh-iscsi-redirector-pcx8c 2/2 Running 1 55s iomesh-meta-0 2/2 Running 0 55s iomesh-meta-1 2/2 Running 0 55s iomesh-meta-2 2/2 Running 0 55s iomesh-localpv-manager-jwng7 4/4 Running 0 6h23m iomesh-localpv-manager-khhdw 4/4 Running 0 6h23m iomesh-localpv-manager-xwmzb 4/4 Running 0 6h23m iomesh-openebs-ndm-5457z 1/1 Running 0 2m8s iomesh-openebs-ndm-599qb 1/1 Running 0 2m8s iomesh-openebs-ndm-cluster-exporter-68c757948-gszzx 1/1 Running 0 2m8s iomesh-openebs-ndm-node-exporter-kzjfc 1/1 Running 0 2m8s iomesh-openebs-ndm-node-exporter-qc9pt 1/1 Running 0 2m8s iomesh-openebs-ndm-node-exporter-v7sh7 1/1 Running 0 2m8s iomesh-openebs-ndm-operator-56cfb5d7b6-srfzm 1/1 Running 0 2m8s iomesh-openebs-ndm-svp9n 1/1 Running 0 2m8s iomesh-zookeeper-0 1/1 Running 0 2m3s iomesh-zookeeper-1 1/1 Running 0 102s iomesh-zookeeper-2 1/1 Running 0 76s iomesh-zookeeper-operator-7b5f4b98dc-6mztk 1/1 Running 0 2m8s operator-85877979-66888 1/1 Running 0 2m8s operator-85877979-s94vz 1/1 Running 0 2m8s operator-85877979-xqtml 1/1 Running 0 2m8s ``` NOTE: After installing IOMesh, the prepare-csi pod will automatically start on all schedulable 
nodes in the Kubernetes cluster to install and configure open-iscsi. If the installation of open-iscsi is successful on all nodes, the system will automatically clean up all prepare-csi pods. However, if the installation of open-iscsi fails on any node, manual configuration of open-iscsi is required to determine the cause of the installation failure. NOTE: If open-iscsi is manually deleted after installing IOMesh, the prepare-csi pod will not automatically start to install open-iscsi when reinstalling IOMesh. In this case, manual configuration of open-iscsi is necessary." } ]
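For convenience, the configuration fields discussed in the two procedures above can be gathered into a single sketch of iomesh.yaml. This is an illustrative, hand-assembled excerpt rather than the full chart values: the indentation, the example CIDR, and the worker hostnames are assumptions, so always start from the file exported by helm show values and edit only the fields you need.

```
# Hypothetical consolidated excerpt of iomesh.yaml (illustrative values only)
iomesh:
  chunk:
    dataCIDR: "10.234.1.0/24"      # data network CIDR from Prerequisites (example value)
    replicaCount: 3                # optional; more than 3 requires Enterprise Edition
    podPolicy:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - iomesh-worker-0   # example node names; only these nodes provide disks
                      - iomesh-worker-1

diskDeploymentMode: "hybridFlash"  # or "allFlash" for an all-flash cluster
platform: ""                       # e.g. "hygon_x86_64"; leave blank otherwise
edition: ""                        # blank installs the Community Edition

csi-driver:
  driver:
    controller:
      driver:
        podDeletePolicy: "no-delete-pod"   # see the podDeletePolicy step above for other values
```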
{ "category": "Runtime", "file_name": "prerequisites.md", "project_name": "IOMesh", "subcategory": "Cloud Native Storage" }
[ { "data": "Before installing and deploying IOMesh, verify the following requirements. NOTE: Expanding an IOMesh cluster to multiple clusters is not currently supported. You should decide at the beginning whether to deploy one or multiple clusters. For multi-cluster deployment and operations, refer to Multiple Cluster Management. Ensure that each worker node has the following hardware configurations, and note that IOMesh Community and Enterprise editions have the same hardware requirements. CPU Memory Storage Controller OS Disk Data & Cache Disk Depends on whether the storage architecture is tiered storage or non-tiered storage. | Architecture | Description | |:-|:-| | Tiered Storage | Faster storage media for cache and slower storage media for capacity. For example, use faster NVMe SSDs as cache disks and slower SATA SSDs or HDDs as data disks. | | Non-Tiered Storage | Cache disks are not required. All disks except the physical disk containing the system partition are used as data disks. | In this release, hybrid mode supports only tiered storage, and all-flash mode supports only non-tiered storage. | Deployment Mode | Disk Requirements | |:|:-| | Hybrid Mode | Cache Disk: At least one SATA SSD, SAS SSD, or NVMe SSD with a capacity greater than 60 GB. Data Disk: At least one SATA HDD or SAS HDD. The total SSD capacity should be 10% to 20% of the total HDD capacity. | | All-Flash Mode | At least one SSD with a capacity greater than 60 GB. | NIC To prevent network bandwidth contention, create a dedicated storage network for IOMesh or leverage an existing network." } ]
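Before installing, it can help to confirm that the worker nodes actually meet these requirements. The commands below are generic Kubernetes and Linux checks, not part of the IOMesh tooling; adjust them to your environment.

```
# CPU and memory capacity reported by each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory

# On each worker node: list block devices; ROTA=0 usually indicates an SSD and
# ROTA=1 a rotational HDD, which helps when choosing hybrid vs. all-flash mode
lsblk -d -o NAME,SIZE,ROTA,TYPE
```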
{ "category": "Runtime", "file_name": "1.3.0.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "The Longhorn Documentation Cloud native distributed block storage for Kubernetes Longhorn is a lightweight, reliable, and powerful distributed block storage system for Kubernetes. Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Ondat", "subcategory": "Cloud Native Storage" }
[ { "data": "The Applications tab displays all of your applications. For a detailed explanation of the view, refer to the table below: | Column | Description | Possible Values | |:--|:--|:-| | App Name | The name of the app | String (can contain special characters) | | Kind | Indicates the kind of application. | Replica StatefulSet Deployment | | Pods | The number of pods for your application | Integer | | Pod Status | Indicates the number of pods that are ready/syncing or with unknown/failed status | Ready Syncing Unknown Failed | | PV Amount | Indicates the amount of PVs taken up by the app | Integer | | PVs Size | Indicates the size of all Persistent volumes as a percentage of all available storage on all pods | Available GB on the pods | | PVs Status | Indicates the number of PVs that are ready/syncing or with unknown/failed status | Ready Syncing Unknown Failed | To view more details of your application, click View Details and you will be given an overview of the status of the app." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "MinIO is an object storage solution that provides an Amazon Web Services S3-compatible API and supports all core S3 features. MinIO is built to deploy anywhere - public or private cloud, baremetal infrastructure, orchestrated environments, and edge infrastructure. This site documents Operations, Administration, and Development of MinIO deployments on Kubernetes platform for the latest stable version of the MinIO Operator: 5.0.15. MinIO is released under dual license GNU Affero General Public License v3.0 and MinIO Commercial License. Deployments registered through MinIO SUBNET use the commercial license and include access to 24/7 MinIO support. You can get started exploring MinIO features using the MinIO Console and our play server at https://play.min.io. play is a public MinIO cluster running the latest stable MinIO server. Any file uploaded to play should be considered public and non-protected. For more about connecting to play, see MinIO Console play Login. Object Storage Essentials How to Connect to MinIO with JavaScript This procedure deploys a Single-Node Single-Drive MinIO server onto Kubernetes for early development and evaluation of MinIO Object Storage and its S3-compatible API layer. Use the MinIO Operator to deploy and manage production-ready MinIO tenants on Kubernetes. An existing Kubernetes deployment where at least one Worker Node has a locally-attached drive. A local kubectl installation configured to create and access resources on the target Kubernetes deployment. Familiarity with Kubernetes environments Familiarity with using a Terminal or Shell environment Download the MinIO Object Download minio-dev.yaml to your host machine: ``` curl https://raw.githubusercontent.com/minio/docs/master/source/extra/examples/minio-dev.yaml -O ``` The file describes two Kubernetes resources: A new namespace minio-dev, and A MinIO pod using a drive or volume on the Worker Node for serving data Select the Overview of the MinIO Object YAML for a more detailed description of the object. The minio-dev.yaml contains the following Kubernetes resources: ``` apiVersion: v1 kind: Namespace metadata: name: minio-dev # Change this value if you want a different namespace name labels: name: minio-dev # Change this value to match metadata.name apiVersion: v1 kind: Pod metadata: labels: app: minio name: minio namespace: minio-dev # Change this value to match the namespace metadata.name spec: containers: name: minio image: quay.io/minio/minio:latest command: /bin/bash -c args: minio server /data --console-address :9001 volumeMounts: mountPath: /data name: localvolume # Corresponds to the `spec.volumes` Persistent Volume nodeSelector: kubernetes.io/hostname: kubealpha.local # Specify a node label associated to the Worker Node on which you want to deploy the pod. volumes: name: localvolume hostPath: # MinIO generally recommends using locally-attached volumes path: /mnt/disk1/data # Specify a path to a local drive or volume on the Kubernetes worker node type:" }, { "data": "# The path to the last directory must exist ``` The object deploys two resources: A new namespace minio-dev, and A MinIO pod using a drive or volume on the Worker Node for serving data The MinIO resource definition uses Kubernetes Node Selectors and Labels to restrict the pod to a node with matching hostname label. Use kubectl get nodes --show-labels to view all labels assigned to each node in the cluster. The MinIO Pod uses a hostPath volume for storing data. 
This path must correspond to a local drive or folder on the Kubernetes worker node. Users familiar with Kubernetes scheduling and volume provisioning may modify the spec.nodeSelector, volumeMounts.name, and volumes fields to meet more specific requirements. Apply the MinIO Object Definition The following command applies the minio-dev.yaml configuration and deploys the objects to Kubernetes: ``` kubectl apply -f minio-dev.yaml ``` The command output should resemble the following: ``` namespace/minio-dev created pod/minio created ``` You can verify the state of the pod by running kubectl get pods: ``` kubectl get pods -n minio-dev ``` The output should resemble the following: ``` NAME READY STATUS RESTARTS AGE minio 1/1 Running 0 77s ``` You can also use the following commands to retrieve detailed information on the pod status: ``` kubectl describe pod/minio -n minio-dev kubectl logs pod/minio -n minio-dev ``` Temporarily Access the MinIO S3 API and Console Use the kubectl port-forward command to temporarily forward traffic from the MinIO pod to the local machine: ``` kubectl port-forward pod/minio 9000 9090 -n minio-dev ``` The command forwards the pod ports 9000 and 9090 to the matching port on the local machine while active in the shell. The kubectl port-forward command only functions while active in the shell session. Terminating the session closes the ports on the local machine. Note The following steps of this procedure assume an active kubectl port-forward command. To configure long term access to the pod, configure Ingress or similar network control components within Kubernetes to route traffic to and from the pod. Configuring Ingress is out of the scope for this documentation. Connect your Browser to the MinIO Server Access the MinIO Console by opening a browser on the local machine and navigating to http://127.0.0.1:9001. Log in to the Console with the credentials minioadmin | minioadmin. These are the default root user credentials. You can use the MinIO Console for general administration tasks like Identity and Access Management, Metrics and Log Monitoring, or Server Configuration. Each MinIO server includes its own embedded MinIO Console. For more information, see the MinIO Console documentation. (Optional) Connect the MinIO Client If your local machine has mc installed, use the mc alias set command to authenticate and connect to the MinIO deployment: ``` mc alias set k8s-minio-dev http://127.0.0.1:9000 minioadmin minioadmin mc admin info k8s-minio-dev ``` The name of the alias The hostname or IP address and port of the MinIO server The Access Key for a MinIO user The Secret Key for a MinIO user Connect your applications to MinIO Configure Object Retention Configure Security Deploy MinIO for Production Environments" } ]
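If you connected the MinIO Client in the optional step above, a quick smoke test of the alias might look like the following; the bucket and file names are arbitrary examples.

```
mc mb k8s-minio-dev/test-bucket              # create a bucket on the aliased deployment
echo "hello" > hello.txt
mc cp hello.txt k8s-minio-dev/test-bucket/   # upload a small object
mc ls k8s-minio-dev/test-bucket              # confirm the object is listed
```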
{ "category": "Runtime", "file_name": "v2.7.md", "project_name": "Ondat", "subcategory": "Cloud Native Storage" }
[ { "data": "Our self-evaluation guide is a step by step recipe for installing and testing Ondat. This guide is divided into three sections: For more comprehensive documentation including installation advice for complex setups, operational guides, and use-cases, see our main documentation site. Should you have questions or require support, there are several ways to get in touch with us. The fastest way to get in touch is to join our public Slack channel. You can also get in touch via email to info@storageos.com. In this document we detail a simple installation suitable for evaluation purposes. The etcd we install uses a 3 node cluster with local storage, and as such is not suitable for production workloads. However, for evaluation purposes it should be sufficient. For production deployments, see our main documentation pages. A standard Ondat installation uses the Ondat operator, which performs most platform-specific configuration for you. The Ondat operator has been certified by Red Hat and is open source. The basic installation steps are: While we do not require custom kernel modules or additional userspace tooling, Ondat does have a few basic prerequisites that are met by default by most modern distributions: Run the following command where kubectl is installed and with the context set for your Kubernetes cluster ``` curl -sSLo kubectl-storageos.tar.gz \\ https://github.com/storageos/kubectl-storageos/releases/download/v1.3.2/kubectl-storageos1.3.2linux_amd64.tar.gz \\ && tar -xf kubectl-storageos.tar.gz \\ && chmod +x kubectl-storageos \\ && sudo mv kubectl-storageos /usr/local/bin/ \\ && rm kubectl-storageos.tar.gz ``` You can find binaries for different architectures and systems in kubectl plugin. The following procedure deploys a local-path StorageClass for the Ondat Etcd. Note that this Etcd is suitable for evaluation purposes only. Do not use this cluster for production workloads. ``` kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.21/deploy/local-path-storage.yaml ``` The local-path StorageClass does not guarantee data safety or availability. Therefore the Ondat cluster cannot operate normally if the Etcd cluster becomes unavailable. For a production Etcd install check the Etcd prerequisites page. ``` kubectl storageos install \\ --include-etcd \\ --etcd-namespace storageos \\ --etcd-storage-class local-path \\ --admin-username storageos \\ --etcd-replicas=3 \\ --admin-password storageos ``` We have set the etcd-replicas to 3 in the example above assuming a cluster with 3 worker nodes. You can set the replicas as low as 1 for single node evaluations for edge deployments for example, although remember a single etcd cluster does not provide any resilience. Ondat installs all its components in the storageos namespace. ``` kubectl -n storageos get pod -w ``` ``` NAME READY STATUS RESTARTS AGE storageos-api-manager-65f5c9dbdf-59p2j 1/1 Running 0 36s storageos-api-manager-65f5c9dbdf-nhxg2 1/1 Running 0 36s storageos-csi-helper-65dc8ff9d8-ddsh9 3/3 Running 0 36s storageos-node-4njd4 3/3 Running 0 55s storageos-node-5qnl7 3/3 Running 0 56s storageos-node-7xc4s 3/3 Running 0 52s storageos-node-bkzkx 3/3 Running 0 58s storageos-node-gwp52 3/3 Running 0 62s storageos-node-zqkk7 3/3 Running 0 62s storageos-operator-8f7c946f8-npj7l 2/2 Running 0 64s storageos-scheduler-86b979c6df-wndj4 1/1 Running 0 64s ``` Wait until all the pods are ready. 
It usually takes ~60 seconds to complete ``` kubectl -n storageos create -f-<<END apiVersion: apps/v1 kind: Deployment metadata: name: storageos-cli labels: app: storageos run: cli spec: replicas: 1 selector: matchLabels: app: storageos-cli run: cli template: metadata: labels: app: storageos-cli run: cli spec: containers: command: /bin/sh -c \"while true; do sleep 3600; done\" env: name: STORAGEOS_ENDPOINTS value: http://storageos:5705 name: STORAGEOS_USERNAME value: storageos name: STORAGEOS_PASSWORD value: storageos image: storageos/cli:v2.9.0 name: cli END ``` You can get the ClusterId required on the next step using the CLI pod ``` POD=$(kubectl -n storageos get pod" }, { "data": "--no-headers -lapp=storageos-cli) kubectl -n storageos exec $POD -- storageos get licence ``` Newly installed Ondat clusters must be licensed within 24 hours. Our Community Edition tier supports up to 1TiB of provisioned storage. To obtain a license, follow the instructions on our licensing operations page. Now that we have a working Ondat cluster, we can provision a volume to test everything is working as expected. Create a PVC ``` kubectl create -f - <<END apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1 spec: storageClassName: \"storageos\" accessModes: ReadWriteOnce resources: requests: storage: 5Gi END ``` Create 2 replicas by labeling your PVC ``` kubectl label pvc pvc-1 storageos.com/replicas=2 ``` Verify that the volume and replicas were created with the CLI pvc-1 should be listed in the CLI output ``` POD=$(kubectl -n storageos get pod -ocustom-columns=_:.metadata.name --no-headers -lapp=storageos-cli) kubectl -n storageos exec $POD -- storageos get volumes ``` Create a pod that consumes the PVC ``` kubectl create -f - <<END apiVersion: v1 kind: Pod metadata: name: d1 spec: containers: name: debian image: debian:9-slim command: [\"/bin/sh\",\"-c\",\"while true; do sleep 3600; done\"] volumeMounts: mountPath: /mnt name: v1 volumes: name: v1 persistentVolumeClaim: claimName: pvc-1 END ``` Check that the pod starts successfully. If the pod starts successfully then the Ondat cluster is working correctly ``` kubectl get pod d1 -w ``` The pod mounts an Ondat volume under /mnt so any files written there will persist beyond the lifetime of the pod. This can be demonstrated using the following commands. Execute a shell inside the pod and write some data to a file ``` kubectl exec -it d1 -- bash ``` Hello World! should be printed to the console. Delete the pod ``` kubectl delete pod d1 ``` Recreate the pod ``` kubectl create -f - <<END apiVersion: v1 kind: Pod metadata: name: d1 spec: containers: name: debian image: debian:9-slim command: [\"/bin/sh\",\"-c\",\"while true; do sleep 3600; done\"] volumeMounts: mountPath: /mnt name: v1 volumes: name: v1 persistentVolumeClaim: claimName: pvc-1 END ``` Open a shell inside the pod and check the contents of /mnt/hello ``` kubectl exec -it d1 -- cat /mnt/hello ``` Hello World! should be printed to the console. Now that you have a fully functional Ondat cluster we will explain some of our features that may be of use to you as you complete application and synthetic benchmarks. Ondat features are all enabled/disabled by applying labels to volumes. These labels can be passed to Ondat via persistent volume claims (PVCs) or can be applied to volumes using the Ondat CLI or GUI. The following is not an exhaustive feature list but outlines features which are commonly of use during a self-evaluation. 
Ondat enables synchronous replication of volumes using the storageos.com/replicas label. The volume that is active is referred to as the master volume. The master volume and its replicas are always placed on separate nodes. In fact if a replica cannot be placed on a node without a replica of the same volume, the volume will fail to be created. For example, in a three node Ondat cluster a volume with 3 replicas cannot be created as the third replica cannot be placed on a node that doesnt already contain a replica of the same volume. See our replication documentation for more information on volume replication. To test volume replication create the following PersistentVolumeClaim ``` kubectl create -f - <<END apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-replicated labels:" }, { "data": "\"1\" spec: storageClassName: \"storageos\" accessModes: ReadWriteOnce resources: requests: storage: 5Gi END ``` Note that volume replication is enabled by setting the storageos.com/replicas label on the volume. Confirm that a replicated volume has been created by using the Ondat CLI or UI ``` POD=$(kubectl -n storageos get pod -ocustom-columns=_:.metadata.name --no-headers -lapp=storageos-cli) kubectl -n storageos exec $POD -- storageos get volumes ``` Create a pod that uses the PVC ``` kubectl create -f - <<END apiVersion: v1 kind: Pod metadata: name: replicated-pod spec: containers: name: debian image: debian:9-slim command: [\"/bin/sleep\"] args: [ \"3600\" ] volumeMounts: mountPath: /mnt name: v1 volumes: name: v1 persistentVolumeClaim: claimName: pvc-replicated END ``` Write data to the volume ``` kubectl exec -it replicated-pod -- bash ``` Hello World! should be printed to the console. Find the location of the master volume and shutdown the node Shutting down a node causes all volumes, with online replicas, on the node to be evicted. For replicated volumes this immediately promotes a replica to become the new master. ``` kubectl get pvc ``` ``` NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc-replicated Bound pvc-29e2ad6e-8c4e-11e9-8356-027bfbbece86 5Gi RWO storageos 1m ``` ``` POD=$(kubectl -n storageos get pod -ocustom-columns=_:.metadata.name --no-headers -lapp=storageos-cli) kubectl -n storageos exec $POD -- storageos get volumes ``` ``` NAMESPACE NAME SIZE LOCATION ATTACHED ON REPLICAS AGE default pvc-4e796a62-0271-45f9-9908-21d58789a3fe 5.0 GiB kind-worker (online) kind-worker2 1/1 26 seconds ago ``` Check the location of the master volume and notice that it is on a new node. If the pod that mounted the volume was located on the same node that was shutdown then the pod will need to be recreated. ``` POD=$(kubectl -n storageos get pod -ocustom-columns=_:.metadata.name --no-headers -lapp=storageos-cli) kubectl -n storageos exec $POD -- storageos get volumes ``` ``` NAMESPACE NAME SIZE LOCATION ATTACHED ON REPLICAS AGE default pvc-4e796a62-0271-45f9-9908-21d58789a3fe 5.0 GiB kind-worker2 (online) kind-worker2 1/1 46 seconds ago ``` Check that the data is still accessible to the pod ``` kubectl exec -it replicated-pod -- bash ``` Hello World! should be printed to the console. Benchmarking storage is a complex topic. Considering the many books and papers that have been written about benchmarking, we could write many paragraphs here and only begin to scratch the surface. Taking this complexity into account we present recipes for both synthetic benchmarks using FIO, and a sample application benchmark to test PostgreSQL using pgbench. 
The approaches are complementary - synthetic benchmarks allow us to strictly control the parameters of the IO we put through the system in order to stress various aspects of it. Application benchmarks allow us to get a sense of the performance of the system when running an actual representative workload - which of course is the ultimate arbiter of performance. Despite the inherent complexity of benchmarking storage, there are a few general considerations to keep in mind. When a workload is placed on the same node as a volume, there is no network round trip for IO, and performance is consequently improved. When considering the performance of Ondat, it is important to evaluate both local and remote volumes, since for certain workloads we want the high performance of a local attachment, but we also desire the flexibility of knowing that our less performance-sensitive workloads can run from any node in the cluster. Synchronous replication does not impact the read performance of a volume, but it can have a significant impact on the write performance of the volume, since all writes must be propagated to replicas before being acked to the application. The impact varies in proportion to the inter-node latency. For an inter-node latency of 1ms, we have a ceiling of 1000 write IOPS, and in reality a little less than that to allow for processing time on the nodes. This is less concerning than it may first appear, since many applications will issue multiple writes in parallel (known as increasing the queue depth). Synthetic benchmarks using tools such as FIO are a useful way to begin measuring Ondat performance. While not fully representative of application performance, they allow us to reason about the performance of storage devices without the added complexity of simulating real-world workloads, and provide results easily comparable across platforms. FIO has a number of parameters that can be adjusted to simulate a variety of workloads and configurations. Parameters that are particularly important are: For this self-evaluation we will run a set of tests based on the excellent DBench, which distills the numerous FIO options into a series of well-crafted scenarios: For convenience we present a single script to run the scenarios using local and remote volumes both with and without a replica. Deploy the FIO tests for the four scenarios using the following command: ``` curl -sL https://raw.githubusercontent.com/ondat/use-cases/main/scripts/deploy-synthetic-benchmarks.sh | bash ``` The script will take approximately 20 minutes to complete, and will print the results to STDOUT. The exact results observed will depend on the particular platform and environment, but the following trends should be observable: While synthetic benchmarks are useful for examining the behaviour of Ondat with very specific workloads, in order to get a realistic picture of Ondat performance actual applications should be tested. Many applications come with test suites which provide standard workloads. For best results, test using your application of choice with a representative configuration and real-world data. As an example of benchmarking an application, the following steps lay out how to benchmark a Postgres database backed by an Ondat volume. Start by cloning the Ondat use cases repository. Note that this is the same repository we cloned earlier, so if you already have a copy just cd storageos-usecases/pgbench.
``` git clone https://github.com/storageos/use-cases.git storageos-usecases ``` Move into the Postgres examples folder ``` cd storageos-usecases/pgbench ``` Decide which node you want the pgbench pod and volume to be located on. The node needs to be labelled app=postgres ``` kubectl label node <NODE> app=postgres ``` Then set the storageos.com/hint.master label in the 20-postgres-statefulset.yaml file to match the Ondat nodeID of the node you have chosen before creating all the files. The Ondat nodeID can be obtained by running a describe node command with the CLI ``` kubectl create -f . ``` Confirm that Postgres is up and running ``` kubectl get pods -w -l app=postgres ``` Use the Ondat CLI or the GUI to check the master volume location and the mount location. They should match ``` POD=$(kubectl -n storageos get pod -ocustom-columns=_:.metadata.name --no-headers -lapp=storageos-cli) kubectl -n storageos exec $POD -- storageos get volumes ``` Exec into the pgbench container and run pgbench ``` kubectl exec -it pgbench -- bash -c '/opt/cpm/bin/start.sh' ``` After completing these steps you will have benchmark scores for Ondat. Do keep in mind that benchmarks are only part of the story and that there is no replacement for testing actual production or production-like workloads. Ondat invites you to provide feedback on your self-evaluation to the Slack channel or by email." } ]
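If you prefer to run a single FIO job by hand instead of the DBench-based script above, an illustrative invocation against an Ondat-backed mount might look like the one below. It assumes fio is installed in a pod (or on a node) that has the volume mounted at /mnt, and the parameters roughly mirror a 4k random-write scenario at queue depth 16; it is a sketch, not part of the Ondat tooling.

```
fio --name=ondat-randwrite \
    --filename=/mnt/fio-testfile --size=1G \
    --rw=randwrite --bs=4k --iodepth=16 --numjobs=1 \
    --ioengine=libaio --direct=1 \
    --time_based --runtime=60 --group_reporting
```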
{ "category": "Runtime", "file_name": "v2.1.md", "project_name": "Ondat", "subcategory": "Cloud Native Storage" }
[ { "data": "We recommend always using tagged versions of Ondat rather than latest, and to perform upgrades only after reading the release notes. The latest tagged release is 2.10.0. For installation instructions see our Install page. The latest CLI release is 2.10.0, available from GitHub or containerised from DockerHub. This release contains various new features and improves stability. Features available for technical preview may contain bugs and are not recommended for production clusters This release improves stability and reduces resource consumption when creating many volumes at once. Storage Pools Online Volume Resize CSI Allowed Topologies Data Plane Data Plane Kubernetes Data Plane Kubernetes Control Plane Data Plane k8s Data Plane Control Plane k8s Control Plane Data Plane For Ondat 2.8.0, we recommend having at least a 5 node cluster when running etcd within Kubernetes, as we recommend running etcd with 5 replicas. k8s Control Plane Data Plane Portal Manager k8s k8s & Orchestrator Rolling Upgrade This is a tech preview, we only recommend using this feature on your test clusters Operator API Manager Control Plane Data Plane If you decide to upgrade to 2.7.0 and want to downgrade, you can only roll back to 2.6.0, not earlier versions. Roll back instruction can be found here Operator Control Plane Data Plane Portal Manager: Kubectl Plugin: Operator: Kubernetes: Components Dataplane: Dataplane: Control Plane: Kubernetes: controlplane: Improve error message when unable to set the cluster-wide log level on an individual node. dataplane: Fix rare assert when retrying some writes under certain conditions. dataplane: Log format string safety improvements. dataplane: Backuptool reliability improvements. k8s/cluster-operator: Allow api-manager to patch events for leader election. controlplane: Improve error message during failed --label argument parsing. controlplane: Double-check the OS performs the NFS mount as directed, and unmount on error. controlplane: Improved FSM and sync CC logging. dataplane: Log message quality, quantity and visibility improvements. dataplane: Volume backup tool error reporting improvements. k8s: Pod scheduler fixes. This release adds production-grade encryption at rest for Ondat volumes, as well as: Note: v2.4.0 requires Kubernetes 1.17 or newer. controlplane/api: Compression is not disabled by default when provisioning volumes via the API. controlplane/api: Spec has incorrect response body for partial bundle. controlplane/csi: Error incorrectly returned when concurrent namespace creation requests occur. controlplane/diagnostics: GetDiagnostics RPC response does not indicate if node timed out collecting some data. controlplane/diagnostics: Invalid character \\u0080 looking for beginning of value via CLI when a node is down. controlplane/diagnosticutil: Include attachment type in unpacking local volumes. controlplane/diagnotics: Node timing out during local diagnostics is missing logs. controlplane/healthcheck: Combined sources fires callback in initialisation. controlplane/volumerpc: Got unknown replica state 0 discards results. dataplane/fix: Check blob writes dont exceed internal limit. dataplane/fix: Checking the return code of InitiatorAddConnection(). dataplane/fix: Director signal hander thread is not joined. dataplane/fix: Dont block I/O when many retries are in progress. dataplane/fix: gRPC API robustness improvements. dataplane/fix: Initiator needs to include the node UUID in Endpoint. 
dataplane/fix: Low-level I/O engines dont propagate IO failures via" }, { "data": "dataplane/fix: Log available contextual information where possible. dataplane/fix: Ensure BackingStore is not deleted twice. dataplane/fix: Serialise LIO create/delete operations to avoid kernel bug. dataplane/fix: Dataplane shutdown time can exceed 10 seconds. dataplane/fix: Fix non-threadsafe access on TCMU device object. dataplane/fix: Dont hold lock unecessarily in Rdb::Reap. k8s/api-manager: First ip octet should not be 0, 127, 169 or 224. k8s/api-manager: Keygen should only operate on Ondat PVCs. k8s/cluster-operator: Add perm to allow VolumeAttachment finalizer removal. k8s/cluster-operator: Fix apiManagerContainer tag in v1 deploy CRD. k8s/cluster-operator: Fix docker credentials check. k8s/cluster-operator: Fix ServiceAccountName in the OLM bundle. k8s/cluster-operator: Set webhook service-for label to be unique. controlplane/api: Make version provided consistent for NFS/Host attach handler. controlplane/attachtracker: Cleanup NFS mounts at shutdown. controlplane/build: Migrate to go modules for dependency management. controlplane/build: Use sentry prod-url if build branch has release prefix. controlplane/cli: Colour for significant feedback. controlplane/cli: Update node must set compute only separately to other labels. controlplane/cli: Warn user that updating labels action can be reverted. controlplane/csi: Bound request handlers with timeout similar to HTTP API. controlplane/csi: Remove error obfuscation and clarify log messages. controlplane/csi: Stop logging not found. controlplane/dataplane: Remove UUID mappings during failed presentation creation rollback. controlplane/dataplaneevents: Decorate logs with extra event details. controlplane/diagnostics: Asymmetrically encrypt bundles. controlplane/diagnostics: Collect FSM state. controlplane/diagnostics: Support single node bundle collection. controlplane/diagnosticutil: Decorate log entries with well-known field corresponding to node id/name. controlplane/diagnosticutil: Parallelise unpacking of disjoint data. controlplane/diagnosticutil: Unpack gathered NFS config data. controlplane/fsm: Perform state match check before version check. controlplane/k8s: Use secret store. controlplane/log: Fix race condition writing logs. controlplane/log: Handle originator timestamps from dataplane logs. controlplane/meta: Error checking code uses Go 1.13 error features. controlplane/rpc: Make CP gRPC calls to the DP configuration endpoints idempotent. controlplane/sentry: Prevent some unnecessary alerts. controlplane/slog: Clean up error logging in RPC provision stack. controlplane/states: Add the from state as a log field for state transition msgs. controlplane/store/etcd: Decorate lock logs with associated ID fields. controlplane/ui: Warn user that updating labels action will be reverted. controlplane/vendor: Bump service repository. controlplane/volume: Encryption support in kubernetes. dataplane/fs: Dont return from PresentationCreate RPC until the device is fully created. dataplane/fs: Each LUN should have its own HBA. dataplane/fs: Improve device ready check. dataplane/internals: Improve DP stats implementation. dataplane/internals: Major director refactor. dataplane/log: Logs should output originating timestamps. dataplane/log: Move to log3 API exclusively. dataplane/log: Remove log2. dataplane/log: Set loglevel and logfilter via the supctl tool. dataplane/rdb: Handle unaligned I/O in RdbPlugin. 
dataplane/rdb: Implement low-level delete block functionality. dataplane/rdb: rocksdb Get() should use an iterator. dataplane/story: Support for block unmapping. dataplane/story: Add backuptool binary to export volume data. dataplane/story: Volume encryption-at-rest. dataplane/sync: Add retries for failed sync IOs. dataplane/sync: VolumeHash performance improvements. dataplane/sys: Find and check OS pids.max on startup. k8s/api-manager: Dont attempt service creation if the owning PVC doesnt exist. k8s/api-manager: Compare SC and PVC creation time during label sync. k8s/api-manager: Add action to ensure modules tidy & vendored. k8s/api-manager: Add defaults from StorageClass. k8s/api-manager: Add fencing controller. k8s/api-manager: Add flag and support for cert" }, { "data": "k8s/api-manager: Add flags to disable label sync controllers. k8s/api-manager: Add namespace delete controller. k8s/api-manager: Add node delete controller. k8s/api-manager: Add OpenTelemetry tracing with Jaeger backend. k8s/api-manager: Add PVC label sync controller. k8s/api-manager: Add PVC mutating controller. k8s/api-manager: Add support for failure-mode label. k8s/api-manager: Add support for volume encryption. k8s/api-manager: Allow multiple mutators. k8s/api-manager: Build and tests should use vendored deps. k8s/api-manager: Bump controller-runtime to v0.6.4. k8s/api-manager: Encrypt only provisioned PVCs. k8s/api-manager: Fix tracing example. k8s/api-manager: Introduce StorageClass to PVC annotation mutator. k8s/api-manager: Log API reason. k8s/api-manager: Migrate namespace delete to operator toolkit. k8s/api-manager: Migrate node delete to operator toolkit. k8s/api-manager: Migrate to kubebuilder v3. k8s/api-manager: Node label sync. k8s/api-manager: Node label update must include current reserved labels. k8s/api-manager: Pass context to API consistently. k8s/api-manager: Rename leader election config map. k8s/api-manager: RFC 3339 and flags to configure level & format. k8s/api-manager: Run shared volume controller with manager. k8s/api-manager: Set initial sync delay. k8s/api-manager: Set Pod scheduler. k8s/api-manager: Standardise on ObjectKeys for all API function signatures. k8s/api-manager: Ondat API interface and mocks. k8s/api-manager: Update dependencies and go version 1.16. k8s/api-manager: Update to new external object sync. k8s/api-manager: Use composite client in admission controllers. k8s/api-manager: Use Object interface. k8s/cluster-operator: Changes to the StorageOSCluster CR get applied to Ondat. k8s/cluster-operator: Increase provisioner timeout from 15 to 30s. k8s/cluster-operator: Reduce CSI provisioner worker pool. k8s/cluster-operator: Set priority class for helper pods. k8s/cluster-operator: Add pod anti-affinity to api-manager. k8s/cluster-operator: Add pvc mutator config. k8s/cluster-operator: Add rbac for api-manager fencing. k8s/cluster-operator: Add RBAC for encryption key management. k8s/cluster-operator: Add RBAC needed for csi-resizer v1.0.0. k8s/cluster-operator: Add webhook resource migration. k8s/cluster-operator: Add workflow to push image to RedHat registry. k8s/cluster-operator: Bump csi-provisioner to v2.1.1. k8s/cluster-operator: Call APIManagerWebhookServiceTest test. k8s/cluster-operator: Delete CSI expand secret when cluster is deleted. k8s/cluster-operator: Docker login to avoid toomanyrequests error. k8s/cluster-operator: Move pod scheduler webhook to api-manager. k8s/cluster-operator: RBAC to allow sync functions move to api-manager. 
k8s/cluster-operator: Remove pool from StorageClass, not used in v2. k8s/cluster-operator: Remove some other v1.14 specific logic. k8s/cluster-operator: Set the default container for kubectl logs. k8s/cluster-operator: Update code owners. k8s/cluster-operator: Update CSI sidecar images. k8s/cluster-operator: Validate minimum Kubernetes version. This release adds production-grade shared file support to v2, previously a technology preview in v1. This release focuses on performance. We analysed existing performance characteristics across a variety of real-world use cases and ended up with improvements across the board. Of particular note: We are extremely proud of our performance and we love to talk about it. Have a look at the Benchmarking section of the self-evaluation guide and consider sharing your results. Our PRE engineers are available to discuss in our slack channel. Data engine revamp focused on provable consistency and performance. Key characteristics: On-disk compression is now disabled by default as in most scenarios this offers better performance. To enable on-disk compression for a specific workload, see compression. Initial release of version 2.x. See Ondat v2.0 Release Blog for details. To upgrade from version 1.x to 2.x, contact Ondat support for assistance." } ]
{ "category": "Runtime", "file_name": "authentication.md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide demonstrates how to install the ORAS CLI on different platforms. Install oras using Homebrew: ``` brew install oras``` Install oras using Snap: ``` snap install oras --classic``` Install ORAS from the latest release artifacts: If you want to install ORAS on an AMD64-based Linux machine, run the following command: ``` VERSION=\"1.2.0\"curl -LO \"https://github.com/oras-project/oras/releases/download/v${VERSION}/oras_${VERSION}_linux_amd64.tar.gz\"mkdir -p oras-install/tar -zxf oras_${VERSION}_*.tar.gz -C oras-install/sudo mv oras-install/oras /usr/local/bin/rm -rf oras_${VERSION}_*.tar.gz oras-install/``` If you want to install ORAS on an ARM64-based Linux machine, you can download it from https://github.com/oras-project/oras/releases/download/v1.2.0/oras_1.2.0_linux_arm64.tar.gz. If you want to install ORAS on a Mac computer with Apple silicon, run the following command: ``` VERSION=\"1.2.0\"curl -LO \"https://github.com/oras-project/oras/releases/download/v${VERSION}/oras_${VERSION}_darwin_arm64.tar.gz\"mkdir -p oras-install/tar -zxf oras_${VERSION}_*.tar.gz -C oras-install/sudo mv oras-install/oras /usr/local/bin/rm -rf oras_${VERSION}_*.tar.gz oras-install/``` If you want to install ORAS on an Intel-based Mac, you can download it from https://github.com/oras-project/oras/releases/download/v1.2.0/oras_1.2.0_darwin_amd64.tar.gz. ``` winget install oras --version 1.2.0``` ``` set VERSION=\"1.2.0\"curl.exe -sLO \"https://github.com/oras-project/oras/releases/download/v%VERSION%/oras_%VERSION%_windows_amd64.zip\"tar.exe -xvzf oras_%VERSION%_windows_amd64.zipmkdir -p %USERPROFILE%\\bin\\copy oras.exe %USERPROFILE%\\bin\\set PATH=%USERPROFILE%\\bin\\;%PATH%``` A public Docker image containing the CLI is available on GitHub Container Registry: ``` docker run -it --rm -v $(pwd):/workspace ghcr.io/oras-project/oras:v1.2.0 help``` The default WORKDIR in the image is /workspace. Nix is a tool that takes a unique approach to package management and system configuration. The Nix Packages collection (Nixpkgs) is a set of over 80,000 packages for the Nix package manager. oras also has a Nix package available in the Nixpkgs repository. You can install the Nix CLI from here. You can install oras using the following command: ``` nix-env -iA nixpkgs.oras``` ``` $ oras versionVersion: 1.2.0Go version: go1.22.3Git commit: dcef719e208a9b226b15bc6512ad729a7dd93270Git tree state: clean``` You can check out our how-to guide if you would like to verify ORAS CLI Binaries." } ]
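If you cannot (or prefer not to) write to /usr/local/bin, a per-user variation of the Linux steps above might look like the following; the target directory is only an example.

```
mkdir -p "$HOME/.local/bin"
mv oras-install/oras "$HOME/.local/bin/"
export PATH="$HOME/.local/bin:$PATH"   # add this line to your shell profile to persist it
oras version                           # confirm the binary is found on PATH
```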
{ "category": "Runtime", "file_name": ".md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "Registries are evolving as generic artifact stores. To enable this goal, the ORAS project provides a way to push and pull OCI Artifacts to and from OCI Registries. Users seeking a generic registry client can benefit from the ORAS CLI, while developers can build their own clients on top of one of the ORAS client libraries. The Open Container Initiative (OCI) defines the specifications and standards for container technologies. This includes the API for working with container registries, known formally as the OCI Distribution Specification. (a.k.a. the \"distribution-spec\"). The distribution-spec was written based on an open-source registry server originally released by the company Docker, which lives on GitHub at distribution/distribution (now a CNCF project). There are now a number of other open-source and commercial distribution-spec implementations, a list of which can be found here. Registries that implement the distribution-spec are referred to herein as OCI Registries. For a long time (pretty much since the beginning), people have been using/abusing OCI Registries to store non-container things. For example, you could upload a video to Docker Hub by just stuffing the video file into a layer in a Docker image (don't do this). The OCI Artifacts project is an attempt to define an opinionated way to leverage OCI Registries for arbitrary artifacts without masquerading them as container images. Specifically, OCI Image Manifests have a required field known as config.mediaType. According to the guidelines provided by OCI Artifacts, this field provides the ability to differentiate between various types of artifacts. Artifacts stored in an OCI Registry using this method are referred to herein as OCI Artifacts. ORAS works similarly to tools you may already be familiar with, such as docker. It allows you to push (upload) and pull (download) things to and from an OCI Registry, and also handles login (authentication) and token flow (authorization). What ORAS does differently is shift the focus from container images to other types of artifacts. ORAS is the de facto tool for working with OCI Artifacts. It treats media types as a critical piece of the puzzle. Container images are never assumed to be the artifact in question. By default, when pushing artifacts using ORAS, the config.mediaType field is set to unknown: ``` application/vnd.unknown.config.v1+json``` Authors of new OCI Artifacts are thus encouraged to define their own media types specific to their artifact, which their custom client(s) know how to operate on. If you wish to start publishing OCI Artifacts right away, take a look at the ORAS CLI. Developers who wish to provide their own user experience should use one of the ORAS client libraries." } ]
{ "category": "Runtime", "file_name": "quickstart.md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "To distribute OCI artifacts, we need to understand OCI registries. These registries store container images and other artifacts for easy access. Distributing OCI artifacts means pushing them to these registries so others can pull them for use. We will be using zot registry in this guide. Zot registry is an OCI-native container registry for distributing container images and OCI artifacts. In order to follow the steps given, you would be required to install the ORAS CLI. You can follow the installation guide to do so. We will be running zot using docker. However, you can refer to their installation guide to find more ways to install the registry. Prerequisites ``` docker run -d -p 5000:5000 --name oras-quickstart ghcr.io/project-zot/zot-linux-amd64:latest``` Let's push an OCI artifact to the registry using the ORAS CLI. ``` echo \"hello world\" > artifact.txt``` ``` oras push --plain-http localhost:5000/hello-artifact:v1 \\ --artifact-type application/vnd.acme.rocket.config \\ artifact.txt:text/plain``` ``` Uploading a948904f2f0f artifact.txtUploaded a948904f2f0f artifact.txtPushed [registry] localhost:5000/hello-artifact:v1Digest: sha256:bcdd6799fed0fca0eaedfc1c642f3d1dd7b8e78b43986a89935d6fe217a09cee``` After pushing the artifact, it can be seen in the zot user interface at http://localhost:5000/ Let's now pull the artifact we have pushed in the pervious step. ``` oras pull localhost:5000/hello-artifact:v1``` ``` Downloading a948904f2f0f artifact.txtDownloaded a948904f2f0f artifact.txtPulled [registry] localhost:5000/hello-artifact:v1Digest: sha256:19e1b5170646a1500a1ac56bad28675ab72dc49038e69ba56eb7556ec478859f``` First, let's create another sample file to attach to the previously uploaded artifact. ``` echo \"hello world\" > hi.txt``` You can use the command below to attach hi.txt to the artifact we pushed above: ``` oras attach --artifact-type doc/example localhost:5000/hello-artifact:v1 hi.txt``` ``` Exists a948904f2f0f hi.txtAttached to [registry] localhost:5000/hello-artifact@sha256:327db68f73d0ed53d528d927a6703c00739d7c1076e50762c3f6641b51b76fdcDigest: sha256:bcdd6799fed0fca0eaedfc1c642f3d1dd7b8e78b43986a89935d6fe217a09cee``` ``` oras discover localhost:5000/hello-artifact:v1``` ``` Discovered 1 artifact referencing v1Digest: sha256:327db68f73d0ed53d528d927a6703c00739d7c1076e50762c3f6641b51b76fdcArtifact Type Digestdoc/example sha256:bcdd6799fed0fca0eaedfc1c642f3d1dd7b8e78b43986a89935d6fe217a09cee``` Note: For settings up a registry with TLS follow these steps. Stop and remove the running quick start registry and the uploaded content. ``` docker rm $(docker stop oras-quickstart)``` You can now successfully push OCI artifacts to your zot registry! As OCI registries are used to securely store and share container images, they greatly help with collaboration and code sharing. They enable teams to acquire and use images and artifacts through a standardized artifact interface. This is why it is considered to play a crucial role in maintaining consistency among teams." } ]
{ "category": "Runtime", "file_name": "use_oras_cli.md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "To list available commands, either run oras with no parameters or execute oras help: ``` $ orasUsage: oras [command]Available Commands: attach Attach files to an existing artifact blob Blob operations completion Generate the autocompletion script for the specified shell cp Copy artifacts from one target to another discover Discover referrers of a manifest in the remote registry help Help about any command login Log in to a remote registry logout Log out from a remote registry manifest Manifest operations pull Pull files from remote registry push Push files to remote registry repo Repository operations tag Tag a manifest in the remote registry version Show the oras version informationFlags: -h, --help help for orasUse \"oras [command] --help\" for more information about a command.```" } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Welcome to Rook! We hope you have a great experience installing the Rook cloud-native storage orchestrator platform to enable highly available, durable Ceph storage in Kubernetes clusters. Don't hesitate to ask questions in our Slack channel. Sign up for the Rook Slack here. This guide will walk through the basic setup of a Ceph cluster and enable K8s applications to consume block, object, and file storage. Always use a virtual machine when testing Rook. Never use a host system where local devices may mistakenly be consumed. Kubernetes versions v1.25 through v1.30 are supported. Architectures released are amd64 / x86_64 and arm64. To check if a Kubernetes cluster is ready for Rook, see the prerequisites. To configure the Ceph storage cluster, at least one of these local storage options are required: A simple Rook cluster is created for Kubernetes with the following kubectl commands and example manifests. | 0 | 1 | |:--|:-| | 1 2 3 4 | $ git clone --single-branch --branch v1.14.5 https://github.com/rook/rook.git cd rook/deploy/examples kubectl create -f crds.yaml -f common.yaml -f operator.yaml kubectl create -f cluster.yaml | ``` 1 2 3 4``` ``` $ git clone --single-branch --branch v1.14.5 https://github.com/rook/rook.git cd rook/deploy/examples kubectl create -f crds.yaml -f common.yaml -f operator.yaml kubectl create -f cluster.yaml ``` After the cluster is running, applications can consume block, object, or file storage. The first step is to deploy the Rook operator. Important The Rook Helm Chart is available to deploy the operator instead of creating the below manifests. Note Check that the example yaml files are from a tagged release of Rook. Note These steps are for a standard production Rook deployment in Kubernetes. For Openshift, testing, or more options, see the example configurations documentation. | 0 | 1 | |:-|:-| | 1 2 3 4 5 | cd deploy/examples kubectl create -f crds.yaml -f common.yaml -f operator.yaml # verify the rook-ceph-operator is in the `Running` state before proceeding kubectl -n rook-ceph get pod | ``` 1 2 3 4 5``` ``` cd deploy/examples kubectl create -f crds.yaml -f common.yaml -f operator.yaml kubectl -n rook-ceph get pod ``` Before starting the operator in production, consider these settings: The Rook documentation is focused around starting Rook in a variety of environments. While creating the cluster in this guide, consider these example cluster manifests: See the Ceph example configurations for more details. Now that the Rook operator is running we can create the Ceph cluster. Important The Rook Cluster Helm Chart is available to deploy the operator instead of creating the below manifests. Important For the cluster to survive reboots, set the dataDirHostPath property that is valid for the hosts. For more settings, see the documentation on configuring the cluster. Create the cluster: | 0 | 1 | |-:|:-| | 1 | kubectl create -f cluster.yaml | ``` 1``` ``` kubectl create -f cluster.yaml ``` Verify the cluster is running by viewing the pods in the rook-ceph namespace. The number of osd pods will depend on the number of nodes in the cluster and the number of devices configured. For the default cluster.yaml above, one OSD will be created for each available device found on each node. 
Hint If the rook-ceph-mon, rook-ceph-mgr, or rook-ceph-osd pods are not created, please refer to the Ceph common issues for more details and potential" }, { "data": "| 0 | 1 | |:--|:--| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 | $ kubectl -n rook-ceph get pod NAME READY STATUS RESTARTS AGE csi-cephfsplugin-provisioner-d77bb49c6-n5tgs 5/5 Running 0 140s csi-cephfsplugin-provisioner-d77bb49c6-v9rvn 5/5 Running 0 140s csi-cephfsplugin-rthrp 3/3 Running 0 140s csi-rbdplugin-hbsm7 3/3 Running 0 140s csi-rbdplugin-provisioner-5b5cd64fd-nvk6c 6/6 Running 0 140s csi-rbdplugin-provisioner-5b5cd64fd-q7bxl 6/6 Running 0 140s rook-ceph-crashcollector-minikube-5b57b7c5d4-hfldl 1/1 Running 0 105s rook-ceph-mgr-a-64cd7cdf54-j8b5p 2/2 Running 0 77s rook-ceph-mgr-b-657d54fc89-2xxw7 2/2 Running 0 56s rook-ceph-mon-a-694bb7987d-fp9w7 1/1 Running 0 105s rook-ceph-mon-b-856fdd5cb9-5h2qk 1/1 Running 0 94s rook-ceph-mon-c-57545897fc-j576h 1/1 Running 0 85s rook-ceph-operator-85f5b946bd-s8grz 1/1 Running 0 92m rook-ceph-osd-0-6bb747b6c5-lnvb6 1/1 Running 0 23s rook-ceph-osd-1-7f67f9646d-44p7v 1/1 Running 0 24s rook-ceph-osd-2-6cd4b776ff-v4d68 1/1 Running 0 25s rook-ceph-osd-prepare-node1-vx2rz 0/2 Completed 0 60s rook-ceph-osd-prepare-node2-ab3fd 0/2 Completed 0 60s rook-ceph-osd-prepare-node3-w4xyz 0/2 Completed 0 60s | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21``` ``` $ kubectl -n rook-ceph get pod NAME READY STATUS RESTARTS AGE csi-cephfsplugin-provisioner-d77bb49c6-n5tgs 5/5 Running 0 140s csi-cephfsplugin-provisioner-d77bb49c6-v9rvn 5/5 Running 0 140s csi-cephfsplugin-rthrp 3/3 Running 0 140s csi-rbdplugin-hbsm7 3/3 Running 0 140s csi-rbdplugin-provisioner-5b5cd64fd-nvk6c 6/6 Running 0 140s csi-rbdplugin-provisioner-5b5cd64fd-q7bxl 6/6 Running 0 140s rook-ceph-crashcollector-minikube-5b57b7c5d4-hfldl 1/1 Running 0 105s rook-ceph-mgr-a-64cd7cdf54-j8b5p 2/2 Running 0 77s rook-ceph-mgr-b-657d54fc89-2xxw7 2/2 Running 0 56s rook-ceph-mon-a-694bb7987d-fp9w7 1/1 Running 0 105s rook-ceph-mon-b-856fdd5cb9-5h2qk 1/1 Running 0 94s rook-ceph-mon-c-57545897fc-j576h 1/1 Running 0 85s rook-ceph-operator-85f5b946bd-s8grz 1/1 Running 0 92m rook-ceph-osd-0-6bb747b6c5-lnvb6 1/1 Running 0 23s rook-ceph-osd-1-7f67f9646d-44p7v 1/1 Running 0 24s rook-ceph-osd-2-6cd4b776ff-v4d68 1/1 Running 0 25s rook-ceph-osd-prepare-node1-vx2rz 0/2 Completed 0 60s rook-ceph-osd-prepare-node2-ab3fd 0/2 Completed 0 60s rook-ceph-osd-prepare-node3-w4xyz 0/2 Completed 0 60s ``` To verify that the cluster is in a healthy state, connect to the Rook toolbox and run the ceph status command. | 0 | 1 | |:--|:--| | 1 2 3 4 5 6 7 8 9 10 | $ ceph status cluster: id: a0452c76-30d9-4c1a-a948-5d8405f19a7c health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 3m) mgr:a(active, since 2m), standbys: b osd: 3 osds: 3 up (since 1m), 3 in (since 1m) []...] | ``` 1 2 3 4 5 6 7 8 9 10``` ``` $ ceph status cluster: id: a0452c76-30d9-4c1a-a948-5d8405f19a7c health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 3m) mgr:a(active, since 2m), standbys: b osd: 3 osds: 3 up (since 1m), 3 in (since 1m) []...] ``` Hint If the cluster is not healthy, please refer to the Ceph common issues for potential solutions. For a walkthrough of the three types of storage exposed by Rook, see the guides for: Ceph has a dashboard to view the status of the cluster. See the dashboard guide. Create a toolbox pod for full access to a ceph admin client for debugging and troubleshooting the Rook cluster. 
See the toolbox documentation for setup and usage information. The Rook kubectl plugin provides commands to view status and troubleshoot issues. See the advanced configuration document for helpful maintenance and tuning examples. Each Rook cluster has built-in metrics collectors/exporters for monitoring with Prometheus. To configure monitoring, see the monitoring guide. The Rook maintainers would like to receive telemetry reports for Rook clusters. The data is anonymous and does not include any identifying information. Enable the telemetry reporting feature with the following command in the toolbox: | 0 | 1 | |-:|:| | 1 | ceph telemetry on | ``` 1``` ``` ceph telemetry on ``` For more details on what is reported and how your privacy is protected, see the Ceph Telemetry Documentation. When finished with the test cluster, see the cleanup guide. Rook Authors 2022. Documentation distributed under CC-BY-4.0. 2022 The Linux Foundation. All rights reserved. The" } ]
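The toolbox mentioned above can be created from the same examples directory checked out earlier. This is a sketch assuming the default rook-ceph namespace; the manifest and deployment names may differ between Rook releases.
```
# Deploy the toolbox, then run Ceph commands inside it to verify cluster health.
kubectl create -f deploy/examples/toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd status
```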
{ "category": "Runtime", "file_name": "intro.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "This project README contains documentation on the initial deployment process. A quick summary can be found here: ``` Then, wait for the deployment to finish: ``` kubectl wait --for=condition=Ready --timeout=10m pod --all``` Piraeus uses LINSTOR as the storage backend. Most configuration needs can be handled by the Piraeus Operator by editing one of the LinstorController, LinstorSatelliteSet or LinstorCSIDriver resources. However, in some cases you might want to directly interface with the LINSTOR system using the linstor command. There are two ways to achieve this: ``` kubectl exec -it deployment/piraeus-op-cs-controller -- linstor ...``` To provision volumes, you need to configure storage pools. The LINSTOR backend supports a range of different storage providers. For some common providers, the Piraeus Operator provides convenient configuration via the LinstorSatelliteSet resource. You can read more on how to configure storage here. Once you have storage pools configured (confirm by running kubectl linstor storage-pool list), you can almost start creating Persistent Volumes (PV) with Piraeus. First you will need to create a new storage class in Kubernetes. The following example storage class configures piraeus to: ``` apiVersion: storage.k8s.io/v1kind: StorageClassmetadata: name: piraeus-ssdprovisioner: linstor.csi.linbit.comallowVolumeExpansion: truevolumeBindingMode: WaitForFirstConsumerparameters: autoPlace: \"2\" storagePool: ssd csi.storage.k8s.io/fstype: xfs``` You can find a full list of supported options here. Using this storage class, you can provision volumes by applying a Persistent Volume Claim and waiting for Piraeus to provision the PV. The following PVC creates a 5GiB volume using the above storage class: ``` apiVersion: v1 kind: PersistentVolumeClaim metadata: name: piraeus-pvc-1 spec: storageClassName: piraeus-ssd accessModes: - ReadWriteOnce resources: requests: storage: 5Gi``` Piraeus supports snapshots via the CSI snapshotting feature. To enable this feature in your cluster, you need to add a Snapshot Controller to your cluster. Some Kubernetes distributions (for example: OpenShift) already bundle this snapshot controller. On distributions without a bundled snapshot controller, you can use our guide here. Should you require further information check out the following links:" } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "Run Production-Grade Databases on Kubernetes Backup and Recovery Solution for Kubernetes Run Production-Grade Vault on Kubernetes Secure Ingress Controller for Kubernetes Kubernetes Configuration Syncer Kubernetes Authentication WebHook Server KubeDB simplifies Provisioning, Upgrading, Scaling, Volume Expansion, Monitor, Backup, Restore for various Databases in Kubernetes on any Public & Private Cloud A complete Kubernetes native disaster recovery solution for backup and restore your volumes and databases in Kubernetes on any public and private clouds. KubeVault is a Git-Ops ready, production-grade solution for deploying and configuring Hashicorp's Vault on Kubernetes. Secure Ingress Controller for Kubernetes Kubernetes Configuration Syncer Kubernetes Authentication WebHook Server We use cookies and other similar technology to collect data to improve your experience on our site, as described in our Privacy Policy. The setup section contains instructions for installing the Stash and its various components in Kubernetes. This section has been divided into the following sub-sections: Install Stash: Installation instructions for Stash and its various components. Uninstall Stash: Uninstallation instructions for Stash and its various components. Upgrading Stash: Instruction for updating Stash license and upgrading between various Stash versions. No spam, we promise. Your mail address is secure 2024 AppsCode Inc. All rights reserved." } ]
{ "category": "Runtime", "file_name": "#administrator-documentation.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Python WSGI Middleware (or just middleware) can be used to wrap the request and response of a Python WSGI application (i.e. a webapp, or REST/HTTP API), like Swifts WSGI servers (proxy-server, account-server, container-server, object-server). Swift uses middleware to add (sometimes optional) behaviors to the Swift WSGI servers. Middleware can be added to the Swift WSGI servers by modifying their paste configuration file. The majority of Swift middleware is applied to the Proxy Server. Given the following basic configuration: ``` [DEFAULT] log_level = DEBUG user = <your-user-name> [pipeline:main] pipeline = proxy-server [app:proxy-server] use = egg:swift#proxy ``` You could add the Healthcheck middleware by adding a section for that filter and adding it to the pipeline: ``` [DEFAULT] log_level = DEBUG user = <your-user-name> [pipeline:main] pipeline = healthcheck proxy-server [filter:healthcheck] use = egg:swift#healthcheck [app:proxy-server] use = egg:swift#proxy ``` Some middleware is required and will be inserted into your pipeline automatically by core swift code (e.g. the proxy-server will insert CatchErrors and GateKeeper at the start of the pipeline if they are not already present). You can see which features are available on a given Swift endpoint (including middleware) using the Discoverability interface. The best way to see how to write middleware is to look at examples. Many optional features in Swift are implemented as Middleware and provided in swift.common.middleware, but Swift middleware may be packaged and distributed as a separate project. Some examples are listed on the Associated Projects page. A contrived middleware example that modifies request behavior by inspecting custom HTTP headers (e.g. X-Webhook) and uses System Metadata (Sysmeta) to persist data to backend storage as well as common patterns like a getcontainerinfo() cache/query and wsgify() decorator is presented below: ``` from swift.common.http import is_success from swift.common.swob import wsgify from swift.common.utils import splitpath, getlogger from swift.common.requesthelpers import getsysmetaprefix from swift.proxy.controllers.base import getcontainerinfo from eventlet import Timeout import six if six.PY3: from eventlet.green.urllib import request as urllib2 else: from eventlet.green import urllib2 SYSMETAWEBHOOK = getsysmetaprefix('container') + 'webhook' class WebhookMiddleware(object): def init(self, app, conf): self.app = app self.logger = getlogger(conf, logroute='webhook') @wsgify def call(self, req): obj = None try: (version, account, container, obj) = \\ splitpath(req.pathinfo, 4, 4, True) except ValueError: pass if 'x-webhook' in req.headers: req.headers[SYSMETA_WEBHOOK] = \\ req.headers['x-webhook'] if 'x-remove-webhook' in req.headers: req.headers[SYSMETA_WEBHOOK] = '' resp = req.get_response(self.app) if obj and issuccess(resp.statusint) and req.method == 'PUT': containerinfo = getcontainer_info(req.environ, self.app) webhook = container_info['sysmeta'].get('webhook') if webhook: webhook_req = urllib2.Request(webhook, data=obj) with Timeout(20): try: urllib2.urlopen(webhook_req).read() except (Exception, Timeout): self.logger.exception( 'failed POST to webhook %s' % webhook) else: self.logger.info( 'successfully called webhook %s' % webhook) if 'x-container-sysmeta-webhook' in resp.headers: resp.headers['x-webhook'] = resp.headers[SYSMETA_WEBHOOK] return resp def webhookfactory(globalconf, local_conf): conf = global_conf.copy() conf.update(local_conf) def webhook_filter(app): 
return WebhookMiddleware(app, conf) return webhook_filter ``` In practice this middleware will call the URL stored on the container as X-Webhook on all successful object uploads. If this example was at" }, { "data": "- you could add it to your proxy by creating a new filter section and adding it to the pipeline: ``` [DEFAULT] log_level = DEBUG user = <your-user-name> [pipeline:main] pipeline = healthcheck webhook proxy-server [filter:webhook] paste.filterfactory = swift.common.middleware.webhook:webhookfactory [filter:healthcheck] use = egg:swift#healthcheck [app:proxy-server] use = egg:swift#proxy ``` Most python packages expose middleware as entrypoints. See PasteDeploy documentation for more information about the syntax of the use option. All middleware included with Swift is installed to support the egg:swift syntax. Middleware may advertize its availability and capabilities via Swifts Discoverability support by using registerswiftinfo(): ``` from swift.common.registry import registerswiftinfo def webhookfactory(globalconf, local_conf): registerswiftinfo('webhook') def webhook_filter(app): return WebhookMiddleware(app) return webhook_filter ``` If a middleware handles sensitive information in headers or query parameters that may need redaction when logging, use the registersensitiveheader() and registersensitiveparam() functions. This should be done in the filter factory: ``` from swift.common.registry import registersensitiveheader def webhookfactory(globalconf, local_conf): registersensitiveheader('webhook-api-key') def webhook_filter(app): return WebhookMiddleware(app) return webhook_filter ``` Middlewares can override the status integer that is logged by proxy_logging middleware by setting swift.proxyloggingstatus in the request WSGI environment. The value should be an integer. The value will replace the default status integer in the log message, unless the proxy_logging middleware detects a client disconnect or exception while handling the request, in which case swift.proxyloggingstatus is overridden by a 499 or 500 respectively. Generally speaking metadata is information about a resource that is associated with the resource but is not the data contained in the resource itself - which is set and retrieved via HTTP headers. (e.g. the Content-Type of a Swift object that is returned in HTTP response headers) All user resources in Swift (i.e. account, container, objects) can have user metadata associated with them. Middleware may also persist custom metadata to accounts and containers safely using System Metadata. Some core Swift features which predate sysmeta have added exceptions for custom non-user metadata headers (e.g. ACLs, Large Object Support) User metadata takes the form of X-<type>-Meta-<key>: <value>, where <type> depends on the resources type (i.e. Account, Container, Object) and <key> and <value> are set by the client. User metadata should generally be reserved for use by the client or client applications. A perfect example use-case for user metadata is python-swiftclients X-Object-Meta-Mtime which it stores on object it uploads to implement its --changed option which will only upload files that have changed since the last upload. New middleware should avoid storing metadata within the User Metadata namespace to avoid potential conflict with existing user metadata when introducing new metadata keys. An example of legacy middleware that borrows the user metadata namespace is TempURL. 
An example of middleware which uses custom non-user metadata to avoid the user metadata namespace is Static Large" }, { "data": "User metadata that is stored by a PUT or POST request to a container or account resource persists until it is explicitly removed by a subsequent PUT or POST request that includes a header X-<type>-Meta-<key> with no value or a header X-Remove-<type>-Meta-<key>: <ignored-value>. In the latter case the <ignored-value> is not stored. All user metadata stored with an account or container resource is deleted when the account or container is deleted. User metadata that is stored with an object resource has a different semantic; object user metadata persists until any subsequent PUT or POST request is made to the same object, at which point all user metadata stored with that object is deleted en-masse and replaced with any user metadata included with the PUT or POST request. As a result, it is not possible to update a subset of the user metadata items stored with an object while leaving some items unchanged. System metadata takes the form of X-<type>-Sysmeta-<key>: <value>, where <type> depends on the resources type (i.e. Account, Container, Object) and <key> and <value> are set by trusted code running in a Swift WSGI Server. All headers on client requests in the form of X-<type>-Sysmeta-<key> will be dropped from the request before being processed by any middleware. All headers on responses from back-end systems in the form of X-<type>-Sysmeta-<key> will be removed after all middlewares have processed the response but before the response is sent to the client. See GateKeeper middleware for more information. System metadata provides a means to store potentially private custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The incoming filtering ensures that the namespace can not be modified directly by client requests, and the outgoing filter ensures that removing middleware that uses a specific system metadata key renders it benign. New middleware should take advantage of system metadata. System metadata may be set on accounts and containers by including headers with a PUT or POST request. Where a header name matches the name of an existing item of system metadata, the value of the existing item will be updated. Otherwise existing items are preserved. A system metadata header with an empty value will cause any existing item with the same name to be deleted. System metadata may be set on objects using only PUT requests. All items of existing system metadata will be deleted and replaced en-masse by any system metadata headers included with the PUT request. System metadata is neither updated nor deleted by a POST request: updating individual items of system metadata with a POST request is not yet supported in the same way that updating individual items of user metadata is not supported. In cases where middleware needs to store its own metadata with a POST request, it may use Object Transient Sysmeta. Objects have other metadata in addition to the user metadata and system metadata described above. Objects have several items of immutable" }, { "data": "Like system metadata, these may only be set using PUT requests. However, they do not follow the general X-Object-Sysmeta-<key> naming scheme and they are not automatically removed from client responses. 
Object immutable metadata includes: ``` X-Timestamp Content-Length Etag ``` X-Timestamp and Content-Length metadata MUST be included in PUT requests to object servers. Etag metadata is generated by object servers when they handle a PUT request, but checked against any Etag header sent with the PUT request. Object immutable metadata, along with Content-Type, is the only object metadata that is stored by container servers and returned in object listings. Object Content-Type metadata is treated differently from immutable metadata, system metadata and user metadata. Content-Type MUST be included in PUT requests to object servers. Unlike immutable metadata or system metadata, Content-Type is mutable and may be included in POST requests to object servers. However, unlike object user metadata, existing Content-Type metadata persists if a POST request does not include new Content-Type metadata. This is because an object must have Content-Type metadata, which is also stored by container servers and returned in object listings. Content-Type is the only item of object metadata that is both mutable and yet also persists when not specified in a POST request. If middleware needs to store object metadata with a POST request it may do so using headers of the form X-Object-Transient-Sysmeta-<key>: <value>. All headers on client requests in the form of X-Object-Transient-Sysmeta-<key> will be dropped from the request before being processed by any middleware. All headers on responses from back-end systems in the form of X-Object-Transient-Sysmeta-<key> will be removed after all middlewares have processed the response but before the response is sent to the client. See GateKeeper middleware for more information. Transient-sysmeta updates on an object have the same semantic as user metadata updates on an object (see User Metadata) i.e. whenever any PUT or POST request is made to an object, all existing items of transient-sysmeta are deleted en-masse and replaced with any transient-sysmeta included with the PUT or POST request. Transient-sysmeta set by a middleware is therefore prone to deletion by a subsequent client-generated POST request unless the middleware is careful to include its transient-sysmeta with every POST. Likewise, user metadata set by a client is prone to deletion by a subsequent middleware-generated POST request, and for that reason middleware should avoid generating POST requests that are independent of any client request. Transient-sysmeta deliberately uses a different header prefix to user metadata so that middlewares can avoid potential conflict with user metadata keys. Transient-sysmeta deliberately uses a different header prefix to system metadata to emphasize the fact that the data is only persisted until a subsequent POST. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
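A hedged illustration of the metadata behaviors described above, using curl against a Swift proxy; the endpoint, token, container name, and metadata key are placeholders.
```
# Set user metadata on a container; it persists until changed or removed.
curl -i -X POST -H "X-Auth-Token: $TOKEN" \
     -H "X-Container-Meta-Color: blue" \
     https://swift.example.com/v1/AUTH_test/mycontainer
# Remove it again with the X-Remove-* form (the value is ignored).
curl -i -X POST -H "X-Auth-Token: $TOKEN" \
     -H "X-Remove-Container-Meta-Color: x" \
     https://swift.example.com/v1/AUTH_test/mycontainer
# Clients cannot set system metadata directly; the gatekeeper middleware strips
# this header before any other middleware sees the request.
curl -i -X POST -H "X-Auth-Token: $TOKEN" \
     -H "X-Container-Sysmeta-Webhook: https://hook.example.com" \
     https://swift.example.com/v1/AUTH_test/mycontainer
```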
{ "category": "Runtime", "file_name": "#developer-documentation.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. I went ahead and ran probe/testobjectmetadata_replication.py on my machine with both syncmethod = rsync and syncmethod = ssync - that works for me - but I didnt try it with objectpostas_copy = false Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that its probably just our own personal preference. Rather than a comment that says You should use a list comprehension here - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: I find this more readable: diff with working tested code If the author (or another reviewer) agrees - its possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description: What is broken - without this change What is impossible to do with Swift - without this change What is slower/worse/harder - without this change If youre not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a high consistent quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily its easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code" }, { "data": "Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some! 
and offer them to the patch author as a diff indicating to them that something like these tests Im providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the tests stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. Its quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, most of the time remember its harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If youre not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have Docstrings. Tests should obviate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation or is contrary to existing documentation is probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isnt done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs. manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful" }, { "data": "You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When youre done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g. 
When I broke the change like this: diff it blew up like this: unit test failure Its not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but thats not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers its imperative that you try your best to answer your own questions as part of your review. Do not say Does this blow up if it gets called when xyz - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say I thought this would blow up, so I wrote this test, but it seems fine. But if your initial reaction is I dont understand this or How does this even work? you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again diffs work best. Scoring is subjective. Try to realize youre making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that to the best of your abilities you have not been able to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior its almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if its broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects. 
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because theres less concrete objective value. Ive seen lots of these kind of changes get lost to the backlog. Ive also seen some success where multiple authors have collaborated to push-over a change rather than provide a review ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider if it need be fixed before the change is suitable to merge vs. fixing it in a follow up change? Consider if the change makes Swift so undeniably better and it was deployed in production without making any additional changes would it still be correct and complete? Would releasing the change to production without any additional follow up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive reviews scores that look different from your reviews - a +2 instead of a +1. But its exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason its different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you dont know how to proceed respond to the reviewers comments on the change and ask for help. Wed love to try and help. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
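For reference, the local checks alluded to above (checking a change out, then running the unit, functional, and probe suites) typically look something like the following; the script names assume a standard Swift development checkout and an SAIO for the functional and probe runs.
```
git review -d <change-id>   # fetch the change under review
./.unittests                # unit tests
./.functests                # functional tests (require a running dev cluster/SAIO)
./.probetests               # probe tests (require a running dev cluster/SAIO)
```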
{ "category": "Runtime", "file_name": "docs.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "Run Production-Grade Databases on Kubernetes Backup and Recovery Solution for Kubernetes Run Production-Grade Vault on Kubernetes Secure Ingress Controller for Kubernetes Kubernetes Configuration Syncer Kubernetes Authentication WebHook Server KubeDB simplifies Provisioning, Upgrading, Scaling, Volume Expansion, Monitor, Backup, Restore for various Databases in Kubernetes on any Public & Private Cloud Assistance is available at any stage of your AppsCode journey. Whether you're just starting out or navigating more advanced territories, support is here for you. User Guides Register, add credentials, clusters, enable features, and manage databases seamlessly. Get started now! User Guides Manage profile, emails, avatar, security, OAuth2 applications, tokens, organizations, and Kubernetes credentials effortlessly User Guides Add, import, and remove clusters, manage workloads, helm charts, presets, security, and monitor constraints and violations seamlessly User Guides Create, scale, backup, upgrade, monitor, and secure your databases effortlessly. Stay informed with insights and handle violations New to AppsCode? Follow simple steps to set up your account. Manage your account, subscriptions, and security settings with our interface Simplify cluster management with AppsCode's intuitive tools and features Manage databases with AppsCode's user-friendly tools and solutions No spam, we promise. Your mail address is secure 2024 AppsCode Inc. All rights reserved." } ]
{ "category": "Runtime", "file_name": "#openstack-end-user-guide.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "To discover whether your Object Storage system supports this feature, see Discoverability. Alternatively, check with your service provider. With bulk delete, you can delete up to 10,000 objects or containers (configurable) in one request. To perform a bulk delete operation, add the bulk-delete query parameter to the path of a POST or DELETE operation. Note The DELETE operation is supported for backwards compatibility. The path is the account, such as /v1/12345678912345, that contains the objects and containers. In the request body of the POST or DELETE operation, list the objects or containers to be deleted. Separate each name with a newline character. You can include a maximum of 10,000 items (configurable) in the list. In addition, you must: UTF-8-encode and then URL-encode the names. To indicate an object, specify the container and object name as: CONTAINERNAME/OBJECTNAME. To indicate a container, specify the container name as: CONTAINER_NAME. Make sure that the container is empty. If it contains objects, Object Storage cannot delete the container. Set the Content-Type request header to text/plain. When Object Storage processes the request, it performs multiple sub-operations. Even if all sub-operations fail, the operation returns a 200 status. The bulk operation returns a response body that contains details that indicate which sub-operations have succeeded and failed. Some sub-operations might succeed while others fail. Examine the response body to determine the results of each delete sub-operation. You can set the Accept request header to one of the following values to define the response format: Formats response as plain text. If you omit the Accept header, text/plain is the default. Formats response as JSON. Formats response as XML. The response body contains the following information: The number of files actually deleted. The number of not found objects. Errors. A list of object names and associated error statuses for the objects that failed to delete. The format depends on the value that you set in the Accept header. The following bulk delete response is in application/xml format. In this example, the mycontainer container is not empty, so it cannot be deleted. ``` <delete> <numberdeleted>2</numberdeleted> <numbernotfound>4</numbernotfound> <errors> <object> <name>/v1/12345678912345/mycontainer</name> <status>409 Conflict</status> </object> </errors> </delete> ``` Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "#source-documentation.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "S3 is a product from Amazon, and as such, it includes features that are outside the scope of Swift itself. For example, Swift doesnt have anything to do with billing, whereas S3 buckets can be tied to Amazons billing system. Similarly, log delivery is a service outside of Swift. Its entirely possible for a Swift deployment to provide that functionality, but it is not part of Swift itself. Likewise, a Swift deployment can provide similar geographic availability as S3, but this is tied to the deployers willingness to build the infrastructure and support systems to do" }, { "data": "| S3 REST API method | Category | Swift S3 API | |:--|:--|:| | GET Object | Core-API | Yes | | HEAD Object | Core-API | Yes | | PUT Object | Core-API | Yes | | PUT Object Copy | Core-API | Yes | | DELETE Object | Core-API | Yes | | Initiate Multipart Upload | Core-API | Yes | | Upload Part | Core-API | Yes | | Upload Part Copy | Core-API | Yes | | Complete Multipart Upload | Core-API | Yes | | Abort Multipart Upload | Core-API | Yes | | List Parts | Core-API | Yes | | GET Object ACL | Core-API | Yes | | PUT Object ACL | Core-API | Yes | | PUT Bucket | Core-API | Yes | | GET Bucket List Objects | Core-API | Yes | | HEAD Bucket | Core-API | Yes | | DELETE Bucket | Core-API | Yes | | List Multipart Uploads | Core-API | Yes | | GET Bucket acl | Core-API | Yes | | PUT Bucket acl | Core-API | Yes | | Versioning | Versioning | Yes | | Bucket notification | Notifications | No | | Bucket Lifecycle [1] [2] [3] [4] [5] [6] | Bucket Lifecycle | No | | Bucket policy | Advanced ACLs | No | | Public website [7] [8] [9] [10] | Public Website | No | | Billing [11] [12] | Billing | No | | GET Bucket location | Advanced Feature | Yes | | Delete Multiple Objects | Advanced Feature | Yes | | Object tagging | Advanced Feature | No | | GET Object torrent | Advanced Feature | No | | Bucket inventory | Advanced Feature | No | | GET Bucket service | Advanced Feature | No | | Bucket accelerate | CDN Integration | No | S3 REST API method Category Swift S3 API GET Object Core-API Yes HEAD Object Core-API Yes PUT Object Core-API Yes PUT Object Copy Core-API Yes DELETE Object Core-API Yes Initiate Multipart Upload Core-API Yes Upload Part Core-API Yes Upload Part Copy Core-API Yes Complete Multipart Upload Core-API Yes Abort Multipart Upload Core-API Yes List Parts Core-API Yes GET Object ACL Core-API Yes PUT Object ACL Core-API Yes PUT Bucket Core-API Yes GET Bucket List Objects Core-API Yes HEAD Bucket Core-API Yes DELETE Bucket Core-API Yes List Multipart Uploads Core-API Yes GET Bucket acl Core-API Yes PUT Bucket acl Core-API Yes Versioning Versioning Yes Bucket notification Notifications No Bucket Lifecycle [1] [2] [3] [4] [5] [6] Bucket Lifecycle No Bucket policy Advanced ACLs No Public website [7] [8] [9] [10] Public Website No Billing [11] [12] Billing No GET Bucket location Advanced Feature Yes Delete Multiple Objects Advanced Feature Yes Object tagging Advanced Feature No GET Object torrent Advanced Feature No Bucket inventory Advanced Feature No GET Bucket service Advanced Feature No Bucket accelerate CDN Integration No POST restore Bucket lifecycle Bucket logging Bucket analytics Bucket metrics Bucket replication OPTIONS object Object POST from HTML form Bucket public website Bucket CORS Request payment Bucket tagging Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. 
Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
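Because the core S3 calls in the matrix above are supported, a standard S3 client can usually talk to a Swift cluster that exposes the S3 API. The sketch below is illustrative only: it assumes the cluster's s3api endpoint URL and EC2-style credentials, both of which depend on the deployment and are placeholders here.
```
# Minimal sketch: pointing boto3 at a Swift cluster's S3-compatible endpoint.
# The endpoint and credentials are placeholders, not real values.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://swift-cluster.example.com',   # assumed S3 API endpoint
    aws_access_key_id='EC2_ACCESS_KEY',                  # issued by the deployment
    aws_secret_access_key='EC2_SECRET_KEY',
)

s3.create_bucket(Bucket='my-bucket')                     # "PUT Bucket" in the matrix
s3.put_object(Bucket='my-bucket', Key='hello.txt', Body=b'hello')  # "PUT Object"
listing = s3.list_objects(Bucket='my-bucket')            # "GET Bucket List Objects"
print([obj['Key'] for obj in listing.get('Contents', [])])
```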
{ "category": "Runtime", "file_name": "#s3-compatibility-info.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "By default, the content of an object cannot be greater than 5 GB. However, you can use a number of smaller objects to construct a large object. The large object is comprised of two types of objects: Segment objects store the object content. You can divide your content into segments, and upload each segment into its own segment object. Segment objects do not have any special features. You create, update, download, and delete segment objects just as you would normal objects. A manifest object links the segment objects into one logical large object. When you download a manifest object, Object Storage concatenates and returns the contents of the segment objects in the response body of the request. This behavior extends to the response headers returned by GET and HEAD requests. The Content-Length response header value is the total size of all segment objects. Object Storage calculates the ETag response header value by taking the ETag value of each segment, concatenating them together, and returning the MD5 checksum of the result. The manifest object types are: The manifest object content is an ordered list of the names of the segment objects in JSON format. The manifest object has a X-Object-Manifest metadata header. The value of this header is {container}/{prefix}, where {container} is the name of the container where the segment objects are stored, and {prefix} is a string that all segment objects have in common. The manifest object should have no content. However, this is not enforced. If you make a COPY request by using a manifest object as the source, the new object is a normal, and not a segment, object. If the total size of the source segment objects exceeds 5 GB, the COPY request fails. However, you can make a duplicate of the manifest object and this new object can be larger than 5 GB. To create a static large object, divide your content into pieces and create (upload) a segment object to contain each piece. Create a manifest object. Include the multipart-manifest=put query parameter at the end of the manifest object name to indicate that this is a manifest object. The body of the PUT request on the manifest object comprises a json list, where each element is an object representing a segment. These objects may contain the following attributes: path (required). The container and object name in the format: {container-name}/{object-name} etag (optional). If provided, this value must match the ETag of the segment object. This was included in the response headers when the segment was created. Generally, this will be the MD5 sum of the segment. size_bytes (optional). The size of the segment object. If provided, this value must match the Content-Length of that object. range (optional). The subset of the referenced object that should be used for segment data. This behaves similar to the Range header. If omitted, the entire object will be" }, { "data": "Providing the optional etag and size_bytes attributes for each segment ensures that the upload cannot corrupt your data. Example Static large object manifest list This example shows three segment objects. You can use several containers and the object names do not have to conform to a specific pattern, in contrast to dynamic large objects. 
``` [ { \"path\": \"mycontainer/objseg1\", \"etag\": \"0228c7926b8b642dfb29554cd1f00963\", \"size_bytes\": 1468006 }, { \"path\": \"mycontainer/pseudodir/seg-obj2\", \"etag\": \"5bfc9ea51a00b790717eeb934fb77b9b\", \"size_bytes\": 1572864 }, { \"path\": \"other-container/seg-final\", \"etag\": \"b9c3da507d2557c1ddc51f27c54bae51\", \"size_bytes\": 256 } ] ``` The Content-Length request header must contain the length of the json contentnot the length of the segment objects. However, after the PUT operation completes, the Content-Length metadata is set to the total length of all the object segments. When using the ETag request header in a PUT operation, it must contain the MD5 checksum of the concatenated ETag values of the object segments. You can also set the Content-Type request header and custom object metadata. When the PUT operation sees the multipart-manifest=put query parameter, it reads the request body and verifies that each segment object exists and that the sizes and ETags match. If there is a mismatch, the PUT operation fails. This verification process can take a long time to complete, particularly as the number of segments increases. You may include a heartbeat=on query parameter to have the server: send a 202 Accepted response before it begins validating segments, periodically send whitespace characters to keep the connection alive, and send a final response code in the body. Note The server may still immediately respond with 400 Bad Request if it can determine that the request is invalid before making backend requests. If everything matches, the manifest object is created. The X-Static-Large-Object metadata is set to true indicating that this is a static object manifest. Normally when you perform a GET operation on the manifest object, the response body contains the concatenated content of the segment objects. To download the manifest list, use the multipart-manifest=get query parameter. The resulting list is not formatted the same as the manifest you originally used in the PUT operation. If you use the DELETE operation on a manifest object, the manifest object is deleted. The segment objects are not affected. However, if you add the multipart-manifest=delete query parameter, the segment objects are deleted and if all are successfully deleted, the manifest object is also deleted. To change the manifest, use a PUT operation with the multipart-manifest=put query parameter. This request creates a manifest object. You can also update the object metadata in the usual way. You must segment objects that are larger than 5 GB before you can upload them. You then upload the segment objects like you would any other object and create a dynamic large manifest object. The manifest object tells Object Storage how to find the segment objects that comprise the large object. The segments remain individually addressable, but retrieving the manifest object streams all the segments" }, { "data": "There is no limit to the number of segments that can be a part of a single large object, but Content-Length is included in GET or HEAD response only if the number of segments is smaller than container listing limit. In other words, the number of segments that fit within a single container listing page. To ensure the download works correctly, you must upload all the object segments to the same container and ensure that each object name is prefixed in such a way that it sorts in the order in which it should be concatenated. You also create and upload a manifest file. 
The manifest file is a zero-byte file with the extra X-Object-Manifest {container}/{prefix} header, where {container} is the container the object segments are in and {prefix} is the common prefix for all the segments. You must UTF-8-encode and then URL-encode the container and common prefix in the X-Object-Manifest header. It is best to upload all the segments first and then create or update the manifest. With this method, the full object is not available for downloading until the upload is complete. Also, you can upload a new set of segments to a second location and update the manifest to point to this new location. During the upload of the new segments, the original manifest is still available to download the first set of segments. Note When updating a manifest object using a POST request, a X-Object-Manifest header must be included for the object to continue to behave as a manifest object. Example Upload segment of large object request: HTTP ``` PUT /{api_version}/{account}/{container}/{object} HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb ETag: 8a964ee2a5e88be344f36c22562a6486 Content-Length: 1 X-Object-Meta-PIN: 1234 ``` No response body is returned. A status code of 2``nn`` (between 200 and 299, inclusive) indicates a successful write; status 411 Length Required denotes a missing Content-Length or Content-Type header in the request. If the MD5 checksum of the data written to the storage system does NOT match the (optionally) supplied ETag value, a 422 Unprocessable Entity response is returned. You can continue uploading segments like this example shows, prior to uploading the manifest. Example Upload next segment of large object request: HTTP ``` PUT /{api_version}/{account}/{container}/{object} HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb ETag: 8a964ee2a5e88be344f36c22562a6486 Content-Length: 1 X-Object-Meta-PIN: 1234 ``` Next, upload the manifest you created that indicates the container the object segments reside within. Note that uploading additional segments after the manifest is created causes the concatenated object to be that much larger but you do not need to recreate the manifest file for subsequent additional segments. Example Upload manifest request: HTTP ``` PUT /{api_version}/{account}/{container}/{object} HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb Content-Length: 0 X-Object-Meta-PIN: 1234 X-Object-Manifest: {container}/{prefix} ``` Example Upload manifest response: HTTP ``` [...] ``` The Content-Type in the response for a GET or HEAD on the manifest is the same as the Content-Type set during the PUT request that created the manifest. You can easily change the Content-Type by reissuing the PUT request. While static and dynamic objects have similar behavior, here are their differences: With static large objects, integrity can be" }, { "data": "The list of segments may include the MD5 checksum (ETag) of each segment. You cannot upload the manifest object if the ETag in the list differs from the uploaded segment object. If a segment is somehow lost, an attempt to download the manifest object results in an error. With dynamic large objects, integrity is not guaranteed. The eventual consistency model means that although you have uploaded a segment object, it might not appear in the container listing until later. 
If you download the manifest before it appears in the container, it does not form part of the content returned in response to a GET request. With static large objects, you must upload the segment objects before you upload the manifest object. With dynamic large objects, you can upload manifest and segment objects in any order. In case a premature download of the manifest occurs, we recommend users upload the manifest object after the segments. However, the system does not enforce the order. With static large objects, you cannot add or remove segment objects from the manifest. However, you can create a completely new manifest object of the same name with a different manifest list. With dynamic large objects, you can upload new segment objects or remove existing segments. The names must simply match the {prefix} supplied in X-Object-Manifest. With static large objects, the segment objects must be at least 1 byte in size. However, if the segment objects are less than 1MB (by default), the SLO download is (by default) rate limited. At most, 1000 segments are supported (by default) and the manifest has a limit (by default) of 2MB in size. With dynamic large objects, segment objects can be any size. With static large objects, the manifest list includes the container name of each object. Segment objects can be in different containers. With dynamic large objects, all segment objects must be in the same container. With static large objects, the manifest object has X-Static-Large-Object set to true. You do not set this metadata directly. Instead the system sets it when you PUT a static manifest object. With dynamic large objects, the X-Object-Manifest value is the {container}/{prefix}, which indicates where the segment objects are located. You supply this request header in the PUT operation. The semantics are the same for both static and dynamic large objects. When copying large objects, the COPY operation does not create a manifest object but a normal object with content same as what you would get on a GET request to the original manifest object. To copy the manifest object, you include the multipart-manifest=get query parameter in the COPY request. The new object contains the same manifest as the original. The segment objects are not copied. Instead, both the original and new manifest objects share the same set of segment objects. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
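As a concrete illustration of the copy behaviour just described, the sketch below sends a COPY request with the multipart-manifest=get query parameter so that the manifest itself, rather than a flattened copy of its concatenated content, is duplicated. The URL, token, and names are placeholders and the requests library is assumed.
```
# Sketch: copy a static large object manifest (not its concatenated content).
import requests

storage_url = 'https://swift-cluster.example.com/v1/my_account'  # placeholder
token = 'AUTH_tk_example'                                        # placeholder

resp = requests.request(
    'COPY',
    storage_url + '/mycontainer/my-large-object',
    params={'multipart-manifest': 'get'},     # copy the manifest itself
    headers={'X-Auth-Token': token,
             'Destination': 'other-container/my-large-object-copy'})
resp.raise_for_status()
# Both manifests now reference the same, uncopied segment objects.
```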
{ "category": "Runtime", "file_name": ".md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "OpenStack Nedir? OpenStack, kullanclarna kaynak hazrlama, yneticilere ise tam kontrol imkan veren bir web arayz zerinden ynetilen, hesaplama, depolama ve a kaynaklar havuzlarn bir verimerkezi zerinden kontrol eden bur bulut iletim sistemidir. Bu en son Trke belgeleri yayndr. Meny kullanarak nceki srmleri veya farkl dilleri seebilirsiniz. En ok kullanlan OpenStack servislerine giri OpenStack An (neutron) kurun ve ynetin Guidelines and scenarios for creating more secure OpenStack clouds OpenStack API Belgelendirmesi Uluslararaslatrma i ak ve gelenekleri Belgelendirmeye kod gibi davranlr, ve topluluk tarafndan glendirilmitir - ilginizi ekti mi? OpenStack projesi ve belgelerin evirilerine katlmak iin, ltfen Zanata'da Trke eviri ekibine katln. The OpenStack project is provided under the Apache 2.0 license. Openstack.org is powered by VEXXHOST ." } ]
{ "category": "Runtime", "file_name": "account.html#module-swift.account.reaper.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "This page contains project-specific documentation for using OpenStack services and libraries. Refer to the language bindings list for Python client library documentation and the Unified OpenStack command line client. Documentation treated like code, powered by the community - interested? Currently viewing which is the current supported release. The OpenStack project is provided under the Apache 2.0 license. Openstack.org is powered by VEXXHOST ." } ]
{ "category": "Runtime", "file_name": "associated_projects.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift supports a number of auth systems that share the following common characteristics: The authentication/authorization part can be an external system or a subsystem run within Swift as WSGI middleware The user of Swift passes in an auth token with each request Swift validates each token with the external auth system or auth subsystem and caches the result The token does not change from request to request, but does expire The token can be passed into Swift using the X-Auth-Token or the X-Storage-Token header. Both have the same format: just a simple string representing the token. Some auth systems use UUID tokens, some an MD5 hash of something unique, some use something else but the salient point is that the token is a string which can be sent as-is back to the auth system for validation. Swift will make calls to the auth system, giving the auth token to be validated. For a valid token, the auth system responds with an overall expiration time in seconds from now. To avoid the overhead in validating the same token over and over again, Swift will cache the token for a configurable time, but no longer than the expiration time. The Swift project includes two auth systems: TempAuth Keystone Auth It is also possible to write your own auth system as described in Extending Auth. TempAuth is used primarily in Swifts functional test environment and can be used in other test environments (such as SAIO (Swift All In One)). It is not recommended to use TempAuth in a production system. However, TempAuth is fully functional and can be used as a model to develop your own auth system. TempAuth has the concept of admin and non-admin users within an account. Admin users can do anything within the account. Non-admin users can only perform read operations. However, some privileged metadata such as X-Container-Sync-Key is not accessible to non-admin users. Users with the special group .reseller_admin can operate on any account. For an example usage please see swift.common.middleware.tempauth. If a request is coming from a reseller the auth system sets the request environ reseller_request to True. This can be used by other middlewares. Other users may be granted the ability to perform operations on an account or container via ACLs. TempAuth supports two types of ACL: Per container ACLs based on the containers X-Container-Read and X-Container-Write metadata. See Container ACLs for more information. Per account ACLs based on the accounts X-Account-Access-Control metadata. For more information see Account ACLs. TempAuth will now allow OPTIONS requests to go through without a token. The TempAuth middleware is responsible for creating its own tokens. A user makes a request containing their username and password and TempAuth responds with a token. This token is then used to perform subsequent requests on the users account, containers and objects. Swift is able to authenticate against OpenStack Keystone. In this environment, Keystone is responsible for creating and validating tokens. The KeystoneAuth middleware is responsible for implementing the auth system within Swift as described here. The KeystoneAuth middleware supports per container based ACLs on the containers X-Container-Read and X-Container-Write metadata. For more information see Container ACLs. The account-level ACL is not supported by Keystone" }, { "data": "In order to use the keystoneauth middleware the auth_token middleware from KeystoneMiddleware will need to be configured. 
The authtoken middleware performs the authentication token validation and retrieves actual user authentication information. It can be found in the KeystoneMiddleware distribution. The KeystoneAuth middleware performs authorization and mapping the Keystone roles to Swifts ACLs. Configuring Swift to use Keystone is relatively straightforward. The first step is to ensure that you have the auth_token middleware installed. It can either be dropped in your python path or installed via the KeystoneMiddleware package. You need at first make sure you have a service endpoint of type object-store in Keystone pointing to your Swift proxy. For example having this in your /etc/keystone/default_catalog.templates ``` catalog.RegionOne.object_store.name = Swift Service catalog.RegionOne.objectstore.publicURL = http://swiftproxy:8080/v1/AUTH$(tenant_id)s catalog.RegionOne.object_store.adminURL = http://swiftproxy:8080/ catalog.RegionOne.objectstore.internalURL = http://swiftproxy:8080/v1/AUTH$(tenant_id)s ``` On your Swift proxy server you will want to adjust your main pipeline and add auth_token and keystoneauth in your /etc/swift/proxy-server.conf like this ``` [pipeline:main] pipeline = [....] authtoken keystoneauth proxy-logging proxy-server ``` add the configuration for the authtoken middleware: ``` [filter:authtoken] paste.filterfactory = keystonemiddleware.authtoken:filter_factory wwwauthenticateuri = http://keystonehost:5000/ auth_url = http://keystonehost:5000/ auth_plugin = password projectdomainid = default userdomainid = default project_name = service username = swift password = password cache = swift.cache includeservicecatalog = False delayauthdecision = True ``` The actual values for these variables will need to be set depending on your situation, but in short: wwwauthenticateuri should point to a Keystone service from which users may retrieve tokens. This value is used in the WWW-Authenticate header that auth_token sends with any denial response. auth_url points to the Keystone Admin service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens. It is not necessary to append any Keystone API version number to this URI. The auth credentials (projectdomainid, userdomainid, username, project_name, password) will be used to retrieve an admin token. That token will be used to authorize user tokens behind the scenes. These credentials must match the Keystone credentials for the Swift service. The example values shown here assume a user named swift with admin role on a project named service, both being in the Keystone domain with id default. Refer to the KeystoneMiddleware documentation for other examples. cache is set to swift.cache. This means that the middleware will get the Swift memcache from the request environment. includeservicecatalog defaults to True if not set. This means that when validating a token, the service catalog is retrieved and stored in the X-Service-Catalog header. This is required if you use access-rules in Application Credentials. You may also need to increase maxheadersize. Note The authtoken config variable delayauthdecision must be set to True. The default is False, but that breaks public access, StaticWeb, FormPost, TempURL, and authenticated capabilities requests (using Discoverability). and you can finally add the keystoneauth configuration. 
Here is a simple configuration: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` Use an appropriate list of roles in operator_roles. For example, in some systems, the role member or Member is used to indicate that the user is allowed to operate on project resources. Some OpenStack services such as Cinder and Glance may use a service" }, { "data": "In this mode, you configure a separate account where the service stores project data that it manages. This account is not used directly by the end-user. Instead, all access is done through the service. To access the service account, the service must present two tokens: one from the end-user and another from its own service user. Only when both tokens are present can the account be accessed. This section describes how to set the configuration options to correctly control access to both the normal and service accounts. In this example, end users use the AUTH_ prefix in account names, whereas services use the SERVICE_ prefix: ``` [filter:keystoneauth] use = egg:swift#keystoneauth reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator SERVICEserviceroles = service ``` The actual values for these variable will need to be set depending on your situation as follows: The first item in the reseller_prefix list must match Keystones endpoint (see /etc/keystone/default_catalog.templates above). Normally this is AUTH. The second item in the reseller_prefix list is the prefix used by the OpenStack services(s). You must configure this value (SERVICE in the example) with whatever the other OpenStack service(s) use. Set the operator_roles option to contain a role or roles that end-users have on projects they use. Set the SERVICEserviceroles value to a role or roles that only the OpenStack service user has. Do not use a role that is assigned to normal end users. In this example, the role service is used. The service user is granted this role to a single project only. You do not need to make the service user a member of every project. This configuration works as follows: The end-user presents a user token to an OpenStack service. The service then makes a Swift request to the account with the SERVICE prefix. The service forwards the original user token with the request. It also adds its own service token. Swift validates both tokens. When validated, the user token gives the admin or swiftoperator role(s). When validated, the service token gives the service role. Swift interprets the above configuration as follows: Did the user token provide one of the roles listed in operator_roles? Did the service token have the service role as described by the SERVICEserviceroles options. If both conditions are met, the request is granted. Otherwise, Swift rejects the request. In the above example, all services share the same account. You can separate each service into its own account. For example, the following provides a dedicated account for each of the Glance and Cinder services. In addition, you must assign the glanceservice and cinderservice to the appropriate service users: ``` [filter:keystoneauth] use = egg:swift#keystoneauth reseller_prefix = AUTH, IMAGE, VOLUME operator_roles = admin, swiftoperator IMAGEserviceroles = glance_service VOLUMEserviceroles = cinder_service ``` By default the only users able to perform operations (e.g. create a container) on an account are those having a Keystone role for the corresponding Keystone project that matches one of the roles specified in the operator_roles option. 
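To make the two-token service flow above concrete, the sketch below shows a request to a service-prefixed account that carries both the end-user token and the service token. The tokens, account id, URL, and prefix are placeholders that merely mirror the example configuration; the requests library is assumed.
```
# Sketch: an OpenStack service writing to a SERVICE_-prefixed account by
# presenting both the end-user token and its own service token.
import requests

end_user_token = 'gAAAA...user'     # must carry one of the operator_roles
service_token = 'gAAAA...service'   # must carry the configured service role
url = 'https://swift-cluster.example.com/v1/SERVICE_1234567890abcdef/backups/obj1'

resp = requests.put(url,
                    headers={'X-Auth-Token': end_user_token,
                             'X-Service-Token': service_token},
                    data=b'example payload')
resp.raise_for_status()   # granted only if both tokens carry the required roles
```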
Users who have one of the operator_roles will be able to set container ACLs to grant other users permission to read and/or write objects in specific containers, using X-Container-Read and X-Container-Write headers respectively. In addition to the ACL formats described here, keystoneauth supports ACLs using the format: ```" }, { "data": "``` where otherprojectid is the UUID of a Keystone project and otheruserid is the UUID of a Keystone user. This will allow the other user to access a container provided their token is scoped on the other project. Both otherprojectid and otheruserid may be replaced with the wildcard character * which will match any project or user respectively. Be sure to use Keystone UUIDs rather than names in container ACLs. Note For backwards compatibility, keystoneauth will by default grant container ACLs expressed as otherprojectname:otherusername (i.e. using Keystone names rather than UUIDs) in the special case when both the other project and the other user are in Keystones default domain and the project being accessed is also in the default domain. For further information see KeystoneAuth Users with the Keystone role defined in reselleradminrole (ResellerAdmin by default) can operate on any account. The auth system sets the request environ reseller_request to True if a request is coming from a user with this role. This can be used by other middlewares. Some common mistakes can result in API requests failing when first deploying keystone with Swift: Incorrect configuration of the Swift endpoint in the Keystone service. By default, keystoneauth expects the account part of a URL to have the form AUTH<keystoneprojectid>. Sometimes the AUTH prefix is missed when configuring Swift endpoints in Keystone, as described in the Install Guide. This is easily diagnosed by inspecting the proxy-server log file for a failed request URL and checking that the URL includes the AUTH_ prefix (or whatever reseller prefix may have been configured for keystoneauth): ``` GOOD: proxy-server: 127.0.0.1 127.0.0.1 07/Sep/2016/16/06/58 HEAD /v1/AUTH_cfb8d9d45212408b90bc0776117aec9e HTTP/1.0 204 ... BAD: proxy-server: 127.0.0.1 127.0.0.1 07/Sep/2016/16/07/35 HEAD /v1/cfb8d9d45212408b90bc0776117aec9e HTTP/1.0 403 ... ``` Incorrect configuration of the authtoken middleware options in the Swift proxy server. The authtoken middleware communicates with the Keystone service to validate tokens that are presented with client requests. To do this authtoken must authenticate itself with Keystone using the credentials configured in the [filter:authtoken] section of /etc/swift/proxy-server.conf. Errors in these credentials can result in authtoken failing to validate tokens and may be revealed in the proxy server logs by a message such as: ``` proxy-server: Identity server rejected authorization ``` Note More detailed log messaging may be seen by setting the authtoken option log_level = debug. The authtoken configuration options may be checked by attempting to use them to communicate directly with Keystone using an openstack command line. 
For example, given the authtoken configuration sample shown in Configuring Swift to use Keystone, the following command should return a service catalog: ``` openstack --os-identity-api-version=3 --os-auth-url=http://keystonehost:5000/ \\ --os-username=swift --os-user-domain-id=default \\ --os-project-name=service --os-project-domain-id=default \\ --os-password=password catalog show object-store ``` If this openstack command fails then it is likely that there is a problem with the authtoken configuration. TempAuth is written as wsgi middleware, so implementing your own auth is as easy as writing new wsgi middleware, and plugging it in to the proxy server. See Auth Server and Middleware for detailed information on extending the auth system. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
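As a starting point for the "write your own auth" approach mentioned above, the following is a deliberately simplified WSGI middleware sketch. It only checks a static token and does not show Swift's real authorization hooks, so treat it as an illustration of the paste.deploy plumbing rather than a working auth system; the option name and default token are made up.
```
# Sketch: a minimal WSGI auth filter in the style of Swift middleware.
# The hard-coded token and the filter behaviour are purely illustrative.

class StaticTokenAuth(object):
    def __init__(self, app, conf):
        self.app = app
        self.token = conf.get('static_token', 'SECRET')  # hypothetical option

    def __call__(self, environ, start_response):
        if environ.get('HTTP_X_AUTH_TOKEN') == self.token:
            return self.app(environ, start_response)     # pass through to proxy
        start_response('401 Unauthorized', [('Content-Length', '0')])
        return [b'']


def filter_factory(global_conf, **local_conf):
    # paste.deploy entry point, mirroring how real middleware is plugged in.
    conf = dict(global_conf, **local_conf)

    def auth_filter(app):
        return StaticTokenAuth(app, conf)
    return auth_filter
```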
{ "category": "Runtime", "file_name": "authentication.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply. This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Swift and other components of OpenStack can be found on the OpenStack wiki and at http://docs.openstack.org. Note If youre looking for associated projects that enhance or use Swift, please see the Associated Projects page. See Complete Reference for the Object Storage REST API The following provides supporting information for the REST API: The OpenStack End User Guide has additional information on using Swift. See the Manage objects and containers section. Index Module Index Search Page Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "container.html#module-swift.container.backend.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: DatabaseAuditor Audit accounts. alias of AccountBroker Pluggable Back-end for Account Server Bases: DatabaseBroker Encapsulates working with an account database. Create account_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object put_timestamp put timestamp Create container table which is specific to the account DB. conn DB connection object Create policy_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Check if the account DB is empty. True if the database has no active containers. Get global data for the account. dict with keys: account, createdat, puttimestamp, deletetimestamp, statuschangedat, containercount, objectcount, bytesused, hash, id Get global policy stats for the account. do_migrations boolean, if True the policy stat dicts will always include the container_count key; otherwise it may be omitted on legacy databases until they are migrated. dict of policy stats where the key is the policy index and the value is a dictionary like {object_count: M, bytesused: N, containercount: L} Only returns true if the status field is set to DELETED. Get a list of containers sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query reverse reverse the result order. allow_reserved exclude names with reserved-byte by default list of tuples of (name, objectcount, bytesused, put_timestamp, 0) Turn this db record dict into the format this service uses for pending pickles. Merge items into the container table. itemlist list of dictionaries of {name, puttimestamp, deletetimestamp, objectcount, bytes_used, deleted, storagepolicyindex} source if defined, update incoming_sync with the source Create a container with the given attributes. name name of the container to create (a native string) puttimestamp puttimestamp of the container to create deletetimestamp deletetimestamp of the container to create object_count number of objects in the container bytes_used number of bytes used by the container storagepolicyindex the storage policy for this container Bases: Daemon Removes data from status=DELETED accounts. These are accounts that have been asked to be removed by the reseller via services removestorageaccount XMLRPC call. The account is not deleted immediately by the services call, but instead the account is simply marked for deletion by setting the status column in the account_stat table of the account database. This account reaper scans for such accounts and removes the data in the background. The background deletion process will occur on the primary account server for the" }, { "data": "server_conf The [account-server] dictionary of the account server configuration file reaper_conf The [account-reaper] dictionary of the account server configuration file See the etc/account-server.conf-sample for information on the possible configuration parameters. The account swift.common.ring.Ring for the cluster. The container swift.common.ring.Ring for the cluster. Get the ring identified by the policy index policy_idx Storage policy index A ring matching the storage policy Called once per pass for each account this server is the primary for and attempts to delete the data for the given account. 
The reaper will only delete one account at any given time. It will call reap_container() up to sqrt(self.concurrency) times concurrently while reaping the account. If there is any exception while deleting a single container, the process will continue for any other containers and the failed containers will be tried again the next time this function is called with the same parameters. If there is any exception while listing the containers for deletion, the process will stop (but will obviously be tried again the next time this function is called with the same parameters). This isnt likely since the listing comes from the local database. After the process completes (successfully or not) statistics about what was accomplished will be logged. This function returns nothing and should raise no exception but only update various self.stats_* values for what occurs. broker The AccountBroker for the account to delete. partition The partition in the account ring the account is on. nodes The primary node dicts for the account to delete. container_shard int used to shard containers reaped. If None, will reap all containers. See also swift.account.backend.AccountBroker for the broker class. See also swift.common.ring.Ring.get_nodes() for a description of the node dicts. Deletes the data and the container itself for the given container. This will call reap_object() up to sqrt(self.concurrency) times concurrently for the objects in the container. If there is any exception while deleting a single object, the process will continue for any other objects in the container and the failed objects will be tried again the next time this function is called with the same parameters. If there is any exception while listing the objects for deletion, the process will stop (but will obviously be tried again the next time this function is called with the same parameters). This is a possibility since the listing comes from querying just the primary remote container server. Once all objects have been attempted to be deleted, the container itself will be attempted to be deleted by sending a delete request to all container nodes. The format of the delete request is such that each container server will update a corresponding account server, removing the container from the accounts" }, { "data": "This function returns nothing and should raise no exception but only update various self.stats_* values for what occurs. account The name of the account for the container. account_partition The partition for the account on the account ring. account_nodes The primary node dicts for the account. container The name of the container to delete. See also: swift.common.ring.Ring.get_nodes() for a description of the account node dicts. Called once per pass for each device on the server. This will scan the accounts directory for the device, looking for partitions this device is the primary for, then looking for account databases that are marked status=DELETED and still have containers and calling reap_account(). Account databases marked status=DELETED that no longer have containers will eventually be permanently removed by the reclaim process within the account replicator (see swift.db_replicator). device The device to look for accounts to be deleted. Deletes the given object by issuing a delete request to each node for the object. The format of the delete request is such that each object server will update a corresponding container server, removing the object from the containers listing. 
This function returns nothing and should raise no exception but only update various self.stats_* values for what occurs. account The name of the account for the object. container The name of the container for the object. container_partition The partition for the container on the container ring. container_nodes The primary node dicts for the container. obj The name of the object to delete. policy_index The storage policy index of the objects container See also: swift.common.ring.Ring.get_nodes() for a description of the container node dicts. Main entry point when running the reaper in normal daemon mode. This repeatedly calls run_once() no quicker than the configuration interval. Main entry point when running the reaper in once mode, where it will do a single pass over all accounts on the server. This is called repeatedly by runforever(). This will call reapdevice() once for each device on the server. Bases: BaseStorageServer WSGI controller for the account server. Handle HTTP DELETE request. Handle HTTP GET request. Handle HTTP HEAD request. Handle HTTP POST request. Handle HTTP PUT request. Handle HTTP REPLICATE request. Handler for RPC calls for account replication. paste.deploy app factory for creating WSGI account server apps Split and validate path for an account. req a swob request a tuple of path parts as strings Split and validate path for a container. req a swob request a tuple of path parts as strings Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
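For orientation, the sketch below reads an existing account database using the AccountBroker methods documented above. The database path is a placeholder and the exact constructor arguments can vary between Swift releases, so treat this as an assumption-laden example rather than a supported recipe.
```
# Sketch: inspecting an account DB on a storage node with AccountBroker.
# The path is a placeholder; constructor details may differ by release.
from swift.account.backend import AccountBroker

broker = AccountBroker('/srv/node/sdb1/accounts/1234/abc/def.db',
                       account='AUTH_test')

info = broker.get_info()
print(info['container_count'], info['object_count'], info['bytes_used'])

# Each entry is (name, object_count, bytes_used, put_timestamp, 0), per the
# list_containers_iter() description above.
for name, objects, used, _ts, _ in broker.list_containers_iter(
        limit=100, marker='', end_marker='', prefix='', delimiter=''):
    print(name, objects, used)
```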
{ "category": "Runtime", "file_name": "bulk-delete.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "To discover whether your Object Storage system supports this feature, check with your service provider or send a GET request using the /info path. A temporary URL gives users temporary access to objects. For example, a website might want to provide a link to download a large object in Object Storage, but the Object Storage account has no public access. The website can generate a URL that provides time-limited GET access to the object. When the web browser user clicks on the link, the browser downloads the object directly from Object Storage, eliminating the need for the website to act as a proxy for the request. Furthermore, a temporary URL can be prefix-based. These URLs contain a signature which is valid for all objects which share a common prefix. They are useful for sharing a set of objects. Ask your cloud administrator to enable the temporary URL feature. For information, see TempURL in the Source Documentation. Note To use POST requests to upload objects to specific Object Storage locations, use Form POST middleware instead of temporary URL middleware. A temporary URL is comprised of the URL for an object with added query parameters: Example Temporary URL format ``` https://swift-cluster.example.com/v1/my_account/container/object ?tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b &tempurlexpires=1323479485 &filename=My+Test+File.pdf ``` The example shows these elements: Object URL: Required. The full path URL to the object. tempurlsig: Required. An HMAC cryptographic signature that defines the allowed HTTP method, expiration date, full path to the object, and the secret key for the temporary URL. The digest used (for example, SHA-256 or SHA-512) must be supported by the cluster; supported digests will be listed in the tempurl.allowed_digests key in the clusters capabilities. tempurlexpires: Required. An expiration date as a UNIX Epoch timestamp or ISO 8601 UTC timestamp. For example, 1390852007 or 2014-01-27T19:46:47Z can be used to represent Mon, 27 Jan 2014 19:46:47 GMT. For more information, see Epoch & Unix Timestamp Conversion Tools. filename: Optional. Overrides the default file name. Object Storage generates a default file name for GET temporary URLs that is based on the object name. Object Storage returns this value in the Content-Disposition response header. Browsers can interpret this file name value as a file attachment to be saved. A prefix-based temporary URL is similar but requires the parameter tempurlprefix, which must be equal to the common prefix shared by all object names for which the URL is valid. ``` https://swift-cluster.example.com/v1/myaccount/container/myprefix/object ?tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b &tempurlexpires=2011-12-10T01:11:25Z &tempurlprefix=my_prefix ``` The cryptographic signature used in Temporary URLs and also in Form POST middleware uses a secret key. Object Storage allows you to store two secret key values per account, and two per container. When validating a request, Object Storage checks signatures against all keys. 
Using two keys at each level enables key rotation without invalidating existing temporary" }, { "data": "To set the keys at the account level, set one or both of the following request headers to arbitrary values on a POST request to the account: X-Account-Meta-Temp-URL-Key X-Account-Meta-Temp-URL-Key-2 To set the keys at the container level, set one or both of the following request headers to arbitrary values on a POST or PUT request to the container: X-Container-Meta-Temp-URL-Key X-Container-Meta-Temp-URL-Key-2 The arbitrary values serve as the secret keys. For example, use the swift post command to set the secret key to ``MYKEY``: ``` $ swift post -m \"Temp-URL-Key:MYKEY\" ``` Note Changing these headers invalidates any previously generated temporary URLs within 60 seconds, which is the memcache time for the key. Temporary URL middleware uses an HMAC cryptographic signature. This signature includes these elements: The allowed method. Typically, GET or PUT. Expiry time. In the example for the HMAC-SHA256 signature for temporary URLs below, the expiry time is set to 86400 seconds (or 1 day) into the future. Please be aware that you have to use a UNIX timestamp for generating the signature (in the API request it is also allowed to use an ISO 8601 UTC timestamp). The path. Starting with /v1/ onwards and including a container name and object. The path for prefix-based signatures must start with prefix:/v1/. Do not URL-encode the path at this stage. The secret key. Use one of the key values as described in Secret Keys. These sample Python codes show how to compute a signature for use with temporary URLs: Example HMAC-SHA256 signature for object-based temporary URLs ``` import hmac from hashlib import sha256 from time import time method = 'GET' durationinseconds = 606024 expires = int(time() + durationinseconds) path = '/v1/my_account/container/object' key = 'MYKEY' hmac_body = '%s\\n%s\\n%s' % (method, expires, path) signature = hmac.new(key, hmac_body, sha256).hexdigest() ``` Example HMAC-SHA512 signature for prefix-based temporary URLs ``` import hmac from hashlib import sha512 from time import time method = 'GET' durationinseconds = 606024 expires = int(time() + durationinseconds) path = 'prefix:/v1/myaccount/container/myprefix' key = 'MYKEY' hmac_body = '%s\\n%s\\n%s' % (method, expires, path) signature = hmac.new(key, hmac_body, sha512).hexdigest() ``` Do not URL-encode the path when you generate the HMAC signature. However, when you make the actual HTTP request, you should properly URL-encode the URL. The ``MYKEY`` value is one of the key values as described in Secret Keys. For more information, see RFC 2104: HMAC: Keyed-Hashing for Message Authentication. If you want to transform a UNIX timestamp into an ISO 8601 UTC timestamp, you can use following code snippet: ``` import time time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime(timestamp)) ``` The swift tool provides the tempurl option that auto-generates the ``tempurlsig`` and ``tempurlexpires`` query parameters. For example, you might run this command: ``` $ swift tempurl GET 3600 /v1/my_account/container/object MYKEY ``` Note The swift tool is not yet updated and continues to use the deprecated cipher SHA1. This command returns the path: ``` /v1/my_account/container/object ?tempurlsig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91 &tempurlexpires=1374497657 ``` To create the temporary URL, prefix this path with the Object Storage storage host name. 
For example, prefix the path with https://swift-cluster.example.com, as follows: ``` https://swift-cluster.example.com/v1/my_account/container/object ?tempurlsig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91 &tempurlexpires=1374497657 ``` Note that if the above example is copied exactly, and used in a command shell, then the ampersand is interpreted as an operator and the URL will be truncated. Enclose the URL in quotation marks to avoid this. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
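To tie the pieces above together, here is a Python 3 sketch that computes the HMAC-SHA256 signature and assembles the final URL with proper URL-encoding. The host name and key are placeholders; note that the query parameters are temp_url_sig and temp_url_expires, and that in Python 3 the key and HMAC body must be bytes.
```
# Sketch: build a complete HMAC-SHA256 temporary URL (Python 3).
import hmac
from hashlib import sha256
from time import time
from urllib.parse import quote

host = 'https://swift-cluster.example.com'   # placeholder cluster endpoint
path = '/v1/my_account/container/object'
key = b'MYKEY'                               # one of the secret keys set above
method = 'GET'
expires = int(time() + 3600)                 # valid for one hour

hmac_body = '%s\n%d\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('utf-8'), sha256).hexdigest()

temp_url = '%s%s?temp_url_sig=%s&temp_url_expires=%d' % (
    host, quote(path), sig, expires)
print(temp_url)
```
Remember to quote the resulting URL if you paste it into a shell, since the ampersand would otherwise be interpreted as an operator.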
{ "category": "Runtime", "file_name": "container.html#module-swift.container.server.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: object Partitioned consistent hashing ring. serialized_path path to serialized RingData instance reload_time time interval in seconds to check for a ring change ring_name ring name string (basically specified from policy) validation_hook hook point to validate ring configuration ontime RingLoadError if the loaded ring data violates its constraint Number of devices with assignments in the ring. Number of devices in the ring. devices in the ring Generator to get extra nodes for a partition for hinted handoff. The handoff nodes will try to be in zones other than the primary zones, will take into account the device weights, and will usually keep the same sequences of handoffs even with ring changes. part partition to get handoff nodes for generator of node dicts See get_nodes() for a description of the node dicts. Get the partition and nodes for an account/container/object. If a node is responsible for more than one replica, it will only appear in the output once. account account name container container name obj object name a tuple of (partition, list of node dicts) Each node dict will have at least the following keys: | 0 | 1 | |:-|:| | id | unique integer identifier amongst devices | | index | offset into the primary node list for the partition | | weight | a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device | | zone | integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same zone | | ip | the ip address of the device | | port | the tcp port of the device | | device | the devices name on disk (sdb1, for example) | | meta | general use extra field; for example: the online date, the hardware description | id unique integer identifier amongst devices index offset into the primary node list for the partition weight a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device zone integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same zone ip the ip address of the device port the tcp port of the device device the devices name on disk (sdb1, for example) meta general use extra field; for example: the online date, the hardware description Get the partition for an account/container/object. account account name container container name obj object name the partition number Get the nodes that are responsible for the partition. If one node is responsible for more than one replica of the same partition, it will only appear in the output once. part partition to get nodes for list of node dicts See get_nodes() for a description of the node dicts. Check to see if the ring on disk is different than the current one in memory. True if the ring on disk has changed, False otherwise Number of partitions in the ring. Number of replicas (full or partial) used in the ring. Number of devices with weight in the" }, { "data": "Bases: object Partitioned consistent hashing ring data (used for serialization). Deserialize a v1 ring file into a dictionary with devs, part_shift, and replica2part2dev_id keys. If the optional kwarg metadata_only is True, then the replica2part2dev_id is not loaded and that key in the returned dictionary just has the value []. gz_file (file) An opened file-like object which has already consumed the 6 bytes of magic and version. 
metadataonly (bool) If True, only load devs and partshift A dict containing devs, part_shift, and replica2part2dev_id Load ring data from a file. filename Path to a file serialized by the save() method. metadataonly (bool) If True, only load devs and partshift. A RingData instance containing the loaded data. Number of replicas (full or partial) used in the ring. Serialize this RingData instance to disk. filename File into which this instance should be serialized. mtime time used to override mtime for gzip, default or None if the caller wants to include time Bases: object Bases: object Used to build swift.common.ring.RingData instances to be written to disk and used with swift.common.ring.Ring instances. See bin/swift-ring-builder for example usage. The instance variable devs_changed indicates if the device information has changed since the last balancing. This can be used by tools to know whether a rebalance request is an isolated request or due to added, changed, or removed devices. partpower number of partitions = 2**partpower. replicas number of replicas for each partition minparthours minimum number of hours between partition changes Add a device to the ring. This device dict should have a minimum of the following keys: | 0 | 1 | |:-|:| | id | unique integer identifier amongst devices. Defaults to the next id if the id key is not provided in the dict | | weight | a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device | | region | integer indicating which region the device is in | | zone | integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same (region, zone) pair if there is any alternative | | ip | the ip address of the device | | port | the tcp port of the device | | device | the devices name on disk (sdb1, for example) | | meta | general use extra field; for example: the online date, the hardware description | id unique integer identifier amongst" }, { "data": "Defaults to the next id if the id key is not provided in the dict weight a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device region integer indicating which region the device is in zone integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same (region, zone) pair if there is any alternative ip the ip address of the device port the tcp port of the device device the devices name on disk (sdb1, for example) meta general use extra field; for example: the online date, the hardware description Note This will not rebalance the ring immediately as you may want to make multiple changes for a single rebalance. dev device dict id of device (not used in the tree anymore, but unknown users may depend on it) Cancels a ring partition power increasement. This sets the nextpartpower to the current part_power. Object replicators will still skip replication, and a cleanup is still required. Finally, a finishincreasepartition_power needs to be run. False if nextpartpower was not set or is equal to current part_power, otherwise True. Changes the value used to decide if a given partition can be moved again. This restriction is to give the overall system enough time to settle a partition to its new location before moving it to yet another location. 
While no data would be lost if a partition is moved several times quickly, it could make that data unreachable for a short period of time. This should be set to at least the average full partition replication time. Starting it at 24 hours and then lowering it to what the replicator reports as the longest partition cycle is best. minparthours new value for minparthours Reinitializes this RingBuilder instance from data obtained from the builder dict given. Code example: ``` b = RingBuilder(1, 1, 1) # Dummy values b.copy_from(builder) ``` This is to restore a RingBuilder that has had its b.to_dict() previously saved. Temporarily enables debug logging, useful in tests, e.g.: ``` with rb.debug(): rb.rebalance() ``` Finish the partition power increase. The hard links from the old object locations should be removed by now. Get the balance of the ring. The balance value is the highest percentage of the desired amount of partitions a given device wants. For instance, if the worst device wants (based on its weight relative to the sum of all the devices weights) 123 partitions and it has 124 partitions, the balance value would be 0.83 (1 extra / 123 wanted * 100 for percentage). balance of the ring Get the devices that are responsible for the partition, filtering out duplicates. part partition to get devices for list of device dicts Returns the minimum overload value required to make the ring maximally dispersed. The required overload is the largest percentage change of any single device from its weighted replicanth to its wanted replicanth (note: under weighted devices have a negative percentage change) to archive dispersion - that is to say a single device that must be overloaded by 5% is worse than 5 devices in a single tier overloaded by 1%. Get the ring, or more specifically, the swift.common.ring.RingData. This ring data is the minimum required for use of the ring. The ring builder itself keeps additional data such as when partitions were last moved. Increases ring partition power by one. Devices will be assigned to partitions like this: OLD: 0, 3, 7, 5, 2, 1, NEW: 0, 0, 3, 3, 7, 7, 5, 5, 2, 2, 1, 1, False if nextpartpower was not set or is equal to current part_power, None if something went wrong, otherwise True. Obtain RingBuilder instance of the provided builder file builder_file path to builder file to load RingBuilder instance Get the total seconds until a rebalance can be performed Prepares a ring for partition power increase. This makes it possible to compute the future location of any object based on the next partition" }, { "data": "In this phase object servers should create hard links when finalizing a write to the new location as well. A relinker will be run after restarting object-servers, creating hard links to all existing objects in their future location. False if nextpartpower was not set, otherwise True. Override minparthours by marking all partitions as having been moved 255 hours ago and last move epoch to the beginning of time. This can be used to force a full rebalance on the next call to rebalance. Rebalance the ring. This is the main work function of the builder, as it will assign and reassign partitions to devices in the ring based on weights, distinct zones, recent reassignments, etc. The process doesnt always perfectly assign partitions (thatd take a lot more analysis and therefore a lot more time I had code that did that before). 
Because of this, it keeps rebalancing until the device skew (number of partitions a device wants compared to what it has) gets below 1% or doesn't change by more than 1% (which only happens with a ring that can't be balanced no matter what). seed: a value for the random seed (optional). Returns (number_of_partitions_altered, resulting_balance, number_of_removed_devices).

Remove a device from the ring. Note: this will not rebalance the ring immediately, as you may want to make multiple changes for a single rebalance. dev_id: device id.

Serialize this RingBuilder instance to disk. builder_file: path to the builder file to save.

Search devices by parameters. search_values: a dictionary with search values to filter devices; supported parameters are id, region, zone, ip, port, replication_ip, replication_port, device, weight, meta. Returns a list of device dicts.

Set the region of a device. This should be called rather than just altering the region key in the device dict directly, as the builder will need to rebuild some internal state to reflect the change. Note: this will not rebalance the ring immediately, as you may want to make multiple changes for a single rebalance. dev_id: device id. region: new region for the device.

Set the weight of a device. This should be called rather than just altering the weight key in the device dict directly, as the builder will need to rebuild some internal state to reflect the change. Note: this will not rebalance the ring immediately, as you may want to make multiple changes for a single rebalance. dev_id: device id. weight: new weight for the device.

Set the zone of a device. This should be called rather than just altering the zone key in the device dict directly, as the builder will need to rebuild some internal state to reflect the change. Note: this will not rebalance the ring immediately, as you may want to make multiple changes for a single rebalance. dev_id: device id. zone: new zone for the device.

Changes the number of replicas in this ring. If the new replica count is sufficiently different that self._replica2part2dev will change size, sets self.devs_changed. This is so tools like bin/swift-ring-builder can know to write out the new ring rather than bailing out due to lack of balance change.

Returns a dict that can be used later with copy_from to restore a RingBuilder. swift-ring-builder uses this to pickle.dump the dict to a file and later load that dict into copy_from.

Validate the ring. This is a safety function to try to catch any bugs in the building process. It ensures partitions have been assigned to real devices, aren't doubly assigned, etc. It can also optionally check the even distribution of partitions across devices. stats: if True, check the distribution of partitions across devices. Returns, if stats is True, a tuple of (device_usage, worst_stat), else (None, None); device_usage[dev_id] will equal the number of partitions assigned to that device, and worst_stat will equal the number of partitions the worst device is skewed from the number it should have. Raises RingValidationError if a problem was found with the ring.

Returns the weight of each partition as calculated from the total weight of all the devices.

Bases: Warning. A standard ring built using the ring-builder will attempt to randomly disperse replicas or erasure-coded fragments across failure domains, but does not provide any guarantees, such as placing at least one replica of every partition into each region. (The sketch below shows how to measure that spread on an existing ring.)
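To see that limitation concretely, here is a small sketch that counts how many distinct regions hold replicas of each partition in an existing ring. It relies only on the Ring interface described earlier in this reference (get_part_nodes and the partition count), and it assumes the device dicts carry the region key used by the builder; the ring path is illustrative:

```python
from collections import Counter
from swift.common.ring import Ring

ring = Ring('/etc/swift', ring_name='object')

regions_per_part = Counter()
for part in range(ring.partition_count):
    regions = {node['region'] for node in ring.get_part_nodes(part)}
    regions_per_part[len(regions)] += 1

# regions_per_part[1] counts partitions whose replicas all sit in a single
# region, which is exactly the situation composite rings are meant to rule out.
print(dict(regions_per_part))
```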
Composite rings are intended to provide operators with greater control over the dispersion of object replicas or fragments across a cluster, in particular when there is a desire to have strict guarantees that some replicas or fragments are placed in certain failure domains. This is particularly important for policies with duplicated erasure-coded fragments. A composite ring comprises two or more component rings that are combined to form a single ring with a replica count equal to the sum of the replica counts of the component rings. The component rings are built independently, using distinct devices in distinct regions, which means that the dispersion of replicas between the components can be guaranteed. The composite_builder utilities may then be used to combine the components into a composite ring.

For example, consider a normal ring ring0 with a replica count of 4 and devices in two regions r1 and r2. Despite the best efforts of the ring-builder, it is possible for there to be three replicas of a particular partition placed in one region and only one replica placed in the other region. For example:

```
part_n -> r1z1h110/sdb r1z2h12/sdb r1z3h13/sdb r2z1h21/sdb
```

Now consider two normal rings, each with a replica count of 2: ring1 has devices only in r1; ring2 has devices only in r2. When these rings are combined into a composite ring, every partition is guaranteed to be mapped to two devices in each of r1 and r2, for example:

```
part_n -> r1z1h10/sdb r1z2h20/sdb r2z1h21/sdb r2z2h22/sdb
```

(the first two devices come from ring1, the last two from ring2). The dispersion of partition replicas across failure domains within each of the two component rings may change as they are modified and rebalanced, but the dispersion of replicas between the two regions is guaranteed by the use of a composite ring.

For rings to be formed into a composite they must satisfy the following requirements: all component rings must have the same part power (and therefore number of partitions); all component rings must have an integer replica count; each region may only be used in one component ring; each device may only be used in one component ring.

Under the hood, the composite ring has a replica2part2dev_id table that is the union of the tables from the component rings. Whenever the component rings are rebalanced, the composite ring must be rebuilt; there is no dynamic rebuilding of the composite ring.

Note: the order in which component rings are combined into a composite ring is very significant, because it determines the order in which the Ring.get_part_nodes() method will provide primary nodes for the composite ring and, consequently, the node indexes assigned to the primary nodes. For an erasure-coded policy, inadvertent changes to the primary node indexes could result in large amounts of data movement due to fragments being moved to their new correct primary. The id of each component RingBuilder is therefore stored in the metadata of the composite and used to check the component ordering when the same composite ring is re-composed. RingBuilder ids are normally assigned when a RingBuilder instance is first saved. Older RingBuilder instances loaded from file may not have an id assigned and will need to be saved before they can be used as components of a composite ring.
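Before looking at CompositeRingBuilder itself, here is a sketch of preparing two component builders that satisfy the requirements listed above (same part power, integer replica counts, distinct regions and distinct devices). Paths, IPs and weights are illustrative:

```python
from swift.common.ring.builder import RingBuilder

def make_component(region, ips):
    # One builder per region, replica count 2, same part power in both.
    b = RingBuilder(part_power=10, replicas=2, min_part_hours=1)
    for zone, ip in enumerate(ips, start=1):
        b.add_dev({'region': region, 'zone': zone, 'ip': ip, 'port': 6200,
                   'device': 'sdb1', 'weight': 100.0})
    b.rebalance()
    return b

ring1 = make_component(1, ('10.1.0.1', '10.1.0.2'))
ring2 = make_component(2, ('10.2.0.1', '10.2.0.2'))

# Saving assigns the builder ids that the composite metadata relies on.
ring1.save('/etc/swift/region1.builder')
ring2.save('/etc/swift/region2.builder')
```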
This can be achieved by, for example, running swift-ring-builder <builder-file> rebalance --force on each such builder.

Bases: object. Provides the facility to create, persist, load, rebalance and update composite rings, for example:

```
crb = CompositeRingBuilder(["region1.builder", "region2.builder"])
crb.rebalance()
ring_data = crb.compose()
ring_data.save("composite_ring.gz")
crb.save("composite_builder.composite")

crb = CompositeRingBuilder.load("composite_builder.composite")
crb.compose(["/path/to/region1.builder", "/path/to/region2.builder"])
```

Composite ring metadata is persisted to file in JSON format. The metadata has the structure shown below (using example values):

```
{
  "version": 4,
  "components": [
    { "version": 3, "id": "8e56f3b692d43d9a666440a3d945a03a", "replicas": 1 },
    { "version": 5, "id": "96085923c2b644999dbfd74664f4301b", "replicas": 1 }
  ],
  "component_builder_files": {
    "8e56f3b692d43d9a666440a3d945a03a": "/etc/swift/region1.builder",
    "96085923c2b644999dbfd74664f4301b": "/etc/swift/region2.builder"
  },
  "serialization_version": 1,
  "saved_path": "/etc/swift/multi-ring-1.composite"
}
```

version is an integer representing the current version of the composite ring, which increments each time the ring is successfully (re)composed. components is a list of dicts, each of which describes the relevant properties of a component ring. component_builder_files is a dict that maps component ring builder ids to the file from which that component ring builder was loaded. serialization_version is an integer constant. saved_path is the path to which the metadata was written. The constructor takes a list of paths to builder files that will be used as components of the composite ring.

Check with all component builders that it is ok to move a partition. part: the partition to check. Returns True if all component builders agree that the partition can be moved, False otherwise.

Builds a composite ring using component ring builders loaded from a list of builder files and updates the composite ring metadata. If a list of component ring builder files is given then it will be used to load the component ring builders. Otherwise, the component ring builders will be loaded using the list of builder files that was set when the instance was constructed. In either case, if metadata for an existing composite ring has been loaded then the component ring builders are verified for consistency with the existing composition of builders, unless the optional force flag is set True. builder_files: optional list of paths to ring builder files that will be used to load the component ring builders. Typically the list of component builder files will have been set when the instance was constructed, for example when using the load() class method. However, this parameter may be used if the component builder file paths have moved, or, in conjunction with the force parameter, if a new list of component builders is to be used. force: if True then do not verify that the given builders are consistent with any existing composite ring (default is False). require_modified: if True and force is False, then verify that at least one of the given builders has been modified since the composite ring was last built (default is False). Returns an instance of swift.common.ring.ring.RingData. Raises ValueError if the component ring builders are not suitable for composing with each other, or are inconsistent with any existing composite ring, or if require_modified is True and there has been no change with respect to the existing ring.
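A sketch of a recompose-only-when-something-changed workflow built from the load(), compose() and save() calls described above; the file paths are illustrative and the error handling is deliberately minimal:

```python
from swift.common.ring.composite_builder import CompositeRingBuilder

crb = CompositeRingBuilder.load('/etc/swift/multi-ring-1.composite')
try:
    ring_data = crb.compose(require_modified=True)
except ValueError:
    # No component builder changed since the last composition (or the
    # components are inconsistent with the stored metadata); keep the
    # existing ring file untouched.
    pass
else:
    ring_data.save('/etc/swift/object-1.ring.gz')
    crb.save('/etc/swift/multi-ring-1.composite')
    print(crb.to_dict()['version'])  # increments on each successful compose
```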
Load composite ring metadata. path_to_file: absolute path to a composite ring JSON file. Returns an instance of CompositeRingBuilder. Raises IOError if there is a problem opening the file, and ValueError if the file does not contain valid composite ring metadata.

Loads component ring builders from builder files. Previously loaded component ring builders will be discarded and reloaded. If a list of component ring builder files is given then it will be used to load the component ring builders. Otherwise, the component ring builders will be loaded using the list of builder files that was set when the instance was constructed. In either case, if metadata for an existing composite ring has been loaded then the component ring builders are verified for consistency with the existing composition of builders, unless the optional force flag is set True. builder_files: optional list of paths to ring builder files that will be used to load the component ring builders. Typically the list of component builder files will have been set when the instance was constructed, for example when using the load() class method. However, this parameter may be used if the component builder file paths have moved, or, in conjunction with the force parameter, if a new list of component builders is to be used. force: if True then do not verify that the given builders are consistent with any existing composite ring (default is False). require_modified: if True and force is False, then verify that at least one of the given builders has been modified since the composite ring was last built (default is False). Returns a tuple of (builder files, loaded builders). Raises ValueError if the component ring builders are not suitable for composing with each other, or are inconsistent with any existing composite ring, or if require_modified is True and there has been no change with respect to the existing ring.

Cooperatively rebalances all component ring builders. This method does not change the state of the composite ring; a subsequent call to compose() is required to generate updated composite RingData. Returns a list of dicts, one per component builder, each having the following keys: builder_file maps to the component builder file; builder maps to the corresponding instance of swift.common.ring.builder.RingBuilder; result maps to the results of the rebalance of that component, i.e. a tuple of (number_of_partitions_altered, resulting_balance, number_of_removed_devices). The list has the same order as the components in the composite ring. Raises RingBuilderError if there is an error while rebalancing any component builder.

Save composite ring metadata to the given file. See CompositeRingBuilder for details of the persisted metadata format. path_to_file: absolute path to a composite ring file. Raises ValueError if no composite ring has been built yet with this instance.

Transform the composite ring attributes to a dict. See CompositeRingBuilder for details of the persisted metadata format. Returns a composite ring metadata dict.

Updates the record of how many hours ago each partition was moved in all component builders.
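A sketch tying the cooperative rebalance() results described above back into a fresh composition; paths are illustrative and each printed tuple follows the result layout documented above:

```python
from swift.common.ring.composite_builder import CompositeRingBuilder

crb = CompositeRingBuilder(['/etc/swift/region1.builder',
                            '/etc/swift/region2.builder'])

for entry in crb.rebalance():
    parts_moved, balance, removed_devs = entry['result']
    print('%s: moved %d partitions, balance %.2f, removed %d devices'
          % (entry['builder_file'], parts_moved, balance, removed_devs))

# rebalance() alone does not change the composite; compose() must follow.
ring_data = crb.compose()
ring_data.save('/etc/swift/object-1.ring.gz')
crb.save('/etc/swift/multi-ring-1.composite')
```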
Bases: RingBuilder. A subclass of RingBuilder that participates in cooperative rebalance. During rebalance this subclass will consult its parent_builder before moving a partition. The parent_builder may in turn check with co-builders (including this instance) to verify that none of them have moved that partition in the last min_part_hours. part_power: number of partitions = 2**part_power. replicas: number of replicas for each partition. min_part_hours: minimum number of hours between partition changes. parent_builder: an instance of CompositeRingBuilder.

Check that, in the context of this builder alone, it is ok to move a partition. part: the partition to check. Returns True if the partition can be moved, False otherwise. Updates the record of how many hours ago each partition was moved in this builder.

Check that the given builders and their order are the same as those used to build an existing composite ring. Return True if any of the given builders has been modified with respect to its state when the given component metadata was created. old_composite_meta: a dict of the form returned by make_composite_meta(). new_composite_meta: a dict of the form returned by make_composite_meta(). Returns True if any of the components has been modified, False otherwise. Raises ValueError if the proposed new components do not match any existing components.

Check that all builders in the given list have ids assigned and that no id appears more than once in the list. builders: a list of instances of swift.common.ring.builder.RingBuilder. Raises ValueError if any builder id is missing or repeated.

Check that no device appears in more than one of the given list of builders. builders: a list of swift.common.ring.builder.RingBuilder instances. Raises ValueError if the same device is found in more than one builder.

Check that the given new_component metadata describes the same builder as the given old_component metadata. The new_component builder does not necessarily need to be in the same state as when the old_component metadata was created to satisfy this check, e.g. it may have changed devs and been rebalanced. old_component: a dict of metadata describing a component builder. new_component: a dict of metadata describing a component builder. Raises ValueError if the new_component is not the same as that described by the old_component.

Given a list of component ring builders, perform validation on the list of builders and return a composite RingData instance. builders: a list of swift.common.ring.builder.RingBuilder instances. Returns a new RingData instance built from the component builders. Raises ValueError if the builders are invalid with respect to each other.

Return True if the given builder has been modified with respect to its state when the given component_meta was created. old_component: a dict of metadata describing a component ring. new_component: a dict of metadata describing a component ring. Returns True if the builder has been modified, False otherwise. Raises ValueError if the version of the new_component is older than the version of the existing component.

Pre-validation for all component ring builders that are to be included in the composite ring. Checks that all component rings are valid with respect to each other. builders: a list of swift.common.ring.builder.RingBuilder instances. Raises ValueError if the builders are invalid with respect to each other.
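The module-level helpers described above can also be driven directly from in-memory builders. In the Swift source these appear to be exposed as pre_validate_all_builders() and compose_rings() in swift.common.ring.composite_builder; treat the exact names as an assumption and check them against your Swift version. Paths are illustrative:

```python
from swift.common.ring.builder import RingBuilder
# Assumed helper names; verify against your installed Swift release.
from swift.common.ring.composite_builder import (
    compose_rings, pre_validate_all_builders)

builders = [RingBuilder.load('/etc/swift/region1.builder'),
            RingBuilder.load('/etc/swift/region2.builder')]

pre_validate_all_builders(builders)   # raises ValueError on any mismatch
ring_data = compose_rings(builders)   # component order fixes node indexes
ring_data.save('/etc/swift/object-1.ring.gz')
```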
{ "category": "Runtime", "file_name": "container.html#module-swift.container.replicator.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: object Partitioned consistent hashing ring. serialized_path path to serialized RingData instance reload_time time interval in seconds to check for a ring change ring_name ring name string (basically specified from policy) validation_hook hook point to validate ring configuration ontime RingLoadError if the loaded ring data violates its constraint Number of devices with assignments in the ring. Number of devices in the ring. devices in the ring Generator to get extra nodes for a partition for hinted handoff. The handoff nodes will try to be in zones other than the primary zones, will take into account the device weights, and will usually keep the same sequences of handoffs even with ring changes. part partition to get handoff nodes for generator of node dicts See get_nodes() for a description of the node dicts. Get the partition and nodes for an account/container/object. If a node is responsible for more than one replica, it will only appear in the output once. account account name container container name obj object name a tuple of (partition, list of node dicts) Each node dict will have at least the following keys: | 0 | 1 | |:-|:| | id | unique integer identifier amongst devices | | index | offset into the primary node list for the partition | | weight | a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device | | zone | integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same zone | | ip | the ip address of the device | | port | the tcp port of the device | | device | the devices name on disk (sdb1, for example) | | meta | general use extra field; for example: the online date, the hardware description | id unique integer identifier amongst devices index offset into the primary node list for the partition weight a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device zone integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same zone ip the ip address of the device port the tcp port of the device device the devices name on disk (sdb1, for example) meta general use extra field; for example: the online date, the hardware description Get the partition for an account/container/object. account account name container container name obj object name the partition number Get the nodes that are responsible for the partition. If one node is responsible for more than one replica of the same partition, it will only appear in the output once. part partition to get nodes for list of node dicts See get_nodes() for a description of the node dicts. Check to see if the ring on disk is different than the current one in memory. True if the ring on disk has changed, False otherwise Number of partitions in the ring. Number of replicas (full or partial) used in the ring. Number of devices with weight in the" }, { "data": "Bases: object Partitioned consistent hashing ring data (used for serialization). Deserialize a v1 ring file into a dictionary with devs, part_shift, and replica2part2dev_id keys. If the optional kwarg metadata_only is True, then the replica2part2dev_id is not loaded and that key in the returned dictionary just has the value []. gz_file (file) An opened file-like object which has already consumed the 6 bytes of magic and version. 
metadataonly (bool) If True, only load devs and partshift A dict containing devs, part_shift, and replica2part2dev_id Load ring data from a file. filename Path to a file serialized by the save() method. metadataonly (bool) If True, only load devs and partshift. A RingData instance containing the loaded data. Number of replicas (full or partial) used in the ring. Serialize this RingData instance to disk. filename File into which this instance should be serialized. mtime time used to override mtime for gzip, default or None if the caller wants to include time Bases: object Bases: object Used to build swift.common.ring.RingData instances to be written to disk and used with swift.common.ring.Ring instances. See bin/swift-ring-builder for example usage. The instance variable devs_changed indicates if the device information has changed since the last balancing. This can be used by tools to know whether a rebalance request is an isolated request or due to added, changed, or removed devices. partpower number of partitions = 2**partpower. replicas number of replicas for each partition minparthours minimum number of hours between partition changes Add a device to the ring. This device dict should have a minimum of the following keys: | 0 | 1 | |:-|:| | id | unique integer identifier amongst devices. Defaults to the next id if the id key is not provided in the dict | | weight | a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device | | region | integer indicating which region the device is in | | zone | integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same (region, zone) pair if there is any alternative | | ip | the ip address of the device | | port | the tcp port of the device | | device | the devices name on disk (sdb1, for example) | | meta | general use extra field; for example: the online date, the hardware description | id unique integer identifier amongst" }, { "data": "Defaults to the next id if the id key is not provided in the dict weight a float of the relative weight of this device as compared to others; this indicates how many partitions the builder will try to assign to this device region integer indicating which region the device is in zone integer indicating which zone the device is in; a given partition will not be assigned to multiple devices within the same (region, zone) pair if there is any alternative ip the ip address of the device port the tcp port of the device device the devices name on disk (sdb1, for example) meta general use extra field; for example: the online date, the hardware description Note This will not rebalance the ring immediately as you may want to make multiple changes for a single rebalance. dev device dict id of device (not used in the tree anymore, but unknown users may depend on it) Cancels a ring partition power increasement. This sets the nextpartpower to the current part_power. Object replicators will still skip replication, and a cleanup is still required. Finally, a finishincreasepartition_power needs to be run. False if nextpartpower was not set or is equal to current part_power, otherwise True. Changes the value used to decide if a given partition can be moved again. This restriction is to give the overall system enough time to settle a partition to its new location before moving it to yet another location. 
While no data would be lost if a partition is moved several times quickly, it could make that data unreachable for a short period of time. This should be set to at least the average full partition replication time. Starting it at 24 hours and then lowering it to what the replicator reports as the longest partition cycle is best. minparthours new value for minparthours Reinitializes this RingBuilder instance from data obtained from the builder dict given. Code example: ``` b = RingBuilder(1, 1, 1) # Dummy values b.copy_from(builder) ``` This is to restore a RingBuilder that has had its b.to_dict() previously saved. Temporarily enables debug logging, useful in tests, e.g.: ``` with rb.debug(): rb.rebalance() ``` Finish the partition power increase. The hard links from the old object locations should be removed by now. Get the balance of the ring. The balance value is the highest percentage of the desired amount of partitions a given device wants. For instance, if the worst device wants (based on its weight relative to the sum of all the devices weights) 123 partitions and it has 124 partitions, the balance value would be 0.83 (1 extra / 123 wanted * 100 for percentage). balance of the ring Get the devices that are responsible for the partition, filtering out duplicates. part partition to get devices for list of device dicts Returns the minimum overload value required to make the ring maximally dispersed. The required overload is the largest percentage change of any single device from its weighted replicanth to its wanted replicanth (note: under weighted devices have a negative percentage change) to archive dispersion - that is to say a single device that must be overloaded by 5% is worse than 5 devices in a single tier overloaded by 1%. Get the ring, or more specifically, the swift.common.ring.RingData. This ring data is the minimum required for use of the ring. The ring builder itself keeps additional data such as when partitions were last moved. Increases ring partition power by one. Devices will be assigned to partitions like this: OLD: 0, 3, 7, 5, 2, 1, NEW: 0, 0, 3, 3, 7, 7, 5, 5, 2, 2, 1, 1, False if nextpartpower was not set or is equal to current part_power, None if something went wrong, otherwise True. Obtain RingBuilder instance of the provided builder file builder_file path to builder file to load RingBuilder instance Get the total seconds until a rebalance can be performed Prepares a ring for partition power increase. This makes it possible to compute the future location of any object based on the next partition" }, { "data": "In this phase object servers should create hard links when finalizing a write to the new location as well. A relinker will be run after restarting object-servers, creating hard links to all existing objects in their future location. False if nextpartpower was not set, otherwise True. Override minparthours by marking all partitions as having been moved 255 hours ago and last move epoch to the beginning of time. This can be used to force a full rebalance on the next call to rebalance. Rebalance the ring. This is the main work function of the builder, as it will assign and reassign partitions to devices in the ring based on weights, distinct zones, recent reassignments, etc. The process doesnt always perfectly assign partitions (thatd take a lot more analysis and therefore a lot more time I had code that did that before). 
Because of this, it keeps rebalancing until the device skew (number of partitions a device wants compared to what it has) gets below 1% or doesnt change by more than 1% (only happens with a ring that cant be balanced no matter what). seed a value for the random seed (optional) (numberofpartitionsaltered, resultingbalance, numberofremoved_devices) Remove a device from the ring. Note This will not rebalance the ring immediately as you may want to make multiple changes for a single rebalance. dev_id device id Serialize this RingBuilder instance to disk. builder_file path to builder file to save Search devices by parameters. search_values a dictionary with search values to filter devices, supported parameters are id, region, zone, ip, port, replication_ip, replication_port, device, weight, meta list of device dicts Set the region of a device. This should be called rather than just altering the region key in the device dict directly, as the builder will need to rebuild some internal state to reflect the change. Note This will not rebalance the ring immediately as you may want to make multiple changes for a single rebalance. dev_id device id region new region for device Set the weight of a device. This should be called rather than just altering the weight key in the device dict directly, as the builder will need to rebuild some internal state to reflect the change. Note This will not rebalance the ring immediately as you may want to make multiple changes for a single rebalance. dev_id device id weight new weight for device Set the zone of a device. This should be called rather than just altering the zone key in the device dict directly, as the builder will need to rebuild some internal state to reflect the change. Note This will not rebalance the ring immediately as you may want to make multiple changes for a single rebalance. dev_id device id zone new zone for device Changes the number of replicas in this ring. If the new replica count is sufficiently different that self._replica2part2dev will change size, sets self.devs_changed. This is so tools like bin/swift-ring-builder can know to write out the new ring rather than bailing out due to lack of balance change. Returns a dict that can be used later with copy_from to restore a RingBuilder. swift-ring-builder uses this to pickle.dump the dict to a file and later load that dict into copy_from. Validate the" }, { "data": "This is a safety function to try to catch any bugs in the building process. It ensures partitions have been assigned to real devices, arent doubly assigned, etc. It can also optionally check the even distribution of partitions across devices. stats if True, check distribution of partitions across devices if stats is True, a tuple of (deviceusage, worststat), else (None, None). deviceusage[devid] will equal the number of partitions assigned to that device. worst_stat will equal the number of partitions the worst device is skewed from the number it should have. RingValidationError problem was found with the ring. Returns the weight of each partition as calculated from the total weight of all the devices. Bases: Warning A standard ring built using the ring-builder will attempt to randomly disperse replicas or erasure-coded fragments across failure domains, but does not provide any guarantees such as placing at least one replica of every partition into each region. 
Composite rings are intended to provide operators with greater control over the dispersion of object replicas or fragments across a cluster, in particular when there is a desire to have strict guarantees that some replicas or fragments are placed in certain failure domains. This is particularly important for policies with duplicated erasure-coded fragments. A composite ring comprises two or more component rings that are combined to form a single ring with a replica count equal to the sum of replica counts from the component rings. The component rings are built independently, using distinct devices in distinct regions, which means that the dispersion of replicas between the components can be guaranteed. The composite_builder utilities may then be used to combine components into a composite ring. For example, consider a normal ring ring0 with replica count of 4 and devices in two regions r1 and r2. Despite the best efforts of the ring-builder, it is possible for there to be three replicas of a particular partition placed in one region and only one replica placed in the other region. For example: ``` part_n -> r1z1h110/sdb r1z2h12/sdb r1z3h13/sdb r2z1h21/sdb ``` Now consider two normal rings each with replica count of 2: ring1 has devices in only r1; ring2 has devices in only r2. When these rings are combined into a composite ring then every partition is guaranteed to be mapped to two devices in each of r1 and r2, for example: ``` part_n -> r1z1h10/sdb r1z2h20/sdb r2z1h21/sdb r2z2h22/sdb || || | | ring1 ring2 ``` The dispersion of partition replicas across failure domains within each of the two component rings may change as they are modified and rebalanced, but the dispersion of replicas between the two regions is guaranteed by the use of a composite ring. For rings to be formed into a composite they must satisfy the following requirements: All component rings must have the same part power (and therefore number of partitions) All component rings must have an integer replica count Each region may only be used in one component ring Each device may only be used in one component ring Under the hood, the composite ring has a replica2part2devid table that is the union of the tables from the component rings. Whenever the component rings are rebalanced, the composite ring must be rebuilt. There is no dynamic rebuilding of the composite" }, { "data": "Note The order in which component rings are combined into a composite ring is very significant because it determines the order in which the Ring.getpartnodes() method will provide primary nodes for the composite ring and consequently the node indexes assigned to the primary nodes. For an erasure-coded policy, inadvertent changes to the primary node indexes could result in large amounts of data movement due to fragments being moved to their new correct primary. The id of each component RingBuilder is therefore stored in metadata of the composite and used to check for the component ordering when the same composite ring is re-composed. RingBuilder ids are normally assigned when a RingBuilder instance is first saved. Older RingBuilder instances loaded from file may not have an id assigned and will need to be saved before they can be used as components of a composite ring. 
This can be achieved by, for example: ``` swift-ring-builder <builder-file> rebalance --force ``` Bases: object Provides facility to create, persist, load, rebalance and update composite rings, for example: ``` crb = CompositeRingBuilder([\"region1.builder\", \"region2.builder\"]) crb.rebalance() ring_data = crb.compose() ringdata.save(\"compositering.gz\") crb.save(\"composite_builder.composite\") crb = CompositeRingBuilder.load(\"composite_builder.composite\") crb.compose([\"/path/to/region1.builder\", \"/path/to/region2.builder\"]) ``` Composite ring metadata is persisted to file in JSON format. The metadata has the structure shown below (using example values): ``` { \"version\": 4, \"components\": [ { \"version\": 3, \"id\": \"8e56f3b692d43d9a666440a3d945a03a\", \"replicas\": 1 }, { \"version\": 5, \"id\": \"96085923c2b644999dbfd74664f4301b\", \"replicas\": 1 } ] \"componentbuilderfiles\": { \"8e56f3b692d43d9a666440a3d945a03a\": \"/etc/swift/region1.builder\", \"96085923c2b644999dbfd74664f4301b\": \"/etc/swift/region2.builder\", } \"serialization_version\": 1, \"saved_path\": \"/etc/swift/multi-ring-1.composite\", } ``` version is an integer representing the current version of the composite ring, which increments each time the ring is successfully (re)composed. components is a list of dicts, each of which describes relevant properties of a component ring componentbuilderfiles is a dict that maps component ring builder ids to the file from which that component ring builder was loaded. serialization_version is an integer constant. saved_path is the path to which the metadata was written. a list of paths to builder files that will be used as components of the composite ring. Check with all component builders that it is ok to move a partition. part The partition to check. True if all component builders agree that the partition can be moved, False otherwise. Builds a composite ring using component ring builders loaded from a list of builder files and updates composite ring metadata. If a list of component ring builder files is given then that will be used to load component ring builders. Otherwise, component ring builders will be loaded using the list of builder files that was set when the instance was constructed. In either case, if metadata for an existing composite ring has been loaded then the component ring builders are verified for consistency with the existing composition of builders, unless the optional force flag if set True. builder_files Optional list of paths to ring builder files that will be used to load the component ring builders. Typically the list of component builder files will have been set when the instance was constructed, for example when using the load() class method. However, this parameter may be used if the component builder file paths have moved, or, in conjunction with the force parameter, if a new list of component builders is to be" }, { "data": "force if True then do not verify given builders are consistent with any existing composite ring (default is False). require_modified if True and force is False, then verify that at least one of the given builders has been modified since the composite ring was last built (default is False). An instance of swift.common.ring.ring.RingData ValueError if the component ring builders are not suitable for composing with each other, or are inconsistent with any existing composite ring, or if require_modified is True and there has been no change with respect to the existing ring. Load composite ring metadata. 
pathtofile Absolute path to a composite ring JSON file. an instance of CompositeRingBuilder IOError if there is a problem opening the file ValueError if the file does not contain valid composite ring metadata Loads component ring builders from builder files. Previously loaded component ring builders will discarded and reloaded. If a list of component ring builder files is given then that will be used to load component ring builders. Otherwise, component ring builders will be loaded using the list of builder files that was set when the instance was constructed. In either case, if metadata for an existing composite ring has been loaded then the component ring builders are verified for consistency with the existing composition of builders, unless the optional force flag if set True. builder_files Optional list of paths to ring builder files that will be used to load the component ring builders. Typically the list of component builder files will have been set when the instance was constructed, for example when using the load() class method. However, this parameter may be used if the component builder file paths have moved, or, in conjunction with the force parameter, if a new list of component builders is to be used. force if True then do not verify given builders are consistent with any existing composite ring (default is False). require_modified if True and force is False, then verify that at least one of the given builders has been modified since the composite ring was last built (default is False). A tuple of (builder files, loaded builders) ValueError if the component ring builders are not suitable for composing with each other, or are inconsistent with any existing composite ring, or if require_modified is True and there has been no change with respect to the existing ring. Cooperatively rebalances all component ring builders. This method does not change the state of the composite ring; a subsequent call to compose() is required to generate updated composite RingData. A list of dicts, one per component builder, each having the following keys: builder_file maps to the component builder file; builder maps to the corresponding instance of swift.common.ring.builder.RingBuilder; result maps to the results of the rebalance of that component i.e. a tuple of: (numberofpartitions_altered, resultingbalance, numberofremoveddevices) The list has the same order as components in the composite ring. RingBuilderError if there is an error while rebalancing any component builder. Save composite ring metadata to given file. See CompositeRingBuilder for details of the persisted metadata format. pathtofile Absolute path to a composite ring file ValueError if no composite ring has been built yet with this instance Transform the composite ring attributes to a dict. See CompositeRingBuilder for details of the persisted metadata" }, { "data": "a composite ring metadata dict Updates the record of how many hours ago each partition was moved in all component builders. Bases: RingBuilder A subclass of RingBuilder that participates in cooperative rebalance. During rebalance this subclass will consult with its parent_builder before moving a partition. The parent_builder may in turn check with co-builders (including this instance) to verify that none have moved that partition in the last minparthours. partpower number of partitions = 2**partpower. replicas number of replicas for each partition. minparthours minimum number of hours between partition changes. parent_builder an instance of CompositeRingBuilder. 
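Putting the composite pieces above together, a rough end-to-end sketch might look like the following; the part power, weights, addresses and file names are illustrative only:
```
from swift.common.ring import RingBuilder
from swift.common.ring.composite_builder import CompositeRingBuilder

# One component builder per region, each with an integer replica count and
# the same part power, built from devices used in no other component.
for region, path in ((1, 'region1.builder'), (2, 'region2.builder')):
    b = RingBuilder(10, 2, 1)
    for zone in (1, 2):
        b.add_dev({'region': region, 'zone': zone, 'weight': 100.0,
                   'ip': '10.%d.%d.1' % (region, zone), 'port': 6200,
                   'device': 'sdb'})
    b.rebalance()
    b.save(path)   # saving assigns the builder id recorded in composite metadata

crb = CompositeRingBuilder(['region1.builder', 'region2.builder'])
ring_data = crb.compose()             # 2 + 2 = 4 replicas, two in each region
ring_data.save('composite.ring.gz')
crb.save('composite_builder.composite')
```
Because the composite ring is never rebuilt dynamically, compose() must be re-run (and the resulting ring redeployed) whenever a component builder is rebalanced.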
Check that in the context of this builder alone it is ok to move a partition. part The partition to check. True if the partition can be moved, False otherwise. Updates the record of how many hours ago each partition was moved in in this builder. Check that the given builders and their order are the same as that used to build an existing composite ring. Return True if any of the given builders has been modified with respect to its state when the given component_meta was created. oldcompositemeta a dict of the form returned by makecomposite_meta() newcompositemeta a dict of the form returned by makecomposite_meta() True if any of the components has been modified, False otherwise. Value Error if proposed new components do not match any existing components. Check that all builders in the given list have ids assigned and that no id appears more than once in the list. builders a list instances of swift.common.ring.builder.RingBuilder ValueError if any builder id is missing or repeated Check that no device appears in more than one of the given list of builders. builders a list of swift.common.ring.builder.RingBuilder instances ValueError if the same device is found in more than one builder Check that the given new_component metadata describes the same builder as the given oldcomponent metadata. The newcomponent builder does not necessarily need to be in the same state as when the old_component metadata was created to satisfy this check e.g. it may have changed devs and been rebalanced. old_component a dict of metadata describing a component builder new_component a dict of metadata describing a component builder ValueError if the new_component is not the same as that described by the old_component Given a list of component ring builders, perform validation on the list of builders and return a composite RingData instance. builders a list of swift.common.ring.builder.RingBuilder instances a new RingData instance built from the component builders ValueError if the builders are invalid with respect to each other Return True if the given builder has been modified with respect to its state when the given component_meta was created. old_component a dict of metadata describing a component ring new_component a dict of metadata describing a component ring True if the builder has been modified, False otherwise. ValueError if the version of the new_component is older than the version of the existing component. Pre-validation for all component ring builders that are to be included in the composite ring. Checks that all component rings are valid with respect to each other. builders a list of swift.common.ring.builder.RingBuilder instances ValueError if the builders are invalid with respect to each other Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "container.html#module-swift.container.sharder.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: DatabaseAuditor Audit accounts. alias of AccountBroker Pluggable Back-end for Account Server Bases: DatabaseBroker Encapsulates working with an account database. Create account_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object put_timestamp put timestamp Create container table which is specific to the account DB. conn DB connection object Create policy_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Check if the account DB is empty. True if the database has no active containers. Get global data for the account. dict with keys: account, createdat, puttimestamp, deletetimestamp, statuschangedat, containercount, objectcount, bytesused, hash, id Get global policy stats for the account. do_migrations boolean, if True the policy stat dicts will always include the container_count key; otherwise it may be omitted on legacy databases until they are migrated. dict of policy stats where the key is the policy index and the value is a dictionary like {object_count: M, bytesused: N, containercount: L} Only returns true if the status field is set to DELETED. Get a list of containers sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query reverse reverse the result order. allow_reserved exclude names with reserved-byte by default list of tuples of (name, objectcount, bytesused, put_timestamp, 0) Turn this db record dict into the format this service uses for pending pickles. Merge items into the container table. itemlist list of dictionaries of {name, puttimestamp, deletetimestamp, objectcount, bytes_used, deleted, storagepolicyindex} source if defined, update incoming_sync with the source Create a container with the given attributes. name name of the container to create (a native string) puttimestamp puttimestamp of the container to create deletetimestamp deletetimestamp of the container to create object_count number of objects in the container bytes_used number of bytes used by the container storagepolicyindex the storage policy for this container Bases: Daemon Removes data from status=DELETED accounts. These are accounts that have been asked to be removed by the reseller via services removestorageaccount XMLRPC call. The account is not deleted immediately by the services call, but instead the account is simply marked for deletion by setting the status column in the account_stat table of the account database. This account reaper scans for such accounts and removes the data in the background. The background deletion process will occur on the primary account server for the" }, { "data": "server_conf The [account-server] dictionary of the account server configuration file reaper_conf The [account-reaper] dictionary of the account server configuration file See the etc/account-server.conf-sample for information on the possible configuration parameters. The account swift.common.ring.Ring for the cluster. The container swift.common.ring.Ring for the cluster. Get the ring identified by the policy index policy_idx Storage policy index A ring matching the storage policy Called once per pass for each account this server is the primary for and attempts to delete the data for the given account. 
The reaper will only delete one account at any given time. It will call reap_container() up to sqrt(self.concurrency) times concurrently while reaping the account. If there is any exception while deleting a single container, the process will continue for any other containers and the failed containers will be tried again the next time this function is called with the same parameters. If there is any exception while listing the containers for deletion, the process will stop (but will obviously be tried again the next time this function is called with the same parameters). This isnt likely since the listing comes from the local database. After the process completes (successfully or not) statistics about what was accomplished will be logged. This function returns nothing and should raise no exception but only update various self.stats_* values for what occurs. broker The AccountBroker for the account to delete. partition The partition in the account ring the account is on. nodes The primary node dicts for the account to delete. container_shard int used to shard containers reaped. If None, will reap all containers. See also swift.account.backend.AccountBroker for the broker class. See also swift.common.ring.Ring.get_nodes() for a description of the node dicts. Deletes the data and the container itself for the given container. This will call reap_object() up to sqrt(self.concurrency) times concurrently for the objects in the container. If there is any exception while deleting a single object, the process will continue for any other objects in the container and the failed objects will be tried again the next time this function is called with the same parameters. If there is any exception while listing the objects for deletion, the process will stop (but will obviously be tried again the next time this function is called with the same parameters). This is a possibility since the listing comes from querying just the primary remote container server. Once all objects have been attempted to be deleted, the container itself will be attempted to be deleted by sending a delete request to all container nodes. The format of the delete request is such that each container server will update a corresponding account server, removing the container from the accounts" }, { "data": "This function returns nothing and should raise no exception but only update various self.stats_* values for what occurs. account The name of the account for the container. account_partition The partition for the account on the account ring. account_nodes The primary node dicts for the account. container The name of the container to delete. See also: swift.common.ring.Ring.get_nodes() for a description of the account node dicts. Called once per pass for each device on the server. This will scan the accounts directory for the device, looking for partitions this device is the primary for, then looking for account databases that are marked status=DELETED and still have containers and calling reap_account(). Account databases marked status=DELETED that no longer have containers will eventually be permanently removed by the reclaim process within the account replicator (see swift.db_replicator). device The device to look for accounts to be deleted. Deletes the given object by issuing a delete request to each node for the object. The format of the delete request is such that each object server will update a corresponding container server, removing the object from the containers listing. 
This function returns nothing and should raise no exception but only update various self.stats_* values for what occurs. account The name of the account for the object. container The name of the container for the object. container_partition The partition for the container on the container ring. container_nodes The primary node dicts for the container. obj The name of the object to delete. policy_index The storage policy index of the objects container See also: swift.common.ring.Ring.get_nodes() for a description of the container node dicts. Main entry point when running the reaper in normal daemon mode. This repeatedly calls run_once() no quicker than the configuration interval. Main entry point when running the reaper in once mode, where it will do a single pass over all accounts on the server. This is called repeatedly by runforever(). This will call reapdevice() once for each device on the server. Bases: BaseStorageServer WSGI controller for the account server. Handle HTTP DELETE request. Handle HTTP GET request. Handle HTTP HEAD request. Handle HTTP POST request. Handle HTTP PUT request. Handle HTTP REPLICATE request. Handler for RPC calls for account replication. paste.deploy app factory for creating WSGI account server apps Split and validate path for an account. req a swob request a tuple of path parts as strings Split and validate path for a container. req a swob request a tuple of path parts as strings Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#communication.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "This is a compilation of five posts I made earlier discussing how to build a consistent hashing ring. The posts seemed to be accessed quite frequently, so Ive gathered them all here on one page for easier reading. Note This is an historical document; as such, all code examples are Python 2. If this makes you squirm, think of it as pseudo-code. Regardless of implementation language, the state of the art in consistent-hashing and distributed systems more generally has advanced. We hope that this introduction from first principles will still prove informative, particularly with regard to how data is distributed within a Swift cluster. Consistent Hashing is a term used to describe a process where data is distributed using a hashing algorithm to determine its location. Using only the hash of the id of the data you can determine exactly where that data should be. This mapping of hashes to locations is usually termed a ring. Probably the simplest hash is just a modulus of the id. For instance, if all ids are numbers and you have two machines you wish to distribute data to, you could just put all odd numbered ids on one machine and even numbered ids on the other. Assuming you have a balanced number of odd and even numbered ids, and a balanced data size per id, your data would be balanced between the two machines. Since data ids are often textual names and not numbers, like paths for files or URLs, it makes sense to use a real hashing algorithm to convert the names to numbers first. Using MD5 for instance, the hash of the name mom.png is 4559a12e3e8da7c2186250c2f292e3af and the hash of dad.png is 096edcc4107e9e18d6a03a43b3853bea. Now, using the modulus, we can place mom.jpg on the odd machine and dad.png on the even one. Another benefit of using a hashing algorithm like MD5 is that the resulting hashes have a known even distribution, meaning your ids will be evenly distributed without worrying about keeping the id values themselves evenly distributed. Here is a simple example of this in action: ``` from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 DATAIDCOUNT = 10000000 nodecounts = [0] * NODECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(dataid).digest())[0] nodeid = hsh % NODECOUNT nodecounts[nodeid] += 1 desiredcount = DATAIDCOUNT / NODECOUNT print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) ``` ``` 100000: Desired data ids per node 100695: Most data ids on one node, 0.69% over 99073: Least data ids on one node, 0.93% under ``` So thats not bad at all; less than a percent over/under for distribution per node. In the next part of this series well examine where modulus distribution causes problems and how to improve our ring to overcome them. In Part 1 of this series, we did a simple test of using the modulus of a hash to locate data. We saw very good distribution, but thats only part of the story. Distributed systems not only need to distribute load, but they often also need to grow as more and more data is placed in" }, { "data": "So lets imagine we have a 100 node system up and running using our previous algorithm, but its starting to get full so we want to add another node. 
When we add that 101st node to our algorithm we notice that many ids now map to different nodes than they previously did. Were going to have to shuffle a ton of data around our system to get it all into place again. Lets examine whats happened on a much smaller scale: just 2 nodes again, node 0 gets even ids and node 1 gets odd ids. So data id 100 would map to node 0, data id 101 to node 1, data id 102 to node 0, etc. This is simply node = id % 2. Now we add a third node (node 2) for more space, so we want node = id % 3. So now data id 100 maps to node id 1, data id 101 to node 2, and data id 102 to node 0. So we have to move data for 2 of our 3 ids so they can be found again. Lets examine this at a larger scale: ``` from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 NEWNODECOUNT = 101 DATAIDCOUNT = 10000000 moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] nodeid = hsh % NODECOUNT newnodeid = hsh % NEWNODECOUNT if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 9900989 ids moved, 99.01% ``` Wow, thats severe. Wed have to shuffle around 99% of our data just to increase our capacity 1%! We need a new algorithm that combats this behavior. This is where the ring really comes in. We can assign ranges of hashes directly to nodes and then use an algorithm that minimizes the changes to those ranges. Back to our small scale, lets say our ids range from 0 to 999. We have two nodes and well assign data ids 0499 to node 0 and 500999 to node 1. Later, when we add node 2, we can take half the data ids from node 0 and half from node 1, minimizing the amount of data that needs to move. Lets examine this at a larger scale: ``` from bisect import bisect_left from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 NEWNODECOUNT = 101 DATAIDCOUNT = 10000000 noderangestarts = [] for nodeid in range(NODECOUNT): noderangestarts.append(DATAIDCOUNT / NODECOUNT * nodeid) newnoderange_starts = [] for newnodeid in range(NEWNODECOUNT): newnoderangestarts.append(DATAID_COUNT / NEWNODECOUNT * newnodeid) moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] nodeid = bisectleft(noderangestarts, hsh % DATAIDCOUNT) % NODE_COUNT newnodeid = bisectleft(newnoderangestarts, hsh % DATAIDCOUNT) % NEWNODECOUNT if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 4901707 ids moved, 49.02% ``` Okay, that is better. But still, moving 50% of our data to add 1% capacity is not very good. If we examine what happened more closely well see what is an accordion effect. We shrunk node 0s range a bit to give to the new node, but that shifted all the other nodes ranges by the same amount. We can minimize the change to a nodes assigned range by assigning several smaller ranges instead of the single broad range we were before. This can be done by creating virtual nodes for each" }, { "data": "So 100 nodes might have 1000 virtual nodes. Lets examine how that might work. 
``` from bisect import bisect_left from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 DATAIDCOUNT = 10000000 VNODE_COUNT = 1000 vnoderangestarts = [] vnode2node = [] for vnodeid in range(VNODECOUNT): vnoderangestarts.append(DATAIDCOUNT / VNODECOUNT * vnodeid) vnode2node.append(vnodeid % NODECOUNT) new_vnode2node = list(vnode2node) newnodeid = NODE_COUNT NEWNODECOUNT = NODE_COUNT + 1 vnodestoreassign = VNODECOUNT / NEWNODE_COUNT while vnodestoreassign > 0: for nodetotakefrom in range(NODECOUNT): for vnodeid, nodeid in enumerate(new_vnode2node): if nodeid == nodetotakefrom: newvnode2node[vnodeid] = newnodeid vnodestoreassign -= 1 break if vnodestoreassign <= 0: break moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] vnodeid = bisectleft(vnoderangestarts, hsh % DATAIDCOUNT) % VNODE_COUNT nodeid = vnode2node[vnodeid] newnodeid = newvnode2node[vnodeid] if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 90423 ids moved, 0.90% ``` There we go, we added 1% capacity and only moved 0.9% of existing data. The vnoderangestarts list seems a bit out of place though. Its values are calculated and never change for the lifetime of the cluster, so lets optimize that out. ``` from bisect import bisect_left from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 DATAIDCOUNT = 10000000 VNODE_COUNT = 1000 vnode2node = [] for vnodeid in range(VNODECOUNT): vnode2node.append(vnodeid % NODECOUNT) new_vnode2node = list(vnode2node) newnodeid = NODE_COUNT vnodestoreassign = VNODECOUNT / (NODECOUNT + 1) while vnodestoreassign > 0: for nodetotakefrom in range(NODECOUNT): for vnodeid, nodeid in enumerate(vnode2node): if nodeid == nodetotakefrom: vnode2node[vnodeid] = newnode_id vnodestoreassign -= 1 break if vnodestoreassign <= 0: break moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] vnodeid = hsh % VNODECOUNT nodeid = vnode2node[vnodeid] newnodeid = newvnode2node[vnodeid] if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 89841 ids moved, 0.90% ``` There we go. In the next part of this series, will further examine the algorithms limitations and how to improve on it. In Part 2 of this series, we reached an algorithm that performed well even when adding new nodes to the cluster. We used 1000 virtual nodes that could be independently assigned to nodes, allowing us to minimize the amount of data moved when a node was added. The number of virtual nodes puts a cap on how many real nodes you can have. For example, if you have 1000 virtual nodes and you try to add a 1001st real node, you cant assign a virtual node to it without leaving another real node with no assignment, leaving you with just 1000 active real nodes still. Unfortunately, the number of virtual nodes created at the beginning can never change for the life of the cluster without a lot of careful work. For example, you could double the virtual node count by splitting each existing virtual node in half and assigning both halves to the same real node. However, if the real node uses the virtual nodes id to optimally store the data (for example, all data might be stored in /[virtual node id]/[data id]) it would have to move data around locally to reflect the change. 
And it would have to resolve data using both the new and old locations while the moves were taking place, making atomic operations difficult or" }, { "data": "Lets continue with this assumption that changing the virtual node count is more work than its worth, but keep in mind that some applications might be fine with this. The easiest way to deal with this limitation is to make the limit high enough that it wont matter. For instance, if we decide our cluster will never exceed 60,000 real nodes, we can just make 60,000 virtual nodes. Also, we should include in our calculations the relative size of our nodes. For instance, a year from now we might have real nodes that can handle twice the capacity of our current nodes. So wed want to assign twice the virtual nodes to those future nodes, so maybe we should raise our virtual node estimate to 120,000. A good rule to follow might be to calculate 100 virtual nodes to each real node at maximum capacity. This would allow you to alter the load on any given node by 1%, even at max capacity, which is pretty fine tuning. So now were at 6,000,000 virtual nodes for a max capacity cluster of 60,000 real nodes. 6 million virtual nodes seems like a lot, and it might seem like wed use up way too much memory. But the only structure this affects is the virtual node to real node mapping. The base amount of memory required would be 6 million times 2 bytes (to store a real node id from 0 to 65,535). 12 megabytes of memory just isnt that much to use these days. Even with all the overhead of flexible data types, things arent that bad. I changed the code from the previous part in this series to have 60,000 real and 6,000,000 virtual nodes, changed the list to an array(H), and python topped out at 27m of resident memory and that includes two rings. To change terminology a bit, were going to start calling these virtual nodes partitions. This will make it a bit easier to discern between the two types of nodes weve been talking about so far. Also, it makes sense to talk about partitions as they are really just unchanging sections of the hash space. Were also going to always keep the partition count a power of two. This makes it easy to just use bit manipulation on the hash to determine the partition rather than modulus. It isnt much faster, but it is a little. So, heres our updated ring code, using 8,388,608 (2 23) partitions and 65,536 nodes. Weve upped the sample data id set and checked the distribution to make sure we havent broken anything. 
``` from array import array from hashlib import md5 from struct import unpack_from PARTITION_POWER = 23 PARTITIONSHIFT = 32 - PARTITIONPOWER NODE_COUNT = 65536 DATAIDCOUNT = 100000000 part2node = array('H') for part in range(2 PARTITION_POWER): part2node.append(part % NODE_COUNT) nodecounts = [0] * NODECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) part = unpack_from('>I', md5(str(dataid)).digest())[0] >> PARTITIONSHIFT node_id = part2node[part] nodecounts[nodeid] += 1 desiredcount = DATAIDCOUNT / NODECOUNT print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) ``` ``` 1525: Desired data ids per node 1683: Most data ids on one node, 10.36% over 1360: Least data ids on one node, 10.82% under ```" }, { "data": "+10% seems a bit high, but I reran with 65,536 partitions and 256 nodes and got +0.4% so its just that our sample size (100m) is too small for our number of partitions (8m). Itll take way too long to run experiments with an even larger sample size, so lets reduce back down to these lesser numbers. (To be certain, I reran at the full version with a 10 billion data id sample set and got +1%, but it took 6.5 hours to run.) In the next part of this series, well talk about how to increase the durability of our data in the cluster. In Part 3 of this series, we just further discussed partitions (virtual nodes) and cleaned up our code a bit based on that. Now, lets talk about how to increase the durability and availability of our data in the cluster. For many distributed data stores, durability is quite important. Either RAID arrays or individually distinct copies of data are required. While RAID will increase the durability, it does nothing to increase the availability if the RAID machine crashes, the data may be safe but inaccessible until repairs are done. If we keep distinct copies of the data on different machines and a machine crashes, the other copies will still be available while we repair the broken machine. An easy way to gain this multiple copy durability/availability is to just use multiple rings and groups of nodes. For instance, to achieve the industry standard of three copies, youd split the nodes into three groups and each group would have its own ring and each would receive a copy of each data item. This can work well enough, but has the drawback that expanding capacity requires adding three nodes at a time and that losing one node essentially lowers capacity by three times that nodes capacity. Instead, lets use a different, but common, approach of meeting our requirements with a single ring. This can be done by walking the ring from the starting point and looking for additional distinct nodes. 
Heres code that supports a variable number of replicas (set to 3 for testing): ``` from array import array from hashlib import md5 from struct import unpack_from REPLICAS = 3 PARTITION_POWER = 16 PARTITIONSHIFT = 32 - PARTITIONPOWER PARTITIONMAX = 2 ** PARTITIONPOWER - 1 NODE_COUNT = 256 DATAIDCOUNT = 10000000 part2node = array('H') for part in range(2 PARTITION_POWER): part2node.append(part % NODE_COUNT) nodecounts = [0] * NODECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) part = unpack_from('>I', md5(str(dataid)).digest())[0] >> PARTITIONSHIFT node_ids = [part2node[part]] nodecounts[nodeids[0]] += 1 for replica in range(1, REPLICAS): while part2node[part] in node_ids: part += 1 if part > PARTITION_MAX: part = 0 node_ids.append(part2node[part]) nodecounts[nodeids[-1]] += 1 desiredcount = DATAIDCOUNT / NODECOUNT * REPLICAS print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) ``` ``` 117186: Desired data ids per node 118133: Most data ids on one node, 0.81% over 116093: Least data ids on one node, 0.93% under ``` Thats pretty good; less than 1% over/under. While this works well, there are a couple of" }, { "data": "First, because of how weve initially assigned the partitions to nodes, all the partitions for a given node have their extra copies on the same other two nodes. The problem here is that when a machine fails, the load on these other nodes will jump by that amount. Itd be better if we initially shuffled the partition assignment to distribute the failover load better. The other problem is a bit harder to explain, but deals with physical separation of machines. Imagine you can only put 16 machines in a rack in your datacenter. The 256 nodes weve been using would fill 16 racks. With our current code, if a rack goes out (power problem, network issue, etc.) there is a good chance some data will have all three copies in that rack, becoming inaccessible. We can fix this shortcoming by adding the concept of zones to our nodes, and then ensuring that replicas are stored in distinct zones. 
``` from array import array from hashlib import md5 from random import shuffle from struct import unpack_from REPLICAS = 3 PARTITION_POWER = 16 PARTITIONSHIFT = 32 - PARTITIONPOWER PARTITIONMAX = 2 ** PARTITIONPOWER - 1 NODE_COUNT = 256 ZONE_COUNT = 16 DATAIDCOUNT = 10000000 node2zone = [] while len(node2zone) < NODE_COUNT: zone = 0 while zone < ZONECOUNT and len(node2zone) < NODECOUNT: node2zone.append(zone) zone += 1 part2node = array('H') for part in range(2 PARTITION_POWER): part2node.append(part % NODE_COUNT) shuffle(part2node) nodecounts = [0] * NODECOUNT zonecounts = [0] * ZONECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) part = unpack_from('>I', md5(str(dataid)).digest())[0] >> PARTITIONSHIFT node_ids = [part2node[part]] zones = [node2zone[node_ids[0]]] nodecounts[nodeids[0]] += 1 zone_counts[zones[0]] += 1 for replica in range(1, REPLICAS): while part2node[part] in node_ids and \\ node2zone[part2node[part]] in zones: part += 1 if part > PARTITION_MAX: part = 0 node_ids.append(part2node[part]) zones.append(node2zone[node_ids[-1]]) nodecounts[nodeids[-1]] += 1 zone_counts[zones[-1]] += 1 desiredcount = DATAIDCOUNT / NODECOUNT * REPLICAS print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) desiredcount = DATAIDCOUNT / ZONECOUNT * REPLICAS print '%d: Desired data ids per zone' % desired_count maxcount = max(zonecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids in one zone, %.02f%% over' % \\ (max_count, over) mincount = min(zonecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids in one zone, %.02f%% under' % \\ (min_count, under) ``` ``` 117186: Desired data ids per node 118782: Most data ids on one node, 1.36% over 115632: Least data ids on one node, 1.33% under 1875000: Desired data ids per zone 1878533: Most data ids in one zone, 0.19% over 1869070: Least data ids in one zone, 0.32% under ``` So the shuffle and zone distinctions affected our distribution some, but still definitely good enough. This test took about 64 seconds to run on my machine. Theres a completely alternate, and quite common, way of accomplishing these same requirements. This alternate method doesnt use partitions at all, but instead just assigns anchors to the nodes within the hash space. Finding the first node for a given hash just involves walking this anchor ring for the next node, and finding additional nodes works similarly as before. 
To attain the equivalent of our virtual nodes, each real node is assigned multiple" }, { "data": "``` from bisect import bisect_left from hashlib import md5 from struct import unpack_from REPLICAS = 3 NODE_COUNT = 256 ZONE_COUNT = 16 DATAIDCOUNT = 10000000 VNODE_COUNT = 100 node2zone = [] while len(node2zone) < NODE_COUNT: zone = 0 while zone < ZONECOUNT and len(node2zone) < NODECOUNT: node2zone.append(zone) zone += 1 hash2index = [] index2node = [] for node in range(NODE_COUNT): for vnode in range(VNODE_COUNT): hsh = unpack_from('>I', md5(str(node)).digest())[0] index = bisect_left(hash2index, hsh) if index > len(hash2index): index = 0 hash2index.insert(index, hsh) index2node.insert(index, node) nodecounts = [0] * NODECOUNT zonecounts = [0] * ZONECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] index = bisect_left(hash2index, hsh) if index >= len(hash2index): index = 0 node_ids = [index2node[index]] zones = [node2zone[node_ids[0]]] nodecounts[nodeids[0]] += 1 zone_counts[zones[0]] += 1 for replica in range(1, REPLICAS): while index2node[index] in node_ids and \\ node2zone[index2node[index]] in zones: index += 1 if index >= len(hash2index): index = 0 node_ids.append(index2node[index]) zones.append(node2zone[node_ids[-1]]) nodecounts[nodeids[-1]] += 1 zone_counts[zones[-1]] += 1 desiredcount = DATAIDCOUNT / NODECOUNT * REPLICAS print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) desiredcount = DATAIDCOUNT / ZONECOUNT * REPLICAS print '%d: Desired data ids per zone' % desired_count maxcount = max(zonecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids in one zone, %.02f%% over' % \\ (max_count, over) mincount = min(zonecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids in one zone, %.02f%% under' % \\ (min_count, under) ``` ``` 117186: Desired data ids per node 351282: Most data ids on one node, 199.76% over 15965: Least data ids on one node, 86.38% under 1875000: Desired data ids per zone 2248496: Most data ids in one zone, 19.92% over 1378013: Least data ids in one zone, 26.51% under ``` This test took over 15 minutes to run! Unfortunately, this method also gives much less control over the distribution. To get better distribution, you have to add more virtual nodes, which eats up more memory and takes even more time to build the ring and perform distinct node lookups. The most common operation, data id lookup, can be improved (by predetermining each virtual nodes failover nodes, for instance) but it starts off so far behind our first approach that well just stick with that. In the next part of this series, well start to wrap all this up into a useful Python module. In Part 4 of this series, we ended up with a multiple copy, distinctly zoned ring. Or at least the start of it. In this final part well package the code up into a useable Python module and then add one last feature. First, lets separate the ring itself from the building of the data for the ring and its testing. 
``` from array import array from hashlib import md5 from random import shuffle from struct import unpack_from from time import time class Ring(object): def init(self, nodes, part2node, replicas): self.nodes = nodes self.part2node = part2node self.replicas = replicas partition_power = 1 while 2 partition_power < len(part2node): partition_power += 1 if len(part2node) != 2 partition_power: raise Exception(\"part2node's length is not an \" \"exact power of 2\")" }, { "data": "= 32 - partitionpower def getnodes(self, dataid): dataid = str(dataid) part = unpack_from('>I', md5(dataid).digest())[0] >> self.partitionshift node_ids = [self.part2node[part]] zones = [self.nodes[node_ids[0]]] for replica in range(1, self.replicas): while self.part2node[part] in node_ids and \\ self.nodes[self.part2node[part]] in zones: part += 1 if part >= len(self.part2node): part = 0 node_ids.append(self.part2node[part]) zones.append(self.nodes[node_ids[-1]]) return [self.nodes[n] for n in node_ids] def buildring(nodes, partitionpower, replicas): begin = time() part2node = array('H') for part in range(2 partition_power): part2node.append(part % len(nodes)) shuffle(part2node) ring = Ring(nodes, part2node, replicas) print '%.02fs to build ring' % (time() - begin) return ring def test_ring(ring): begin = time() DATAIDCOUNT = 10000000 node_counts = {} zone_counts = {} for dataid in range(DATAID_COUNT): for node in ring.getnodes(dataid): node_counts[node['id']] = \\ node_counts.get(node['id'], 0) + 1 zone_counts[node['zone']] = \\ zone_counts.get(node['zone'], 0) + 1 print '%ds to test ring' % (time() - begin) desired_count = \\ DATAIDCOUNT / len(ring.nodes) * REPLICAS print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts.values()) over = \\ 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts.values()) under = \\ 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) zone_count = \\ len(set(n['zone'] for n in ring.nodes.values())) desired_count = \\ DATAIDCOUNT / zone_count * ring.replicas print '%d: Desired data ids per zone' % desired_count maxcount = max(zonecounts.values()) over = \\ 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids in one zone, %.02f%% over' % \\ (max_count, over) mincount = min(zonecounts.values()) under = \\ 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids in one zone, %.02f%% under' % \\ (min_count, under) if name == 'main': PARTITION_POWER = 16 REPLICAS = 3 NODE_COUNT = 256 ZONE_COUNT = 16 nodes = {} while len(nodes) < NODE_COUNT: zone = 0 while zone < ZONECOUNT and len(nodes) < NODECOUNT: node_id = len(nodes) nodes[nodeid] = {'id': nodeid, 'zone': zone} zone += 1 ring = buildring(nodes, PARTITIONPOWER, REPLICAS) test_ring(ring) ``` ``` 0.06s to build ring 82s to test ring 117186: Desired data ids per node 118773: Most data ids on one node, 1.35% over 115801: Least data ids on one node, 1.18% under 1875000: Desired data ids per zone 1878339: Most data ids in one zone, 0.18% over 1869914: Least data ids in one zone, 0.27% under ``` It takes a bit longer to test our ring, but thats mostly because of the switch to dictionaries from arrays for various items. Having node dictionaries is nice because you can attach any node information you want directly there (ip addresses, tcp ports, drive paths, etc.). 
But were still on track for further testing; our distribution is still good. Now, lets add our one last feature to our ring: the concept of weights. Weights are useful because the nodes you add later in a rings life are likely to have more capacity than those you have at the outset. For this test, well make half our nodes have twice the weight. Well have to change build_ring to give more partitions to the nodes with more weight and well change test_ring to take into account these weights. Since weve changed so much Ill just post the entire module again: ``` from array import array from hashlib import md5 from random import shuffle from struct import unpack_from from time import time class Ring(object): def init(self, nodes, part2node, replicas): self.nodes = nodes self.part2node = part2node" }, { "data": "= replicas partition_power = 1 while 2 partition_power < len(part2node): partition_power += 1 if len(part2node) != 2 partition_power: raise Exception(\"part2node's length is not an \" \"exact power of 2\") self.partitionshift = 32 - partitionpower def getnodes(self, dataid): dataid = str(dataid) part = unpack_from('>I', md5(dataid).digest())[0] >> self.partitionshift node_ids = [self.part2node[part]] zones = [self.nodes[node_ids[0]]] for replica in range(1, self.replicas): while self.part2node[part] in node_ids and \\ self.nodes[self.part2node[part]] in zones: part += 1 if part >= len(self.part2node): part = 0 node_ids.append(self.part2node[part]) zones.append(self.nodes[node_ids[-1]]) return [self.nodes[n] for n in node_ids] def buildring(nodes, partitionpower, replicas): begin = time() parts = 2 partition_power total_weight = \\ float(sum(n['weight'] for n in nodes.values())) for node in nodes.values(): node['desired_parts'] = \\ parts / total_weight * node['weight'] part2node = array('H') for part in range(2 partition_power): for node in nodes.values(): if node['desired_parts'] >= 1: node['desired_parts'] -= 1 part2node.append(node['id']) break else: for node in nodes.values(): if node['desired_parts'] >= 0: node['desired_parts'] -= 1 part2node.append(node['id']) break shuffle(part2node) ring = Ring(nodes, part2node, replicas) print '%.02fs to build ring' % (time() - begin) return ring def test_ring(ring): begin = time() DATAIDCOUNT = 10000000 node_counts = {} zone_counts = {} for dataid in range(DATAID_COUNT): for node in ring.getnodes(dataid): node_counts[node['id']] = \\ node_counts.get(node['id'], 0) + 1 zone_counts[node['zone']] = \\ zone_counts.get(node['zone'], 0) + 1 print '%ds to test ring' % (time() - begin) total_weight = float(sum(n['weight'] for n in ring.nodes.values())) max_over = 0 max_under = 0 for node in ring.nodes.values(): desired = DATAIDCOUNT REPLICAS \\ node['weight'] / total_weight diff = node_counts[node['id']] - desired if diff > 0: over = 100.0 * diff / desired if over > max_over: max_over = over else: under = 100.0 * (-diff) / desired if under > max_under: max_under = under print '%.02f%% max node over' % max_over print '%.02f%% max node under' % max_under max_over = 0 max_under = 0 for zone in set(n['zone'] for n in ring.nodes.values()): zone_weight = sum(n['weight'] for n in ring.nodes.values() if n['zone'] == zone) desired = DATAIDCOUNT REPLICAS \\ zoneweight / totalweight diff = zone_counts[zone] - desired if diff > 0: over = 100.0 * diff / desired if over > max_over: max_over = over else: under = 100.0 * (-diff) / desired if under > max_under: max_under = under print '%.02f%% max zone over' % max_over print '%.02f%% max zone under' % max_under if name 
== 'main': PARTITION_POWER = 16 REPLICAS = 3 NODE_COUNT = 256 ZONE_COUNT = 16 nodes = {} while len(nodes) < NODE_COUNT: zone = 0 while zone < ZONECOUNT and len(nodes) < NODECOUNT: node_id = len(nodes) nodes[nodeid] = {'id': nodeid, 'zone': zone, 'weight': 1.0 + (node_id % 2)} zone += 1 ring = buildring(nodes, PARTITIONPOWER, REPLICAS) test_ring(ring) ``` ``` 0.88s to build ring 86s to test ring 1.66% max over 1.46% max under 0.28% max zone over 0.23% max zone under ``` So things are still good, even though we have differently weighted nodes. I ran another test with this code using random weights from 1 to 100 and got over/under values for nodes of 7.35%/18.12% and zones of 0.24%/0.22%, still pretty good considering the crazy weight ranges. Hopefully this series has been a good introduction to building a ring. This code is essentially how the OpenStack Swift ring works, except that Swifts ring has lots of additional optimizations, such as storing each replica assignment separately, and lots of extra features for building, validating, and otherwise working with rings. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#community.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift has a limit on the size of a single uploaded object; by default this is 5GB. However, the download size of a single object is virtually unlimited with the concept of segmentation. Segments of the larger object are uploaded and a special manifest file is created that, when downloaded, sends all the segments concatenated as a single object. This also offers much greater upload speed with the possibility of parallel uploads of the segments. Middleware that will provide Dynamic Large Object (DLO) support. The quickest way to try out this feature is use the swift Swift Tool included with the python-swiftclient library. You can use the -S option to specify the segment size to use when splitting a large file. For example: ``` swift upload testcontainer -S 1073741824 largefile ``` This would split the large_file into 1G segments and begin uploading those segments in parallel. Once all the segments have been uploaded, swift will then create the manifest file so the segments can be downloaded as one. So now, the following swift command would download the entire large object: ``` swift download testcontainer largefile ``` swift command uses a strict convention for its segmented object support. In the above example it will upload all the segments into a second container named testcontainersegments. These segments will have names like large_file/1290206778.25/21474836480/00000000, large_file/1290206778.25/21474836480/00000001, etc. The main benefit for using a separate container is that the main container listings will not be polluted with all the segment names. The reason for using the segment name format of <name>/<timestamp>/<size>/<segment> is so that an upload of a new file with the same name wont overwrite the contents of the first until the last moment when the manifest file is updated. swift will manage these segment files for you, deleting old segments on deletes and overwrites, etc. You can override this behavior with the --leave-segments option if desired; this is useful if you want to have multiple versions of the same large object available. You can also work with the segments and manifests directly with HTTP requests instead of having swift do that for you. You can just upload the segments like you would any other object and the manifest is just a zero-byte (not enforced) file with an extra X-Object-Manifest header. All the object segments need to be in the same container, have a common object name prefix, and sort in the order in which they should be concatenated. Object names are sorted lexicographically as UTF-8 byte strings. They dont have to be in the same container as the manifest file will be, which is useful to keep container listings clean as explained above with swift. The manifest file is simply a zero-byte (not enforced) file with the extra X-Object-Manifest: <container>/<prefix> header, where <container> is the container the object segments are in and <prefix> is the common prefix for all the segments. It is best to upload all the segments first and then create or update the manifest. In this way, the full object wont be available for downloading until the upload is complete. Also, you can upload a new set of segments to a second location and then update the manifest to point to this new" }, { "data": "During the upload of the new segments, the original manifest will still be available to download the first set of segments. 
Note When updating a manifest object using a POST request, a X-Object-Manifest header must be included for the object to continue to behave as a manifest object. The manifest file should have no content. However, this is not enforced. If the manifest path itself conforms to container/prefix specified in X-Object-Manifest, and if manifest has some content/data in it, it would also be considered as segment and manifests content will be part of the concatenated GET response. The order of concatenation follows the usual DLO logic which is - the order of concatenation adheres to order returned when segment names are sorted. Heres an example using curl with tiny 1-byte segments: ``` curl -X PUT -H 'X-Auth-Token: <token>' http://<storage_url>/container/myobject/00000001 --data-binary '1' curl -X PUT -H 'X-Auth-Token: <token>' http://<storage_url>/container/myobject/00000002 --data-binary '2' curl -X PUT -H 'X-Auth-Token: <token>' http://<storage_url>/container/myobject/00000003 --data-binary '3' curl -X PUT -H 'X-Auth-Token: <token>' -H 'X-Object-Manifest: container/myobject/' http://<storage_url>/container/myobject --data-binary '' curl -H 'X-Auth-Token: <token>' http://<storage_url>/container/myobject ``` Bases: WSGIContext req users request xobjectmanifest as unquoted, native string Take a GET or HEAD request, and if it is for a dynamic large object manifest, return an appropriate response. Otherwise, simply pass it through. Middleware that will provide Static Large Object (SLO) support. This feature is very similar to Dynamic Large Object (DLO) support in that it allows the user to upload many objects concurrently and afterwards download them as a single object. It is different in that it does not rely on eventually consistent container listings to do so. Instead, a user defined manifest of the object segments is used. After the user has uploaded the objects to be concatenated, a manifest is uploaded. The request must be a PUT with the query parameter: ``` ?multipart-manifest=put ``` The body of this request will be an ordered list of segment descriptions in JSON format. The data to be supplied for each segment is either: | Key | Description | |:--|:--| | path | the path to the segment object (not including account) /container/object_name | | etag | (optional) the ETag given back when the segment object was PUT | | size_bytes | (optional) the size of the complete segment object in bytes | | range | (optional) the (inclusive) range within the object to use as a segment. If omitted, the entire object is used | Key Description path the path to the segment object (not including account) /container/object_name etag (optional) the ETag given back when the segment object was PUT size_bytes (optional) the size of the complete segment object in bytes range (optional) the (inclusive) range within the object to use as a segment. If omitted, the entire object is used Or: | Key | Description | |:|:--| | data | base64-encoded data to be returned | Key Description data base64-encoded data to be returned Note At least one object-backed segment must be included. If youd like to create a manifest consisting purely of data segments, consider uploading a normal object instead. The format of the list will be: ``` [{\"path\": \"/cont/object\", \"etag\": \"etagoftheobjectsegment\", \"size_bytes\": 10485760, \"range\": \"1048576-2097151\"}, {\"data\": base64.b64encode(\"interstitial data\")}, {\"path\": \"/cont/another-object\", ...}, ...] 
``` The number of object-backed segments is limited to maxmanifestsegments (configurable in proxy-server.conf, default 1000). Each segment must be at least 1" }, { "data": "On upload, the middleware will head every object-backed segment passed in to verify: the segment exists (i.e. the HEAD was successful); the segment meets minimum size requirements; if the user provided a non-null etag, the etag matches; if the user provided a non-null sizebytes, the sizebytes matches; and if the user provided a range, it is a singular, syntactically correct range that is satisfiable given the size of the object referenced. For inlined data segments, the middleware verifies each is valid, non-empty base64-encoded binary data. Note that data segments do not count against maxmanifestsegments. Note that the etag and size_bytes keys are optional; if omitted, the verification is not performed. If any of the objects fail to verify (not found, size/etag mismatch, below minimum size, invalid range) then the user will receive a 4xx error response. If everything does match, the user will receive a 2xx response and the SLO object is ready for downloading. Note that large manifests may take a long time to verify; historically, clients would need to use a long read timeout for the connection to give Swift enough time to send a final 201 Created or 400 Bad Request response. Now, clients should use the query parameters: ``` ?multipart-manifest=put&heartbeat=on ``` to request that Swift send an immediate 202 Accepted response and periodic whitespace to keep the connection alive. A final response code will appear in the body. The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. An example body is as follows: ``` Response Status: 201 Created Response Body: Etag: \"8f481cede6d2ddc07cb36aa084d9a64d\" Last Modified: Wed, 25 Oct 2017 17:08:55 GMT Errors: ``` Or, as a json response: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Etag\": \"\\\"8f481cede6d2ddc07cb36aa084d9a64d\\\"\", \"Last Modified\": \"Wed, 25 Oct 2017 17:08:55 GMT\", \"Errors\": []} ``` Behind the scenes, on success, a JSON manifest generated from the user input is sent to object servers with an extra X-Static-Large-Object: True header and a modified Content-Type. The items in this manifest will include the etag and size_bytes for each segment, regardless of whether the client specified them for verification. The parameter swiftbytes=$totalsize will be appended to the existing Content-Type, where $total_size is the sum of all the included segments size_bytes. This extra parameter will be hidden from the user. Manifest files can reference objects in separate containers, which will improve concurrent upload speed. Objects can be referenced by multiple manifests. The segments of a SLO manifest can even be other SLO manifests. Treat them as any other object i.e., use the Etag and Content-Length given on the PUT of the sub-SLO in the manifest to the parent SLO. While uploading a manifest, a user can send Etag for verification. It needs to be md5 of the segments etags, if there is no range specified. 
For example, if the manifest to be uploaded looks like this: ``` [{\"path\": \"/cont/object1\", \"etag\": \"etagoftheobjectsegment1\", \"size_bytes\": 10485760}, {\"path\": \"/cont/object2\", \"etag\": \"etagoftheobjectsegment2\", \"size_bytes\": 10485760}] ``` The Etag of the above manifest would be md5 of etagoftheobjectsegment1 and" }, { "data": "This could be computed in the following way: ``` echo -n 'etagoftheobjectsegment1etagoftheobjectsegment2' | md5sum ``` If a manifest to be uploaded with a segment range looks like this: ``` [{\"path\": \"/cont/object1\", \"etag\": \"etagoftheobjectsegmentone\", \"size_bytes\": 10485760, \"range\": \"1-2\"}, {\"path\": \"/cont/object2\", \"etag\": \"etagoftheobjectsegmenttwo\", \"size_bytes\": 10485760, \"range\": \"3-4\"}] ``` While computing the Etag of the above manifest, internally each segments etag will be taken in the form of etagvalue:rangevalue;. Hence the Etag of the above manifest would be: ``` echo -n 'etagoftheobjectsegmentone:1-2;etagoftheobjectsegmenttwo:3-4;' \\ | md5sum ``` For the purposes of Etag computations, inlined data segments are considered to have an etag of the md5 of the raw data (i.e., not base64-encoded). Users now have the ability to specify ranges for SLO segments. Users can include an optional range field in segment descriptions to specify which bytes from the underlying object should be used for the segment data. Only one range may be specified per segment. Note The etag and size_bytes fields still describe the backing object as a whole. If a user uploads this manifest: ``` [{\"path\": \"/con/objseg1\", \"size_bytes\": 2097152, \"range\": \"0-1048576\"}, {\"path\": \"/con/objseg2\", \"size_bytes\": 2097152, \"range\": \"512-1550000\"}, {\"path\": \"/con/objseg1\", \"size_bytes\": 2097152, \"range\": \"-2048\"}] ``` The segment will consist of the first 1048576 bytes of /con/objseg1, followed by bytes 513 through 1550000 (inclusive) of /con/objseg2, and finally bytes 2095104 through 2097152 (i.e., the last 2048 bytes) of /con/objseg1. Note The minimum sized range is 1 byte. This is the same as the minimum segment size. When uploading a manifest, users can include data segments that should be included along with objects. The data in these segments must be base64-encoded binary data and will be included in the etag of the resulting large object exactly as if that data had been uploaded and referenced as separate objects. Note This feature is primarily aimed at reducing the need for storing many tiny objects, and as such any supplied data must fit within the maximum manifest size (default is 8MiB). This maximum size can be configured via maxmanifestsize in proxy-server.conf. A GET request to the manifest object will return the concatenation of the objects from the manifest much like DLO. If any of the segments from the manifest are not found or their Etag/Content-Length have changed since upload, the connection will drop. In this case a 409 Conflict will be logged in the proxy logs and the user will receive incomplete results. Note that this will be enforced regardless of whether the user performed per-segment validation during upload. 
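Putting the PUT-side pieces above together, the following is a minimal end-to-end sketch using curl. $TOKEN, $STORAGE_URL, the container names, and the segment files are placeholders, and the optional etag and size_bytes keys are omitted for brevity (add them if you want Swift to verify each segment on upload).
```
# Upload the segment objects like any other objects.
curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @part1 "$STORAGE_URL/segments/big/part1"
curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @part2 "$STORAGE_URL/segments/big/part2"

# Build the ordered manifest; each entry may also carry etag and size_bytes
# for server-side verification, as described above.
cat > manifest.json <<'EOF'
[{"path": "/segments/big/part1"},
 {"path": "/segments/big/part2"}]
EOF

# PUT the manifest; heartbeat=on asks Swift for an immediate 202 Accepted plus
# periodic whitespace, with the final status reported in the response body.
curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @manifest.json \
  "$STORAGE_URL/container/big_object?multipart-manifest=put&heartbeat=on"

# A plain GET now returns the concatenated segments.
curl -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/container/big_object"
```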
The headers from this GET or HEAD request will return the metadata attached to the manifest object itself with some exceptions: | Header | Value | |:-|:| | Content-Length | the total size of the SLO (the sum of the sizes of the segments in the manifest) | | X-Static-Large-Object | the string True | | Etag | the etag of the SLO (generated the same way as DLO) | Header Value Content-Length the total size of the SLO (the sum of the sizes of the segments in the manifest) X-Static-Large-Object the string True Etag the etag of the SLO (generated the same way as DLO) A GET request with the query parameter: ``` ?multipart-manifest=get ``` will return a transformed version of the original manifest, containing additional fields and different key names. For example, the first manifest in the example above would look like this: ``` [{\"name\": \"/cont/object\", \"hash\": \"etagoftheobjectsegment\", \"bytes\": 10485760, \"range\": \"1048576-2097151\"}, ...] ``` As you can see, some of the fields are renamed compared to the put request: path is name, etag is hash, size_bytes is bytes. The range field remains the same (if" }, { "data": "A GET request with the query parameters: ``` ?multipart-manifest=get&format=raw ``` will return the contents of the original manifest as it was sent by the client. The main purpose for both calls is solely debugging. A GET request to a manifest object with the query parameter: ``` ?part-number=<n> ``` will return the contents of the nth segment. Segments are indexed from 1, so n must be an integer between 1 and the total number of segments in the manifest. The response status will be 206 Partial Content and its headers will include: an X-Parts-Count header equal to the total number of segments; a Content-Length header equal to the length of the specified segment; a Content-Range header describing the byte range of the specified part within the SLO. A HEAD request with a part-number parameter will also return a response with status 206 Partial Content and the same headers. Note When the manifest object is uploaded you are more or less guaranteed that every segment in the manifest exists and matched the specifications. However, there is nothing that prevents the user from breaking the SLO download by deleting/replacing a segment referenced in the manifest. It is left to the user to use caution in handling the segments. A DELETE request will just delete the manifest object itself. The segment data referenced by the manifest will remain unchanged. A DELETE with a query parameter: ``` ?multipart-manifest=delete ``` will delete all the segments referenced in the manifest and then the manifest itself. The failure response will be similar to the bulk delete middleware. A DELETE with the query parameters: ``` ?multipart-manifest=delete&async=yes ``` will schedule all the segments referenced in the manifest to be deleted asynchronously and then delete the manifest itself. Note that segments will continue to appear in listings and be counted for quotas until they are cleaned up by the object-expirer. This option is only available when all segments are in the same container and none of them are nested SLOs. PUT and POST requests will work as expected; PUTs will just overwrite the manifest object for example. In a container listing the size listed for SLO manifest objects will be the total_size of the concatenated segments in the manifest. 
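To recap the retrieval and deletion query parameters described above, a few illustrative requests against the same hypothetical big_object manifest ($TOKEN and $STORAGE_URL are placeholders):
```
# Return the transformed manifest (name/hash/bytes keys) instead of the object.
curl -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/container/big_object?multipart-manifest=get"

# Return the manifest exactly as the client originally sent it.
curl -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/container/big_object?multipart-manifest=get&format=raw"

# Fetch only the second segment; expect 206 Partial Content with X-Parts-Count,
# Content-Length and Content-Range headers as described above.
curl -i -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/container/big_object?part-number=2"

# Delete only the manifest, leaving the segment data in place.
curl -X DELETE -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/container/big_object"

# Delete the referenced segments and then the manifest itself.
curl -X DELETE -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/container/big_object?multipart-manifest=delete"
```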
The overall X-Container-Bytes-Used for the container (and subsequently for the account) will not reflect total_size of the manifest but the actual size of the JSON data stored. The reason for this somewhat confusing discrepancy is we want the container listing to reflect the size of the manifest object when it is downloaded. We do not, however, want to count the bytes-used twice (for both the manifest and the segments its referring to) in the container and account metadata which can be used for stats and billing purposes. Bases: object Encapsulate properties of a GET or HEAD response that are pertinent to handling a potential SLO response. Instances of this class are typically constructed using the from_headers method. is_slo True if the response appears to be an SLO manifest, False otherwise. timestamp an instance of Timestamp. manifest_etag the Etag of the manifest object, or None if is_slo is False. slo_etag the Etag of the SLO. slo_size the size of the SLO. Inspect response headers and extract any resp_attrs we can" }, { "data": "response_headers list of tuples from a object response an instance of RespAttrs to represent the response headers Always called if SLO has fetched the manifest response body, for legacy manifests well calculate size/etag values we wouldnt have gotten from sys-meta headers. Bases: WSGIContext Converts the manifest data to match with the format that was put in through ?multipart-manifest=put resp_iter a response iterable HTTPServerError the json-serialized raw format (as bytes) Takes a request and a start_response callable and does the normal WSGI thing with them. Returns an iterator suitable for sending up the WSGI chain. req Request object; is a GET or HEAD request aimed at what may (or may not) be a static large object manifest. startresponse WSGI startresponse callable Bases: object StaticLargeObject Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to SLO. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. maxmanifestsegments The maximum number of segments allowed in newly-created static large objects. maxmanifestsize The maximum size (in bytes) of newly-created static-large-object manifests. yield_frequency If the client included heartbeat=on in the query parameters when creating a new static large object, the period of time to wait between sending whitespace to keep the connection alive. A generator function to be used to delete all the segments and sub-segments referenced in a manifest. req a Request with an SLO manifest in path HTTPPreconditionFailed on invalid UTF8 in request path HTTPBadRequest on too many buffered sub segments and on invalid SLO manifest path Performs a Request and returns the SLO manifests segments. obj_name the name of the object being deleted, as /container/object req the base Request HTTPServerError on unable to load obj_name or on unable to load the SLO manifest data. HTTPBadRequest on not an SLO manifest HTTPNotFound on SLO manifest not found SLO manifests segments Will delete all the segments in the SLO manifest and then, if successful, will delete the manifest file. req a Request with an obj in path swob.Response whose appiter set to Bulk.handledelete_iter Handles the GET or HEAD of a SLO manifest. The response body (only on GET, of course) will consist of the concatenation of the segments. 
req a Request with a path referencing an object startresponse WSGI startresponse callable HttpException on errors Will handle the PUT of a SLO manifest. Heads every object in manifest to check if is valid and if so will save a manifest generated from the user input. Uses WSGIContext to call self and start_response and returns a WSGI iterator. req a Request with an obj in path startresponse WSGI startresponse callable HttpException on errors Helper function to calculate the byterange for a part_num response. N.B. as a side-effect of calculating the single tuple representing the byterange required for a part_num response this function will also mutate the requests Range header so that swob knows to return 206. req the request object segments the list of seg_dicts part_num the part number of the object to return a tuple representing the byterange Calculate the byteranges based on the request, segments, and part number. N.B. as a side-effect of calculating the single tuple representing the byterange required for a part_num response this function will also mutate the requests Range header so that swob knows to return" }, { "data": "req the request object segments the list of seg_dicts resp_attrs the slo response attributes part_num the part number of the object to return a list of tuples representing byteranges Given a request body, parses it and returns a list of dictionaries. The output structure is nearly the same as the input structure, but it is not an exact copy. Given a valid object-backed input dictionary din, its corresponding output dictionary dout will be as follows: dout[etag] == din[etag] dout[path] == din[path] din[sizebytes] can be a string (12) or an integer (12), but dout[sizebytes] is an integer. (optional) d_in[range] is a string of the form M-N, M-, or -N, where M and N are non-negative integers. d_out[range] is the corresponding swob.Range object. If d_in does not have a key range, neither will d_out. Inlined data dictionaries will have any extraneous padding stripped. HTTPException on parse errors or semantic errors (e.g. bogus JSON structure, syntactically invalid ranges) a list of dictionaries on success SLO support centers around the user generated manifest file. After the user has uploaded the segments into their account a manifest file needs to be built and uploaded. All object segments, must be at least 1 byte in size. Please see the SLO docs for Static Large Objects further details. With a GET or HEAD of a manifest file, the X-Object-Manifest: <container>/<prefix> header will be returned with the concatenated object so you can tell where its getting its segments from. When updating a manifest object using a POST request, a X-Object-Manifest header must be included for the object to continue to behave as a manifest object. The responses Content-Length for a GET or HEAD on the manifest file will be the sum of all the segments in the <container>/<prefix> listing, dynamically. So, uploading additional segments after the manifest is created will cause the concatenated object to be that much larger; theres no need to recreate the manifest file. The responses Content-Type for a GET or HEAD on the manifest will be the same as the Content-Type set during the PUT request that created the manifest. You can easily change the Content-Type by reissuing the PUT. The responses ETag for a GET or HEAD on the manifest file will be the MD5 sum of the concatenated string of ETags for each of the segments in the manifest (for DLO, from the listing <container>/<prefix>). 
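The DLO ETag rule just described can be checked by hand. A sketch, assuming the test_container/large_file example from earlier and that jq is available; the hash field in a JSON container listing holds each segment's ETag, and listing order is the concatenation order:
```
# md5 of the concatenated segment ETags, taken in listing order.
curl -s -H "X-Auth-Token: $TOKEN" \
  "$STORAGE_URL/test_container_segments?prefix=large_file/&format=json" \
  | jq -r '.[].hash' | tr -d '\n' | md5sum

# Compare with the ETag Swift reports for the manifest object itself.
curl -sI -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/test_container/large_file" | grep -i '^etag'
```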
Usually in Swift the ETag is the MD5 sum of the contents of the object, and that holds true for each segment independently. But its not meaningful to generate such an ETag for the manifest itself so this method was chosen to at least offer change detection. Note If you are using the container sync feature you will need to ensure both your manifest file and your segment files are synced if they happen to be in different containers. Dynamic large object support has gone through various iterations before settling on this implementation. The primary factor driving the limitation of object size in Swift is maintaining balance among the partitions of the ring. To maintain an even dispersion of disk usage throughout the cluster the obvious storage pattern was to simply split larger objects into smaller segments, which could then be glued together during a read. Before the introduction of large object support some applications were already splitting their uploads into segments and re-assembling them on the client side after retrieving the individual" }, { "data": "This design allowed the client to support backup and archiving of large data sets, but was also frequently employed to improve performance or reduce errors due to network interruption. The major disadvantage of this method is that knowledge of the original partitioning scheme is required to properly reassemble the object, which is not practical for some use cases, such as CDN origination. In order to eliminate any barrier to entry for clients wanting to store objects larger than 5GB, initially we also prototyped fully transparent support for large object uploads. A fully transparent implementation would support a larger max size by automatically splitting objects into segments during upload within the proxy without any changes to the client API. All segments were completely hidden from the client API. This solution introduced a number of challenging failure conditions into the cluster, wouldnt provide the client with any option to do parallel uploads, and had no basis for a resume feature. The transparent implementation was deemed just too complex for the benefit. The current user manifest design was chosen in order to provide a transparent download of large objects to the client and still provide the uploading client a clean API to support segmented uploads. To meet an many use cases as possible Swift supports two types of large object manifests. Dynamic and static large object manifests both support the same idea of allowing the user to upload many segments to be later downloaded as a single file. Dynamic large objects rely on a container listing to provide the manifest. This has the advantage of allowing the user to add/removes segments from the manifest at any time. It has the disadvantage of relying on eventually consistent container listings. All three copies of the container dbs must be updated for a complete list to be guaranteed. Also, all segments must be in a single container, which can limit concurrent upload speed. Static large objects rely on a user provided manifest file. A user can upload objects into multiple containers and then reference those objects (segments) in a self generated manifest file. Future GETs to that file will download the concatenation of the specified segments. This has the advantage of being able to immediately download the complete object once the manifest has been successfully PUT. Being able to upload segments into separate containers also improves concurrent upload speed. 
It has the disadvantage that the manifest is finalized once PUT. Any changes to it means it has to be replaced. Between these two methods the user has great flexibility in how (s)he chooses to upload and retrieve large objects to Swift. Swift does not, however, stop the user from harming themselves. In both cases the segments are deletable by the user at any time. If a segment was deleted by mistake, a dynamic large object, having no way of knowing it was ever there, would happily ignore the deleted file and the user will get an incomplete file. A static large object would, when failing to retrieve the object specified in the manifest, drop the connection and the user would receive partial results. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#contacting-the-core-team.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/firstcontributionswift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM(SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.testhealthcheck:TestHealthCheck.testhealthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the in their timezone. 
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spend time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or for more specifically in a Swift context read Review Guidelines You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help getting your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#getting-your-patch-merged.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/firstcontributionswift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM(SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.testhealthcheck:TestHealthCheck.testhealthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the in their timezone. 
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spend time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or for more specifically in a Swift context read Review Guidelines You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help getting your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#getting-started.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Theres a lot of good material out there on Erasure Code (EC) theory, this short introduction is just meant to provide some basic context to help the reader better understand the implementation in Swift. Erasure Coding for storage applications grew out of Coding Theory as far back as the 1960s with the Reed-Solomon codes. These codes have been used for years in applications ranging from CDs to DVDs to general communications and, yes, even in the space program starting with Voyager! The basic idea is that some amount of data is broken up into smaller pieces called fragments and coded in such a way that it can be transmitted with the ability to tolerate the loss of some number of the coded fragments. Thats where the word erasure comes in, if you transmit 14 fragments and only 13 are received then one of them is said to be erased. The word erasure provides an important distinction with EC; it isnt about detecting errors, its about dealing with failures. Another important element of EC is that the number of erasures that can be tolerated can be adjusted to meet the needs of the application. At a high level EC works by using a specific scheme to break up a single data buffer into several smaller data buffers then, depending on the scheme, performing some encoding operation on that data in order to generate additional information. So you end up with more data than you started with and that extra data is often called parity. Note that there are many, many different encoding techniques that vary both in how they organize and manipulate the data as well by what means they use to calculate parity. For example, one scheme might rely on Galois Field Arithmetic while others may work with only XOR. The number of variations and details about their differences are well beyond the scope of this introduction, but we will talk more about a few of them when we get into the implementation of EC in Swift. First and foremost, from an application perspective EC support is totally transparent. There are no EC related external API; a container is simply created using a Storage Policy defined to use EC and then interaction with the cluster is the same as any other durability policy. EC is implemented in Swift as a Storage Policy, see Storage Policies for complete details on Storage Policies. Because support is implemented as a Storage Policy, all of the storage devices associated with your clusters EC capability can be isolated. It is entirely possible to share devices between storage policies, but for EC it may make more sense to not only use separate devices but possibly even entire nodes dedicated for EC. Which direction one chooses depends on why the EC policy is being deployed. If, for example, there is a production replication policy in place already and the goal is to add a cold storage tier such that the existing nodes performing replication are impacted as little as possible, adding a new set of nodes dedicated to EC might make the most sense but also incurs the most" }, { "data": "On the other hand, if EC is being added as a capability to provide additional durability for a specific set of applications and the existing infrastructure is well suited for EC (sufficient number of nodes, zones for the EC scheme that is chosen) then leveraging the existing infrastructure such that the EC ring shares nodes with the replication ring makes the most sense. These are some of the main considerations: Layout of existing infrastructure. Cost of adding dedicated EC nodes (or just dedicated EC devices). 
Intended usage model(s). The Swift code base does not include any of the algorithms necessary to perform the actual encoding and decoding of data; that is left to external libraries. The Storage Policies architecture is leveraged to enable EC on a per container basis the object rings are still used to determine the placement of EC data fragments. Although there are several code paths that are unique to an operation associated with an EC policy, an external dependency to an Erasure Code library is what Swift counts on to perform the low level EC functions. The use of an external library allows for maximum flexibility as there are a significant number of options out there, each with its owns pros and cons that can vary greatly from one use case to another. PyECLib is a Python Erasure Coding Library originally designed and written as part of the effort to add EC support to the Swift project, however it is an independent project. The library provides a well-defined and simple Python interface and internally implements a plug-in architecture allowing it to take advantage of many well-known C libraries such as: Jerasure and GFComplete at http://jerasure.org. Intel(R) ISA-L at http://01.org/intel%C2%AE-storage-acceleration-library-open-source-version. Or write your own! PyECLib uses a C based library called liberasurecode to implement the plug in infrastructure; liberasurecode is available at: liberasurecode: https://github.com/openstack/liberasurecode PyECLib itself therefore allows for not only choice but further extensibility as well. PyECLib also comes with a handy utility to help determine the best algorithm to use based on the equipment that will be used (processors and server configurations may vary in performance per algorithm). More on this will be covered in the configuration section. PyECLib is included as a Swift requirement. For complete details see PyECLib We will discuss the details of how PUT and GET work in the Under the Hood section later on. The key point here is that all of the erasure code work goes on behind the scenes; this summary is a high level information overview only. The PUT flow looks like this: The proxy server streams in an object and buffers up a segment of data (size is configurable). The proxy server calls on PyECLib to encode the data into smaller fragments. The proxy streams the encoded fragments out to the storage nodes based on ring locations. Repeat until the client is done sending data. The client is notified of completion when a quorum is met. The GET flow looks like this: The proxy server makes simultaneous requests to participating nodes. As soon as the proxy has the fragments it needs, it calls on PyECLib to decode the data. The proxy streams the decoded data it has back to the client. Repeat until the proxy is done sending data back to the client. It may sound like, from this high level overview, that using EC is going to cause an explosion in the number of actual files stored in each nodes local file" }, { "data": "Although it is true that more files will be stored (because an object is broken into pieces), the implementation works to minimize this where possible, more details are available in the Under the Hood section. In EC policies, similarly to replication, handoff nodes are a set of storage nodes used to augment the list of primary nodes responsible for storing an erasure coded object. These handoff nodes are used in the event that one or more of the primaries are unavailable. 
Handoff nodes are still selected with an attempt to achieve maximum separation of the data being placed. For an EC policy, reconstruction is analogous to the process of replication for a replication type policy; essentially, the reconstructor replaces the replicator for EC policy types. The basic framework of reconstruction is very similar to that of replication with a few notable exceptions: Because EC does not actually replicate partitions, it needs to operate at a finer granularity than what is provided with rsync; therefore EC leverages much of ssync behind the scenes (you do not need to manually configure ssync). Once a pair of nodes has determined the need to replace a missing object fragment, instead of pushing over a copy like replication would do, the reconstructor has to read in enough surviving fragments from other nodes and perform a local reconstruction before it has the correct data to push to the other node. A reconstructor does not talk to all other reconstructors in the set of nodes responsible for an EC partition; this would be far too chatty. Instead, each reconstructor is responsible for syncing with the partition's closest two neighbors (closest meaning left and right on the ring). Note EC work (encode and decode) takes place both on the proxy nodes, for PUT/GET operations, and on the storage nodes for reconstruction. As with replication, reconstruction can be the result of rebalancing, bit-rot, drive failure or reverting data from a hand-off node back to its primary. In general, EC has different performance characteristics than replicated data. EC requires substantially more CPU to read and write data, and is more suited for larger objects that are not frequently accessed (e.g. backups). Operators are encouraged to characterize the performance of various EC schemes and share their observations with the developer community. To use an EC policy, the administrator simply needs to define an EC policy in swift.conf and create/configure the associated object ring. An example of how an EC policy can be set up is shown below: ``` [storage-policy:2] name = ec104 policy_type = erasure_coding ec_type = liberasurecode_rs_vand ec_num_data_fragments = 10 ec_num_parity_fragments = 4 ec_object_segment_size = 1048576 ``` Let's take a closer look at each configuration parameter: name: This is a standard storage policy parameter. See Storage Policies for details. policy_type: Set this to erasure_coding to indicate that this is an EC policy. ec_type: Set this value according to the available options in the selected PyECLib back-end. This specifies the EC scheme that is to be used. For example the option shown here selects Vandermonde Reed-Solomon encoding while an option of flat_xor_hd_3 would select Flat-XOR based HD combination codes. See the PyECLib page for full details. ec_num_data_fragments: The total number of fragments that will be comprised of data. ec_num_parity_fragments: The total number of fragments that will be comprised of" }, { "data": "parity. ec_object_segment_size: The amount of data that will be buffered up before feeding a segment into the encoder/decoder. The default value is 1048576. When PyECLib encodes an object, it will break it into N fragments. However, what is important during configuration is how many of those are data and how many are parity. So in the example above, PyECLib will actually break an object into 14 different fragments, 10 of them made up of actual object data and 4 of them made of parity data (calculations depending on ec_type). 
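Once the policy above is defined and its object ring has been built (ring creation is covered next), using the policy from the client side is transparent. A minimal sketch with placeholder credentials, container, and object names; the policy is selected once, at container creation time, via the X-Storage-Policy header:
```
# Create a container that uses the EC policy defined above.
curl -X PUT -H "X-Auth-Token: $TOKEN" -H "X-Storage-Policy: ec104" \
  "$STORAGE_URL/ec_archive"

# Upload and download as usual; encoding and decoding happen in the proxy.
swift upload ec_archive backup.tar.gz
swift download ec_archive backup.tar.gz
```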
When deciding which devices to use in the EC policys object ring, be sure to carefully consider the performance impacts. Running some performance benchmarking in a test environment for your configuration is highly recommended before deployment. To create the EC policys object ring, the only difference in the usage of the swift-ring-builder create command is the replicas parameter. The replicas value is the number of fragments spread across the object servers associated with the ring; replicas must be equal to the sum of ecnumdatafragments and ecnumparityfragments. For example: ``` swift-ring-builder object-1.builder create 10 14 1 ``` Note that in this example the replicas value of 14 is based on the sum of 10 EC data fragments and 4 EC parity fragments. Once you have configured your EC policy in swift.conf and created your object ring, your application is ready to start using EC simply by creating a container with the specified policy name and interacting as usual. Note Its important to note that once you have deployed a policy and have created objects with that policy, these configurations options cannot be changed. In case a change in the configuration is desired, you must create a new policy and migrate the data to a new container. Warning Using isalrs_vand with more than 4 parity fragments creates fragments which may in some circumstances fail to reconstruct properly or (with liberasurecode < 1.3.1) reconstruct corrupted data. New policies that need large numbers of parity fragments should consider using isalrs_cauchy. Any existing affected policies must be marked deprecated, and data in containers with that policy should be migrated to a new policy. A common usage of EC is to migrate less commonly accessed data from a more expensive but lower latency policy such as replication. When an application determines that it wants to move data from a replication policy to an EC policy, it simply needs to move the data from the replicated container to an EC container that was created with the target durability policy. The following recommendations are made when deploying an EC policy that spans multiple regions in a Global Cluster: The global EC policy should use EC Duplication in conjunction with a Composite Ring, as described below. Proxy servers should be configured to use read affinity to prefer reading from their local region for the global EC policy. Per policy configuration allows this to be configured for individual policies. Note Before deploying a Global EC policy, consideration should be given to the Known Issues, in particular the relatively poor performance anticipated from the object-reconstructor. EC Duplication enables Swift to make duplicated copies of fragments of erasure coded" }, { "data": "If an EC storage policy is configured with a non-default ecduplicationfactor of N > 1, then the policy will create N duplicates of each unique fragment that is returned from the configured EC engine. Duplication of EC fragments is optimal for Global EC storage policies, which require dispersion of fragment data across failure domains. Without fragment duplication, common EC parameters will not distribute enough unique fragments between large failure domains to allow for a rebuild using fragments from any one domain. For example a uniformly distributed 10+4 EC policy schema would place 7 fragments in each of two failure domains, which is less in each failure domain than the 10 fragments needed to rebuild a missing fragment. 
Without fragment duplication, an EC policy schema must be adjusted to include additional parity fragments in order to guarantee the number of fragments in each failure domain is greater than the number required to rebuild. For example, a uniformly distributed 10+18 EC policy schema would place 14 fragments in each of two failure domains, which is more than sufficient in each failure domain to rebuild a missing fragment. However, empirical testing has shown encoding a schema with numparity > numdata (such as 10+18) is less efficient than using duplication of fragments. EC fragment duplication enables Swifts Global EC to maintain more independence between failure domains without sacrificing efficiency on read/write or rebuild! The ecduplicationfactor option may be configured in swift.conf in each storage-policy section. The option may be omitted - the default value is 1 (i.e. no duplication): ``` [storage-policy:2] name = ec104 policytype = erasurecoding ectype = liberasurecoders_vand ecnumdata_fragments = 10 ecnumparity_fragments = 4 ecobjectsegment_size = 1048576 ecduplicationfactor = 2 ``` Warning EC duplication is intended for use with Global EC policies. To ensure independent availability of data in all regions, the ecduplicationfactor option should only be used in conjunction with Composite Rings, as described in this document. In this example, a 10+4 schema and a duplication factor of 2 will result in (10+4)x2 = 28 fragments being stored (we will use the shorthand 10+4x2 to denote that policy configuration) . The ring for this policy should be configured with 28 replicas (i.e. (ecnumdata_fragments + ecnumparityfragments) * ecduplication_factor). A 10+4x2 schema can allow a multi-region deployment to rebuild an object to full durability even when more than 14 fragments are unavailable. This is advantageous with respect to a 10+18 configuration not only because reads from data fragments will be more common and more efficient, but also because a 10+4x2 can grow into a 10+4x3 to expand into another region. It is recommended that EC Duplication is used with Composite Rings in order to disperse duplicate fragments across regions. When EC duplication is used, it is highly desirable to have one duplicate of each fragment placed in each region. This ensures that a set of ecnumdata_fragments unique fragments (the minimum needed to reconstruct an object) can always be assembled from a single region. This in turn means that objects are robust in the event of an entire region becoming unavailable. This can be achieved by using a composite ring with the following properties: The number of component rings in the composite ring is equal to the ecduplicationfactor for the policy. Each component ring has a number of replicas that is equal to the sum of ecnumdatafragments and ecnumparityfragments. Each component ring is populated with devices in a unique" }, { "data": "This arrangement results in each component ring in the composite ring, and therefore each region, having one copy of each fragment. For example, consider a Swift cluster with two regions, region1 and region2 and a 4+2x2 EC policy schema. This policy should use a composite ring with two component rings, ring1 and ring2, having devices exclusively in regions region1 and region2 respectively. Each component ring should have replicas = 6. As a result, the first 6 fragments for an object will always be placed in ring1 (i.e. in region1) and the second 6 duplicate fragments will always be placed in ring2 (i.e. in region2). 
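The sizing rules just described can be summarized in a small illustrative helper (the function is hypothetical, not a Swift API): one component ring per duplicate, each carrying a full set of data plus parity fragments, so each region can always assemble enough unique fragments on its own.

```
# Illustrative sizing for an EC Duplication policy deployed on a composite ring.
def composite_ring_layout(num_data, num_parity, duplication_factor):
    per_component = num_data + num_parity          # one full fragment set per region
    return {
        'component_rings': duplication_factor,     # one component ring per region
        'replicas_per_component_ring': per_component,
        'total_ring_replicas': per_component * duplication_factor,
    }

print(composite_ring_layout(10, 4, 2))   # 10+4x2 -> 2 rings of 14 replicas, 28 total
print(composite_ring_layout(4, 2, 2))    # 4+2x2  -> 2 rings of 6 replicas, 12 total

# With this arrangement each region holds one copy of every fragment index,
# so ec_num_data_fragments unique fragments can always be found locally.
region1_indexes = set(range(4 + 2))      # 4+2x2 example: indexes 0..5 in region1
print(len(region1_indexes) >= 4)         # True
```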
Conversely, a conventional ring spanning the two regions may give a suboptimal distribution of duplicates across the regions; it is possible for duplicates of the same fragment to be placed in the same region, and consequently for another region to have no copies of that fragment. This may make it impossible to assemble a set of ecnumdata_fragments unique fragments from a single region. For example, the conventional ring could have a pathologically sub-optimal placement such as: ``` r1 <timestamp>#0#d.data <timestamp>#0#d.data <timestamp>#2#d.data <timestamp>#2#d.data <timestamp>#4#d.data <timestamp>#4#d.data r2 <timestamp>#1#d.data <timestamp>#1#d.data <timestamp>#3#d.data <timestamp>#3#d.data <timestamp>#5#d.data <timestamp>#5#d.data ``` In this case, the object cannot be reconstructed from a single region; region1 has only the fragments with index 0, 2, 4 and region2 has the other 3 indexes, but we need 4 unique indexes to be able to rebuild an object. Proxy servers require a set of unique fragment indexes to decode the original object when handling a GET request to an EC policy. With a conventional EC policy, this is very likely to be the outcome of reading fragments from a random selection of backend nodes. With an EC Duplication policy it is significantly more likely that responses from a random selection of backend nodes might include some duplicated fragments. For this reason it is strongly recommended that EC Duplication always be deployed in combination with Composite Rings and proxy server read affinity. Under normal conditions with the recommended deployment, read affinity will cause a proxy server to first attempt to read fragments from nodes in its local region. These fragments are guaranteed to be unique with respect to each other. Even if there are a small number of local failures, unique local parity fragments will make up the difference. However, should enough local primary storage nodes fail, such that sufficient unique fragments are not available in the local region, a global EC cluster will proceed to read fragments from the other region(s). Random reads from the remote region are not guaranteed to return unique fragments; with EC Duplication there is a significantly high probability that the proxy server will encounter a fragment that is a duplicate of one it has already found in the local region. The proxy server will ignore these and make additional requests until it accumulates the required set of unique fragments, potentially searching all the primary and handoff locations in the local and remote regions before ultimately failing the read. A global EC deployment configured as recommended is therefore extremely resilient. However, under extreme failure conditions read handling can be inefficient because nodes in other regions are guaranteed to have some fragments which are duplicates of those the proxy server has already" }, { "data": "Work is in progress to improve the proxy server node selection strategy such that when it is necessary to read from other regions, nodes that are likely to have useful fragments are preferred over those that are likely to return a duplicate. Work is also in progress to improve the object-reconstructor efficiency for Global EC policies. Unlike the proxy server, the reconstructor does not apply any read affinity settings when gathering fragments. It is therefore likely to receive duplicated fragments (i.e. make wasted backend GET requests) while performing every fragment reconstruction. 
Additionally, other reconstructor optimisations for Global EC are under investigation: Since fragments are duplicated between regions it may in some cases be more attractive to restore failed fragments from their duplicates in another region instead of rebuilding them from other fragments in the local region. Conversely, to avoid WAN transfer it may be more attractive to rebuild fragments from local parity. During rebalance it will always be more attractive to revert a fragment from its old-primary to its new primary rather than rebuilding or transferring a duplicate from the remote region. Now that weve explained a little about EC support in Swift and how to configure and use it, lets explore how EC fits in at the nuts-n-bolts level. The term fragment has been used already to describe the output of the EC process (a series of fragments) however we need to define some other key terms here before going any deeper. Without paying special attention to using the correct terms consistently, it is very easy to get confused in a hurry! chunk: HTTP chunks received over wire (term not used to describe any EC specific operation). segment: Not to be confused with SLO/DLO use of the word, in EC we call a segment a series of consecutive HTTP chunks buffered up before performing an EC operation. fragment: Data and parity fragments are generated when erasure coding transformation is applied to a segment. EC archive: A concatenation of EC fragments; to a storage node this looks like an object. ec_ndata: Number of EC data fragments. ec_nparity: Number of EC parity fragments. Middleware remains unchanged. For most middleware (e.g., SLO/DLO) the fact that the proxy is fragmenting incoming objects is transparent. For list endpoints, however, it is a bit different. A caller of list endpoints will get back the locations of all of the fragments. The caller will be unable to re-assemble the original object with this information, however the node locations may still prove to be useful information for some applications. EC archives are stored on disk in their respective objects-N directory based on their policy index. See Storage Policies for details on per policy directory information. In addition to the object timestamp, the filenames of EC archives encode other information related to the archive: The fragment archive index. This is required for a few reasons. For one, it allows us to store fragment archives of different indexes on the same storage node which is not typical however it is possible in many circumstances. Without unique filenames for the different EC archive files in a set, we would be at risk of overwriting one archive of index n with another of index m in some scenarios. The index is appended to the filename just before the .data extension. For example, the filename for a fragment archive storing the 5th fragment would be: ``` 1418673556.92690#5.data ``` The durable state of the" }, { "data": "The meaning of this will be described in more detail later, but a fragment archive that is considered durable has an additional #d string included in its filename immediately before the .data extension. For example: ``` 1418673556.92690#5#d.data ``` A policy-specific transformation function is therefore used to build the archive filename. These functions are implemented in the diskfile module as methods of policy specific sub classes of BaseDiskFileManager. The transformation function for the replication policy is simply a NOP. 
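As a simplified illustration of this naming convention (a sketch only, not the actual DiskFileManager code), an EC archive filename can be split back into its timestamp, fragment index and durable flag:

```
# Illustrative parser for '<timestamp>#<frag_index>[#d].data' archive names.
def parse_ec_archive_name(filename):
    stem = filename[:-len('.data')]
    parts = stem.split('#')
    timestamp = parts[0]
    frag_index = int(parts[1])
    durable = len(parts) > 2 and parts[2] == 'd'
    return timestamp, frag_index, durable

print(parse_ec_archive_name('1418673556.92690#5.data'))
# ('1418673556.92690', 5, False)
print(parse_ec_archive_name('1418673556.92690#5#d.data'))
# ('1418673556.92690', 5, True)
```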
Note In older versions the durable state of an archive was represented by an additional file called the .durable file instead of the #d substring in the .data filename. The .durable for the example above would be: ``` 1418673556.92690.durable ``` The Proxy Server handles Erasure Coding in a different manner than replication, therefore there are several code paths unique to EC policies either though sub classing or simple conditionals. Taking a closer look at the PUT and the GET paths will help make this clearer. But first, a high level overview of how an object flows through the system: Note how: Incoming objects are buffered into segments at the proxy. Segments are erasure coded into fragments at the proxy. The proxy stripes fragments across participating nodes such that the on-disk stored files that we call a fragment archive is appended with each new fragment. This scheme makes it possible to minimize the number of on-disk files given our segmenting and fragmenting. Multi-part MIME document support is used to allow the proxy to engage in a handshake conversation with the storage node for processing PUT requests. This is required for a few different reasons. From the perspective of the storage node, a fragment archive is really just another object, we need a mechanism to send down the original object etag after all fragment archives have landed. Without introducing strong consistency semantics, the proxy needs a mechanism to know when a quorum of fragment archives have actually made it to disk before it can inform the client of a successful PUT. MIME supports a conversation between the proxy and the storage nodes for every PUT. This provides us with the ability to handle a PUT in one connection and assure that we have the essence of a 2 phase commit, basically having the proxy communicate back to the storage nodes once it has confirmation that a quorum of fragment archives in the set have been written. For the first phase of the conversation the proxy requires a quorum of ec_ndata + 1 fragment archives to be successfully put to storage nodes. This ensures that the object could still be reconstructed even if one of the fragment archives becomes unavailable. As described above, each fragment archive file is named: ``` <ts>#<frag_index>.data ``` where ts is the timestamp and frag_index is the fragment archive index. During the second phase of the conversation the proxy communicates a confirmation to storage nodes that the fragment archive quorum has been achieved. This causes each storage node to rename the fragment archive written in the first phase of the conversation to include the substring #d in its name: ``` <ts>#<frag_index>#d.data ``` This indicates to the object server that this fragment archive is durable and that there is a set of data files that are durable at timestamp" }, { "data": "For the second phase of the conversation the proxy requires a quorum of ec_ndata + 1 successful commits on storage nodes. This ensures that there are sufficient committed fragment archives for the object to be reconstructed even if one becomes unavailable. The reconstructor ensures that the durable state is replicated on storage nodes where it may be missing. Note that the completion of the commit phase of the conversation is also a signal for the object server to go ahead and immediately delete older timestamp files for this object. 
This is critical as we do not want to delete the older object until the storage node has confirmation from the proxy, via the multi-phase conversation, that the other nodes have landed enough for a quorum. The basic flow looks like this: The Proxy Server erasure codes and streams the object fragments (ecndata + ecnparity) to the storage nodes. The storage nodes store objects as EC archives and upon finishing object data/metadata write, send a 1st-phase response to proxy. Upon quorum of storage nodes responses, the proxy initiates 2nd-phase by sending commit confirmations to object servers. Upon receipt of commit message, object servers rename .data files to include the #d substring, indicating successful PUT, and send a final response to the proxy server. The proxy waits for ec_ndata + 1 object servers to respond with a success (2xx) status before responding to the client with a successful status. Here is a high level example of what the conversation looks like: ``` proxy: PUT /p/a/c/o Transfer-Encoding': 'chunked' Expect': '100-continue' X-Backend-Obj-Multiphase-Commit: yes obj: 100 Continue X-Obj-Multiphase-Commit: yes proxy: --MIMEboundary X-Document: object body <obj_data> --MIMEboundary X-Document: object metadata Content-MD5: <footermetacksum> <footer_meta> --MIMEboundary <object server writes data, metadata to <ts>#<frag_index>.data file> obj: 100 Continue <quorum> proxy: X-Document: put commit commit_confirmation --MIMEboundary-- <object server renames <ts>#<fragindex>.data to <ts>#<fragindex>#d.data> obj: 20x <proxy waits to receive >=2 2xx responses> proxy: 2xx -> client ``` A few key points on the durable state of a fragment archive: A durable fragment archive means that there exist sufficient other fragment archives elsewhere in the cluster (durable and/or non-durable) to reconstruct the object. When a proxy does a GET, it will require at least one object server to respond with a fragment archive is durable before reconstructing and returning the object to the client. A partial PUT failure has a few different modes. In one scenario the Proxy Server is alive through the entire PUT conversation. This is a very straightforward case. The client will receive a good response if and only if a quorum of fragment archives were successfully landed on their storage nodes. In this case the Reconstructor will discover the missing fragment archives, perform a reconstruction and deliver those fragment archives to their nodes. The more interesting case is what happens if the proxy dies in the middle of a conversation. If it turns out that a quorum had been met and the commit phase of the conversation finished, its as simple as the previous case in that the reconstructor will repair things. However, if the commit didnt get a chance to happen then some number of the storage nodes have .data files on them (fragment archives) but none of them knows whether there are enough elsewhere for the entire object to be" }, { "data": "In this case the client will not have received a 2xx response so there is no issue there, however, it is left to the storage nodes to clean up the stale fragment archives. Work is ongoing in this area to enable the proxy to play a role in reviving these fragment archives, however, for the current release, a proxy failure after the start of a conversation but before the commit message will simply result in a PUT failure. 
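The two commit-phase details described above (the ec_ndata + 1 quorum and the rename that marks an archive durable) can be sketched as follows. This is illustrative only; it is not the ECDiskFileWriter implementation, and the on-disk directory layout passed in is assumed.

```
# Illustrative only: the quorum rule and second-phase commit described above.
import os

def have_put_quorum(successful_responses, ec_ndata):
    # each phase needs ec_ndata + 1 successes before the proxy proceeds
    return successful_responses >= ec_ndata + 1

def commit_fragment_archive(datadir, timestamp, frag_index):
    # second phase on the object server: mark the fragment archive durable
    src = os.path.join(datadir, '%s#%d.data' % (timestamp, frag_index))
    dst = os.path.join(datadir, '%s#%d#d.data' % (timestamp, frag_index))
    os.rename(src, dst)
    return dst

print(have_put_quorum(11, ec_ndata=10))   # True: 10+4 policy, 11 first-phase 2xx
# e.g. commit_fragment_archive('/srv/node/sdb1/objects-2/1024/<suffix>/<hash>',
#                              '1418673556.92690', 5)   # hypothetical path
```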
The GET for EC is different enough from replication that subclassing the BaseObjectController to the ECObjectController enables an efficient way to implement the high level steps described earlier: The proxy server makes simultaneous requests to ec_ndata primary object server nodes with goal of finding a set of ec_ndata distinct EC archives at the same timestamp, and an indication from at least one object server that a durable fragment archive exists for that timestamp. If this goal is not achieved with the first ec_ndata requests then the proxy server continues to issue requests to the remaining primary nodes and then handoff nodes. As soon as the proxy server has found a usable set of ec_ndata EC archives, it starts to call PyECLib to decode fragments as they are returned by the object server nodes. The proxy server creates Etag and content length headers for the client response since each EC archives metadata is valid only for that archive. The proxy streams the decoded data it has back to the client. Note that the proxy does not require all objects servers to have a durable fragment archive to return in response to a GET. The proxy will be satisfied if just one object server has a durable fragment archive at the same timestamp as EC archives returned from other object servers. This means that the proxy can successfully GET an object that had missing durable state on some nodes when it was PUT (i.e. a partial PUT failure occurred). Note also that an object server may inform the proxy server that it has more than one EC archive for different timestamps and/or fragment indexes, which may cause the proxy server to issue multiple requests for distinct EC archives to that object server. (This situation can temporarily occur after a ring rebalance when a handoff node storing an archive has become a primary node and received its primary archive but not yet moved the handoff archive to its primary node.) The proxy may receive EC archives having different timestamps, and may receive several EC archives having the same index. The proxy therefore ensures that it has sufficient EC archives with the same timestamp and distinct fragment indexes before considering a GET to be successful. The Object Server, like the Proxy Server, supports MIME conversations as described in the proxy section earlier. This includes processing of the commit message and decoding various sections of the MIME document to extract the footer which includes things like the entire object etag. Erasure code policies use subclassed ECDiskFile, ECDiskFileWriter, ECDiskFileReader and ECDiskFileManager to implement EC specific handling of on disk files. This includes things like file name manipulation to include the fragment index and durable state in the filename, construction of EC specific hashes.pkl file to include fragment index information," }, { "data": "There are few different categories of metadata that are associated with EC: System Metadata: EC has a set of object level system metadata that it attaches to each of the EC archives. The metadata is for internal use only: X-Object-Sysmeta-EC-Etag: The Etag of the original object. X-Object-Sysmeta-EC-Content-Length: The content length of the original object. X-Object-Sysmeta-EC-Frag-Index: The fragment index for the object. X-Object-Sysmeta-EC-Scheme: Description of the EC policy used to encode the object. X-Object-Sysmeta-EC-Segment-Size: The segment size used for the object. 
User Metadata: User metadata is unaffected by EC, however, a full copy of the user metadata is stored with every EC archive. This is required as the reconstructor needs this information and each reconstructor only communicates with its closest neighbors on the ring. PyECLib Metadata: PyECLib stores a small amount of metadata on a per fragment basis. This metadata is not documented here as it is opaque to Swift. As account and container rings are not associated with a Storage Policy, there is no change to how these database updates occur when using an EC policy. The Reconstructor performs analogous functions to the replicator: Recovering from disk drive failure. Moving data around because of a rebalance. Reverting data back to a primary from a handoff. Recovering fragment archives from bit rot discovered by the auditor. However, under the hood it operates quite differently. The following are some of the key elements in understanding how the reconstructor operates. Unlike the replicator, the work that the reconstructor does is not always as easy to break down into the 2 basic tasks of synchronize or revert (move data from handoff back to primary) because of the fact that one storage node can house fragment archives of various indexes and each index really \"belongs\" to a different node. So, whereas when the replicator is reverting data from a handoff it has just one node to send its data to, the reconstructor can have several. Additionally, it is not always the case that the processing of a particular suffix directory means one or the other job type for the entire directory (as it does for replication). The scenarios that create these mixed situations can be pretty complex so we will just focus on what the reconstructor does here and not a detailed explanation of why. Because of the nature of the work it has to do as described above, the reconstructor builds jobs for a single job processor. The job itself contains all of the information needed for the processor to execute the job which may be a synchronization or a data reversion. There may be a mix of jobs that perform both of these operations on the same suffix directory. Jobs are constructed on a per-partition basis and then per-fragment-index basis. That is, there will be one job for every fragment index in a partition. Performing this construction \"up front\" like this helps minimize the interaction between nodes collecting hashes.pkl information. Once a set of jobs for a partition has been constructed, those jobs are sent off to threads for execution. The single job processor then performs the necessary actions, working closely with ssync to carry out its instructions. For data reversion, the actual objects themselves are cleaned up via the ssync module and once that partitions set of jobs is complete, the reconstructor will attempt to remove the relevant directory" }, { "data": "Job construction must account for a variety of scenarios, including: A partition directory with all fragment indexes matching the local node index. This is the case where everything is where it belongs and we just need to compare hashes and sync if needed. Here we simply sync with our partners. A partition directory with at least one local fragment index and mix of others. Here we need to sync with our partners where fragment indexes matches the local_id, all others are syncd with their home nodes and then deleted. A partition directory with no local fragment index and just one or more of others. 
Here we sync with just the home nodes for the fragment indexes that we have and then all the local archives are deleted. This is the basic handoff reversion case. Note A \"home node\" is the node where the fragment index encoded in the fragment archives filename matches the node index of a node in the primary partition list. The replicators talk to all nodes who have a copy of their object, typically just 2 other nodes. For EC, having each reconstructor node talk to all nodes would incur a large amount of overhead as there will typically be a much larger number of nodes participating in the EC scheme. Therefore, the reconstructor is built to talk to its adjacent nodes on the ring only. These nodes are typically referred to as partners. Reconstruction can be thought of sort of like replication but with an extra step in the middle. The reconstructor is hard-wired to use ssync to determine what is missing and desired by the other side. However, before an object is sent over the wire it needs to be reconstructed from the remaining fragments as the local fragment is just that - a different fragment index than what the other end is asking for. Thus, there are hooks in ssync for EC based policies. One case would be for basic reconstruction which, at a high level, looks like this: Determine which nodes need to be contacted to collect other EC archives needed to perform reconstruction. Update the etag and fragment index metadata elements of the newly constructed fragment archive. Establish a connection to the target nodes and give ssync a DiskFileLike class from which it can stream data. The reader in this class gathers fragments from the nodes and uses PyECLib to reconstruct each segment before yielding data back to ssync. Essentially what this means is that data is buffered, in memory, on a per segment basis at the node performing reconstruction and each segment is dynamically reconstructed and delivered to ssyncsender where the sendput() method will ship them on over. The sender is then responsible for deleting the objects as they are sent in the case of data reversion. Because the auditor already operates on a per storage policy basis, there are no specific auditor changes associated with EC. Each EC archive looks like, and is treated like, a regular object from the perspective of the auditor. Therefore, if the auditor finds bit-rot in an EC archive, it simply quarantines it and the reconstructor will take care of the rest just as the replicator does for replication policies. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#project-team-lead-duties.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/firstcontributionswift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM(SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.testhealthcheck:TestHealthCheck.testhealthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the in their timezone. 
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spend time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or for more specifically in a Swift context read Review Guidelines You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help getting your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#what-do-i-work-on.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "OpenStack supported binding: Python-SwiftClient Unofficial libraries and bindings: PHP PHP-opencloud - Official Rackspace PHP bindings that should work for other Swift deployments too. Ruby swift_client - Small but powerful Ruby client to interact with OpenStack Swift nightcrawler_swift - This Ruby gem teleports your assets to an OpenStack Swift bucket/container swift storage - Simple OpenStack Swift storage client. Java libcloud - Apache Libcloud - a unified interface in Python for different clouds with OpenStack Swift support. jclouds - Java library offering bindings for all OpenStack projects java-openstack-swift - Java bindings for OpenStack Swift javaswift - Collection of Java tools for Swift Bash supload - Bash script to upload file to cloud storage based on OpenStack Swift API. .NET openstacknetsdk.org - An OpenStack Cloud SDK for Microsoft .NET. Go Go language bindings Gophercloud an OpenStack SDK for Go Keystone - Official Identity Service for OpenStack. Swauth - RETIRED: An alternative Swift authentication service that only requires Swift itself. Basicauth - HTTP Basic authentication support (keystone backed). Swiftly - Alternate command line access to Swift with direct (no proxy) access capabilities as well. slogging - Basic stats and logging tools. Swift Informant - Swift proxy Middleware to send events to a statsd instance. Swift Inspector - Swift middleware to relay information about a request back to the client. SOS - Swift Origin Server. ProxyFS - Integrated file and object access for Swift object storage SwiftHLM - a middleware for using OpenStack Swift with tape and other high latency media storage backends. getput - getput tool suite COSbench - COSbench tool suite swift-sentry - Sentry exception reporting for Swift Swift-on-File - Enables objects created using Swift API to be accessed as files on a POSIX filesystem and vice versa. swift-scality-backend - Scality sproxyd object server implementation for Swift. SAIO bash scripts - Well commented simple bash scripts for Swift all in one setup. vagrant-swift-all-in-one - Quickly setup a standard development environment using Vagrant and Chef cookbooks in an Ubuntu virtual machine. SAIO Ansible playbook - Quickly setup a standard development environment using Vagrant and Ansible in a Fedora virtual machine (with built-in Swift-on-File support). Multi Swift - Bash scripts to spin up multiple Swift clusters sharing the same hardware Glance - Provides services for discovering, registering, and retrieving virtual machine images (for OpenStack Compute [Nova], for example). Django Swiftbrowser - Simple Django web app to access OpenStack Swift. Swift-account-stats - Swift-account-stats is a tool to report statistics on Swift usage at tenant and global levels. PyECLib - High-level erasure code library used by Swift liberasurecode - Low-level erasure code library used by PyECLib Swift Browser - JavaScript interface for Swift swift-ui - OpenStack Swift web browser swiftbackmeup - Utility that allows one to create backups and upload them to OpenStack Swift Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "contributing.html#recommended-workflow.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "This is a compilation of five posts I made earlier discussing how to build a consistent hashing ring. The posts seemed to be accessed quite frequently, so Ive gathered them all here on one page for easier reading. Note This is an historical document; as such, all code examples are Python 2. If this makes you squirm, think of it as pseudo-code. Regardless of implementation language, the state of the art in consistent-hashing and distributed systems more generally has advanced. We hope that this introduction from first principles will still prove informative, particularly with regard to how data is distributed within a Swift cluster. Consistent Hashing is a term used to describe a process where data is distributed using a hashing algorithm to determine its location. Using only the hash of the id of the data you can determine exactly where that data should be. This mapping of hashes to locations is usually termed a ring. Probably the simplest hash is just a modulus of the id. For instance, if all ids are numbers and you have two machines you wish to distribute data to, you could just put all odd numbered ids on one machine and even numbered ids on the other. Assuming you have a balanced number of odd and even numbered ids, and a balanced data size per id, your data would be balanced between the two machines. Since data ids are often textual names and not numbers, like paths for files or URLs, it makes sense to use a real hashing algorithm to convert the names to numbers first. Using MD5 for instance, the hash of the name mom.png is 4559a12e3e8da7c2186250c2f292e3af and the hash of dad.png is 096edcc4107e9e18d6a03a43b3853bea. Now, using the modulus, we can place mom.jpg on the odd machine and dad.png on the even one. Another benefit of using a hashing algorithm like MD5 is that the resulting hashes have a known even distribution, meaning your ids will be evenly distributed without worrying about keeping the id values themselves evenly distributed. Here is a simple example of this in action: ``` from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 DATAIDCOUNT = 10000000 nodecounts = [0] * NODECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(dataid).digest())[0] nodeid = hsh % NODECOUNT nodecounts[nodeid] += 1 desiredcount = DATAIDCOUNT / NODECOUNT print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) ``` ``` 100000: Desired data ids per node 100695: Most data ids on one node, 0.69% over 99073: Least data ids on one node, 0.93% under ``` So thats not bad at all; less than a percent over/under for distribution per node. In the next part of this series well examine where modulus distribution causes problems and how to improve our ring to overcome them. In Part 1 of this series, we did a simple test of using the modulus of a hash to locate data. We saw very good distribution, but thats only part of the story. Distributed systems not only need to distribute load, but they often also need to grow as more and more data is placed in" }, { "data": "So lets imagine we have a 100 node system up and running using our previous algorithm, but its starting to get full so we want to add another node. 
When we add that 101st node to our algorithm we notice that many ids now map to different nodes than they previously did. Were going to have to shuffle a ton of data around our system to get it all into place again. Lets examine whats happened on a much smaller scale: just 2 nodes again, node 0 gets even ids and node 1 gets odd ids. So data id 100 would map to node 0, data id 101 to node 1, data id 102 to node 0, etc. This is simply node = id % 2. Now we add a third node (node 2) for more space, so we want node = id % 3. So now data id 100 maps to node id 1, data id 101 to node 2, and data id 102 to node 0. So we have to move data for 2 of our 3 ids so they can be found again. Lets examine this at a larger scale: ``` from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 NEWNODECOUNT = 101 DATAIDCOUNT = 10000000 moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] nodeid = hsh % NODECOUNT newnodeid = hsh % NEWNODECOUNT if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 9900989 ids moved, 99.01% ``` Wow, thats severe. Wed have to shuffle around 99% of our data just to increase our capacity 1%! We need a new algorithm that combats this behavior. This is where the ring really comes in. We can assign ranges of hashes directly to nodes and then use an algorithm that minimizes the changes to those ranges. Back to our small scale, lets say our ids range from 0 to 999. We have two nodes and well assign data ids 0499 to node 0 and 500999 to node 1. Later, when we add node 2, we can take half the data ids from node 0 and half from node 1, minimizing the amount of data that needs to move. Lets examine this at a larger scale: ``` from bisect import bisect_left from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 NEWNODECOUNT = 101 DATAIDCOUNT = 10000000 noderangestarts = [] for nodeid in range(NODECOUNT): noderangestarts.append(DATAIDCOUNT / NODECOUNT * nodeid) newnoderange_starts = [] for newnodeid in range(NEWNODECOUNT): newnoderangestarts.append(DATAID_COUNT / NEWNODECOUNT * newnodeid) moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] nodeid = bisectleft(noderangestarts, hsh % DATAIDCOUNT) % NODE_COUNT newnodeid = bisectleft(newnoderangestarts, hsh % DATAIDCOUNT) % NEWNODECOUNT if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 4901707 ids moved, 49.02% ``` Okay, that is better. But still, moving 50% of our data to add 1% capacity is not very good. If we examine what happened more closely well see what is an accordion effect. We shrunk node 0s range a bit to give to the new node, but that shifted all the other nodes ranges by the same amount. We can minimize the change to a nodes assigned range by assigning several smaller ranges instead of the single broad range we were before. This can be done by creating virtual nodes for each" }, { "data": "So 100 nodes might have 1000 virtual nodes. Lets examine how that might work. 
``` from bisect import bisect_left from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 DATAIDCOUNT = 10000000 VNODE_COUNT = 1000 vnoderangestarts = [] vnode2node = [] for vnodeid in range(VNODECOUNT): vnoderangestarts.append(DATAIDCOUNT / VNODECOUNT * vnodeid) vnode2node.append(vnodeid % NODECOUNT) new_vnode2node = list(vnode2node) newnodeid = NODE_COUNT NEWNODECOUNT = NODE_COUNT + 1 vnodestoreassign = VNODECOUNT / NEWNODE_COUNT while vnodestoreassign > 0: for nodetotakefrom in range(NODECOUNT): for vnodeid, nodeid in enumerate(new_vnode2node): if nodeid == nodetotakefrom: newvnode2node[vnodeid] = newnodeid vnodestoreassign -= 1 break if vnodestoreassign <= 0: break moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] vnodeid = bisectleft(vnoderangestarts, hsh % DATAIDCOUNT) % VNODE_COUNT nodeid = vnode2node[vnodeid] newnodeid = newvnode2node[vnodeid] if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 90423 ids moved, 0.90% ``` There we go, we added 1% capacity and only moved 0.9% of existing data. The vnoderangestarts list seems a bit out of place though. Its values are calculated and never change for the lifetime of the cluster, so lets optimize that out. ``` from bisect import bisect_left from hashlib import md5 from struct import unpack_from NODE_COUNT = 100 DATAIDCOUNT = 10000000 VNODE_COUNT = 1000 vnode2node = [] for vnodeid in range(VNODECOUNT): vnode2node.append(vnodeid % NODECOUNT) new_vnode2node = list(vnode2node) newnodeid = NODE_COUNT vnodestoreassign = VNODECOUNT / (NODECOUNT + 1) while vnodestoreassign > 0: for nodetotakefrom in range(NODECOUNT): for vnodeid, nodeid in enumerate(vnode2node): if nodeid == nodetotakefrom: vnode2node[vnodeid] = newnode_id vnodestoreassign -= 1 break if vnodestoreassign <= 0: break moved_ids = 0 for dataid in range(DATAID_COUNT): dataid = str(dataid) hsh = unpackfrom('>I', md5(str(dataid)).digest())[0] vnodeid = hsh % VNODECOUNT nodeid = vnode2node[vnodeid] newnodeid = newvnode2node[vnodeid] if nodeid != newnode_id: moved_ids += 1 percentmoved = 100.0 * movedids / DATAIDCOUNT print '%d ids moved, %.02f%%' % (movedids, percentmoved) ``` ``` 89841 ids moved, 0.90% ``` There we go. In the next part of this series, will further examine the algorithms limitations and how to improve on it. In Part 2 of this series, we reached an algorithm that performed well even when adding new nodes to the cluster. We used 1000 virtual nodes that could be independently assigned to nodes, allowing us to minimize the amount of data moved when a node was added. The number of virtual nodes puts a cap on how many real nodes you can have. For example, if you have 1000 virtual nodes and you try to add a 1001st real node, you cant assign a virtual node to it without leaving another real node with no assignment, leaving you with just 1000 active real nodes still. Unfortunately, the number of virtual nodes created at the beginning can never change for the life of the cluster without a lot of careful work. For example, you could double the virtual node count by splitting each existing virtual node in half and assigning both halves to the same real node. However, if the real node uses the virtual nodes id to optimally store the data (for example, all data might be stored in /[virtual node id]/[data id]) it would have to move data around locally to reflect the change. 
And it would have to resolve data using both the new and old locations while the moves were taking place, making atomic operations difficult or" }, { "data": "Lets continue with this assumption that changing the virtual node count is more work than its worth, but keep in mind that some applications might be fine with this. The easiest way to deal with this limitation is to make the limit high enough that it wont matter. For instance, if we decide our cluster will never exceed 60,000 real nodes, we can just make 60,000 virtual nodes. Also, we should include in our calculations the relative size of our nodes. For instance, a year from now we might have real nodes that can handle twice the capacity of our current nodes. So wed want to assign twice the virtual nodes to those future nodes, so maybe we should raise our virtual node estimate to 120,000. A good rule to follow might be to calculate 100 virtual nodes to each real node at maximum capacity. This would allow you to alter the load on any given node by 1%, even at max capacity, which is pretty fine tuning. So now were at 6,000,000 virtual nodes for a max capacity cluster of 60,000 real nodes. 6 million virtual nodes seems like a lot, and it might seem like wed use up way too much memory. But the only structure this affects is the virtual node to real node mapping. The base amount of memory required would be 6 million times 2 bytes (to store a real node id from 0 to 65,535). 12 megabytes of memory just isnt that much to use these days. Even with all the overhead of flexible data types, things arent that bad. I changed the code from the previous part in this series to have 60,000 real and 6,000,000 virtual nodes, changed the list to an array(H), and python topped out at 27m of resident memory and that includes two rings. To change terminology a bit, were going to start calling these virtual nodes partitions. This will make it a bit easier to discern between the two types of nodes weve been talking about so far. Also, it makes sense to talk about partitions as they are really just unchanging sections of the hash space. Were also going to always keep the partition count a power of two. This makes it easy to just use bit manipulation on the hash to determine the partition rather than modulus. It isnt much faster, but it is a little. So, heres our updated ring code, using 8,388,608 (2 23) partitions and 65,536 nodes. Weve upped the sample data id set and checked the distribution to make sure we havent broken anything. 
``` from array import array from hashlib import md5 from struct import unpack_from PARTITION_POWER = 23 PARTITIONSHIFT = 32 - PARTITIONPOWER NODE_COUNT = 65536 DATAIDCOUNT = 100000000 part2node = array('H') for part in range(2 PARTITION_POWER): part2node.append(part % NODE_COUNT) nodecounts = [0] * NODECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) part = unpack_from('>I', md5(str(dataid)).digest())[0] >> PARTITIONSHIFT node_id = part2node[part] nodecounts[nodeid] += 1 desiredcount = DATAIDCOUNT / NODECOUNT print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) ``` ``` 1525: Desired data ids per node 1683: Most data ids on one node, 10.36% over 1360: Least data ids on one node, 10.82% under ```" }, { "data": "+10% seems a bit high, but I reran with 65,536 partitions and 256 nodes and got +0.4% so its just that our sample size (100m) is too small for our number of partitions (8m). Itll take way too long to run experiments with an even larger sample size, so lets reduce back down to these lesser numbers. (To be certain, I reran at the full version with a 10 billion data id sample set and got +1%, but it took 6.5 hours to run.) In the next part of this series, well talk about how to increase the durability of our data in the cluster. In Part 3 of this series, we just further discussed partitions (virtual nodes) and cleaned up our code a bit based on that. Now, lets talk about how to increase the durability and availability of our data in the cluster. For many distributed data stores, durability is quite important. Either RAID arrays or individually distinct copies of data are required. While RAID will increase the durability, it does nothing to increase the availability if the RAID machine crashes, the data may be safe but inaccessible until repairs are done. If we keep distinct copies of the data on different machines and a machine crashes, the other copies will still be available while we repair the broken machine. An easy way to gain this multiple copy durability/availability is to just use multiple rings and groups of nodes. For instance, to achieve the industry standard of three copies, youd split the nodes into three groups and each group would have its own ring and each would receive a copy of each data item. This can work well enough, but has the drawback that expanding capacity requires adding three nodes at a time and that losing one node essentially lowers capacity by three times that nodes capacity. Instead, lets use a different, but common, approach of meeting our requirements with a single ring. This can be done by walking the ring from the starting point and looking for additional distinct nodes. 
Heres code that supports a variable number of replicas (set to 3 for testing): ``` from array import array from hashlib import md5 from struct import unpack_from REPLICAS = 3 PARTITION_POWER = 16 PARTITIONSHIFT = 32 - PARTITIONPOWER PARTITIONMAX = 2 ** PARTITIONPOWER - 1 NODE_COUNT = 256 DATAIDCOUNT = 10000000 part2node = array('H') for part in range(2 PARTITION_POWER): part2node.append(part % NODE_COUNT) nodecounts = [0] * NODECOUNT for dataid in range(DATAID_COUNT): dataid = str(dataid) part = unpack_from('>I', md5(str(dataid)).digest())[0] >> PARTITIONSHIFT node_ids = [part2node[part]] nodecounts[nodeids[0]] += 1 for replica in range(1, REPLICAS): while part2node[part] in node_ids: part += 1 if part > PARTITION_MAX: part = 0 node_ids.append(part2node[part]) nodecounts[nodeids[-1]] += 1 desiredcount = DATAIDCOUNT / NODECOUNT * REPLICAS print '%d: Desired data ids per node' % desired_count maxcount = max(nodecounts) over = 100.0 * (maxcount - desiredcount) / desired_count print '%d: Most data ids on one node, %.02f%% over' % \\ (max_count, over) mincount = min(nodecounts) under = 100.0 * (desiredcount - mincount) / desired_count print '%d: Least data ids on one node, %.02f%% under' % \\ (min_count, under) ``` ``` 117186: Desired data ids per node 118133: Most data ids on one node, 0.81% over 116093: Least data ids on one node, 0.93% under ``` Thats pretty good; less than 1% over/under. While this works well, there are a couple of" }, { "data": "First, because of how weve initially assigned the partitions to nodes, all the partitions for a given node have their extra copies on the same other two nodes. The problem here is that when a machine fails, the load on these other nodes will jump by that amount. Itd be better if we initially shuffled the partition assignment to distribute the failover load better. The other problem is a bit harder to explain, but deals with physical separation of machines. Imagine you can only put 16 machines in a rack in your datacenter. The 256 nodes weve been using would fill 16 racks. With our current code, if a rack goes out (power problem, network issue, etc.) there is a good chance some data will have all three copies in that rack, becoming inaccessible. We can fix this shortcoming by adding the concept of zones to our nodes, and then ensuring that replicas are stored in distinct zones. 
```
from array import array
from hashlib import md5
from random import shuffle
from struct import unpack_from

REPLICAS = 3
PARTITION_POWER = 16
PARTITION_SHIFT = 32 - PARTITION_POWER
PARTITION_MAX = 2 ** PARTITION_POWER - 1
NODE_COUNT = 256
ZONE_COUNT = 16
DATA_ID_COUNT = 10000000

node2zone = []
while len(node2zone) < NODE_COUNT:
    zone = 0
    while zone < ZONE_COUNT and len(node2zone) < NODE_COUNT:
        node2zone.append(zone)
        zone += 1
part2node = array('H')
for part in range(2 ** PARTITION_POWER):
    part2node.append(part % NODE_COUNT)
shuffle(part2node)
node_counts = [0] * NODE_COUNT
zone_counts = [0] * ZONE_COUNT
for data_id in range(DATA_ID_COUNT):
    data_id = str(data_id)
    part = unpack_from('>I',
        md5(str(data_id)).digest())[0] >> PARTITION_SHIFT
    node_ids = [part2node[part]]
    zones = [node2zone[node_ids[0]]]
    node_counts[node_ids[0]] += 1
    zone_counts[zones[0]] += 1
    for replica in range(1, REPLICAS):
        while part2node[part] in node_ids and \
                node2zone[part2node[part]] in zones:
            part += 1
            if part > PARTITION_MAX:
                part = 0
        node_ids.append(part2node[part])
        zones.append(node2zone[node_ids[-1]])
        node_counts[node_ids[-1]] += 1
        zone_counts[zones[-1]] += 1
desired_count = DATA_ID_COUNT / NODE_COUNT * REPLICAS
print '%d: Desired data ids per node' % desired_count
max_count = max(node_counts)
over = 100.0 * (max_count - desired_count) / desired_count
print '%d: Most data ids on one node, %.02f%% over' % (max_count, over)
min_count = min(node_counts)
under = 100.0 * (desired_count - min_count) / desired_count
print '%d: Least data ids on one node, %.02f%% under' % (min_count, under)
desired_count = DATA_ID_COUNT / ZONE_COUNT * REPLICAS
print '%d: Desired data ids per zone' % desired_count
max_count = max(zone_counts)
over = 100.0 * (max_count - desired_count) / desired_count
print '%d: Most data ids in one zone, %.02f%% over' % (max_count, over)
min_count = min(zone_counts)
under = 100.0 * (desired_count - min_count) / desired_count
print '%d: Least data ids in one zone, %.02f%% under' % (min_count, under)
```

```
117186: Desired data ids per node
118782: Most data ids on one node, 1.36% over
115632: Least data ids on one node, 1.33% under
1875000: Desired data ids per zone
1878533: Most data ids in one zone, 0.19% over
1869070: Least data ids in one zone, 0.32% under
```

So the shuffle and zone distinctions affected our distribution some, but still definitely good enough. This test took about 64 seconds to run on my machine.

There's a completely alternate, and quite common, way of accomplishing these same requirements. This alternate method doesn't use partitions at all, but instead just assigns anchors to the nodes within the hash space. Finding the first node for a given hash just involves walking this anchor ring for the next node, and finding additional nodes works similarly as before.
To attain the equivalent of our virtual nodes, each real node is assigned multiple anchors.

```
from bisect import bisect_left
from hashlib import md5
from struct import unpack_from

REPLICAS = 3
NODE_COUNT = 256
ZONE_COUNT = 16
DATA_ID_COUNT = 10000000
VNODE_COUNT = 100

node2zone = []
while len(node2zone) < NODE_COUNT:
    zone = 0
    while zone < ZONE_COUNT and len(node2zone) < NODE_COUNT:
        node2zone.append(zone)
        zone += 1
hash2index = []
index2node = []
for node in range(NODE_COUNT):
    for vnode in range(VNODE_COUNT):
        hsh = unpack_from('>I', md5(str(node)).digest())[0]
        index = bisect_left(hash2index, hsh)
        if index > len(hash2index):
            index = 0
        hash2index.insert(index, hsh)
        index2node.insert(index, node)
node_counts = [0] * NODE_COUNT
zone_counts = [0] * ZONE_COUNT
for data_id in range(DATA_ID_COUNT):
    data_id = str(data_id)
    hsh = unpack_from('>I', md5(str(data_id)).digest())[0]
    index = bisect_left(hash2index, hsh)
    if index >= len(hash2index):
        index = 0
    node_ids = [index2node[index]]
    zones = [node2zone[node_ids[0]]]
    node_counts[node_ids[0]] += 1
    zone_counts[zones[0]] += 1
    for replica in range(1, REPLICAS):
        while index2node[index] in node_ids and \
                node2zone[index2node[index]] in zones:
            index += 1
            if index >= len(hash2index):
                index = 0
        node_ids.append(index2node[index])
        zones.append(node2zone[node_ids[-1]])
        node_counts[node_ids[-1]] += 1
        zone_counts[zones[-1]] += 1
desired_count = DATA_ID_COUNT / NODE_COUNT * REPLICAS
print '%d: Desired data ids per node' % desired_count
max_count = max(node_counts)
over = 100.0 * (max_count - desired_count) / desired_count
print '%d: Most data ids on one node, %.02f%% over' % (max_count, over)
min_count = min(node_counts)
under = 100.0 * (desired_count - min_count) / desired_count
print '%d: Least data ids on one node, %.02f%% under' % (min_count, under)
desired_count = DATA_ID_COUNT / ZONE_COUNT * REPLICAS
print '%d: Desired data ids per zone' % desired_count
max_count = max(zone_counts)
over = 100.0 * (max_count - desired_count) / desired_count
print '%d: Most data ids in one zone, %.02f%% over' % (max_count, over)
min_count = min(zone_counts)
under = 100.0 * (desired_count - min_count) / desired_count
print '%d: Least data ids in one zone, %.02f%% under' % (min_count, under)
```

```
117186: Desired data ids per node
351282: Most data ids on one node, 199.76% over
15965: Least data ids on one node, 86.38% under
1875000: Desired data ids per zone
2248496: Most data ids in one zone, 19.92% over
1378013: Least data ids in one zone, 26.51% under
```

This test took over 15 minutes to run! Unfortunately, this method also gives much less control over the distribution. To get better distribution, you have to add more virtual nodes, which eats up more memory and takes even more time to build the ring and perform distinct node lookups. The most common operation, data id lookup, can be improved (by predetermining each virtual node's failover nodes, for instance) but it starts off so far behind our first approach that we'll just stick with that. In the next part of this series, we'll start to wrap all this up into a useful Python module.

In Part 4 of this series, we ended up with a multiple copy, distinctly zoned ring. Or at least the start of it. In this final part we'll package the code up into a usable Python module and then add one last feature. First, let's separate the ring itself from the building of the data for the ring and its testing.
```
from array import array
from hashlib import md5
from random import shuffle
from struct import unpack_from
from time import time


class Ring(object):

    def __init__(self, nodes, part2node, replicas):
        self.nodes = nodes
        self.part2node = part2node
        self.replicas = replicas
        partition_power = 1
        while 2 ** partition_power < len(part2node):
            partition_power += 1
        if len(part2node) != 2 ** partition_power:
            raise Exception("part2node's length is not an "
                            "exact power of 2")
        self.partition_shift = 32 - partition_power

    def get_nodes(self, data_id):
        data_id = str(data_id)
        part = unpack_from('>I',
            md5(data_id).digest())[0] >> self.partition_shift
        node_ids = [self.part2node[part]]
        zones = [self.nodes[node_ids[0]]]
        for replica in range(1, self.replicas):
            while self.part2node[part] in node_ids and \
                    self.nodes[self.part2node[part]] in zones:
                part += 1
                if part >= len(self.part2node):
                    part = 0
            node_ids.append(self.part2node[part])
            zones.append(self.nodes[node_ids[-1]])
        return [self.nodes[n] for n in node_ids]


def build_ring(nodes, partition_power, replicas):
    begin = time()
    part2node = array('H')
    for part in range(2 ** partition_power):
        part2node.append(part % len(nodes))
    shuffle(part2node)
    ring = Ring(nodes, part2node, replicas)
    print '%.02fs to build ring' % (time() - begin)
    return ring


def test_ring(ring):
    begin = time()
    DATA_ID_COUNT = 10000000
    node_counts = {}
    zone_counts = {}
    for data_id in range(DATA_ID_COUNT):
        for node in ring.get_nodes(data_id):
            node_counts[node['id']] = \
                node_counts.get(node['id'], 0) + 1
            zone_counts[node['zone']] = \
                zone_counts.get(node['zone'], 0) + 1
    print '%ds to test ring' % (time() - begin)
    desired_count = DATA_ID_COUNT / len(ring.nodes) * REPLICAS
    print '%d: Desired data ids per node' % desired_count
    max_count = max(node_counts.values())
    over = 100.0 * (max_count - desired_count) / desired_count
    print '%d: Most data ids on one node, %.02f%% over' % (max_count, over)
    min_count = min(node_counts.values())
    under = 100.0 * (desired_count - min_count) / desired_count
    print '%d: Least data ids on one node, %.02f%% under' % (min_count, under)
    zone_count = len(set(n['zone'] for n in ring.nodes.values()))
    desired_count = DATA_ID_COUNT / zone_count * ring.replicas
    print '%d: Desired data ids per zone' % desired_count
    max_count = max(zone_counts.values())
    over = 100.0 * (max_count - desired_count) / desired_count
    print '%d: Most data ids in one zone, %.02f%% over' % (max_count, over)
    min_count = min(zone_counts.values())
    under = 100.0 * (desired_count - min_count) / desired_count
    print '%d: Least data ids in one zone, %.02f%% under' % (min_count, under)


if __name__ == '__main__':
    PARTITION_POWER = 16
    REPLICAS = 3
    NODE_COUNT = 256
    ZONE_COUNT = 16
    nodes = {}
    while len(nodes) < NODE_COUNT:
        zone = 0
        while zone < ZONE_COUNT and len(nodes) < NODE_COUNT:
            node_id = len(nodes)
            nodes[node_id] = {'id': node_id, 'zone': zone}
            zone += 1
    ring = build_ring(nodes, PARTITION_POWER, REPLICAS)
    test_ring(ring)
```

```
0.06s to build ring
82s to test ring
117186: Desired data ids per node
118773: Most data ids on one node, 1.35% over
115801: Least data ids on one node, 1.18% under
1875000: Desired data ids per zone
1878339: Most data ids in one zone, 0.18% over
1869914: Least data ids in one zone, 0.27% under
```

It takes a bit longer to test our ring, but that's mostly because of the switch to dictionaries from arrays for various items. Having node dictionaries is nice because you can attach any node information you want directly there (ip addresses, tcp ports, drive paths, etc.).
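Node dictionaries also make ad-hoc lookups easy to demonstrate. Here is a minimal usage sketch, assuming the Ring and build_ring code above; the 'ip' and 'port' fields are made up for illustration and are not part of the original module:

```
# Sketch only: depends on the Ring class and build_ring() defined above.
nodes = {}
for node_id in range(8):
    nodes[node_id] = {'id': node_id, 'zone': node_id % 4,
                      'ip': '10.0.0.%d' % node_id, 'port': 6200}
ring = build_ring(nodes, partition_power=6, replicas=3)

# Every data id maps to the same three node dicts on every call.
for node in ring.get_nodes('account/container/object'):
    print('replica in zone %d at %s:%d' % (node['zone'], node['ip'], node['port']))
```

Because the ring returns whole node dicts, a caller can route a request using whatever extra fields it stored there, without the ring itself needing to know about them.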
But we're still on track for further testing; our distribution is still good.

Now, let's add our one last feature to our ring: the concept of weights. Weights are useful because the nodes you add later in a ring's life are likely to have more capacity than those you have at the outset. For this test, we'll make half our nodes have twice the weight. We'll have to change build_ring to give more partitions to the nodes with more weight and we'll change test_ring to take into account these weights. Since we've changed so much I'll just post the entire module again:

```
from array import array
from hashlib import md5
from random import shuffle
from struct import unpack_from
from time import time


class Ring(object):

    def __init__(self, nodes, part2node, replicas):
        self.nodes = nodes
        self.part2node = part2node
        self.replicas = replicas
        partition_power = 1
        while 2 ** partition_power < len(part2node):
            partition_power += 1
        if len(part2node) != 2 ** partition_power:
            raise Exception("part2node's length is not an "
                            "exact power of 2")
        self.partition_shift = 32 - partition_power

    def get_nodes(self, data_id):
        data_id = str(data_id)
        part = unpack_from('>I',
            md5(data_id).digest())[0] >> self.partition_shift
        node_ids = [self.part2node[part]]
        zones = [self.nodes[node_ids[0]]]
        for replica in range(1, self.replicas):
            while self.part2node[part] in node_ids and \
                    self.nodes[self.part2node[part]] in zones:
                part += 1
                if part >= len(self.part2node):
                    part = 0
            node_ids.append(self.part2node[part])
            zones.append(self.nodes[node_ids[-1]])
        return [self.nodes[n] for n in node_ids]


def build_ring(nodes, partition_power, replicas):
    begin = time()
    parts = 2 ** partition_power
    total_weight = float(sum(n['weight'] for n in nodes.values()))
    for node in nodes.values():
        node['desired_parts'] = \
            parts / total_weight * node['weight']
    part2node = array('H')
    for part in range(2 ** partition_power):
        for node in nodes.values():
            if node['desired_parts'] >= 1:
                node['desired_parts'] -= 1
                part2node.append(node['id'])
                break
        else:
            for node in nodes.values():
                if node['desired_parts'] >= 0:
                    node['desired_parts'] -= 1
                    part2node.append(node['id'])
                    break
    shuffle(part2node)
    ring = Ring(nodes, part2node, replicas)
    print '%.02fs to build ring' % (time() - begin)
    return ring


def test_ring(ring):
    begin = time()
    DATA_ID_COUNT = 10000000
    node_counts = {}
    zone_counts = {}
    for data_id in range(DATA_ID_COUNT):
        for node in ring.get_nodes(data_id):
            node_counts[node['id']] = \
                node_counts.get(node['id'], 0) + 1
            zone_counts[node['zone']] = \
                zone_counts.get(node['zone'], 0) + 1
    print '%ds to test ring' % (time() - begin)
    total_weight = float(sum(n['weight'] for n in ring.nodes.values()))
    max_over = 0
    max_under = 0
    for node in ring.nodes.values():
        desired = DATA_ID_COUNT * REPLICAS * \
            node['weight'] / total_weight
        diff = node_counts[node['id']] - desired
        if diff > 0:
            over = 100.0 * diff / desired
            if over > max_over:
                max_over = over
        else:
            under = 100.0 * (-diff) / desired
            if under > max_under:
                max_under = under
    print '%.02f%% max node over' % max_over
    print '%.02f%% max node under' % max_under
    max_over = 0
    max_under = 0
    for zone in set(n['zone'] for n in ring.nodes.values()):
        zone_weight = sum(n['weight'] for n in ring.nodes.values()
                          if n['zone'] == zone)
        desired = DATA_ID_COUNT * REPLICAS * \
            zone_weight / total_weight
        diff = zone_counts[zone] - desired
        if diff > 0:
            over = 100.0 * diff / desired
            if over > max_over:
                max_over = over
        else:
            under = 100.0 * (-diff) / desired
            if under > max_under:
                max_under = under
    print '%.02f%% max zone over' % max_over
    print '%.02f%% max zone under' % max_under


if __name__ == '__main__':
    PARTITION_POWER = 16
    REPLICAS = 3
    NODE_COUNT = 256
    ZONE_COUNT = 16
    nodes = {}
    while len(nodes) < NODE_COUNT:
        zone = 0
        while zone < ZONE_COUNT and len(nodes) < NODE_COUNT:
            node_id = len(nodes)
            nodes[node_id] = {'id': node_id, 'zone': zone,
                              'weight': 1.0 + (node_id % 2)}
            zone += 1
    ring = build_ring(nodes, PARTITION_POWER, REPLICAS)
    test_ring(ring)
```

```
0.88s to build ring
86s to test ring
1.66% max over
1.46% max under
0.28% max zone over
0.23% max zone under
```

So things are still good, even though we have differently weighted nodes. I ran another test with this code using random weights from 1 to 100 and got over/under values for nodes of 7.35%/18.12% and zones of 0.24%/0.22%, still pretty good considering the crazy weight ranges.

Hopefully this series has been a good introduction to building a ring. This code is essentially how the OpenStack Swift ring works, except that Swift's ring has lots of additional optimizations, such as storing each replica assignment separately, and lots of extra features for building, validating, and otherwise working with rings.

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing.
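For reference, the random-weight experiment mentioned in the closing paragraph needs only a small change to the setup block. A sketch, assuming the weighted module above (this replaces the node setup in the __main__ block; the weight range is the 1-100 spread described in the text):

```
from random import randint

PARTITION_POWER = 16
REPLICAS = 3
NODE_COUNT = 256
ZONE_COUNT = 16
nodes = {}
while len(nodes) < NODE_COUNT:
    zone = 0
    while zone < ZONE_COUNT and len(nodes) < NODE_COUNT:
        node_id = len(nodes)
        # Random capacity between 1 and 100 instead of alternating 1.0/2.0.
        nodes[node_id] = {'id': node_id, 'zone': zone,
                          'weight': float(randint(1, 100))}
        zone += 1
ring = build_ring(nodes, PARTITION_POWER, REPLICAS)
test_ring(ring)
```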
[ { "data": "Bases: object Enforces that successive calls to file_like.read() give at least <nbytes> bytes before exhaustion. If file_like fails to do so, ShortReadError is raised. If more than <nbytes> bytes are read, we dont care. Bases: object Base WSGI controller class for the proxy Handler for HTTP GET requests. req The client request the response to the client Base handler for HTTP GET or HEAD requests. req swob.Request object server_type server type used in logging node_iter an iterator to obtain nodes from partition partition path path for the request concurrency number of requests to run concurrently policy the policy instance, or None if Account or Container swob.Response object Handler for HTTP HEAD requests. req The client request the response to the client Base handler for OPTIONS requests req swob.Request object swob.Response object Get account information, and also verify that the account exists. account native str name of the account to get the info for req callers HTTP request context object tuple of (account partition, account nodes, container_count) or (None, None, None) if it does not exist Autocreate an account req request leading to this autocreate account the unquoted account name Given a list of responses from several servers, choose the best to return to the API. req swob.Request object statuses list of statuses returned reasons list of reasons for each status bodies bodies of each response server_type type of server the responses came from etag etag headers headers of each response overrides overrides to apply when lacking quorum quorum_size quorum size to use swob.Response object with the correct status, body, etc. set Get container information and thusly verify container existence. This will also verify account existence. account native-str account name for the container container native-str container name to look up req callers HTTP request context object dict containing at least container partition (partition), container nodes (containers), container read acl (readacl), container write acl (writeacl), and container sync key (sync_key). Values are set to None if the container does not exist. Create a list of headers to be used in backend requests orig_req the original request sent by the client to the proxy additional additional headers to send to the backend transfer If True, transfer headers from original client request a dictionary of headers Given a list of statuses from several requests, determine if a quorum response can already be decided. statuses list of statuses returned node_count number of nodes being queried (basically ring count) quorum number of statuses required for quorum True or False, depending on if quorum is established Is the given Origin allowed to make requests to this resource cors_info the resources CORS related metadata headers origin the origin making the request True or False Sends an HTTP request to multiple nodes and aggregates the results. It attempts the primary nodes concurrently, then iterates over the handoff nodes as needed. req a request sent by the client ring the ring used for finding backend servers part the partition number method the method to send to the backend path the path to send to the backend (full path ends up being /<$device>/<$part>/<$path>) headers a list of dicts, where each dict represents one backend request that should be made. 
query_string optional query string to send to the backend overrides optional return status override map used to override the returned status of a" }, { "data": "node_count optional number of nodes to send request to. node_iterator optional node iterator. a swob.Response object Transfer legal headers from an original client request to dictionary that will be used as headers by the backend request src_headers A dictionary of the original client request headers dst_headers A dictionary of the backend request headers Bases: GetterBase Handles GET requests to backend servers. app a proxy app. req an instance of swob.Request. server_type server type used in logging node_iter an iterator yielding nodes. partition partition. path path for the request. backend_headers a dict of headers to be sent with backend requests. concurrency number of requests to run concurrently. policy the policy instance, or None if Account or Container. logger a logger instance. Bases: object This base class provides helper methods for handling GET requests to backend servers. app a proxy app. req an instance of swob.Request. node_iter an iterator yielding nodes. partition partition. policy the policy instance, or None if Account or Container. path path for the request. backend_headers a dict of headers to be sent with backend requests. node_timeout the timeout value for backend requests. resource_type a string description of the type of resource being accessed; resource type is used in logs and isnt necessarily the server type. logger a logger instance. Will skip num_bytes into the current ranges. the number of bytes that have already been read on this request. This will change the Range header so that the next req will start where it left off. HTTPRequestedRangeNotSatisfiable if begin + num_bytes end of range + 1 RangeAlreadyComplete if begin + num_bytes == end of range + 1 Sets our Range headers first byterange to the value learned from the Content-Range header in the response; if we were given a fully-specified range (e.g. bytes=123-456), this is a no-op. If we were given a half-specified range (e.g. bytes=123- or bytes=-456), then this changes the Range header to a semantically-equivalent one and it lets us resume on a proper boundary instead of just in the middle of a piece somewhere. Remove the first byterange from our Range header. This is used after a byterange has been completely sent to the client; this way, should we need to resume the download from another object server, we do not re-fetch byteranges that the client already has. If we have no Range header, this is a no-op. Bases: object Encapsulates properties of a source from which a GET response is read. app a proxy app. resp an instance of HTTPResponse. node a dict describing the node from which the response was returned. Provide the timestamp of the swift http response as a floating point value. Used as a sort key. an instance of utils.Timestamp Bases: object Yields nodes for a ring partition, skipping over error limited nodes and stopping at the configurable number of nodes. If a node yielded subsequently gets error limited, an extra node will be yielded to take its place. Note that if youre going to iterate over this concurrently from multiple greenthreads, youll want to use a swift.common.utils.GreenthreadSafeIterator to serialize access. Otherwise, you may get ValueErrors from concurrent access. 
(You also may not, depending on how logging is configured, the vagaries of socket IO and eventlet, and the phase of the" }, { "data": "server_type one of account, container, or object app a proxy app ring ring to get yield nodes from partition ring partition to yield nodes for logger a logger instance request yielded nodes will be annotated with use_replication based on the request headers. node_iter optional iterable of nodes to try. Useful if you want to filter or reorder the nodes. policy an instance of BaseStoragePolicy. This should be None for an account or container ring. Log handoff requests if handoff logging is enabled and the handoff was not expected. We only log handoffs when weve pushed the handoff count further than we would normally have expected under normal circumstances, that is (requestnodecount - num_primaries), when handoffs goes higher than that it means one of the primaries must have been skipped because of error limiting before we consumed all of our nodes_left. Install a callback function that will be used during a call to next() to get an alternate node instead of returning the next node from the iterator. callback A no argument function that should return a node dict or None. Assume an object is composed of N records, where the first N-1 are all the same size and the last is at most that large, but may be smaller. When a range request is made, it might start with a partial record. This must be discarded, lest the consumer get bad data. This is particularly true of suffix-byte-range requests, e.g. Range: bytes=-12345 where the size of the object is unknown at the time the request is made. This function computes the number of bytes that must be discarded to ensure only whole records are yielded. Erasure-code decoding needs this. This function could have been inlined, but it took enough tries to get right that some targeted unit tests were desirable, hence its extraction. Clear the cached info in both memcache and env env the WSGI request environment account the account name container the container name if clearing info for containers, or None shard the sharding state if clearing info for container shard ranges, or None Force close the http connection to the backend. src the response from the backend Decorator to check if the request is a CORS request and if so, if its valid. func function to check Decorator to declare which methods should have any swift.authorize call delayed. This is so the method can load the Request object up with additional information that may be needed by the authorization system. func function for which authorization will be delayed Get the info structure for an account, based on env and app. This is useful to middlewares. Note This call bypasses auth. Success does not imply that the request has authorization to the account. ValueError when path doesnt contain an account Get the keys for both memcache and env[swift.infocache] (cache_key) where info about accounts, containers, and objects is cached account The name of the account container The name of the container (or None if account) obj The name of the object (or None if account or container) shard Sharding state for the container query; typically updating or listing (Requires account and container; cannot use with obj) a (native) string cache_key Get the info structure for a container, based on env and app. This is useful to middlewares. env the environment used by the current request app the application object swift_source Used to mark the request as originating out of middleware. 
Will be logged in proxy" }, { "data": "cache_only If true, indicates that caller doesnt want to HEAD the backend container when cache miss. the object info Note This call bypasses auth. Success does not imply that the request has authorization to the container. Get info about accounts or containers request has authorization to the info. app the application object env the environment used by the current request account The unquoted name of the account container The unquoted name of the container (or None if account) swift_source swift source logged for any subrequests made while retrieving the account or container info information about the specified entity in a dictionary. See getaccountinfo and getcontainerinfo for details on whats in the dictionary. Get cached namespaces from infocache or memcache. req a swift.common.swob.Request object. cache_key the cache key for both infocache and memcache. skip_chance the probability of skipping the memcache look-up. a tuple of (value, cache state). Value is an instance of swift.common.utils.NamespaceBoundList if a non-empty list is found in memcache. Otherwise value is None, for example if memcache look-up was skipped, or no value was found, or an empty list was found. Get the info structure for an object, based on env and app. This is useful to middlewares. Note This call bypasses auth. Success does not imply that the request has authorization to the object. Construct a HeaderKeyDict from a container info dict. info a dict of container metadata a HeaderKeyDict or None if info is None or any required headers could not be constructed Construct a cacheable dict of account info based on response headers. Construct a cacheable dict of container info based on response headers. Construct a cacheable dict of object info based on response headers. Indicates whether or not the request made to the backend found what it was looking for. resp the response from the backend. server_type the type of server: Account, Container or Object. True if the response status code is acceptable, False if not. Record a single cache operation into its corresponding metrics. logger the metrics logger server_type account or container optype the name of the operation type, includes shardlisting, shard_updating, and etc. cache_state the state of this cache operation. When its infocache_hit or memcache hit, expect it succeeded and resp will be None; for all other cases like memcache miss or skip which will make to backend, expect a valid resp. resp the response from backend for all cases except cache hits. Cache info in both memcache and env. env the WSGI request environment account the unquoted account name container the unquoted container name or None resp the response received or None if info cache should be cleared the info that was placed into the cache, or None if the request status was not in (404, 410, 2xx). Set a list of namespace bounds in infocache and memcache. req a swift.common.swob.Request object. cache_key the cache key for both infocache and memcache. nsboundlist a swift.common.utils.NamespaceBoundList. time how long the namespaces should remain in memcache. the cache_state. Cache object info in the WSGI environment, but not in memcache. Caching in memcache would lead to cache pressure and mass evictions due to the large number of objects in a typical Swift cluster. 
This is a per-request cache" }, { "data": "app the application object env the environment used by the current request account the unquoted account name container the unquoted container name obj the unquoted object name resp a GET or HEAD response received from an object server, or None if info cache should be cleared the object info Helper function to update headers in the response. response swob.Response object headers dictionary headers Bases: Controller WSGI controller for account requests HTTP DELETE request handler. Handler for HTTP GET/HEAD requests. HTTP POST request handler. HTTP PUT request handler. Bases: Controller WSGI controller for container requests HTTP DELETE request handler. Handler for HTTP GET requests. Handler for HTTP HEAD requests. HTTP POST request handler. HTTP PUT request handler. HTTP UPDATE request handler. Method to perform bulk operations on container DBs, similar to a merge_items REPLICATE request. Not client facing; internal clients or middlewares must include X-Backend-Allow-Private-Methods: true header to access. Bases: Controller Base WSGI controller for object requests. HTTP DELETE request handler. Handler for HTTP GET requests. Handle HTTP GET or HEAD requests. Handler for HTTP HEAD requests. HTTP POST request handler. HTTP PUT request handler. Yields nodes for a ring partition. If the write_affinity setting is non-empty, then this will yield N local nodes (as defined by the write_affinity setting) first, then the rest of the nodes as normal. It is a re-ordering of the nodes such that the local ones come first; no node is omitted. The effect is that the request will be serviced by local object servers first, but nonlocal ones will be employed if not enough local ones are available. ring ring to get nodes from partition ring partition to yield nodes for request nodes will be annotated with use_replication based on the request headers policy optional, an instance of BaseStoragePolicy localhandoffsfirst optional, if True prefer primaries and local handoff nodes first before looking elsewhere. Bases: object WSGI iterable that decodes EC fragment archives (or portions thereof) into the original object (or portions thereof). path objects path, sans v1 (e.g. /a/c/o) policy storage policy for this object internalpartsiters list of the response-document-parts iterators for the backend GET responses. For an M+K erasure code, the caller must supply M such iterables. range_specs list of dictionaries describing the ranges requested by the client. Each dictionary contains the start and end of the clients requested byte range as well as the start and end of the EC segments containing that byte range. fa_length length of the fragment archive, in bytes, if the response is a 200. If its a 206, then this is ignored. obj_length length of the object, in bytes. Learned from the headers in the GET response from the object server. logger a logger Start pulling data from the backends so that we can learn things like the real Content-Type that might only be in the multipart/byteranges response body. Update our response accordingly. Also, this is the first point at which we can learn the MIME boundary that our response has in the headers. We grab that so we can also use it in the body. None HTTPException on error Bases: GetterBase Create an iterator over a single fragment response body. an interator that yields chunks of bytes from a fragment response body. An iterator over responses to backend fragment GETs. 
Yields an instance of GetterSource if a response is good, otherwise None. Bases: object A helper class to encapsulate the properties of buckets in which fragment getters and alternate nodes are collected. Add another response to this" }, { "data": "Response buckets can be for fragments with the same timestamp, or for errors with the same status. Close buckets responses; they wont be used for a client response. Return a list of all useful sources. Where there are multiple sources associated with the same frag_index then only one is included. a list of sources, each source being a tuple of form (ECFragGetter, iter) The number of additional responses needed to complete this bucket; typically (ndata - resp_count). If the bucket has no durable responses, shortfall is extended out to replica count to ensure the proxy makes additional primary requests. Bases: object Manages all successful EC GET responses gathered by ECFragGetters. A response comprises a tuple of (<getter instance>, <parts iterator>). All responses having the same data timestamp are placed in an ECGetResponseBucket for that timestamp. The buckets are stored in the buckets dict which maps timestamp-> bucket. This class encapsulates logic for selecting the best bucket from the collection, and for choosing alternate nodes. Add a response to the collection. get An instance of ECFragGetter parts_iter An iterator over response body parts ValueError if the response etag or status code values do not match any values previously received for the same timestamp Return the best bucket in the collection. The best bucket is the newest timestamp with sufficient getters, or the closest to having sufficient getters, unless it is bettered by a bucket with potential alternate nodes. If there are no good buckets we return the least_bad bucket. An instance of ECGetResponseBucket or None if there are no buckets in the collection. Return the bad_bucket with the smallest shortfall Callback function that is installed in a NodeIter. Called on every call to NodeIter.next(), which means we can track the number of nodes to which GET requests have been made and selectively inject an alternate node, if we have one. A dict describing a node to which the next GET request should be made. Bases: BaseObjectController Bases: Putter Putter for backend PUT requests that use MIME. This is here mostly to wrap up the fact that all multipart PUTs are chunked because of the mime boundary footer trick and the first half of the two-phase PUT conversation handling. An HTTP PUT request that supports streaming. Connect to a backend node and send the headers. Override superclass method to notify object of need for support for multipart body with footers and optionally multiphase commit, and verify object servers capabilities. need_multiphase if True then multiphase support is required of the object server FooterNotSupported if needmetadatafooter is set but backend node cant process footers MultiphasePUTNotSupported if need_multiphase is set but backend node cant handle multiphase PUT Call when there is no more data to send. Overrides superclass implementation to send any footer metadata after object data. footer_metadata dictionary of metadata items to be sent as footers. Call when there are > quorum 2XX responses received. Send commit confirmations to all object nodes to finalize the PUT. Bases: object Decorator for Storage Policy implementations to register their ObjectController implementations. This also fills in a policy_type attribute on the class. 
Bases: object Putter for backend PUT requests. Encapsulates all the actions required to establish a connection with a storage node and stream data to that" }, { "data": "conn an HTTPConnection instance node dict describing storage node resp an HTTPResponse instance if connect() received final response path the object path to send to the storage node connect_duration time taken to initiate the HTTPConnection watchdog a spawned Watchdog instance that will enforce timeouts write_timeout time limit to write a chunk to the connection socket sendexceptionhandler callback called when an exception occured writing to the connection socket logger a Logger instance chunked boolean indicating if the request encoding is chunked Get 100-continue response indicating the end of 1st phase of a 2-phase commit or the final response, i.e. the one with status >= 200. Might or might not actually wait for anything. If we said Expect: 100-continue but got back a non-100 response, thatll be the thing returned, and we wont do any network IO to get it. OTOH, if we got a 100 Continue response and sent up the PUT requests body, then well actually read the 2xx-5xx response off the network here. timeout time to wait for a response informational if True then try to get a 100-continue response, otherwise try to get a final response. HTTPResponse Timeout if the response took too long Connect to a backend node and send the headers. Putter instance ConnectionTimeout if initial connection timed out ResponseTimeout if header retrieval timed out InsufficientStorage on 507 response from node PutterConnectError on non-507 server error response from node Call when there is no more data to send. Bases: BaseObjectController A generator to transform a source chunk to erasure coded chunks for each send call. The number of erasure coded chunks is as policy.ecnunique_fragments. Takes a byterange from the client and converts it into a byterange spanning the necessary segments. Handles prefix, suffix, and fully-specified byte ranges. clientrangetosegmentrange(100, 700, 512) = (0, 1023) clientrangetosegmentrange(100, 700, 256) = (0, 767) clientrangetosegmentrange(300, None, 256) = (256, None) client_start first byte of the range requested by the client client_end last byte of the range requested by the client segment_size size of an EC segment, in bytes a 2-tuple (segstart, segend) where seg_start is the first byte of the first segment, or None if this is a suffix byte range seg_end is the last byte of the last segment, or None if this is a prefix byte range We need to send container updates via enough object servers such that, if the object PUT succeeds, then the container update is durable (either its synchronously updated or written to async pendings). Qc = the quorum size for the container ring Qo = the quorum size for the object ring Rc = the replica count for the container ring Ro = the replica count (or EC N+K) for the object ring A durable container update is one thats made it to at least Qc nodes. To always be durable, we have to send enough container updates so that, if only Qo object PUTs succeed, and all the failed object PUTs had container updates, at least Qc updates remain. Since (Ro - Qo) object PUTs may fail, we must have at least Qc + Ro - Qo container updates to ensure that Qc of them remain. Also, each container replica is named in at least one object PUT request so that, when all requests succeed, no work is generated for the container replicator. 
Thus, at least Rc updates are" }, { "data": "container_replicas replica count for the container ring (Rc) container_quorum quorum size for the container ring (Qc) object_replicas replica count for the object ring (Ro) object_quorum quorum size for the object ring (Qo) Takes a byterange spanning some segments and converts that into a byterange spanning the corresponding fragments within their fragment archives. Handles prefix, suffix, and fully-specified byte ranges. segment_start first byte of the first segment segment_end last byte of the last segment segment_size size of an EC segment, in bytes fragment_size size of an EC fragment, in bytes a 2-tuple (fragstart, fragend) where frag_start is the first byte of the first fragment, or None if this is a suffix byte range frag_end is the last byte of the last fragment, or None if this is a prefix byte range Bases: object WSGI application for the proxy server. Check the configuration for possible errors Check response for error status codes and update error limiters as required. node a dict describing a node server_type the type of server from which the response was received (e.g. Object). response an instance of HTTPResponse. method the request method. path the request path. body an optional response body. If given, up to 1024 of the start of the body will be included in any log message. if the response status code is less than 500, False otherwise. Mark a node as error limited. This immediately pretends the node received enough errors to trigger error suppression. Use this for errors like Insufficient Storage. For other errors use increment(). node dictionary of node to error limit msg error message Check if the node is currently error limited. node dictionary of node to check True if error limited, False otherwise Handle logging, and handling of errors. node dictionary of node to handle errors for msg error message Handle logging of generic exceptions. node dictionary of node to log the error for typ server type additional_info additional information to log Get the controller to handle a request. req the request tuple of (controller class, path dictionary) ValueError (thrown by split_path) if given invalid path Get the ring object to use to handle a request based on its policy. policy_idx policy index as defined in swift.conf appropriate ring object Return policy specific options. policy an instance of BaseStoragePolicy or None an instance of ProxyOverrideOptions Entry point for proxy server. Should return a WSGI-style callable (such as swob.Response). req swob.Request object Called during WSGI pipeline creation. Modifies the WSGI pipeline context to ensure that mandatory middleware is present in the pipeline. pipe A PipelineWrapper object Sorts nodes in-place (and returns the sorted list) according to the configured strategy. The default sorting is to randomly shuffle the nodes. If the timing strategy is chosen, the nodes are sorted according to the stored timing data. nodes a list of nodes policy an instance of BaseStoragePolicy Bases: object Encapsulates proxy server options that may be overridden e.g. for policy specific configurations. conf the proxy-server config dict. override_conf a dict of overriding configuration options. paste.deploy app factory for creating WSGI proxy apps. Search the config file for any per-policy config sections and load those sections to a dict mapping policy reference (name or index) to policy options. 
conf the proxy server conf dict a dict mapping policy reference -> dict of policy options ValueError if a policy config section has an invalid name Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
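The container-update sizing rule derived earlier (at least Qc + Ro - Qo updates, and never fewer than Rc) reduces to a one-line computation. The following is a sketch of that arithmetic only, not Swift's actual helper; the EC-style numbers below are chosen purely as an example rather than taken from any particular policy:

```
def container_updates_needed(container_replicas, container_quorum,
                             object_replicas, object_quorum):
    # Enough updates that Qc remain durable even if only Qo object PUTs
    # succeed, and at least Rc so every container replica is named once.
    return max(container_replicas,
               container_quorum + object_replicas - object_quorum)

# Replicated policy: Rc=3, Qc=2, Ro=3, Qo=2 -> 3 updates.
print(container_updates_needed(3, 2, 3, 2))
# Example erasure-coded policy: Rc=3, Qc=2, Ro=14, Qo=11 -> 5 updates.
print(container_updates_needed(3, 2, 14, 11))
```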
[ { "data": "Normally to create, read and modify containers and objects, you must have the appropriate roles on the project associated with the account, i.e., you must be the owner of the account. However, an owner can grant access to other users by using an Access Control List (ACL). There are two types of ACLs: Container ACLs. These are specified on a container and apply to that container only and the objects in the container. Account ACLs. These are specified at the account level and apply to all containers and objects in the account. Container ACLs are stored in the X-Container-Write and X-Container-Read metadata. The scope of the ACL is limited to the container where the metadata is set and the objects in the container. In addition: X-Container-Write grants the ability to perform PUT, POST and DELETE operations on objects within a container. It does not grant the ability to perform POST or DELETE operations on the container itself. Some ACL elements also grant the ability to perform HEAD or GET operations on the container. X-Container-Read grants the ability to perform GET and HEAD operations on objects within a container. Some of the ACL elements also grant the ability to perform HEAD or GET operations on the container itself. However, a container ACL does not allow access to privileged metadata (such as X-Container-Sync-Key). Container ACLs use the V1 ACL syntax which is a comma separated string of elements as shown in the following example: ``` .r:,.rlistings,7ec59e87c6584c348b563254aae4c221: ``` Spaces may occur between elements as shown in the following example: ``` .r : , .rlistings, 7ec59e87c6584c348b563254aae4c221: ``` However, these spaces are removed from the value stored in the X-Container-Write and X-Container-Read metadata. In addition, the .r: string can be written as .referrer:, but is stored as .r:. While all auth systems use the same syntax, the meaning of some elements is different because of the different concepts used by different auth systems as explained in the following sections: Common ACL Elements Keystone Auth ACL Elements TempAuth ACL Elements The following table describes elements of an ACL that are supported by both Keystone auth and TempAuth. These elements should only be used with X-Container-Read (with the exception of .rlistings, an error will occur if used with X-Container-Write): | Element | Description | |:|:--| | .r:* | Any user has access to objects. No token is required in the request. | | .r:<referrer> | The referrer is granted access to objects. The referrer is identified by the Referer request header in the request. No token is required. | | .r:-<referrer> | This syntax (with - prepended to the referrer) is supported. However, it does not deny access if another element (e.g., .r:*) grants access. | | .rlistings | Any user can perform a HEAD or GET operation on the container provided the user also has read access on objects (e.g., also has .r:* or .r:<referrer>. No token is" }, { "data": "| Element Description .r:* Any user has access to objects. No token is required in the request. .r:<referrer> The referrer is granted access to objects. The referrer is identified by the Referer request header in the request. No token is required. .r:-<referrer> This syntax (with - prepended to the referrer) is supported. However, it does not deny access if another element (e.g., .r:*) grants access. .rlistings Any user can perform a HEAD or GET operation on the container provided the user also has read access on objects (e.g., also has .r:* or .r:<referrer>. 
No token is required. The following table describes elements of an ACL that are supported only by Keystone auth. Keystone auth also supports the elements described in Common ACL Elements. A token must be included in the request for any of these ACL elements to take effect. | Element | Description | |:--|:-| | <project-id>:<user-id> | The specified user, provided a token scoped to the project is included in the request, is granted access. Access to the container is also granted when used in X-Container-Read. | | <project-id>:* | Any user with a role in the specified Keystone project has access. A token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. | | *:<user-id> | The specified user has access. A token for the user (scoped to any project) must be included in the request. Access to the container is also granted when used in X-Container-Read. | | : | Any user has access. Access to the container is also granted when used in X-Container-Read. The : element differs from the .r: element because : requires that a valid token is included in the request whereas .r: does not require a token. In addition, .r:* does not grant access to the container listing. | | <role_name> | A user with the specified role name on the project within which the container is stored is granted access. A user token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. | Element Description <project-id>:<user-id> The specified user, provided a token scoped to the project is included in the request, is granted access. Access to the container is also granted when used in X-Container-Read. <project-id>:* Any user with a role in the specified Keystone project has access. A token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. *:<user-id> The specified user has access. A token for the user (scoped to any project) must be included in the request. Access to the container is also granted when used in X-Container-Read. : Any user has access. Access to the container is also granted when used in X-Container-Read. The : element differs from the" }, { "data": "element because : requires that a valid token is included in the request whereas .r:* does not require a token. In addition, .r:* does not grant access to the container listing. <role_name> A user with the specified role name on the project within which the container is stored is granted access. A user token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. Note Keystone project (tenant) or user names (i.e., <project-name>:<user-name>) must no longer be used because with the introduction of domains in Keystone, names are not globally unique. You should use user and project ids instead. For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee project, the grantee user and the project being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the Keystone V2 API) or are all in the default domain to which legacy accounts would have been migrated. The following table describes elements of an ACL that are supported only by TempAuth. TempAuth auth also supports the elements described in Common ACL Elements. | Element | Description | |:|:-| | <user-name> | The named user is granted access. 
The wildcard (*) character is not supported. A token from the user must be included in the request. | Element Description <user-name> The named user is granted access. The wildcard (*) character is not supported. A token from the user must be included in the request. Container ACLs may be set by including X-Container-Write and/or X-Container-Read headers with a PUT or a POST request to the container URL. The following examples use the swift command line client which support these headers being set via its --write-acl and --read-acl options. The following allows anybody to list objects in the www container and download objects. The users do not need to include a token in their request. This ACL is commonly referred to as making the container public. It is useful when used with StaticWeb: ``` swift post www --read-acl \".r:*,.rlistings\" ``` The following allows anybody to upload or download objects. However, to download an object, the exact name of the object must be known since users cannot list the objects in the container. The users must include a Keystone token in the upload request. However, it does not need to be scoped to the project associated with the container: ``` swift post www --read-acl \".r:\" --write-acl \":*\" ``` The following allows any member of the 77b8f82565f14814bece56e50c4c240f project to upload and download objects or to list the contents of the www" }, { "data": "A token scoped to the 77b8f82565f14814bece56e50c4c240f project must be included in the request: ``` swift post www --read-acl \"77b8f82565f14814bece56e50c4c240f:*\" \\ --write-acl \"77b8f82565f14814bece56e50c4c240f:*\" ``` The following allows any user that has been assigned the myreadaccess_role on the project within which the www container is stored to download objects or to list the contents of the www container. A user token scoped to the project must be included in the download or list request: ``` swift post www --read-acl \"myreadaccess_role\" ``` The following allows any request from the example.com domain to access an object in the container: ``` swift post www --read-acl \".r:.example.com\" ``` However, the request from the user must contain the appropriate Referer header as shown in this example request: ``` curl -i $publicURL/www/document --head -H \"Referer: http://www.example.com/index.html\" ``` Note The Referer header is included in requests by many browsers. However, since it is easy to create a request with any desired value in the Referer header, the referrer ACL has very weak security. Sharing a Container with another user requires the knowledge of few parameters regarding the users. The sharing user must know: the OpenStack user id of the other user The sharing user must communicate to the other user: the name of the shared container the OSSTORAGEURL Usually the OSSTORAGEURL is not exposed directly to the user because the swift client by default automatically construct the OSSTORAGEURL based on the User credential. We assume that in the current directory there are the two client environment script for the two users sharing.openrc and other.openrc. 
The sharing.openrc should be similar to the following: ``` export OS_USERNAME=sharing export OS_PASSWORD=password export OSTENANTNAME=projectName export OSAUTHURL=https://identityHost:portNumber/v2.0 export OSTENANTID=tenantIDString export OSREGIONNAME=regionName export OS_CACERT=/path/to/cacertFile ``` The other.openrc should be similar to the following: ``` export OS_USERNAME=other export OS_PASSWORD=otherPassword export OSTENANTNAME=otherProjectName export OSAUTHURL=https://identityHost:portNumber/v2.0 export OSTENANTID=tenantIDString export OSREGIONNAME=regionName export OS_CACERT=/path/to/cacertFile ``` For more information see using the OpenStack RC file First we figure out the other user id: ``` . other.openrc OUID=\"$(openstack user show --format json \"${OS_USERNAME}\" | jq -r .id)\" ``` or alternatively: ``` . other.openrc OUID=\"$(openstack token issue -f json | jq -r .user_id)\" ``` Then we figure out the storage url of the sharing user: ``` sharing.openrc SURL=\"$(swift auth | awk -F = '/OSSTORAGEURL/ {print $2}')\" ``` Running as the sharing user create a shared container named shared in read-only mode with the other user using the proper acl: ``` sharing.openrc swift post --read-acl \"*:${OUID}\" shared ``` Running as the sharing user create and upload a test file: ``` touch void swift upload shared void ``` Running as the other user list the files in the shared container: ``` other.openrc swift --os-storage-url=\"${SURL}\" list shared ``` Running as the other user download the shared container in the /tmp directory: ``` cd /tmp swift --os-storage-url=\"${SURL}\" download shared ``` Note Account ACLs are not currently supported by Keystone auth The X-Account-Access-Control header is used to specify account-level ACLs in a format specific to the auth system. These headers are visible and settable only by account owners (those for whom swift_owner is true). Behavior of account ACLs is" }, { "data": "In the case of TempAuth, if an authenticated user has membership in a group which is listed in the ACL, then the user is allowed the access level of that ACL. Account ACLs use the V2 ACL syntax, which is a JSON dictionary with keys named admin, read-write, and read-only. (Note the case sensitivity.) An example value for the X-Account-Access-Control header looks like this, where a, b and c are user names: ``` {\"admin\":[\"a\",\"b\"],\"read-only\":[\"c\"]} ``` Keys may be absent (as shown in above example). The recommended way to generate ACL strings is as follows: ``` from swift.common.middleware.acl import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } aclstring = formatacl(version=2, acldict=acldata) ``` Using the format_acl() method will ensure that JSON is encoded as ASCII (using e.g. u1234 for Unicode). While its permissible to manually send curl commands containing X-Account-Access-Control headers, you should exercise caution when doing so, due to the potential for human error. Within the JSON dictionary stored in X-Account-Access-Control, the keys have the following meanings: | Access Level | Description | |:|:--| | read-only | These identities can read everything (except privileged headers) in the account. Specifically, a user with read-only account access can get a list of containers in the account, list the contents of any container, retrieve any object, and see the (non-privileged) headers of the account, any container, or any object. | | read-write | These identities can read or write (or create) any container. 
A user with read-write account access can create new containers, set any unprivileged container headers, overwrite objects, delete containers, etc. A read-write user can NOT set account headers (or perform any PUT/POST/DELETE requests on the account). | | admin | These identities have swift_owner privileges. A user with admin account access can do anything the account owner can, including setting account headers and any privileged headers and thus granting read-only, read-write, or admin access to other users. | Access Level Description read-only These identities can read everything (except privileged headers) in the account. Specifically, a user with read-only account access can get a list of containers in the account, list the contents of any container, retrieve any object, and see the (non-privileged) headers of the account, any container, or any object. read-write These identities can read or write (or create) any container. A user with read-write account access can create new containers, set any unprivileged container headers, overwrite objects, delete containers, etc. A read-write user can NOT set account headers (or perform any PUT/POST/DELETE requests on the account). admin These identities have swift_owner privileges. A user with admin account access can do anything the account owner can, including setting account headers and any privileged headers and thus granting read-only, read-write, or admin access to other users. For more details, see swift.common.middleware.tempauth. For details on the ACL format, see swift.common.middleware.acl. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
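Tying the above together, here is a small sketch of generating an account ACL value with format_acl; the user name "bob" is just an example, and the comment shows one way an account owner might then apply the value:

```
from swift.common.middleware.acl import format_acl

# Build the V2 account ACL value the recommended way. The account owner
# can then set it on the account, for example with:
#   swift post -H "X-Account-Access-Control: <value>"
acl_value = format_acl(version=2, acl_dict={'read-only': ['bob']})
print(acl_value)  # a JSON dictionary such as {"read-only": ["bob"]}
```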
{ "category": "Runtime", "file_name": "contributing.html#ideas.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift supports the optional encryption of object data at rest on storage nodes. The encryption of object data is intended to mitigate the risk of users data being read if an unauthorised party were to gain physical access to a disk. Note Swifts data-at-rest encryption accepts plaintext object data from the client, encrypts it in the cluster, and stores the encrypted data. This protects object data from inadvertently being exposed if a data drive leaves the Swift cluster. If a user wishes to ensure that the plaintext data is always encrypted while in transit and in storage, it is strongly recommended that the data be encrypted before sending it to the Swift cluster. Encrypting on the client side is the only way to ensure that the data is fully encrypted for its entire lifecycle. Encryption of data at rest is implemented by middleware that may be included in the proxy server WSGI pipeline. The feature is internal to a Swift cluster and not exposed through the API. Clients are unaware that data is encrypted by this feature internally to the Swift service; internally encrypted data should never be returned to clients via the Swift API. The following data are encrypted while at rest in Swift: Object content i.e. the content of an object PUT requests body The entity tag (ETag) of objects that have non-zero content All custom user object metadata values i.e. metadata sent using X-Object-Meta- prefixed headers with PUT or POST requests Any data or metadata not included in the list above are not encrypted, including: Account, container and object names Account and container custom user metadata values All custom user metadata names Object Content-Type values Object size System metadata Note This feature is intended to provide confidentiality of data that is at rest i.e. to protect user data from being read by an attacker that gains access to disks on which object data is stored. This feature is not intended to prevent undetectable modification of user data at rest. This feature is not intended to protect against an attacker that gains access to Swifts internal network connections, or gains access to key material or is able to modify the Swift code running on Swift nodes. Encryption is deployed by adding two middleware filters to the proxy server WSGI pipeline and including their respective filter configuration sections in the proxy-server.conf file. Additional steps are required if the container sync feature is being used. The keymaster and encryption middleware filters must be to the right of all other middleware in the pipeline apart from the final proxy-logging middleware, and in the order shown in this example: ``` <other middleware> keymaster encryption proxy-logging proxy-server [filter:keymaster] use = egg:swift#keymaster encryptionrootsecret = your_secret [filter:encryption] use = egg:swift#encryption ``` See the proxy-server.conf-sample file for further details on the middleware configuration options. The keymaster middleware must be configured with a root secret before it is used. By default the keymaster middleware will use the root secret configured using the encryptionrootsecret option in the middleware filter section of the proxy-server.conf file, for example: ``` [filter:keymaster] use = egg:swift#keymaster encryptionrootsecret = your_secret ``` Root secret values MUST be at least 44 valid base-64 characters and should be consistent across all proxy servers. The minimum length of 44 has been chosen because it is the length of a base-64 encoded 32 byte value. 
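The openssl command shown a little further on is one way to produce such a value; the following Python sketch is an equivalent illustration, not part of Swift itself:

```
import base64
import os

# Base-64 encode 32 bytes from a cryptographically secure random source.
# This yields 44 characters, the documented minimum length for a root secret.
root_secret = base64.b64encode(os.urandom(32)).decode('ascii')

assert len(root_secret) >= 44
print(root_secret)  # paste this into the keymaster configuration
```

Encoding more than 32 random bytes is equally acceptable, since the 44-character figure is only a minimum.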
Note The encryptionrootsecret option holds the master secret key used for" }, { "data": "The security of all encrypted data critically depends on this key and it should therefore be set to a high-entropy value. For example, a suitable encryptionrootsecret may be obtained by base-64 encoding a 32 byte (or longer) value generated by a cryptographically secure random number generator. The encryptionrootsecret value is necessary to recover any encrypted data from the storage system, and therefore, it must be guarded against accidental loss. Its value (and consequently, the proxy-server.conf file) should not be stored on any disk that is in any account, container or object ring. The encryptionrootsecret value should not be changed once deployed. Doing so would prevent Swift from properly decrypting data that was encrypted using the former value, and would therefore result in the loss of that data. One method for generating a suitable value for encryptionrootsecret is to use the openssl command line tool: ``` openssl rand -base64 32 ``` The encryptionrootsecret option may alternatively be specified in a separate config file at a path specified by the keymasterconfigpath option, for example: ``` [filter:keymaster] use = egg:swift#keymaster keymasterconfigpath = /etc/swift/keymaster.conf ``` This has the advantage of allowing multiple processes which need to be encryption-aware (for example, proxy-server and container-sync) to share the same config file, ensuring that consistent encryption keys are used by those processes. It also allows the keymaster configuration file to have different permissions than the proxy-server.conf file. A separate keymaster config file should have a [keymaster] section containing the encryptionrootsecret option: ``` [keymaster] encryptionrootsecret = your_secret ``` Note Alternative keymaster middleware is available to retrieve encryption root secrets from an external key management system such as Barbican rather than storing root secrets in configuration files. Once deployed, the encryption filter will by default encrypt object data and metadata when handling PUT and POST requests and decrypt object data and metadata when handling GET and HEAD requests. COPY requests are transformed into GET and PUT requests by the Server Side Copy middleware before reaching the encryption middleware and as a result object data and metadata is decrypted and re-encrypted when copied. From time to time it may be desirable to change the root secret that is used to derive encryption keys for new data written to the cluster. The keymaster middleware allows alternative root secrets to be specified in its configuration using options of the form: ``` encryptionrootsecret<secretid> = <secret value> ``` where secret_id is a unique identifier for the root secret and secret value is a value that meets the requirements for a root secret described above. Only one root secret is used to encrypt new data at any moment in time. This root secret is specified using the activerootsecret_id option. If specified, the value of this option should be one of the configured root secret secretid values; otherwise the value of encryptionroot_secret will be taken as the default active root secret. Note The active root secret is only used to derive keys for new data written to the cluster. Changing the active root secret does not cause any existing data to be re-encrypted. Existing encrypted data will be decrypted using the root secret that was active when that data was written. 
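The effect of changing the active root secret can be pictured with a small, hypothetical sketch; real Swift records the secret id in the crypto-metadata of each encrypted item and uses it to choose the root secret on reads:

```
# Hypothetical, simplified illustration of root secret selection during rotation.
root_secrets = {
    None: 'FIRST_ROOT_SECRET',   # the default root secret (no secret id)
    '1': 'SECOND_ROOT_SECRET',   # a later root secret with secret id "1"
    '2': 'CURRENT_ROOT_SECRET',  # the currently active root secret
}
active_root_secret_id = '2'

def secret_for_new_data():
    # New writes always use the active root secret.
    return active_root_secret_id, root_secrets[active_root_secret_id]

def secret_for_existing_data(stored_secret_id):
    # Reads use whichever root secret was active when the data was written,
    # identified by the secret id stored alongside the encrypted data.
    return root_secrets[stored_secret_id]
```

An object written while secret id "1" was active will keep asking for secret id "1" on every subsequent read.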
All previous active root secrets must therefore remain in the middleware configuration in order for decryption of existing data to succeed. Existing encrypted data will reference previous root secret by the secret_id so it must be kept consistent in the configuration. Note Do not remove or change any previously active <secret value> or" }, { "data": "For example, the following keymaster configuration file specifies three root secrets, with the value of encryptionrootsecret_2 being the current active root secret: ``` [keymaster] activerootsecret_id = 2 encryptionrootsecret = your_secret encryptionrootsecret1 = yoursecret_1 encryptionrootsecret2 = yoursecret_2 ``` Note To ensure there is no loss of data availability, deploying a new key to your cluster requires a two-stage config change. First, add the new key to the encryptionrootsecret<secretid> option and restart the proxy-server. Do this for all proxies. Next, set the activerootsecret_id option to the new secret id and restart the proxy. Again, do this for all proxies. This process ensures that all proxies will have the new key available for decryption before any proxy uses it for encryption. Once deployed, the encryption filter will by default encrypt object data and metadata when handling PUT and POST requests and decrypt object data and metadata when handling GET and HEAD requests. COPY requests are transformed into GET and PUT requests by the Server Side Copy middleware before reaching the encryption middleware and as a result object data and metadata is decrypted and re-encrypted when copied. The benefits of using a dedicated system for storing the encryption root secret include the auditing and access control infrastructure that are already in place in such a system, and the fact that an encryption root secret stored in a key management system (KMS) may be backed by a hardware security module (HSM) for additional security. Another significant benefit of storing the root encryption secret in an external KMS is that it is in this case never stored on a disk in the Swift cluster. Swift supports fetching encryption root secrets from a Barbican service or a KMIP service using the kmskeymaster or kmipkeymaster middleware respectively. Make sure the required dependencies are installed for retrieving an encryption root secret from an external KMS. This can be done when installing Swift (add the -e flag to install as a development version) by changing to the Swift directory and running the following command to install Swift together with the kms_keymaster extra dependencies: ``` sudo pip install .[kms_keymaster] ``` Another way to install the dependencies is by making sure the following lines exist in the requirements.txt file, and installing them using pip install -r requirements.txt: ``` cryptography>=1.6 # BSD/Apache-2.0 castellan>=0.6.0 ``` Note If any of the required packages is already installed, the --upgrade flag may be required for the pip commands in order for the required minimum version to be installed. To make use of an encryption root secret stored in an external KMS, replace the keymaster middleware with the kms_keymaster middleware in the proxy server WSGI pipeline in proxy-server.conf, in the order shown in this example: ``` <other middleware> kms_keymaster encryption proxy-logging proxy-server ``` and add a section to the same file: ``` [filter:kms_keymaster] use = egg:swift#kms_keymaster keymasterconfigpath = filewithkmskeymasterconfig ``` Create or edit the file filewithkmskeymasterconfig referenced above. 
For further details on the middleware configuration options, see the keymaster.conf-sample file. An example of the content of this file, with optional parameters omitted, is below: ``` [kms_keymaster] key_id = changeme username = swift password = password project_name = swift auth_endpoint = http://keystonehost:5000/v3 ``` The encryption root secret shall be created and stored in the external key management system before it can be used by the keymaster. It shall be stored as a symmetric key, with content type application/octet-stream, base64 content encoding, AES algorithm, bit length 256, and secret type" }, { "data": "The mode ctr may also be stored for informational purposes - it is not currently checked by the keymaster. The following command can be used to store the currently configured encryptionrootsecret value from the proxy-server.conf file in Barbican: ``` openstack secret store --name swiftrootsecret \\ --payload-content-type=\"application/octet-stream\" \\ --payload-content-encoding=\"base64\" --algorithm aes --bit-length 256 \\ --mode ctr --secret-type symmetric --payload <base64encodedroot_secret> ``` Alternatively, the existing root secret can also be stored in Barbican using curl. Note The credentials used to store the secret in Barbican shall be the same ones that the proxy server uses to retrieve the secret, i.e., the ones configured in the keymaster.conf file. For clarity reasons the commands shown here omit the credentials - they may be specified explicitly, or in environment variables. Instead of using an existing root secret, Barbican can also be asked to generate a new 256-bit root secret, with content type application/octet-stream and algorithm AES (the mode parameter is currently optional): ``` openstack secret order create --name swiftrootsecret \\ --payload-content-type=\"application/octet-stream\" --algorithm aes \\ --bit-length 256 --mode ctr key ``` The order create creates an asynchronous request to create the actual secret. The order can be retrieved using openstack secret order get, and once the order completes successfully, the output will show the key id of the generated root secret. Keys currently stored in Barbican can be listed using the openstack secret list command. Note Both the order (the asynchronous request for creating or storing a secret), and the actual secret itself, have similar unique identifiers. Once the order has been completed, the key id is shown in the output of the order get command. The keymaster uses the explicitly configured username and password (and project name etc.) from the keymaster.conf file for retrieving the encryption root secret from an external key management system. The Castellan library is used to communicate with Barbican. For the proxy server, reading the encryption root secret directly from the proxy-server.conf file, from the keymaster.conf file pointed to from the proxy-server.conf file, or from an external key management system such as Barbican, are all functionally equivalent. In case reading the encryption root secret from the external key management system fails, the proxy server will not start up. If the encryption root secret is retrieved successfully, it is cached in memory in the proxy server. For further details on the configuration options, see the [filter:kms_keymaster] section in the proxy-server.conf-sample file, and the keymaster.conf-sample file. This middleware enables Swift to fetch a root secret from a KMIP service. 
The root secret is expected to have been previously created in the KMIP service and is referenced by its unique identifier. The secret should be an AES-256 symmetric key. To use this middleware Swift must be installed with the extra required dependencies: ``` sudo pip install .[kmip_keymaster] ``` Add the -e flag to install as a development version. Edit the swift proxy-server.conf file to insert the middleware in the wsgi pipeline, replacing any other keymaster middleware: ``` [pipeline:main] pipeline = catch_errors gatekeeper healthcheck proxy-logging \\ <other middleware> kmip_keymaster encryption proxy-logging proxy-server ``` and add a new filter section: ``` [filter:kmip_keymaster] use = egg:swift#kmip_keymaster key_id = <unique id of secret to be fetched from the KMIP service> host = <KMIP server host> port = <KMIP server port> certfile = /path/to/client/cert.pem keyfile = /path/to/client/key.pem ca_certs = /path/to/server/cert.pem username = <KMIP username> password = <KMIP password> ``` Apart from use and key_id the options are as defined for a PyKMIP" }, { "data": "The authoritative definition of these options can be found at https://pykmip.readthedocs.io/en/latest/client.html. The value of the key_id option should be the unique identifier for a secret that will be retrieved from the KMIP service. The keymaster configuration can alternatively be defined in a separate config file by using the keymasterconfigpath option: ``` [filter:kmip_keymaster] use = egg:swift#kmip_keymaster keymasterconfigpath = /etc/swift/kmip_keymaster.conf ``` In this case, the filter:kmip_keymaster section should contain no other options than use and keymasterconfigpath. All other options should be defined in the separate config file in a section named kmip_keymaster. For example: ``` [kmip_keymaster] key_id = 1234567890 host = 127.0.0.1 port = 5696 certfile = /etc/swift/kmip_client.crt keyfile = /etc/swift/kmip_client.key cacerts = /etc/swift/kmipserver.crt username = swift password = swift_password ``` Because the KMS and KMIP keymasters derive from the default KeyMaster they also have to ability to define multiple keys. The only difference is the key option names. Instead of using the form encryptionrootsecret<secretid> both external KMSs use keyid<secret_id>, as it is an extension of their existing configuration. For example: ``` ... key_id = 1234567890 keyidfoo = 0987654321 keyidbar = 5432106789 activerootsecret_id = foo ... ``` Other then that, the process is the same as Changing the encryption root secret. When upgrading an existing cluster to deploy encryption, the following sequence of steps is recommended: Upgrade all object servers Upgrade all proxy servers Add keymaster and encryption middlewares to every proxy servers middleware pipeline with the encryption disable_encryption option set to True and the keymaster encryptionrootsecret value set as described above. If required, follow the steps for Container sync configuration. Finally, change the encryption disable_encryption option to False Objects that existed in the cluster prior to the keymaster and encryption middlewares being deployed are still readable with GET and HEAD requests. The content of those objects will not be encrypted unless they are written again by a PUT or COPY request. Any user metadata of those objects will not be encrypted unless it is written again by a PUT, POST or COPY request. Once deployed, the keymaster and encryption middlewares should not be removed from the pipeline. 
To do so will cause encrypted object data and/or metadata to be returned in response to GET or HEAD requests for objects that were previously encrypted. Encryption of inbound object data may be disabled by setting the encryption disable_encryption option to True, in which case existing encrypted objects will remain encrypted but new data written with PUT, POST or COPY requests will not be encrypted. The keymaster and encryption middlewares should remain in the pipeline even when encryption of new objects is not required. The encryption middleware is needed to handle GET requests for objects that may have been previously encrypted. The keymaster is needed to provide keys for those requests. If container sync is being used then the keymaster and encryption middlewares must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the middlewares in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internalclientconf_path option. For example: ``` internalclientconf_path =" }, { "data": "``` Note The encryptionrootsecret value is necessary to recover any encrypted data from the storage system, and therefore, it must be guarded against accidental loss. Its value (and consequently, the custom internal client configuration file) should not be stored on any disk that is in any account, container or object ring. Note These container sync configuration steps will be necessary for container sync probe tests to pass if the encryption middlewares are included in the proxy pipeline of a test cluster. Plaintext data is encrypted to ciphertext using the AES cipher with 256-bit keys implemented by the python cryptography package. The cipher is used in counter (CTR) mode so that any byte or range of bytes in the ciphertext may be decrypted independently of any other bytes in the ciphertext. This enables very simple handling of ranged GETs. In general an item of unencrypted data, plaintext, is transformed to an item of encrypted data, ciphertext: ``` ciphertext = E(plaintext, k, iv) ``` where E is the encryption function, k is an encryption key and iv is a unique initialization vector (IV) chosen for each encryption context. For example, the object body is one encryption context with a randomly chosen IV. The IV is stored as metadata of the encrypted item so that it is available for decryption: ``` plaintext = D(ciphertext, k, iv) ``` where D is the decryption function. The implementation of CTR mode follows NIST SP800-38A, and the full IV passed to the encryption or decryption function serves as the initial counter block. In general any encrypted item has accompanying crypto-metadata that describes the IV and the cipher algorithm used for the encryption: ``` crypto_metadata = {\"iv\": <16 byte value>, \"cipher\": \"AESCTR256\"} ``` This crypto-metadata is stored either with the ciphertext (for user metadata and etags) or as a separate header (for object bodies). A keymaster middleware is responsible for providing the keys required for each encryption and decryption operation. 
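The E and D functions above can be illustrated with the python cryptography package that the document names as the implementation. The key and IV below are generated on the spot purely for the illustration; in Swift they come from the keymaster and the stored crypto-metadata respectively:

```
import os
# cryptography >= 3.1: no explicit backend argument is needed.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(plaintext, key, iv):
    # ciphertext = E(plaintext, k, iv): AES-256 in CTR mode, with the full
    # 16-byte IV used as the initial counter block.
    encryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def decrypt(ciphertext, key, iv):
    # plaintext = D(ciphertext, k, iv): CTR mode decryption mirrors encryption.
    decryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

key = os.urandom(32)  # stand-in for a key provided by the keymaster
iv = os.urandom(16)   # a unique IV chosen for this encryption context
ciphertext = encrypt(b'object body bytes', key, iv)
assert decrypt(ciphertext, key, iv) == b'object body bytes'
```

Because CTR mode turns the block cipher into a stream cipher, any byte range of the ciphertext can be decrypted on its own, which is what keeps ranged GETs simple.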
Two keys are required when handling object requests: a container key that is uniquely associated with the container path and an object key that is uniquely associated with the object path. These keys are made available to the encryption middleware via a callback function that the keymaster installs in the WSGI request environ. The current keymaster implementation derives container and object keys from the encryptionrootsecret in a deterministic way by constructing a SHA256 HMAC using the encryptionrootsecret as a key and the container or object path as a message, for example: ``` objectkey = HMAC(encryptionroot_secret, \"/a/c/o\") ``` Other strategies for providing object and container keys may be employed by future implementations of alternative keymaster middleware. During each object PUT, a random key is generated to encrypt the object body. This random key is then encrypted using the object key provided by the keymaster. This makes it safe to store the encrypted random key alongside the encrypted object data and metadata. This process of key wrapping enables more efficient re-keying events when the object key may need to be replaced and consequently any data encrypted using that key must be re-encrypted. Key wrapping minimizes the amount of data encrypted using those keys to just other randomly chosen keys which can be re-wrapped efficiently without needing to re-encrypt the larger amounts of data that were encrypted using the random keys. Note Re-keying is not currently implemented. Key wrapping is implemented in anticipation of future re-keying operations. The encryption middleware is composed of an encrypter component and a decrypter" }, { "data": "The encrypter encrypts each item of custom user metadata using the object key provided by the keymaster and an IV that is randomly chosen for that metadata item. The encrypted values are stored as Object Transient-Sysmeta with associated crypto-metadata appended to the encrypted value. For example: ``` X-Object-Meta-Private1: value1 X-Object-Meta-Private2: value2 ``` are transformed to: ``` X-Object-Transient-Sysmeta-Crypto-Meta-Private1: E(value1, objectkey, headeriv1); swiftmeta={\"iv\": headeriv1, \"cipher\": \"AESCTR256\"} X-Object-Transient-Sysmeta-Crypto-Meta-Private2: E(value2, objectkey, headeriv2); swiftmeta={\"iv\": headeriv2, \"cipher\": \"AESCTR256\"} ``` The unencrypted custom user metadata headers are removed. Encryption of an object body is performed using a randomly chosen body key and a randomly chosen IV: ``` bodyciphertext = E(bodyplaintext, bodykey, bodyiv) ``` The body_key is wrapped using the object key provided by the keymaster and a randomly chosen IV: ``` wrappedbodykey = E(bodykey, objectkey, bodykeyiv) ``` The encrypter stores the associated crypto-metadata in a system metadata header: ``` X-Object-Sysmeta-Crypto-Body-Meta: {\"iv\": body_iv, \"cipher\": \"AESCTR256\", \"bodykey\": {\"key\": wrappedbody_key, \"iv\": bodykeyiv}} ``` Note that in this case there is an extra item of crypto-metadata which stores the wrapped body key and its IV. While encrypting the object body the encrypter also calculates the ETag (md5 digest) of the plaintext body. 
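Key wrapping as described above is just another application of the same cipher: the randomly chosen body key is encrypted under the object key and stored, together with its IV, in the object's crypto-metadata. A minimal self-contained sketch follows; the helper and variable names are illustrative, not Swift's internal API:

```
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr(data, key, iv):
    # AES-CTR is its own inverse: the same operation encrypts and decrypts.
    cryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return cryptor.update(data) + cryptor.finalize()

object_key = os.urandom(32)   # provided by the keymaster for this object path
body_key = os.urandom(32)     # random key chosen for this object PUT
body_key_iv = os.urandom(16)

# wrapped_body_key = E(body_key, object_key, body_key_iv), stored with the object.
wrapped_body_key = aes_ctr(body_key, object_key, body_key_iv)

# On GET the body key is recovered before the body itself can be decrypted.
assert aes_ctr(wrapped_body_key, object_key, body_key_iv) == body_key
```

If the object key ever needs to change, only this small wrapped value has to be re-encrypted rather than the whole object body. The plaintext ETag computed while the body is being encrypted is the remaining item the encrypter still has to protect.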
This value is encrypted using the object key provided by the keymaster and a randomly chosen IV, and saved as an item of system metadata, with associated crypto-metadata appended to the encrypted value: ``` X-Object-Sysmeta-Crypto-Etag: E(md5(plaintext), objectkey, etagiv); swiftmeta={\"iv\": etagiv, \"cipher\": \"AESCTR256\"} ``` The encrypter also forces an encrypted version of the plaintext ETag to be sent with container updates by adding an update override header to the PUT request. The associated crypto-metadata is appended to the encrypted ETag value of this update override header: ``` X-Object-Sysmeta-Container-Update-Override-Etag: E(md5(plaintext), containerkey, overrideetag_iv); meta={\"iv\": overrideetagiv, \"cipher\": \"AESCTR256\"} ``` The container key is used for this encryption so that the decrypter is able to decrypt the ETags in container listings when handling a container request, since object keys may not be available in that context. Since the plaintext ETag value is only known once the encrypter has completed processing the entire object body, the X-Object-Sysmeta-Crypto-Etag and X-Object-Sysmeta-Container-Update-Override-Etag headers are sent after the encrypted object body using the proxy servers support for request footers. In general, an object server evaluates conditional requests with If[-None]-Match headers by comparing values listed in an If[-None]-Match header against the ETag that is stored in the object metadata. This is not possible when the ETag stored in object metadata has been encrypted. The encrypter therefore calculates an HMAC using the object key and the ETag while handling object PUT requests, and stores this under the metadata key X-Object-Sysmeta-Crypto-Etag-Mac: ``` X-Object-Sysmeta-Crypto-Etag-Mac: HMAC(object_key, md5(plaintext)) ``` Like other ETag-related metadata, this is sent after the encrypted object body using the proxy servers support for request footers. The encrypter similarly calculates an HMAC for each ETag value included in If[-None]-Match headers of conditional GET or HEAD requests, and appends these to the If[-None]-Match header. The encrypter also sets the X-Backend-Etag-Is-At header to point to the previously stored X-Object-Sysmeta-Crypto-Etag-Mac metadata so that the object server evaluates the conditional request by comparing the HMAC values included in the If[-None]-Match with the value stored under X-Object-Sysmeta-Crypto-Etag-Mac. 
For example, given a conditional request with header: ``` If-Match: match_etag ``` the encrypter would transform the request headers to include: ``` If-Match: matchetag,HMAC(objectkey, match_etag) X-Backend-Etag-Is-At: X-Object-Sysmeta-Crypto-Etag-Mac ``` This enables the object server to perform an encrypted comparison to check whether the ETags match, without leaking the ETag itself or leaking information about the object" }, { "data": "For each GET or HEAD request to an object, the decrypter inspects the response for encrypted items (revealed by crypto-metadata headers), and if any are discovered then it will: Fetch the object and container keys from the keymaster via its callback Decrypt the X-Object-Sysmeta-Crypto-Etag value Decrypt the X-Object-Sysmeta-Container-Update-Override-Etag value Decrypt metadata header values using the object key Decrypt the wrapped body key found in X-Object-Sysmeta-Crypto-Body-Meta Decrypt the body using the body key For each GET request to a container that would include ETags in its response body, the decrypter will: GET the response body with the container listing Fetch the container key from the keymaster via its callback Decrypt any encrypted ETag entries in the container listing using the container key Encryption has no impact on Versioned Writes other than that any previously unencrypted objects will be encrypted as they are copied to or from the versions container. Keymaster and encryption middlewares should be placed after versioned_writes in the proxy server pipeline, as described in Deployment and operation. Container Sync uses an internal client to GET objects that are to be syncd. This internal client must be configured to use the keymaster and encryption middlewares as described above. Encryption has no impact on the object-auditor service. Since the ETag header saved with the object at rest is the md5 sum of the encrypted object body then the auditor will verify that encrypted data is valid. Encryption has no impact on the object-expirer service. X-Delete-At and X-Delete-After headers are not encrypted. Encryption has no impact on the object-replicator and object-reconstructor services. These services are unaware of the object or EC fragment data being encrypted. Encryption has no impact on the container-reconciler service. The container-reconciler uses an internal client to move objects between different policy rings. The reconcilers pipeline MUST NOT have encryption enabled. The destination object has the same URL as the source object and the object is moved without re-encryption. Developers should be aware that keymaster and encryption middlewares rely on the path of an object remaining unchanged. The included keymaster derives keys for containers and objects based on their paths and the encryptionrootsecret. The keymaster does not rely on object metadata to inform its generation of keys for GET and HEAD requests because when handling Conditional Requests it is required to provide the object key before any metadata has been read from the object. Developers should therefore give careful consideration to any new features that would relocate object data and metadata within a Swift cluster by means that do not cause the object data and metadata to pass through the encryption middlewares in the proxy pipeline and be re-encrypted. The crypto-metadata associated with each encrypted item does include some key_id metadata that is provided by the keymaster and contains the path used to derive keys. 
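The path-based derivation referred to here is the SHA256 HMAC construction described earlier. A minimal sketch, with a placeholder root secret and example paths (the exact encoding details inside Swift's keymaster are simplified away):

```
import base64
import hashlib
import hmac
import os

# Placeholder root secret of the documented form (base-64 of 32 random bytes).
root_secret = base64.b64encode(os.urandom(32))

def derive_key(path):
    # e.g. container_key = HMAC(root_secret, "/a/c")
    #      object_key    = HMAC(root_secret, "/a/c/o")
    return hmac.new(root_secret, path.encode('utf-8'),
                    digestmod=hashlib.sha256).digest()

container_key = derive_key('/a/c')
object_key = derive_key('/a/c/o')
assert len(object_key) == 32  # 256 bits, suitable for AES-256
```

Because the keys are a pure function of the root secret and the path, recording the path in the key_id crypto-metadata is enough to re-derive the same keys later.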
This key_id metadata is persisted in anticipation of future scenarios when it may be necessary to decrypt an object that has been relocated without re-encrypting, in which case the metadata could be used to derive the keys that were used for encryption. However, this alone is not sufficient to handle conditional requests and to decrypt container listings where objects have been relocated, and further work will be required to solve those issues. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]