{ "category": "App Definition and Development", "file_name": "docs.crunchybridge.com.md", "project_name": "Crunchy Postgres Operator", "subcategory": "Database" }
[ { "data": "Home Changelog Connecting Getting started Migrating to Crunchy Bridge Concepts How to Insights and metrics Guides and best practices Troubleshooting Extensions Analytics Citus Container apps API concepts API reference Crunchy Bridge is a fully managed Postgres service from Crunchy Data. It takes care of running and managing Postgres so that you can stay focused on building your applications, not on keeping your database up and running. Bridge provides highly available Postgres across every major cloud provider Amazon Web Services, Google Cloud Platform, and Microsoft Azure so regardless of where you're running, you can closely colocate your database. The Dashboard is the graphical frontend to Crunchy Bridge, giving users an easy way to perform all common operations involved in provisioning and configuring database clusters visually. This is a great place to get started with the product. cb provides the whole range of database operations as a CLI (command line interface). This is a useful tool for power users looking to perform operations quickly and precisely, and those trying to build simple automations. The web API is a fully-feature programmatic REST-ful interface to Bridge and is particularly useful for advanced automations. The Crunchy Dashboard is built on the API, so the platform's entire feature set is fully available. Connect your Postgres instance to your application or user interface. Set up a firewall or peer to a private cloud network. Configure custom log drains to many common log providers and get improved visibility. If you're looking for support on running and managing Postgres on your own premises, consider taking a look at Crunchy PostgreSQL for Kubernetes. 2024 Crunchy Data. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "index.html.md", "project_name": "Couchbase", "subcategory": "Database" }
[ { "data": "Couchbase Capella is the easiest way to use the Couchbase NoSQL database and its many services, from SQL-like querying to connecting to mobile devices at the edge with our App Server. Sign up todayor read on to help you choose which service fits your app. New here? Take the tour, and see what Couchbase Capella can do for you. Sign Up Getting Started Tutorial SDKs for Client Apps Connect mobile & edge devices, or a stateless front end Web app Speed up and Secure with a Private Network Data Service SQL++ Queries Database Backups Roles Database Credentials Security Best Practices Database Sizing Database Scaling Data Replication (XDCR) Manage Deployments with the Management API Billing Manage Billing Pricing Capella UI Authentication Capella Account Settings Manage Organizations Supported Instance Types SDK Compatibility Management API Reference" } ]
{ "category": "App Definition and Development", "file_name": "why-couchbase.html.md", "project_name": "Couchbase", "subcategory": "Database" }
[ { "data": "Please use the form below to provide your feedback. Because your feedback is valuable to us, the information you submit in this form is recorded in our issue tracking system (JIRA), which is publicly available. You can track the status of your feedback using the ticket number displayed in the dialog once you submit the form. Couchbase is the modern database for enterprise applications. Couchbase is a distributed, JSON document database, with all the desired capabilities of a relational DBMS. It is a robust database, built for microservices and serverless consumption-based computing on the cloud on one end, and edge computing for occasionally and locally connected edge Mobile/IoT devices on the other. Couchbase manages JSON documents, eliminating the need for a hard coded schema in the database. The application object definition, available within JSON, is the schema controlled by the developer. Developers write the JSON once into this database and apply multiple data processing capabilities on it. In addition to the SQL-like programmability, Couchbase offers caching, key- value store, full-text search (for information retrieval), analytics (for ad-hoc querying), and Event driven (reactive) programming on this single copy of data. Couchbase is designed to interleave transactions with these high-performance operations at large scale. Developers are offered the freedom to pay the price of a transaction only when needed. Consequently, Couchbase can serve as a reliable system of record, while concurrently handling key-value operations of microseconds latency, SQL queries and text searches in milliseconds, and ad-hoc analytical queries spanning tens of seconds, one not impeding the other. These unique design choices in Couchbase lead to reduced data sprawl, improved security, decreased administration, and lower cost. But most importantly, it enables developers to write applications once and deploy them at any scale. Couchbases distributed streaming architecture is designed for no single point of failure. This enables elastic scaling, resource fencing as well as instantaneous data replication for high- availability, global geo-distribution, and disaster-recovery. The result is a database that is resilient, cost-efficient, and built for metered usage. This cloud-native architecture combined with Kubernetes (K8s) delivers a self-managed autonomous database. Couchbase is: A distributed database. No more scaling or availability issues. A cache and a database. No more cache invalidation or coherency issues. A database and a search engine. No more crawlers. An operational and an analytical database. No more ETL. A desktop, mobile, and clusterable database. No compromises between Server and Mobile. Couchbase is the modern database for Cloud and Edge." } ]
{ "category": "App Definition and Development", "file_name": "getting-started.md", "project_name": "Crux", "subcategory": "Database" }
[ { "data": "If you want to avoid running your own XTDB server locally, you can instantly play with inserting data and querying XTDB right now using the XT Play website. XTDB is built for traditional client-server interactions over HTTP. You can start an XTDB server using the following Docker image: ``` docker run --pull=always -tip 6543:3000 ghcr.io/xtdb/xtdb-standalone-ea``` This starts a server on http://localhost:6543. By default your data will only be stored transiently using a local directory within the Docker image, but you can attach a host volume to preserve your data across container restarts, e.g. by adding -v /tmp/xtdb-data-dir:/var/lib/xtdb. You can then connect to your XTDB server using cURL, or similar tools. For example, to check the status of the server: ``` curl http://localhost:6543/status``` To work with XTDB interactively right away, you can run raw SQL commands using the xtsql.py terminal console: ``` curl -s https://docs.xtdb.com/xtsql.py -O && \\ python xtsql.py --url=http://localhost:6543``` In addition to running SQL statements, xtsql supports a number of built-in commands, e.g. type 'status' and hit Enter to confirm XTDB is running and that you are connected: ``` -> status | latestCompletedTx | latestSubmittedTx | |-|-| | NULL | NULL |``` If that returns a table like the above, then everything should be up and running. Next up, you probably want to start by inserting data, running your first queries, and learning about XTDBs novel capabilities: Lets INSERT some data! Note that xtsql is just a lightweight wrapper around XTDBs HTTP + JSON API, and that client drivers are also provided for a range of languages to make working with XTDB more ergonomic than working with HTTP directly." } ]
{ "category": "App Definition and Development", "file_name": "concepts.md", "project_name": "Crux", "subcategory": "Database" }
[ { "data": "XTDB is a database built on top of columnar storage (using Apache Arrow) and is designed to handle evolving, semi-structured data via SQL natively, avoiding the need for workarounds like JSONB columns and audit tables. In XTDB all data, including nested data, is able to be stored in tables without classic constraints or restrictions in the range of types or shapes, and without any upfront schema definitions. This is more akin to a document database than a classic SQL database. Additionally XTDB allows for INSERTs into tables that dont exist until the moment you insert into them. The only hard requirement in XTDB is that all INSERTs to any table must specify a value for the xt$id column. This is a mandatory, user-provided primary key that is unique per table (i.e. the same xt$id can be safely used in isolation across multiple tables). For details about the exact range of supported data types, see XTDB Data Types. In addition to xt$id, which is the only mandatory column, 2 pairs of system-maintained temporal columns exist which track system time and valid time periods respectively: | SQL Column Name | XTDB Type | |:|:| | xt$system_from | TIMESTAMP WITH TIMEZONE | | xt$system_to | TIMESTAMP WITH TIMEZONE | | xt$valid_from | TIMESTAMP WITH TIMEZONE | | xt$valid_to | TIMESTAMP WITH TIMEZONE | xt$system_from TIMESTAMP WITH TIMEZONE xt$system_to TIMESTAMP WITH TIMEZONE xt$valid_from TIMESTAMP WITH TIMEZONE xt$valid_to TIMESTAMP WITH TIMEZONE As implied by the nature of these columns, no rows written into XTDB are ever mutated directly, and only new rows can be inserted. The only exceptions to this principle are: the existence of the ERASE operation, which can permanently delete entire rows for explicit data retention policies, and, the need to 'close' the xt$system_to period for the now-superseded row XTDB tracks both the system time when data is inserted (or UPDATE-d) into the database, and also the valid time periods that define exactly when a given row/record/document is considered valid/effective in your application. This combination of system and valid time dimensions is called \"bitemporality\" and in XTDB all data is bitemporal without having to think about storing or updating additional columns. The system time columns are useful for audit purposes for providing a stable, immutable 'basis' for running repeatable queries (i.e. queries that return the same answer today as they did last week). These columns cannot be modified. The timestamp used is governed by XTDBs single-writer Write-Ahead Log component which is used for serial transaction processing. The valid time columns can be updated and modified at will, but only for new versions of a given record (i.e. a new row sharing the same xt$id in the same table). Valid time is useful for scenarios where it is crucial to be able offer time-travel queries whilst supporting out of order arrival of information, and including corrections to past data while maintaining a general sense of immutability. Queries are assumed to query 'now' unless otherwise specified and timestamps are recorded automatically with intuitive values unless otherwise specified. Non-valid historical data is filtered out during low-level processing at the heart of the internal design. This allows developers to focus on their essential domain problems without also worrying about their accidental bitemporal problems. XTDBs approach to temporality is inspired by SQL:2011, but makes it ubiquitous, practical and transparent during day-to-day development." } ]
{ "category": "App Definition and Development", "file_name": "what-is-xtdb.md", "project_name": "Crux", "subcategory": "Database" }
[ { "data": "XTDB is a 'bitemporal' and 'dynamic' relational database for handling regulated data. XTDB is a transactional system designed for powering applications while also being amenable to analytical querying thanks to its internal columnar architecture built on Apache Arrow. XTDB is open source and runs on the JVM. XTDB tracks both the system time when data is inserted (or UPDATE-d) into the database, and also the valid time periods that define exactly when a given row/record/document is considered valid/effective in your application. This combination of system and valid time dimensions is called \"bitemporality\" and in XTDB all data is bitemporal without having to think about storing or updating additional columns. All data is time-versioned automatically. This system-maintained time-versioning allows application queries to easily access the correct state of the entire application history \"as-at\" any given moment, and to trivially audit all changes to the database. In other words, this unlocks the complete history of data for rich analysis and allows applications to cope with out of order arrival of information, including corrections to past data while maintaining a general sense of immutability. XTDBs approach to temporality is inspired by SQL:2011, but makes it ubiquitous, practical and transparent during day-to-day development. All tables include 4 temporal columns by default which are maintained automatically. However queries are assumed to query 'now' unless otherwise specified. Non-valid historical data is filtered out during low-level processing at the heart of the internal design. Unlike most transactional database systems, XTDB implements a columnar data architecture that \"separates storage and compute\" - this modern, Big-Data-inspired architecture is built around Apache Arrow and commodity object storage (e.g. S3). Most importantly, this design reduces operational costs when retaining large volumes of historical data. Transaction processing is strictly serial and strongly consistent (ACID), based on deterministic ordering of non-interactive transactions. All nodes in an XTDB cluster are replicas reading from a single, shared Write-Ahead Log. This design implies a hard upper limit on transaction throughput (since all processing must ultimately happen via a single thread) but the key advantage of this design is the concrete information guarantees about exactly when, how & why data across the database has changed. The columnar engine within XTDB is able to handle \"documents\" as wide rows in sparse tables, where any given value in a column may contain arbitrarily nested data without any need for upfront schema design. The full range of built-in types is supported within these nested structures (i.e. unlike JSONB). This enables developers to easily use XTDB either as a store of loosely structured documents, or as a more traditional normalized database, or both at the same" }, { "data": "Unlike typical SQL tables with row-oriented storage, XTDBs columnar tables are always 'sparse' (storing NULLs is cheap) and 'wide' (storing lots of columns is efficient). XTDB offers two interoperable query languages - one for reach (SQL) and one for developer productivity (XTQL). SQL in XTDB is a first-class citizen, built to reflect the SQL:2011 standard (which first introduced bitemporal capabilities to the SQL standard) and conforms to a broad suite of SQLite Logic Tests. 
XTQL is a novel relational database language that extends the power of SQL and its standard library to a more composable format that can be written or generated by client libraries using a JSON API. The two languages are able to interoperate with 100% parity, meaning application developers can use the APIs as they see fit without sacrificing analytical requirements or compromising on functionality. Supports the full spectrum between normalized relational modeling and dynamic document-like storage without compromising data type fidelity (i.e. unlike JSONB). The combination of a native SQL implementation alongside XTQL offers a more productive application development experience without sacrificing rich data analysis (and without ETL to another system). Strong data consistency built around linearized, single-writer transaction processing. Accurate and immutable temporal record versioning to mitigate the complexities of application logic and handle out-of-order data ingestion. Apache Arrow unlocks data for external integration. Advanced temporal querying allows you to analyze the evolution of your data. Deploy across your choice of cloud database services or on-premise to meet reliability and redundancy requirements. Through each of these interconnected principles and features XTDB solves the motivating problems in a single, coherent system. The unify operator combines multiple input relations using 'unification constraints' (similar to join conditions). Each input relation (e.g. from) defines a set of 'logic variables' in its bindings. If a logic variable appears more than once within a single unify clause, the results are constrained such that the logic variable has the same value everywhere it's used. This has the effect of imposing 'join conditions' over the inputs. For example, imagine 'for each order, get me the customer name, order-id and order value' ``` SELECT c.customer_name, o.xt$id AS order_id, o.order_value FROM customers c JOIN orders o ON (o.customer_id = c.xt$id)``` In XTQL, we specify the join condition by re-using a logic variable (customer_id), constraining the two input tables to have the same value for o.customer_id and" }, { "data": "c.xt$id (customer table primary key): ``` [ { \"unify\": [ // bind `customer_id` to the `xt$id` of the `customers` table { \"from\": \"customers\", \"bind\": [ { \"xt$id\": \"customerId\" }, \"customerName\" ] }, // also bind `customer_id` to the `customer_id` of the `orders` table { \"from\": \"orders\", \"bind\": [ { \"xt$id\": \"orderId\" }, \"customerId\", \"orderValue\" ] } ] }, { \"return\": [ \"customerName\", \"orderId\", \"orderValue\" ]} ]``` ``` (-> (unify (from :customers [{:xt/id customer-id} customer-name]) (from :orders [{:xt/id order-id} customer-id order-value])) (return customer-name order-id order-value))``` The unify operator accepts 'unify clauses' - e.g. from, where, with, join, left-join - a full list of which can be found in the unify clause reference guide. XTQL can also be used in XTDB transactions to insert, update, delete and erase documents based on an XTQL query. It uses the same query language as above, with a small wrapper for each of the operations. These queries are evaluated on XTDB's single-writer thread, so are guaranteed the strongest level of consistency. We can submit 'insert' operations to XTDB - these evaluate a query, and insert every result into the given table. e.g.
INSERT INTO users2 SELECT xt$id, first_name AS given_name, last_name AS family_name FROM users: ``` { \"insert\": \"users2\", \"query\": { \"from\": \"users\", \"bind\": [ \"xt$id\", {\"firstName\": \"givenName\"}, {\"lastName\": \"familyName\"}] } }``` ``` [:insert-into :users2 '(from :users [xt/id {:first-name given-name, :last-name family-name} xt/valid-from xt/valid-to])]``` Update operations find rows, and specify which fields to update. Here, we're incrementing a 'version' attribute - UPDATE docs SET version = version + 1 WHERE xt$id = ? ``` { \"update\": \"documents\", \"bind\": [ { \"xt$id\": \"$docId\", \"version\": \"v\" }], \"set\": { \"version\": { \"@call\": \"+\", \"@args\": [ \"v\", 1 ] } } } // separately, we pass the following as the arguments to the query: { \"docId\": \"myDocId\" }``` ``` [:update {:table :documents :bind [{:xt/id $doc-id, :version v}] :set {:version (+ v 1)}} ;; specifying a value for the parameter with args {:doc-id \"doc-id\"}]``` Delete operations work like 'update' operations, but without the set clause. Here, we delete all the comments for a given post-id - DELETE FROM comments WHERE post_id = ? ``` { \"delete\": \"comments\", \"bind\": [ { \"postId\": \"$postId\" }] } // separately, we pass the following as the arguments to the query: { \"postId\": \"myPostId\" }``` ``` [:delete {:from :posts, :bind [{:post-id $post-id}]} ;; specifying a value for the parameter with args {:post-id \"post-id\"}]``` Congratulations - this is the majority of the theory behind XTQL! You now understand the fundamentals behind how to construct XTQL queries from its simple building blocks - from here, it's much more about incrementally learning what each individual operator does, and what it looks like in your client language. You can: check out the reference guides for XTQL queries and transactions. We're very much in listening mode right now - as a keen early adopter, we'd love to hear your first impressions, thoughts and opinions on where we're headed with XTQL. Please do get in touch via the usual channels!" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Doris", "subcategory": "Database" }
[ { "data": "Apache Doris is a high-performance, real-time analytic database based on the MPP architecture and is known for its extreme speed and ease of use. It takes only sub-second response times to return query results under massive amounts of data, and can support not only highly concurrent point query scenarios, but also high-throughput complex analytic scenarios. version, install and run it on a single node, including creating databases, data tables, importing data and queries, etc. Doris runs on a Linux environment, CentOS 7.x or Ubuntu 16.04 or higher is recommended, and you need to have a Java runtime environment installed (the minimum JDK version required is 8). To check the version of Java you have installed, run the following command. ``` java -version``` Next, download the latest binary version of Doris and unzip it. ``` tar zxf apache-doris-x.x.x.tar.gz``` Go to the apache-doris-x.x.x/fe directory ``` cd apache-doris-x.x.x/fe``` Modify the FE configuration file conf/fe.conf, here we mainly modify two parameters: prioritynetworks and metadir, if you need more optimized configuration, please refer to FE parameter configuration for instructions on how to adjust them. ``` priority_networks=172.23.16.0/24``` Note: This parameter we have to configure during installation, especially when a machine has multiple IP addresses, we have to specify a unique IP address for FE. ``` meta_dir=/path/your/doris-meta``` Note: Here you can leave it unconfigured, the default is doris-meta in your Doris FE installation directory. To configure the metadata directory separately, you need to create the directory you specify in advance Execute the following command in the FE installation directory to complete the FE startup. ``` ./bin/start_fe.sh --daemon``` You can check if Doris started successfully with the following command ``` curl http://127.0.0.1:8030/api/bootstrap``` Here the IP and port are the IP and http_port of FE (default 8030), if you are executing in FE node, just run the above command directly. If the return result has the word \"msg\": \"success\", then the startup was successful. You can also check this through the web UI provided by Doris FE by entering the address in your browser http:// fe_ip:8030 You can see the following screen, which indicates that the FE has started successfully Note: We will connect to Doris FE via MySQL client below, download the installation-free MySQL client Unzip the MySQL client you just downloaded and you can find the mysql command line tool in the bin/ directory. Then execute the following command to connect to Doris. ``` mysql -uroot -P9030 -h127.0.0.1``` Note: Execute the following command to view the FE running status ``` show frontends\\G;``` You can then see a result similar to the following. ``` mysql> show frontends\\G;* 1. row * Name: 172.21.32.590101660549353220 IP: 172.21.32.5 EditLogPort: 9010 HttpPort: 8030 QueryPort: 9030 RpcPort: 9020 Role: FOLLOWER IsMaster: true ClusterId: 1685821635 Join: true Alive: trueReplayedJournalId: 49292 LastHeartbeat: 2022-08-17 13:00:45 IsHelper: true ErrMsg: Version: 1.1.2-rc03-ca55ac2 CurrentConnected: Yes1 row in set (0.03 sec)``` Doris supports SSL-based encrypted" }, { "data": "It currently supports TLS1.2 and TLS1.3 protocols. Doris' SSL mode can be enabled through the following configuration: Modify the FE configuration file conf/fe.conf and add enable_ssl = true. 
Next, connect to Doris through the mysql client. mysql supports five SSL modes: 1. mysql -uroot -P9030 -h127.0.0.1 is the same as mysql --ssl-mode=PREFERRED -uroot -P9030 -h127.0.0.1; both try to establish an SSL encrypted connection at the beginning, and if it fails, a normal connection is attempted. 2. mysql --ssl-mode=DISABLE -uroot -P9030 -h127.0.0.1, do not use an SSL encrypted connection, use a normal connection directly. 3. mysql --ssl-mode=REQUIRED -uroot -P9030 -h127.0.0.1, force the use of SSL encrypted connections. 4. mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem -uroot -P9030 -h127.0.0.1, force the use of SSL encrypted connections and verify the validity of the server's identity by specifying the CA certificate. 5. mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem -uroot -P9030 -h127.0.0.1, force the use of SSL encrypted connections, two-way SSL. Note: the --ssl-mode parameter was introduced in mysql 5.7.11; please refer to here for mysql client versions lower than this. Doris needs a key certificate file to verify the SSL encrypted connection. The default key certificate file is located at Doris/fe/mysql_ssl_default_certificate/. For the generation of the key certificate file, please refer to Key Certificate Configuration. The stopping of Doris FE can be done with the following command ``` ./bin/stop_fe.sh``` Go to the apache-doris-x.x.x/be directory ``` cd apache-doris-x.x.x/be``` Modify the BE configuration file conf/be.conf; here we mainly modify two parameters: priority_networks and storage_root_path. If you need a more optimized configuration, please refer to BE parameter configuration instructions to make adjustments. ``` priority_networks=172.23.16.0/24``` Note: This parameter must be configured during installation, especially when a machine has multiple IP addresses; we have to assign a unique IP address to the BE. ``` storage_root_path=/path/your/data_dir``` Notes: Set the JAVA_HOME environment variable. Install Java UDF functions. Execute the following command in the BE installation directory to complete the BE startup. ``` ./bin/start_be.sh --daemon``` Connect to FE via MySQL client and execute the following SQL to add the BE to the cluster ``` ALTER SYSTEM ADD BACKEND \"be_host_ip:heartbeat_service_port\";``` You can check the running status of BE by executing the following command at the MySQL command line. ``` SHOW BACKENDS\\G``` Example: ``` mysql> SHOW BACKENDS\\G;* 1.
row * BackendId: 10003 Cluster: default_cluster IP: 172.21.32.5 HeartbeatPort: 9050 BePort: 9060 HttpPort: 8040 BrpcPort: 8060 LastStartTime: 2022-08-16 15:31:37 LastHeartbeat: 2022-08-17 13:33:17 Alive: true SystemDecommissioned: false ClusterDecommissioned: false TabletNum: 170 DataUsedCapacity: 985.787 KB AvailCapacity: 782.729 GB TotalCapacity: 984.180 GB UsedPct: 20.47 % MaxDiskUsedPct: 20.47 % Tag: {\"location\" : \"default\"} ErrMsg: Version: 1.1.2-rc03-ca55ac2 Status: {\"lastSuccessReportTabletsTime\":\"2022-08-17 13:33:05\",\"lastStreamLoadTime\":-1,\"isQueryDisabled\":false,\"isLoadDisabled\":false} 1 row in set (0.01 sec)``` The stopping of Doris BE can be done with the following command ``` ./bin/stop_be.sh``` ``` create database demo;``` ``` use demo;CREATE TABLE IF NOT EXISTS" }, { "data": "example_tbl ( `user_id` LARGEINT NOT NULL COMMENT \"user id\", `date` DATE NOT NULL COMMENT \"\", `city` VARCHAR(20) COMMENT \"\", `age` SMALLINT COMMENT \"\", `sex` TINYINT COMMENT \"\", `last_visit_date` DATETIME REPLACE DEFAULT \"1970-01-01 00:00:00\" COMMENT \"\", `cost` BIGINT SUM DEFAULT \"0\" COMMENT \"\", `max_dwell_time` INT MAX DEFAULT \"0\" COMMENT \"\", `min_dwell_time` INT MIN DEFAULT \"99999\" COMMENT \"\") AGGREGATE KEY(`user_id`, `date`, `city`, `age`, `sex`) DISTRIBUTED BY HASH(`user_id`) BUCKETS 1 PROPERTIES ( \"replication_allocation\" = \"tag.location.default: 1\");``` ``` 10000,2017-10-01,beijing,20,0,2017-10-01 06:00:00,20,10,10 10006,2017-10-01,beijing,20,0,2017-10-01 07:00:00,15,2,2 10001,2017-10-01,beijing,30,1,2017-10-01 17:05:45,2,22,22 10002,2017-10-02,shanghai,20,1,2017-10-02 12:59:12,200,5,5 10003,2017-10-02,guangzhou,32,0,2017-10-02 11:20:00,30,11,11 10004,2017-10-01,shenzhen,35,0,2017-10-01 10:00:15,100,3,3 10004,2017-10-03,shenzhen,35,0,2017-10-03 10:20:22,11,6,6``` Save the above data into a test.csv file. Here we import the data saved to the file above into the table we just created via Stream Load. ``` curl --location-trusted -u root: -T test.csv -H \"column_separator:,\" http://127.0.0.1:8030/api/demo/example_tbl/_stream_load``` After successful execution we can see the following return message ``` { \"TxnId\": 30303, \"Label\": \"8690a5c7-a493-48fc-b274-1bb7cd656f25\", \"TwoPhaseCommit\": \"false\", \"Status\": \"Success\", \"Message\": \"OK\", \"NumberTotalRows\": 7, \"NumberLoadedRows\": 7, \"NumberFilteredRows\": 0, \"NumberUnselectedRows\": 0, \"LoadBytes\": 399, \"LoadTimeMs\": 381, \"BeginTxnTimeMs\": 3, \"StreamLoadPutTimeMs\": 5, \"ReadDataTimeMs\": 0, \"WriteDataTimeMs\": 191, \"CommitAndPublishTimeMs\": 175}``` NumberLoadedRows indicates the number of data records that have been imported NumberTotalRows indicates the total amount of data to be imported Status: Success means the import was successful Here we have finished importing the data, and we can now query and analyze the data according to our own needs. We have finished building tables and importing data above, so we can experience Doris' ability to quickly query and analyze data.
``` mysql> select * from example_tbl; | user_id | date | city | age | sex | last_visit_date | cost | max_dwell_time | min_dwell_time | | 10000 | 2017-10-01 | beijing | 20 | 0 | 2017-10-01 06:00:00 | 20 | 10 | 10 | | 10001 | 2017-10-01 | beijing | 30 | 1 | 2017-10-01 17:05:45 | 2 | 22 | 22 | | 10002 | 2017-10-02 | shanghai | 20 | 1 | 2017-10-02 12:59:12 | 200 | 5 | 5 | | 10003 | 2017-10-02 | guangzhou | 32 | 0 | 2017-10-02 11:20:00 | 30 | 11 | 11 | | 10004 | 2017-10-01 | shenzhen | 35 | 0 | 2017-10-01 10:00:15 | 100 | 3 | 3 | | 10004 | 2017-10-03 | shenzhen | 35 | 0 | 2017-10-03 10:20:22 | 11 | 6 | 6 | | 10006 | 2017-10-01 | beijing | 20 | 0 | 2017-10-01 07:00:00 | 15 | 2 | 2 | 7 rows in set (0.01 sec) mysql> select * from example_tbl where city='shanghai'; | user_id | date | city | age | sex | last_visit_date | cost | max_dwell_time | min_dwell_time | | 10002 | 2017-10-02 | shanghai | 20 | 1 | 2017-10-02 12:59:12 | 200 | 5 | 5 | 1 row in set (0.00 sec) mysql> select city, sum(cost) as total_cost from example_tbl group by city; | city | total_cost | | beijing | 37 | | shenzhen | 111 | | guangzhou | 30 | | shanghai | 200 | 4 rows in set (0.00 sec)``` This is the end of our quick start. We have experienced the complete Doris workflow: installation and deployment, start/stop, creating databases and tables, importing data, and querying. Let's start our Doris usage journey." } ]
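Because Doris speaks the MySQL protocol, the same queries can also be run from a script instead of the mysql CLI. Below is a minimal sketch using the pymysql driver against the FE query port 9030 with the root user and empty password from this walkthrough; the driver choice is an assumption, and any MySQL-compatible client would work the same way.

```python
# Re-run the aggregate query from the quick start over the MySQL protocol.
# Assumes the FE is reachable on 127.0.0.1:9030 with user root and an empty
# password, as configured above; pymysql is one possible MySQL-compatible client.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=9030, user="root", password="", database="demo")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT city, SUM(cost) AS total_cost FROM example_tbl GROUP BY city")
        for city, total_cost in cur.fetchall():
            print(city, total_cost)
finally:
    conn.close()
```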
{ "category": "App Definition and Development", "file_name": "what-is-apache-doris.md", "project_name": "Doris", "subcategory": "Database" }
[ { "data": "Apache Doris is an MPP-based real-time data warehouse known for its high query speed. For queries on large datasets, it returns results in sub-seconds. It supports both high-concurrent point queries and high-throughput complex analysis. It can be used for report analysis, ad-hoc queries, unified data warehouse, and data lake query acceleration. Based on Apache Doris, users can build applications for user behavior analysis, A/B testing platform, log analysis, user profile analysis, and e-commerce order analysis. Apache Doris, formerly known as Palo, was initially created to support Baidu's ad reporting business. It was officially open-sourced in 2017 and donated by Baidu to the Apache Software Foundation in July 2018, where it was operated by members of the incubator project management committee under the guidance of Apache mentors. In June 2022, Apache Doris graduated from the Apache incubator as a Top-Level Project. By 2024, the Apache Doris community has gathered more than 600 contributors from hundreds of companies in different industries, with over 120 monthly active contributors. Apache Doris has a wide user base. It has been used in production environments of over 4000 companies worldwide, including giants such as TikTok, Baidu, Cisco, Tencent, and NetEase. It is also widely used across industries from finance, retailing, and telecommunications to energy, manufacturing, medical care, etc. The figure below shows what Apache Doris can do in a data pipeline. Data sources, after integration and processing, are ingested into the Apache Doris real-time data warehouse and offline data lakehouses such as Hive, Iceberg, and Hudi. Apache Doris can be used for the following purposes: Apache Doris has a simple and neat architecture with only two types of processes. Both frontend and backend processes are scalable, supporting up to hundreds of machines and tens of petabytes of storage capacity in a single cluster. Both types of processes guarantee high service availability and high data reliability through consistency protocols. This highly integrated architecture design greatly reduces the operation and maintenance costs of a distributed" }, { "data": "Apache Doris adopts the MySQL protocol, supports standard SQL, and is highly compatible with MySQL syntax. Users can access Doris through various client tools and seamlessly integrate it with BI tools, including but not limited to SmartBI, DataEase, FineBI, Tableau, Power BI, and SuperSet. It can work as the data source for any BI tools that support the MySQL protocol. Apache Doris has a columnar storage engine, which encodes, compresses, and reads data by column. This enables a very high data compression ratio and largely reduces unnecessary data scanning, thus making more efficient use of IO and CPU resources. Doris supports various index structures to minimize data scans: Doris supports a variety of data models and has optimized them for different scenarios: Doris also supports strongly consistent materialized views. Materialized views are automatically selected and updated within the system without manual efforts, which reduces maintenance costs for users. Doris has an MPP-based query engine for parallel execution between and within nodes. It supports distributed shuffle join for large tables to better handle complicated queries. The Doris query engine is fully vectorized, with all memory structures laid out in a columnar format. 
This can largely reduce virtual function calls, increase cache hit rates, and make efficient use of SIMD instructions. Doris delivers a 5~10 times higher performance in wide table aggregation scenarios than non-vectorized engines. Doris uses adaptive query execution technology to dynamically adjust the execution plan based on runtime statistics. For example, it can generate a runtime filter and push it to the probe side. Specifically, it pushes the filters to the lowest-level scan node on the probe side, which largely reduces the data amount to be processed and increases join performance. The Doris runtime filter supports In/Min/Max/Bloom Filter. The Doris query optimizer is a combination of CBO and RBO. RBO supports constant folding, subquery rewriting, and predicate pushdown while CBO supports join reorder. The Doris CBO is under continuous optimization for more accurate statistics collection and inference as well as a more accurate cost model." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "GreptimeDB", "subcategory": "Database" }
[ { "data": "English English Overview Overview GreptimeDB Standalone GreptimeDB Cluster GreptimeDB Dashboard Overview MySQL InfluxDB Line Protocol Go Java Python Node.js Vector Prometheus Overview Overview Why GreptimeDB Data Model Architecture Storage Location Key Concepts Features that You Concern Overview Authentication MySQL Prometheus InfluxDB Line Protocol OpenTSDB PostgreSQL HTTP API OpenTelemetry Protocol(OTLP) Vector Grafana EMQX Migrate from InfluxDB Table Management Overview Prometheus SQL InfluxDB Line Protocol OpenTSDB Overview SQL Prometheus Query Language Query External Data Overview Manage Flows Write a Query Define Time Window Expression Overview Go Java Overview Getting Started Define a Function Query Data Write Data Data Types API Cluster Administration Configuration Capacity Plan Back up & restore data Kubernetes gtctl Run on Android Platforms Monitoring Tracing Quick Start Cluster Deployment Region Migration Upgrade Overview Overview Prometheus MySQL InfluxDB Line Protocol Go Java Python Node.js Vector Quick Setup Rule Management Grafana MySQL PostgreSQL InfluxDB Line Protocol OpenTelemetry Protocol(OTLP) Vector EMQX Platform Streamlit MindsDB Go SDK Java SDK Migrate from InfluxDB Overview Request Capacity Unit Hobby Plan Serverless Plan Dedicated Plan BYOC Billing Prometheus MySQL InfluxDB Line Protocol Go Java Python Node.js Command lines Overview Data Types HTTP API INSERT CAST COPY DROP SELECT DISTINCT WHERE ORDER BY GROUP BY LIMIT JOIN Functions RANGE QUERY DELETE SHOW TQL CREATE DESCRIBE TABLE ALTER EXPLAIN ANSI Compatibility Overview BUILD_INFO CHARACTER_SETS COLLATIONS COLLATIONCHARACTERSET_APPLICABILITY COLUMNS ENGINES KEYCOLUMNUSAGE PARTITIONS SCHEMATA TABLES REGION_PEERS RUNTIME_METRICS CLUSTER_INFO Telemetry Overview Getting started Overview Table Sharding Distributed Querying Overview Storage Engine Query Engine Data Persistence and Indexing Write-Ahead Logging Python Scripts Metric Engine Overview Admin API Distributed Lock Selector Overview Dataflow Arrangement Overview Unit Test Integration Test Sqlness Test How to write a gRPC SDK for GreptimeDB How to use tokio-console in GreptimeDB How to trace GreptimeDB All releases v0.8.1 v0.8.0 v0.7.2 v0.7.1 v0.7.0 FAQ and others Frequently Asked Questions GreptimeDB is an open-source time-series database focusing on efficiency, scalability, and analytical capabilities. It's designed to work on infrastructure of the cloud era, and users benefit from its elasticity and commodity storage. GreptimeDB is also on cloud as GreptimeCloud, a fully-managed time-series database service that features serverless scalability, seamless integration with ecoystem and improved Prometheus compatibility. Our core developers have been building time-series data platforms for years. Based on their best-practices, GreptimeDB is born to give you: Before getting started, please read the following documents that include instructions for setting up, fundamental concepts, architectural designs, and tutorials: Last updated:" } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "GraphScope", "subcategory": "Database" }
[ { "data": "Overview Installation & Deployment GraphScope Flex Graph Analytical Engine Graph Interactive Engine Graph Learning Engine Storage Engine Troubleshooting & Utilities Development Reference GraphScope is a unified distributed graph computing platform that provides a one-stop environment for performing diverse graph operations on a cluster of computers through a user-friendly Python interface. GraphScope makes multi-staged processing of large-scale graph data on compute clusters simple by combining several important pieces of Alibaba technology: including GRAPE, GraphCompute, and Graph-Learn (GL) for analytics, interactive, and graph neural networks (GNN) computation, respectively, and the vineyard store that offers efficient in-memory data transfers. Overview Installation & Deployment GraphScope Flex Graph Analytical Engine Graph Interactive Engine Graph Learning Engine Storage Engine Troubleshooting & Utilities Development Reference Index Module Index Search Page" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "This quickstart helps you install Apache Druid and introduces you to Druid ingestion and query features. For this tutorial, you need a machine with at least 6 GiB of RAM. In this quickstart, you'll: Druid supports a variety of ingestion options. Once you're done with this tutorial, refer to the Ingestion page to determine which ingestion method is right for you. You can follow these steps on a relatively modest machine, such as a workstation or virtual server with 6 GiB of RAM. The software requirements for the installation machine are: Java must be available. Either it is on your path, or set one of the JAVAHOME or DRUIDJAVA_HOME environment variables. You can run apache-druid-29.0.1/bin/verify-java to verify Java requirements for your environment. Before installing a production Druid instance, be sure to review the security overview. In general, avoid running Druid as root user. Consider creating a dedicated user account for running Druid. Download the 29.0.1 release from Apache Druid. In your terminal, extract the file and change directories to the distribution directory: ``` tar -xzf apache-druid-29.0.1-bin.tar.gzcd apache-druid-29.0.1``` The distribution directory contains LICENSE and NOTICE files and subdirectories for executable files, configuration files, sample data and more. Start up Druid services using the automatic single-machine configuration. This configuration includes default settings that are appropriate for this tutorial, such as loading the druid-multi-stage-query extension by default so that you can use the MSQ task engine. You can view the default settings in the configuration files located in conf/druid/auto. From the apache-druid-29.0.1 package root, run the following command: ``` ./bin/start-druid``` This launches instances of ZooKeeper and the Druid services. For example: ``` $ ./bin/start-druid[Tue Nov 29 16:31:06 2022] Starting Apache Druid.[Tue Nov 29 16:31:06 2022] Open http://localhost:8888/ in your browser to access the web console.[Tue Nov 29 16:31:06 2022] Or, if you have enabled TLS, use https on port 9088.[Tue Nov 29 16:31:06 2022] Starting services with log directory [/apache-druid-29.0.1/log].[Tue Nov 29 16:31:06 2022] Running command[zk]: bin/run-zk conf[Tue Nov 29 16:31:06 2022] Running command[broker]: bin/run-druid broker /apache-druid-29.0.1/conf/druid/single-server/quickstart '-Xms1187m -Xmx1187m -XX:MaxDirectMemorySize=791m'[Tue Nov 29 16:31:06 2022] Running command[router]: bin/run-druid router /apache-druid-29.0.1/conf/druid/single-server/quickstart '-Xms128m -Xmx128m'[Tue Nov 29 16:31:06 2022] Running command[coordinator-overlord]: bin/run-druid coordinator-overlord /apache-druid-29.0.1/conf/druid/single-server/quickstart '-Xms1290m -Xmx1290m'[Tue Nov 29 16:31:06 2022] Running command[historical]: bin/run-druid historical /apache-druid-29.0.1/conf/druid/single-server/quickstart '-Xms1376m -Xmx1376m -XX:MaxDirectMemorySize=2064m'[Tue Nov 29 16:31:06 2022] Running command[middleManager]: bin/run-druid middleManager /apache-druid-29.0.1/conf/druid/single-server/quickstart '-Xms64m -Xmx64m' '-Ddruid.worker.capacity=2 -Ddruid.indexer.runner.javaOptsArray=[\"-server\",\"-Duser.timezone=UTC\",\"-Dfile.encoding=UTF-8\",\"-XX:+ExitOnOutOfMemoryError\",\"-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager\",\"-Xms256m\",\"-Xmx256m\",\"-XX:MaxDirectMemorySize=256m\"]'``` Druid may use up to 80% of the total available system memory. To explicitly set the total memory available to Druid, pass a value for the memory parameter. 
For example, ./bin/start-druid -m 16g. Druid stores all persistent state data, such as the cluster metadata store and data segments, in apache-druid-29.0.1/var. Each service writes to a log file under apache-druid-29.0.1/log. At any time, you can revert Druid to its original, post-installation state by deleting the entire var directory. You may want to do this, for example, between Druid tutorials or after experimentation, to start with a fresh instance. To stop Druid at any time, use CTRL+C in the terminal. This exits the bin/start-druid script and terminates all Druid" }, { "data": "After starting the Druid services, open the web console at http://localhost:8888. It may take a few seconds for all Druid services to finish starting, including the Druid router, which serves the console. If you attempt to open the web console before startup is complete, you may see errors in the browser. Wait a few moments and try again. In this quickstart, you use the the web console to perform ingestion. The MSQ task engine specifically uses the Query view to edit and run SQL queries. For a complete walkthrough of the Query view as it relates to the multi-stage query architecture and the MSQ task engine, see UI walkthrough. The Druid distribution bundles the wikiticker-2015-09-12-sampled.json.gz sample dataset that you can use for testing. The sample dataset is located in the quickstart/tutorial/ folder, accessible from the Druid root directory, and represents Wikipedia page edits for a given day. Follow these steps to load the sample Wikipedia dataset: In the Query view, click Connect external data. Select the Local disk tile and enter the following values: Base directory: quickstart/tutorial/ File filter: wikiticker-2015-09-12-sampled.json.gz Entering the base directory and wildcard file filter separately, as afforded by the UI, allows you to specify multiple files for ingestion at once. Click Connect data. On the Parse page, you can examine the raw data and perform the following optional actions before loading data into Druid: Click Done. You're returned to the Query view that displays the newly generated query. The query inserts the sample data into the table named wikiticker-2015-09-12-sampled. 
``` REPLACE INTO \"wikiticker-2015-09-12-sampled\" OVERWRITE ALLWITH inputdata AS (SELECT *FROM TABLE( EXTERN( '{\"type\":\"local\",\"baseDir\":\"quickstart/tutorial/\",\"filter\":\"wikiticker-2015-09-12-sampled.json.gz\"}', '{\"type\":\"json\"}', '[{\"name\":\"time\",\"type\":\"string\"},{\"name\":\"channel\",\"type\":\"string\"},{\"name\":\"cityName\",\"type\":\"string\"},{\"name\":\"comment\",\"type\":\"string\"},{\"name\":\"countryIsoCode\",\"type\":\"string\"},{\"name\":\"countryName\",\"type\":\"string\"},{\"name\":\"isAnonymous\",\"type\":\"string\"},{\"name\":\"isMinor\",\"type\":\"string\"},{\"name\":\"isNew\",\"type\":\"string\"},{\"name\":\"isRobot\",\"type\":\"string\"},{\"name\":\"isUnpatrolled\",\"type\":\"string\"},{\"name\":\"metroCode\",\"type\":\"long\"},{\"name\":\"namespace\",\"type\":\"string\"},{\"name\":\"page\",\"type\":\"string\"},{\"name\":\"regionIsoCode\",\"type\":\"string\"},{\"name\":\"regionName\",\"type\":\"string\"},{\"name\":\"user\",\"type\":\"string\"},{\"name\":\"delta\",\"type\":\"long\"},{\"name\":\"added\",\"type\":\"long\"},{\"name\":\"deleted\",\"type\":\"long\"}]' ) ))SELECT TIMEPARSE(\"time\") AS time, channel, cityName, comment, countryIsoCode, countryName, isAnonymous, isMinor, isNew, isRobot, isUnpatrolled, metroCode, namespace, page, regionIsoCode, regionName, user, delta, added, deletedFROM input_dataPARTITIONED BY DAY``` Optionally, click Preview to see the general shape of the data before you ingest it. Edit the first line of the query and change the default destination datasource name from wikiticker-2015-09-12-sampled to wikipedia. Click Run to execute the query. The task may take a minute or two to complete. When done, the task displays its duration and the number of rows inserted into the table. The view is set to automatically refresh, so you don't need to refresh the browser to see the status change. A successful task means that Druid data servers have picked up one or more segments. Once the ingestion job is complete, you can query the data. In the Query view, run the following query to produce a list of top channels: ``` SELECT channel, COUNT()FROM \"wikipedia\"GROUP BY channelORDER BY COUNT() DESC``` Congratulations! You've gone from downloading Druid to querying data with the MSQ task engine in just one quickstart. See the following topics for more information: Remember that after stopping Druid services, you can start clean next time by deleting the var directory from the Druid root directory and running the bin/start-druid script again. You may want to do this before using other data ingestion tutorials, since they use the same Wikipedia datasource." } ]
{ "category": "App Definition and Development", "file_name": "resp-endpoint.md", "project_name": "Infinispan", "subcategory": "Database" }
[ { "data": "Infinispan Server includes an endpoint that implements the RESP3 protocol and allows you to interact with remote caches using Redis clients. The RESP endpoint is enabled by default on the single-port endpoint. Redis client connections will automatically be detected and routed to the internal connector. The RESP endpoint works with: Standalone Infinispan Server deployments, exactly like standalone Redis, where each server instance runs independently of each other. Clustered Infinispan Server deployments, where server instances replicate or distribute data between each other. Clustered deployments provides clients with failover capabilities. Install Infinispan Server. Create a user When you start Infinispan Server check for the following log message: ``` [org.infinispan.SERVER] ISPN080018: Started connector Resp (internal)``` You can now connect to the RESP endpoint with a Redis client. For example, with the Redis CLI you can do the following to add an entry to the cache: ``` redis-cli -p 11222 --user username --pass password``` ``` 127.0.0.1:11222> SET k v OK 127.0.0.1:11222> GET k \"v\" 127.0.0.1:11222> quit``` The RESP endpoint automatically configures and starts a respCache cache. This cache has the following configuration: local-cache or distributed-cache depending on the Infinispan Server clustering mode. application/octet-stream encoding for both keys and values. RESPHashFunctionPartitioner hash partitioner, which supports the CRC16 hashing used by Redis clients The cache configuration cannot enable capabilities that violate the RESP protocol. For example, specifying an incompatible hash partitioning function or a key encoding different from application/octet-stream. | 0 | 1 | |-:|:--| | nan | Configure your cache value encoding with Protobuf encoding if you want to view cache entries in the Infinispan Console (value media-type=\"application/x-protostream\"). | Configure your cache value encoding with Protobuf encoding if you want to view cache entries in the Infinispan Console (value media-type=\"application/x-protostream\"). If the implicit configuration used by the single-port endpoint does not fit your needs, use explicit configuration. ``` <endpoints> <endpoint socket-binding=\"default\" security-realm=\"default\"> <resp-connector cache=\"mycache\" /> <hotrod-connector /> <rest-connector/> </endpoint> </endpoints>``` ``` { \"server\": { \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"default\", \"resp-connector\": { \"cache\": \"mycache\" }, \"hotrod-connector\": {}, \"rest-connector\": {} } } } }``` ``` server: endpoints: endpoint: socketBinding: \"default\" securityRealm: \"default\" respConnector: cache: \"mycache\" hotrodConnector: ~ restConnector: ~``` Use the cache aliases configuration attributes to map caches to Redis logical databases. The default respCache is mapped to logical database 0. | 0 | 1 | |-:|:| | nan | Infinispan can use multiple logical databases even in clustered mode, as opposed to Redis which only supports database 0 when using Redis Cluster. | Infinispan can use multiple logical databases even in clustered mode, as opposed to Redis which only supports database 0 when using Redis Cluster. 
The Infinispan RESP endpoint implements the following Redis commands: APPEND AUTH BLPOP BRPOP CLIENT GETNAME CLIENT ID CLIENT INFO CLIENT LIST CLIENT SETINFO CLIENT SETNAME COMMAND CLUSTER NODES (Note: this command includes all required fields, but some fields are set to 0 as they do not apply to Infinispan.) CLUSTER SHARDS CLUSTER SLOTS DBSIZE DECR DECRBY DEL DISCARD ECHO EXEC (See the MULTI command.) EXISTS EXPIRE EXPIREAT EXPIRETIME FLUSHALL (Note: this command behaves like FLUSHDB since Infinispan does not support multiple Redis databases" }, { "data": "yet.) FLUSHDB GET GETDEL GETEX GETRANGE GETSET (Note: this command is deprecated. Use the SET command with the appropriate flags instead.) HDEL HELLO HEXISTS HGET HINCRBY HINCRBYFLOAT HKEYS HLEN HMGET HMSET HRANDFIELD HSCAN HSET HSETNX HSTRLEN HVALS INCR INCRBY INCRBYFLOAT INFO (Note: this implementation attempts to return all attributes that a real Redis server returns. However, in most cases, the values are set to 0 because they cannot be retrieved, or don't apply to Infinispan.) KEYS LINDEX LINSERT (Note: the current implementation has a time complexity of O(N), where N is the size of the list.) LLEN LCS LMOVE (Note: the current implementation is atomic for rotation when the source and destination are the same list. For different lists, there is relaxed consistency for concurrent operations or failures unless the resp cache is configured to use transactions.) LMPOP LPOP LPOS LPUSH LPUSHX LRANGE LREM LSET LTRIM MEMORY-USAGE (Note: this command will return the memory used by the key and the value. It doesn't include the memory used by additional metadata associated with the entry.) MEMORY-STATS (Note: this command will return the same fields as a real Redis server, but all values will be set to 0.) MGET MODULE LIST (Note: this command always returns an empty list of modules.) MSET MSETNX MULTI (Note: the current implementation has a relaxed isolation level. Redis offers serializable transactions, but Infinispan provides read-uncommitted isolation.) PERSIST PEXPIRE PEXPIRETIME PING PSETEX (Note: this command is deprecated. Use the SET command with the appropriate flags.) PSUBSCRIBE PUBSUB CHANNELS PUBSUB NUMPAT PTTL PUBLISH PUNSUBSCRIBE QUIT RANDOMKEY RPOP RPOPLPUSH RPUSH RPUSHX READONLY READWRITE RENAME RENAMENX RESET SADD SCARD SCAN (Note: cursors are reaped in case they have not been used within a timeout. The timeout is 5 minutes.) SDIFF SDIFFSTORE SELECT (Note: Infinispan allows the SELECT command both in local and clustered mode, unlike Redis Cluster which forbids use of this command and only supports database zero.) SET SETEX (Note: this command is deprecated. Use the SET command with the appropriate flags instead.) SETNX (Note: this command is deprecated. Use the SET command with the appropriate flags instead.) SETRANGE SINTER SINTERCARD SINTERSTORE SISMEMBER SORT SORT_RO SMEMBERS SMOVE SPOP SRANDMEMBER SSCAN STRLEN SUBSTR (Note: this command is deprecated. Use the GETRANGE command instead.) SUBSCRIBE SUNION SUNIONSTORE TIME TTL TYPE UNSUBSCRIBE UNWATCH WATCH ZADD ZCARD ZCOUNT ZDIFF ZDIFFSTORE ZINCRBY ZINTER ZINTERCARD ZINTERSTORE ZLEXCOUNT ZMPOP ZPOPMAX ZPOPMIN ZUNION ZUNIONSTORE ZRANDMEMBER ZRANGE ZRANGEBYLEX ZRANGEBYSCORE ZREVRANGE ZREVRANGEBYLEX ZREVRANGEBYSCORE ZRANGESTORE ZREM ZREMRANGEBYLEX ZREMRANGEBYRANK ZREMRANGEBYSCORE ZSCAN ZSCORE" } ]
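Since the RESP endpoint speaks the Redis protocol, standard Redis client libraries can be used unchanged against these commands. Below is a minimal sketch with the Python redis package, mirroring the redis-cli SET/GET example above; the username and password are the placeholders from that example.

```python
# Connect to the Infinispan RESP endpoint with a standard Redis client.
# Mirrors the redis-cli example above: single-port endpoint on 11222,
# authenticating as the user created during setup (placeholder credentials).
import redis

r = redis.Redis(host="127.0.0.1", port=11222, username="username", password="password")
r.set("k", "v")
print(r.get("k"))  # b'v'
```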
{ "category": "App Definition and Development", "file_name": "infinispanvs.md", "project_name": "Infinispan", "subcategory": "Database" }
[ { "data": "Infinispan is an open-source key-value data grid, it can work as single node as well as distributed. Vector search is supported since release 15.x For more: Infinispan Home ``` To run this demo we need a running Infinispan instance without authentication and a data file. In the next three cells we're going to: ``` %%bash#get an archive of newswget https://raw.githubusercontent.com/rigazilla/infinispan-vector/main/bbc_news.csv.gz``` ``` %%bash#create infinispan configuration fileecho 'infinispan: cache-container: name: default transport: cluster: cluster stack: tcp server: interfaces: interface: name: public inet-address: value: 0.0.0.0 socket-bindings: default-interface: public port-offset: 0 socket-binding: name: default port: 11222 endpoints: endpoint: socket-binding: default rest-connector:' > infinispan-noauth.yaml``` ``` !docker rm --force infinispanvs-demo!docker run -d --name infinispanvs-demo -v $(pwd):/user-config -p 11222:11222 infinispan/server:15.0 -c /user-config/infinispan-noauth.yaml``` In this demo we're using a HuggingFace embedding mode. ``` from langchain.embeddings import HuggingFaceEmbeddingsfrom langchaincore.embeddings import Embeddingsmodelname = \"sentence-transformers/all-MiniLM-L12-v2\"hf = HuggingFaceEmbeddings(modelname=modelname)``` Infinispan is a very flexible key-value store, it can store raw bits as well as complex data type. User has complete freedom in the datagrid configuration, but for simple data type everything is automatically configured by the python layer. We take advantage of this feature so we can focus on our application. In this demo we rely on the default configuration, thus texts, metadatas and vectors in the same cache, but other options are possible: i.e. content can be store somewhere else and vector store could contain only a reference to the actual content. ``` import csvimport gzipimport time# Open the news file and process it as a csvwith gzip.open(\"bbc_news.csv.gz\", \"rt\", newline=\"\") as csvfile: spamreader = csv.reader(csvfile, delimiter=\",\", quotechar='\"') i = 0 texts = [] metas = [] embeds = [] for row in spamreader: # first and fifth values are joined to form the content # to be processed text = row[0] + \".\" + row[4] texts.append(text) # Store text and title as metadata meta = {\"text\": row[4], \"title\": row[0]} metas.append(meta) i = i + 1 # Change this to change the number of news you want to load if i >= 5000: break``` ``` By default InfinispanVS returns the protobuf ext field in the Document.page_content and all the remaining protobuf fields (except the vector) in the metadata. This behaviour is configurable via lambda functions at setup. ``` def printdocs(docs): for res, i in zip(docs, range(len(docs))): print(\"-\" + str(i + 1) + \"-\") print(\"TITLE: \" + res.metadata[\"title\"]) print(res.pagecontent)``` Below some sample queries ``` docs = ispnvs.similaritysearch(\"European nations\", 5)printdocs(docs)``` ``` printdocs(ispnvs.similaritysearch(\"Milan fashion week begins\", 2))``` ``` printdocs(ispnvs.similaritysearch(\"Stock market is rising today\", 4))``` ``` printdocs(ispnvs.similaritysearch(\"Why cats are so viral?\", 2))``` ``` printdocs(ispnvs.similaritysearch(\"How to stay young\", 5))``` ``` !docker rm --force infinispanvs-demo```" } ]
{ "category": "App Definition and Development", "file_name": "introduction.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "KubeBlocks is an open-source control plane software that runs and manages databases, message queues and other data infrastructure on K8s. The name KubeBlocks is inspired by Kubernetes and LEGO blocks, signifying that running and managing data infrastructure on K8s can be standard and productive, like playing with LEGO blocks. KubeBlocks could manage various type of engines, including RDBMSs (MySQL, PostgreSQL), Caches(Redis), NoSQLs (MongoDB), MQs(Kafka, Pulsar), and vector databases(Milvus, Qdrant, Weaviate), and the community is actively integrating more types of engines into KubeBlocks. Currently it has supported 32 types of engines! The core of KubeBlocks is a K8s operator, which defines a set of CRDs to abstract the common attributes of various engines. KubeBlocks helps developers, SREs, and platform engineers deploy and maintain dedicated DBPaaS, and supports both public cloud vendors and on-premise environments. Kubernetes has become the de facto standard for container orchestration. It manages an ever-increasing number of stateless workloads with the scalability and availability provided by ReplicaSet and the rollout and rollback capabilities provided by Deployment. However, managing stateful workloads poses great challenges for Kubernetes. Although StatefulSet provides stable persistent storage and unique network identifiers, these abilities are far from enough for complex stateful workloads. To address these challenges, and solve the problem of complexity, KubeBlocks introduces ReplicationSet and ConsensusSet, with the following capabilities:" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "KubeDB by AppsCode", "subcategory": "Database" }
[ { "data": "We use cookies and other similar technology to collect data to improve your experience on our site, as described in our Privacy Policy. Run Production-Grade Databases on Kubernetes Backup and Recovery Solution for Kubernetes Run Production-Grade Vault on Kubernetes Secure Ingress Controller for Kubernetes Kubernetes Configuration Syncer Kubernetes Authentication WebHook Server KubeDB simplifies Provisioning, Upgrading, Scaling, Volume Expansion, Monitor, Backup, Restore for various Databases in Kubernetes on any Public & Private Cloud A complete Kubernetes native disaster recovery solution for backup and restore your volumes and databases in Kubernetes on any public and private clouds. KubeVault is a Git-Ops ready, production-grade solution for deploying and configuring Hashicorp's Vault on Kubernetes. Secure Ingress Controller for Kubernetes Kubernetes Configuration Syncer Kubernetes Authentication WebHook Server The setup section contains instructions for installing the KubeDB and its various components in Kubernetes. This section has been divided into the following sub-sections: Install KubeDB: Installation instructions for KubeDB and its various components. Uninstall KubeDB: Uninstallation instructions for KubeDB and its various components. Upgrading KubeDB: Instruction for updating KubeDB license and upgrading between various KubeDB versions. No spam, we promise. Your mail address is secure 2024 AppsCode Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Microsoft SQL Server", "subcategory": "Database" }
[ { "data": "This browser is no longer supported. Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support. Documentation Search for in-depth articles on Microsoft developer tools and technologies. Index Explore guides and articles by product. Get your businesses up and running with the Microsoft Cloud, growing your startup while ensuring security and compliance for your customers. Learn technical skills to prepare you for your future. Find training, virtual events, and opportunities to connect with the Microsoft student developer community. Dive deep into learning with interactive lessons, earn professional development hours, acquire certifications and find programs that help meet your goals. Get the latest updates, articles, and news for learning content and events from the Microsoft Learn community. Take advantage of free Virtual Training Days, where participants of any skill level can build technical skills across a range of topics and technologies. Whether you're building your career or the next great idea, Microsoft Reactor connects you with the developers and startups that share your goals." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "MariaDB", "subcategory": "Database" }
[ { "data": "Keyboard users: Escape to exit. MariaDB Documentation / MariaDB ColumnStore (Analytics)MariaDB Server (SQL Database Server)MariaDB SkySQL previous releaseMariaDB SkySQL DBaaSMariaDB SkySQLMariaDB SkySQL ObservabilityMariaDB SkySQL Cloud BackupMariaDB GeospatialMariaDB Xpand (Distributed SQL) MariaDB Documentation :: MariaDB SkySQL :: MariaDB SkySQL DBaaS :: MariaDB SkySQL Observability :: MariaDB SkySQL Cloud Backup :: MariaDB Geospatial :: MariaDB SkySQL previous release :: MariaDB Server :: MariaDB Xpand :: MariaDB ColumnStore Topics: Available Documentation: Resources: Topics on This Page Overview Get Started Working with SkySQL Service Management Connect and Query Data Operations Security Launch-Time Selections Partner Integrations Latest Software Releases Table of Contents Additional Resources Latest Software Releases Documentation Topics Additional Resources SkySQL Services SkySQL Documentation Topics Resources Assistance Latest Software Releases Table of Contents Additional Resources Comments Reader Tools This page is part of MariaDB's Documentation. Topics on this page: Overview Get Started Working with SkySQL Service Management Connect and Query Data Operations Security Launch-Time Selections Partner Integrations Latest Software Releases Table of Contents Additional Resources Latest Software Releases Documentation Topics Additional Resources SkySQL Services SkySQL Documentation Topics Resources Assistance Latest Software Releases Table of Contents Additional Resources Comments MariaDB ColumnStore extends MariaDB Enterprise Server with distributed, columnar storage and a massively parallel processing (MPP) shared nothing architecture, transforming it into a standalone or distributed data warehouse for ad hoc SQL queries and advanced analytics without the need to create indexes. With MariaDB ColumnStore running in SkySQL you get a cloud data warehouse and DBaaS in one service. ColumnStore is available in new release of SkySQL through the ColumnStore Data Warehouse topology. For additional information, see MariaDB SkySQL DBaaS. MariaDB Enterprise Server is a premium version of MariaDB Community Server that runs in any cloud and includes additional enterprise features like: enhancements for commercial production deployment such as advanced audit capabilities and the MaxScale database proxy long-term maintenance and support backports to bring popular new features to customers running on older MariaDB Enterprise Server releases Companies who switch to MariaDB Enterprise Server from legacy databases save up to 90% of total database costs. Enterprise Server is available in new release of SkySQL through the Enterprise Server With Replica(s) topology or Enterprise Server Single Node topology. For additional information, see MariaDB SkySQL DBaaS. Enterprise Server is available in previous release of SkySQL through the Primary/Replica topology or Single Node Transactions topology. For additional information, see MariaDB SkySQL previous release. Release Notes Power Tier MariaDB SkySQL is a cloud database service (DBaaS) from MariaDB Corporation: MariaDB SkySQL delivers MariaDB Enterprise Server, MariaDB Xpand, and MariaDB ColumnStore for modern applications MariaDB SkySQL runs on expert-maintained cloud infrastructure Questions? See our FAQ or contact us A new SkySQL release is now available to explore. Detailed information is available in MariaDB SkySQL DBaaS documentation. This documentation covers the previous release of SkySQL. 
The previous release of SkySQL was last updated: 2022-08-02 (What's New?) MariaDB Xpand is chosen by developers when creating large, mission-critical, read/write scale applications which require ACID-level consistency and enterprise-grade" }, { "data": "Xpand combines the scalability of a NoSQL database with the robustness of a SQL database. As with MariaDB Enterprise Server and ColumnStore, Xpand is front-ended by MaxScale database proxy for high availability, disaster recovery and enhanced security. Xpand can be deployed in the cloud as a SkySQL database service on AWS or Google Cloud and runs across multiple public cloud regional centers and on-premise for multi-cloud and hybrid operations. Xpand is available in new release of SkySQL through the Xpand Distributed SQL topology. For additional information, see MariaDB SkySQL DBaaS. Xpand is available in previous release of SkySQL through the Distributed Transactions topology. For additional information, see MariaDB SkySQL previous release. MariaDB SkySQL DBaaS MariaDB SkySQL Observability MariaDB SkySQL Cloud Backup MariaDB Geospatial MariaDB SkySQL Observability. This page will redirect. MariaDB SkySQL Cloud Backup is a backup and recovery solution offered by MariaDB. Features Release Notes Signup to enable Cloud Backup service Backup Management Agent Installation MariaDB Geospatial is available as a service in MariaDB SkySQL and on premises. With MariaDB's geospatial data service, you'll be able to: Publish (and manage) even Petabytes of geospatial data via REST APIs Create streaming web services for any application Protect your services with our geospatial security gateway Get your data to the people who need it Focus on your customers, not your infrastructure For next steps, see \"Geospatial Data Service (Contact Us)\". QuickstartQuickstart will get you up and running with MariaDB SkySQL. We'll launch a service, then perform a simple test from Java, Node.js, PHP, Python, or a desktop client. query, monitor, scale, and replicate data to another region. Register for a MariaDB ID Access SkySQL Portal Settings Notifications Billing Scale Fabric Choose a SkySQL Release Contact Support Service Launch Monitoring Service Details Self-Service Operations Autonomous Configuration Management Alerts Logs Service Availability (SLA) Client Connections Serverless Analytics Query Editor Notebook NoSQL Interface Xpand Global Replication Backup/Restore Data Import / Data Load Data Export Replication User Management Firewall Private Connections SkySQL Portal Enterprise Authentication (SSO) Workload Type Topology Cloud Provider Region Hardware Architecture Instance Size Power Tier Auto-Scaling of Nodes Auto-Scaling of Storage Storage Configuration Replica Count Xpand Node Count Software Version Service Name Disable SSL/TLS NoSQL Interface MindsDB Partner Integration Qlik Partner Integration Striim Partner Integration | Version | Release Date | |:|:| | MariaDB Enterprise Server 10.6.17-13 | 2024-04-24 | | MariaDB Enterprise ColumnStore 23.10.1 | 2024-03-11 | | MariaDB MaxScale 24.02.2 | 2024-06-03 | | Cluster Management API (CMAPI) 22.08.2 | 2022-11-15 | Version Release Date MariaDB Enterprise Server 10.6.17-13 2024-04-24 MariaDB Enterprise ColumnStore 23.10.1 2024-03-11 MariaDB MaxScale 24.02.2 2024-06-03 Cluster Management API (CMAPI) 22.08.2 2022-11-15 | Topic | Contents | |:-|:-| | Deployment | Procedures to download, install, set-up, configure, and test MariaDB products. 
| | Service Management | Operating instructions, including: Administrative tools Logs Upgrades | | Connect and Query | Connect quickly and securely with instructions for: C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms | | Data Operations | Data operations ensure the availability and integrity of" }, { "data": "Detailed information and instructions explain: Backups Restoring from backups Importing data Replication | | Storage Engines | MariaDB Enterprise Server features pluggable storage engines to allow per-table workload optimization: Aria storage engine ColumnStore storage engine InnoDB storage engine MyISAM storage engine MyRocks storage engine S3 storage engine Spider storage engine | | Security | MariaDB products incorporate features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: Audit trails Authentication Data-at-Rest Encryption Data-in-Transit Encryption Hardening Privileges User accounts | | SQL Reference | Detailed reference and examples | | Architecture | Architectural information | Topic Contents Deployment Procedures to download, install, set-up, configure, and test MariaDB products. Service Management Operating instructions, including: Administrative tools Logs Upgrades Connect and Query Connect quickly and securely with instructions for: C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms Data Operations Data operations ensure the availability and integrity of data. Detailed information and instructions explain: Backups Restoring from backups Importing data Replication Storage Engines MariaDB Enterprise Server features pluggable storage engines to allow per-table workload optimization: Aria storage engine ColumnStore storage engine InnoDB storage engine MyISAM storage engine MyRocks storage engine S3 storage engine Spider storage engine Security MariaDB products incorporate features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: Audit trails Authentication Data-at-Rest Encryption Data-in-Transit Encryption Hardening Privileges User accounts SQL Reference Detailed reference and examples Architecture Architectural information Reference Tables Release Notes What's New Sample Code Feedback Support | Version | Release Date | |:-|:| | MariaDB Enterprise Server 10.6.17-13 | 2024-04-24 | | MariaDB MaxScale 24.02.2 | 2024-06-03 | Version Release Date MariaDB Enterprise Server 10.6.17-13 2024-04-24 MariaDB MaxScale 24.02.2 2024-06-03 | Topic | Contents | |:-|:-| | Deployment | Procedures to download, install, set-up, configure, and test MariaDB products. | | Service Management | Operating instructions, including: Administrative tools Logs Upgrades | | Connect and Query | Connect quickly and securely with instructions for: C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms | | Data Operations | Data operations ensure the availability and integrity of data. 
Detailed information and instructions explain: Backups Restoring from backups Importing data Replication | | Storage Engines | MariaDB Enterprise Server features pluggable storage engines to allow per-table workload optimization: Aria storage engine ColumnStore storage engine InnoDB storage engine MyISAM storage engine MyRocks storage engine S3 storage engine Spider storage engine | | Security | MariaDB products incorporate features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: Audit trails Authentication Data-at-Rest Encryption Data-in-Transit Encryption Hardening Privileges User accounts | | SQL Reference | Detailed reference and examples | | Architecture | Architectural information | Topic Contents Deployment Procedures to download, install, set-up, configure, and test MariaDB products. Service Management Operating instructions, including: Administrative tools Logs Upgrades Connect and Query Connect quickly and securely with instructions for: C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms Data Operations Data operations ensure the availability and integrity of" }, { "data": "Detailed information and instructions explain: Backups Restoring from backups Importing data Replication Storage Engines MariaDB Enterprise Server features pluggable storage engines to allow per-table workload optimization: Aria storage engine ColumnStore storage engine InnoDB storage engine MyISAM storage engine MyRocks storage engine S3 storage engine Spider storage engine Security MariaDB products incorporate features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: Audit trails Authentication Data-at-Rest Encryption Data-in-Transit Encryption Hardening Privileges User accounts SQL Reference Detailed reference and examples Architecture Architectural information Reference Tables Release Notes What's New Sample Code Feedback Support SkySQL services are optimized for your workload: | Topology | Use Case | |:-|:| | Distributed Transactions | Read/write scale applications with high concurrency and availability | | Replicated Transactions | Read scale applications with high availability | | Single Node Transactions | Smaller datasets with moderate concurrency | | Multi-Node Analytics | Run complex analytical queries on large datasets | | Single Node Analytics | Run complex analytical queries | Topology Use Case Distributed Transactions Read/write scale applications with high concurrency and availability Replicated Transactions Read scale applications with high availability Single Node Transactions Smaller datasets with moderate concurrency Multi-Node Analytics Run complex analytical queries on large datasets Single Node Analytics Run complex analytical queries | Topic | Contents | |:-|:-| | Quickstart | This guided walkthrough will get you started quick, launching your first SkySQL service then connecting, loading data, and querying your database. | | Features and Concepts | This SkySQL overview explains how to select the SkySQL service optimized for your use case, and how SkySQL can be tailored to your specific business requirements. 
| | Service Management | Instructions are provided for: Managing services using the SkySQL Portal Managing services using the SkySQL DBaaS REST API Billing configuration Service configuration Monitoring Support Workload Analysis | | Connect and Query | Connect quickly and securely to your SkySQL services with instructions for: Desktop clients Command-line scripts C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms | | Data Operations | Data operations ensure the availability and integrity of data. Detailed information and instructions explain: Automated nightly backups Self-service manual backups Restoring from backups Exporting data Importing data Replication | | Migration | Migrate to MariaDB SkySQL | | Security | MariaDB SkySQL incorporates features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: User accounts Permissions Encryption Allowlists Audit features | | SQL Reference | Detailed reference and examples for: Localization with character sets and collations | Topic Contents Quickstart This guided walkthrough will get you started quick, launching your first SkySQL service then connecting, loading data, and querying your database. Features and Concepts This SkySQL overview explains how to select the SkySQL service optimized for your use case, and how SkySQL can be tailored to your specific business" }, { "data": "Service Management Instructions are provided for: Managing services using the SkySQL Portal Managing services using the SkySQL DBaaS REST API Billing configuration Service configuration Monitoring Support Workload Analysis Connect and Query Connect quickly and securely to your SkySQL services with instructions for: Desktop clients Command-line scripts C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms Data Operations Data operations ensure the availability and integrity of data. Detailed information and instructions explain: Automated nightly backups Self-service manual backups Restoring from backups Exporting data Importing data Replication Migration Migrate to MariaDB SkySQL Security MariaDB SkySQL incorporates features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: User accounts Permissions Encryption Allowlists Audit features SQL Reference Detailed reference and examples for: Localization with character sets and collations FAQ (Frequently Asked Questions) Feedback Reference Tables Release Notes Sample Code Not sure where to start? See Frequently Asked Questions (FAQ). Ready for POC (Proof of Concept)? Learn the next steps. Need Help? Contact us. | Version | Release Date | |:-|:| | MariaDB Xpand 23.09.1 | 2023-09-21 | | MariaDB MaxScale 24.02.2 | 2024-06-03 | Version Release Date MariaDB Xpand 23.09.1 2023-09-21 MariaDB MaxScale 24.02.2 2024-06-03 | Topic | Contents | |:-|:-| | Deployment | Procedures to download, install, set-up, configure, and test MariaDB products. | | Service Management | Operating instructions, including: Administrative tools Logs Upgrades | | Connect and Query | Connect quickly and securely with instructions for: C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms | | Data Operations | Data operations ensure the availability and integrity of data. 
Detailed information and instructions explain: Backups Restoring from backups Importing data Replication | | Security | MariaDB products incorporate features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: Audit trails Authentication Data-at-Rest Encryption Data-in-Transit Encryption Hardening Privileges User accounts | | SQL Reference | Detailed reference and examples | | Architecture | Architectural information | Topic Contents Deployment Procedures to download, install, set-up, configure, and test MariaDB products. Service Management Operating instructions, including: Administrative tools Logs Upgrades Connect and Query Connect quickly and securely with instructions for: C C++ Java Java R2DBC Node.js ODBC API Python Business Intelligence platforms Data Operations Data operations ensure the availability and integrity of data. Detailed information and instructions explain: Backups Restoring from backups Importing data Replication Security MariaDB products incorporate features focused on enterprise governance, risk, compliance (GRC) and information security (infosec) requirements. Here we detail: Audit trails Authentication Data-at-Rest Encryption Data-in-Transit Encryption Hardening Privileges User accounts SQL Reference Detailed reference and examples Architecture Architectural information Reference Tables Release Notes Sample Code Feedback MariaDB Forums Support MariaDB Documentation integrates with MariaDB Forums. The \"Comments\" section of each documentation page lists comments made on that page. Comments made on documentation pages are also available through topical forums. Log in with a MariaDB ID, available free of charge, for access to post. See MariaDB Forums to search forum posts and to access the full set of topical forums. 2024 MariaDB. All rights reserved. Products | Services | Resources Legal | Privacy Policy | Cookie Policy | | | Newsletter | About Us | Contact" } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "KubeDB by AppsCode", "subcategory": "Database" }
[ { "data": "Run Production-Grade Databases on Kubernetes Backup and Recovery Solution for Kubernetes Run Production-Grade Vault on Kubernetes Secure Ingress Controller for Kubernetes Kubernetes Configuration Syncer Kubernetes Authentication WebHook Server KubeDB simplifies Provisioning, Upgrading, Scaling, Volume Expansion, Monitor, Backup, Restore for various Databases in Kubernetes on any Public & Private Cloud Assistance is available at any stage of your AppsCode journey. Whether you're just starting out or navigating more advanced territories, support is here for you. User Guides Register, add credentials, clusters, enable features, and manage databases seamlessly. Get started now! User Guides Manage profile, emails, avatar, security, OAuth2 applications, tokens, organizations, and Kubernetes credentials effortlessly User Guides Add, import, and remove clusters, manage workloads, helm charts, presets, security, and monitor constraints and violations seamlessly User Guides Create, scale, backup, upgrade, monitor, and secure your databases effortlessly. Stay informed with insights and handle violations New to AppsCode? Follow simple steps to set up your account. Manage your account, subscriptions, and security settings with our interface Simplify cluster management with AppsCode's intuitive tools and features Manage databases with AppsCode's user-friendly tools and solutions No spam, we promise. Your mail address is secure 2024 AppsCode Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "MongoDB", "subcategory": "Database" }
[ { "data": "Welcome to the official MongoDB Documentation. Whether youre a developer, database administrator, or just starting your journey with MongoDB, our documentation provides you with the information and knowledge needed to build applications on MongoDB and the Atlas developer data platform. MongoDB Atlas Run MongoDB on a multi-cloud developer data platform that accelerates and simplifies working with operational data. Database Manual Learn core MongoDB concepts, including data modeling, querying data, aggregations, sharding, and more. Migrators, Tools, and Connectors Explore tools and integrations for MongoDB, from data visualization and development to migration and management. Client Libraries Connect your application to your database with one of the official MongoDB libraries. Take Free Courses on MongoDB University Join Forums and Discussions View Developer Resources" } ]
{ "category": "App Definition and Development", "file_name": "github-privacy-statement.md", "project_name": "NomsDB", "subcategory": "Database" }
[ { "data": "Effective date: February 1, 2024 Welcome to the GitHub Privacy Statement. This is where we describe how we handle your Personal Data, which is information that is directly linked or can be linked to you. It applies to the Personal Data that GitHub, Inc. or GitHub B.V., processes as the Data Controller when you interact with websites, applications, and services that display this Statement (collectively, Services). This Statement does not apply to services or products that do not display this Statement, such as Previews, where relevant. When a school or employer supplies your GitHub account, they assume the role of Data Controller for most Personal Data used in our Services. This enables them to: Should you access a GitHub Service through an account provided by an organization, such as your employer or school, the organization becomes the Data Controller, and this Privacy Statement's direct applicability to you changes. Even so, GitHub remains dedicated to preserving your privacy rights. In such circumstances, GitHub functions as a Data Processor, adhering to the Data Controller's instructions regarding your Personal Data's processing. A Data Protection Agreement governs the relationship between GitHub and the Data Controller. For further details regarding their privacy practices, please refer to the privacy statement of the organization providing your account. In cases where your organization grants access to GitHub products, GitHub acts as the Data Controller solely for specific processing activities. These activities are clearly defined in a contractual agreement with your organization, known as a Data Protection Agreement. You can review our standard Data Protection Agreement at GitHub Data Protection Agreement. For those limited purposes, this Statement governs the handling of your Personal Data. For all other aspects of GitHub product usage, your organization's policies apply. When you use third-party extensions, integrations, or follow references and links within our Services, the privacy policies of these third parties apply to any Personal Data you provide or consent to share with them. Their privacy statements will govern how this data is processed. Personal Data is collected from you directly, automatically from your device, and also from third parties. The Personal Data GitHub processes when you use the Services depends on variables like how you interact with our Services (such as through web interfaces, desktop or mobile applications), the features you use (such as pull requests, Codespaces, or GitHub Copilot) and your method of accessing the Services (your preferred IDE). Below, we detail the information we collect through each of these channels: The Personal Data we process depends on your interaction and access methods with our Services, including the interfaces (web, desktop, mobile apps), features used (pull requests, Codespaces, GitHub Copilot), and your preferred access tools (like your IDE). This section details all the potential ways GitHub may process your Personal Data: When carrying out these activities, GitHub practices data minimization and uses the minimum amount of Personal Information required. We may share Personal Data with the following recipients: If your GitHub account has private repositories, you control the access to that information. 
GitHub personnel does not access private repository information without your consent except as provided in this Privacy Statement and for: GitHub will provide you with notice regarding private repository access unless doing so is prohibited by law or if GitHub acted in response to a security threat or other risk to security. GitHub processes Personal Data in compliance with the GDPR, ensuring a lawful basis for each processing" }, { "data": "The basis varies depending on the data type and the context, including how you access the services. Our processing activities typically fall under these lawful bases: Depending on your residence location, you may have specific legal rights regarding your Personal Data: To exercise these rights, please send an email to privacy[at]github[dot]com and follow the instructions provided. To verify your identity for security, we may request extra information before addressing your data-related request. Please contact our Data Protection Officer at dpo[at]github[dot]com for any feedback or concerns. Depending on your region, you have the right to complain to your local Data Protection Authority. European users can find authority contacts on the European Data Protection Board website, and UK users on the Information Commissioners Office website. We aim to promptly respond to requests in compliance with legal requirements. Please note that we may retain certain data as necessary for legal obligations or for establishing, exercising, or defending legal claims. GitHub stores and processes Personal Data in a variety of locations, including your local region, the United States, and other countries where GitHub, its affiliates, subsidiaries, or subprocessors have operations. We transfer Personal Data from the European Union, the United Kingdom, and Switzerland to countries that the European Commission has not recognized as having an adequate level of data protection. When we engage in such transfers, we generally rely on the standard contractual clauses published by the European Commission under Commission Implementing Decision 2021/914, to help protect your rights and enable these protections to travel with your data. To learn more about the European Commissions decisions on the adequacy of the protection of personal data in the countries where GitHub processes personal data, see this article on the European Commission website. GitHub also complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. GitHub has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union in reliance on the EU-U.S. DPF and from the United Kingdom (and Gibraltar) in reliance on the UK Extension to the EU-U.S. DPF. GitHub has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy statement and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) program, and to view our certification, please visit https://www.dataprivacyframework.gov/. 
GitHub has the responsibility for the processing of Personal Data it receives under the Data Privacy Framework (DPF) Principles and subsequently transfers to a third party acting as an agent on GitHubs behalf. GitHub shall remain liable under the DPF Principles if its agent processes such Personal Data in a manner inconsistent with the DPF Principles, unless the organization proves that it is not responsible for the event giving rise to the damage. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, GitHub commits to resolve DPF Principles-related complaints about our collection and use of your personal" }, { "data": "EU, UK, and Swiss individuals with inquiries or complaints regarding our handling of personal data received in reliance on the EU-U.S. DPF, the UK Extension, and the Swiss-U.S. DPF should first contact GitHub at: dpo[at]github[dot]com. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://go.adr.org/dpf_irm.html for more information or to file a complaint. The services of the International Centre for Dispute Resolution are provided at no cost to you. An individual has the possibility, under certain conditions, to invoke binding arbitration for complaints regarding DPF compliance not resolved by any of the other DPF mechanisms. For additional information visit https://www.dataprivacyframework.gov/s/article/ANNEX-I-introduction-dpf?tabset-35584=2. GitHub is subject to the investigatory and enforcement powers of the Federal Trade Commission (FTC). Under Section 5 of the Federal Trade Commission Act (15 U.S.C. 45), an organization's failure to abide by commitments to implement the DPF Principles may be challenged as deceptive by the FTC. The FTC has the power to prohibit such misrepresentations through administrative orders or by seeking court orders. GitHub uses appropriate administrative, technical, and physical security controls to protect your Personal Data. Well retain your Personal Data as long as your account is active and as needed to fulfill contractual obligations, comply with legal requirements, resolve disputes, and enforce agreements. The retention duration depends on the purpose of data collection and any legal obligations. GitHub uses administrative, technical, and physical security controls where appropriate to protect your Personal Data. Contact us via our contact form or by emailing our Data Protection Officer at dpo[at]github[dot]com. Our addresses are: GitHub B.V. Prins Bernhardplein 200, Amsterdam 1097JB The Netherlands GitHub, Inc. 88 Colin P. Kelly Jr. St. San Francisco, CA 94107 United States Our Services are not intended for individuals under the age of 13. We do not intentionally gather Personal Data from such individuals. If you become aware that a minor has provided us with Personal Data, please notify us. GitHub may periodically revise this Privacy Statement. If there are material changes to the statement, we will provide at least 30 days prior notice by updating our website or sending an email to your primary email address associated with your GitHub account. Below are translations of this document into other languages. In the event of any conflict, uncertainty, or apparent inconsistency between any of those versions and the English version, this English version is the controlling version. 
Cliquez ici pour obtenir la version franaise: Dclaration de confidentialit de GitHub (PDF). For translations of this statement into other languages, please visit https://docs.github.com/ and select a language from the drop-down menu under English. GitHub uses cookies to provide, secure and improve our Service or to develop new features and functionality of our Service. For example, we use them to (i) keep you logged in, (ii) remember your preferences, (iii) identify your device for security and fraud purposes, including as needed to maintain the integrity of our Service, (iv) compile statistical reports, and (v) provide information and insight for future development of GitHub. We provide more information about cookies on GitHub that describes the cookies we set, the needs we have for those cookies, and the expiration of such cookies. For Enterprise Marketing Pages, we may also use non-essential cookies to (i) gather information about enterprise users interests and online activities to personalize their experiences, including by making the ads, content, recommendations, and marketing seen or received more relevant and (ii) serve and measure the effectiveness of targeted advertising and other marketing" }, { "data": "If you disable the non-essential cookies on the Enterprise Marketing Pages, the ads, content, and marketing you see may be less relevant. Our emails to users may contain a pixel tag, which is a small, clear image that can tell us whether or not you have opened an email and what your IP address is. We use this pixel tag to make our email communications more effective and to make sure we are not sending you unwanted email. The length of time a cookie will stay on your browser or device depends on whether it is a persistent or session cookie. Session cookies will only stay on your device until you stop browsing. Persistent cookies stay until they expire or are deleted. The expiration time or retention period applicable to persistent cookies depends on the purpose of the cookie collection and tool used. You may be able to delete cookie data. For more information, see \"GitHub General Privacy Statement.\" We use cookies and similar technologies, such as web beacons, local storage, and mobile analytics, to operate and provide our Services. When visiting Enterprise Marketing Pages, like resources.github.com, these and additional cookies, like advertising IDs, may be used for sales and marketing purposes. Cookies are small text files stored by your browser on your device. A cookie can later be read when your browser connects to a web server in the same domain that placed the cookie. The text in a cookie contains a string of numbers and letters that may uniquely identify your device and can contain other information as well. This allows the web server to recognize your browser over time, each time it connects to that web server. Web beacons are electronic images (also called single-pixel or clear GIFs) that are contained within a website or email. When your browser opens a webpage or email that contains a web beacon, it automatically connects to the web server that hosts the image (typically operated by a third party). This allows that web server to log information about your device and to set and read its own cookies. In the same way, third-party content on our websites (such as embedded videos, plug-ins, or ads) results in your browser connecting to the third-party web server that hosts that content. 
Mobile identifiers for analytics can be accessed and used by apps on mobile devices in much the same way that websites access and use cookies. When visiting Enterprise Marketing pages, like resources.github.com, on a mobile device these may allow us and our third-party analytics and advertising partners to collect data for sales and marketing purposes. We may also use so-called flash cookies (also known as Local Shared Objects or LSOs) to collect and store information about your use of our Services. Flash cookies are commonly used for advertisements and videos. The GitHub Services use cookies and similar technologies for a variety of purposes, including to store your preferences and settings, enable you to sign-in, analyze how our Services perform, track your interaction with the Services, develop inferences, combat fraud, and fulfill other legitimate purposes. Some of these cookies and technologies may be provided by third parties, including service providers and advertising" }, { "data": "For example, our analytics and advertising partners may use these technologies in our Services to collect personal information (such as the pages you visit, the links you click on, and similar usage information, identifiers, and device information) related to your online activities over time and across Services for various purposes, including targeted advertising. GitHub will place non-essential cookies on pages where we market products and services to enterprise customers, for example, on resources.github.com. We and/or our partners also share the information we collect or infer with third parties for these purposes. The table below provides additional information about how we use different types of cookies: | Purpose | Description | |:--|:--| | Required Cookies | GitHub uses required cookies to perform essential website functions and to provide the services. For example, cookies are used to log you in, save your language preferences, provide a shopping cart experience, improve performance, route traffic between web servers, detect the size of your screen, determine page load times, improve user experience, and for audience measurement. These cookies are necessary for our websites to work. | | Analytics | We allow third parties to use analytics cookies to understand how you use our websites so we can make them better. For example, cookies are used to gather information about the pages you visit and how many clicks you need to accomplish a task. We also use some analytics cookies to provide personalized advertising. | | Social Media | GitHub and third parties use social media cookies to show you ads and content based on your social media profiles and activity on GitHubs websites. This ensures that the ads and content you see on our websites and on social media will better reflect your interests. This also enables third parties to develop and improve their products, which they may use on websites that are not owned or operated by GitHub. | | Advertising | In addition, GitHub and third parties use advertising cookies to show you new ads based on ads you've already seen. Cookies also track which ads you click or purchases you make after clicking an ad. This is done both for payment purposes and to show you ads that are more relevant to you. For example, cookies are used to detect when you click an ad and to show you ads based on your social media interests and website browsing history. 
| You have several options to disable non-essential cookies: Specifically on GitHub Enterprise Marketing Pages Any GitHub page that serves non-essential cookies will have a link in the pages footer to cookie settings. You can express your preferences at any time by clicking on that linking and updating your settings. Some users will also be able to manage non-essential cookies via a cookie consent banner, including the options to accept, manage, and reject all non-essential cookies. Generally for all websites You can control the cookies you encounter on the web using a variety of widely-available tools. For example: These choices are specific to the browser you are using. If you access our Services from other devices or browsers, take these actions from those systems to ensure your choices apply to the data collected when you use those systems. This section provides extra information specifically for residents of certain US states that have distinct data privacy laws and regulations. These laws may grant specific rights to residents of these states when the laws come into effect. This section uses the term personal information as an equivalent to the term Personal Data. These rights are common to the US State privacy laws: We may collect various categories of personal information about our website visitors and users of \"Services\" which includes GitHub applications, software, products, or" }, { "data": "That information includes identifiers/contact information, demographic information, payment information, commercial information, internet or electronic network activity information, geolocation data, audio, electronic, visual, or similar information, and inferences drawn from such information. We collect this information for various purposes. This includes identifying accessibility gaps and offering targeted support, fostering diversity and representation, providing services, troubleshooting, conducting business operations such as billing and security, improving products and supporting research, communicating important information, ensuring personalized experiences, and promoting safety and security. To make an access, deletion, correction, or opt-out request, please send an email to privacy[at]github[dot]com and follow the instructions provided. We may need to verify your identity before processing your request. If you choose to use an authorized agent to submit a request on your behalf, please ensure they have your signed permission or power of attorney as required. To opt out of the sharing of your personal information, you can click on the \"Do Not Share My Personal Information\" link on the footer of our Websites or use the Global Privacy Control (\"GPC\") if available. Authorized agents can also submit opt-out requests on your behalf. We also make the following disclosures for purposes of compliance with California privacy law: Under California Civil Code section 1798.83, also known as the Shine the Light law, California residents who have provided personal information to a business with which the individual has established a business relationship for personal, family, or household purposes (California Customers) may request information about whether the business has disclosed personal information to any third parties for the third parties direct marketing purposes. Please be aware that we do not disclose personal information to any third parties for their direct marketing purposes as defined by this law. 
California Customers may request further information about our compliance with this law by emailing (privacy[at]github[dot]com). Please note that businesses are required to respond to one request per California Customer each year and may not be required to respond to requests made by means other than through the designated email address. California residents under the age of 18 who are registered users of online sites, services, or applications have a right under California Business and Professions Code Section 22581 to remove, or request and obtain removal of, content or information they have publicly posted. To remove content or information you have publicly posted, please submit a Private Information Removal request. Alternatively, to request that we remove such content or information, please send a detailed description of the specific content or information you wish to have removed to GitHub support. Please be aware that your request does not guarantee complete or comprehensive removal of content or information posted online and that the law may not permit or require removal in certain circumstances. If you have any questions about our privacy practices with respect to California residents, please send an email to privacy[at]github[dot]com. We value the trust you place in us and are committed to handling your personal information with care and respect. If you have any questions or concerns about our privacy practices, please email our Data Protection Officer at dpo[at]github[dot]com. If you live in Colorado, Connecticut, or Virginia you have some additional rights: We do not sell your covered information, as defined under Chapter 603A of the Nevada Revised Statutes. If you still have questions about your covered information or anything else in our Privacy Statement, please send an email to privacy[at]github[dot]com. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "docs.github.com.md", "project_name": "NomsDB", "subcategory": "Database" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Neo4j", "subcategory": "Database" }
[ { "data": "Is this page helpful? 2024 License: Creative Commons 4.0 The manual covers the following areas: IntroductionAn introduction to the Neo4j Graph Data Science library. InstallationInstructions for how to install and use the Neo4j Graph Data Science library. Common usageGeneral usage patterns and recommendations for getting the most out of the Neo4j Graph Data Science library. Graph managementA detailed guide to the graph catalog and utility procedures included in the Neo4j Graph Data Science library. Graph algorithmsA detailed guide to each of the algorithms in their respective categories, including use-cases and examples. Machine learningA detailed guide to the machine learning procedures included in the Neo4j Graph Data Science library. Production deploymentThis chapter explains advanced details with regards to common Neo4j components. Python clientDocumentation of the Graph Data Science client for Python users. Operations referenceReference of all procedures contained in the Neo4j Graph Data Science library. Migration from Graph Data Science library Version 1.xAdditional resources - migration guide, books, etc - to help using the Neo4j Graph Data Science library. Migration from Legacy to new Cypher projectionsMigration guide to help migration from the Legacy Cypher projections to the new Cypher projections. The source code of the library is available at GitHub. If you have a suggestion on how we can improve the library or want to report a problem, you can create a new issue. 2024 Neo4j, Inc. Terms | Privacy | Sitemap Neo4j, Neo Technology, Cypher, Neo4j Bloom and Neo4j Aura are registered trademarks of Neo4j, Inc. All other marks are owned by their respective companies." } ]
{ "category": "App Definition and Development", "file_name": "understanding-github-code-search-syntax.md", "project_name": "NomsDB", "subcategory": "Database" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
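To tie the pieces above together, here are two hedged example queries that combine qualifiers, boolean operators, a quoted string, and regular expressions, using only the syntax documented in this article. The repository name is the one used in the examples above; the path and patterns are placeholders, so adjust them for your own search:

```
repo:github-linguist/linguist language:ruby NOT is:fork /parse_\w+/
(language:go OR language:rust) "sparse index" NOT path:/^vendor\//
```

Each element is separated by spaces, as required, and the forward slash inside the second path pattern is escaped with a backslash.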
{ "category": "App Definition and Development", "file_name": "github-terms-of-service.md", "project_name": "NomsDB", "subcategory": "Database" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "introduction-to-oracle-database.html#GUID-166C1E31-CDBC-47D9-867A-3D4C9AAC837D.md", "project_name": "Oracle Database", "subcategory": "Database" }
[ { "data": "This chapter provides an overview of Oracle Database. This chapter contains the following topics: About Relational Databases Schema Objects Data Access Transaction Management Oracle Database Architecture Oracle Database Documentation Roadmap Every organization has information that it must store and manage to meet its requirements. For example, a corporation must collect and maintain human resources records for its employees. This information must be available to those who need it. An information system is a formal system for storing and processing information. An information system could be a set of cardboard boxes containing manila folders along with rules for how to store and retrieve the folders. However, most companies today use a database to automate their information systems. A database is an organized collection of information treated as a unit. The purpose of a database is to collect, store, and retrieve related information for use by database applications. Parent topic: Introduction to Oracle Database A database management system (DBMS) is software that controls the storage, organization, and retrieval of data. Typically, a DBMS has the following elements: Kernel code This code manages memory and storage for the DBMS. Repository of metadata This repository is usually called a data dictionary. Query language This language enables applications to access the data. A database application is a software program that interacts with a database to access and manipulate data. The first generation of database management systems included the following types: Hierarchical A hierarchical database organizes data in a tree structure. Each parent record has one or more child records, similar to the structure of a file system. Network A network database is similar to a hierarchical database, except records have a many-to-many rather than a one-to-many relationship. The preceding database management systems stored data in rigid, predetermined relationships. Because no data definition language existed, changing the structure of the data was difficult. Also, these systems lacked a simple query language, which hindered application development. Parent topic: About Relational Databases In his seminal 1970 paper \"A Relational Model of Data for Large Shared Data Banks,\" E. F. Codd defined a relational model based on mathematical set theory. Today, the most widely accepted database model is the relational model. A relational database is a database that conforms to the relational model. The relational model has the following major aspects: Structures Well-defined objects store or access the data of a database. Operations Clearly defined actions enable applications to manipulate the data and structures of a database. Integrity rules Integrity rules govern operations on the data and structures of a database. A relational database stores data in a set of simple relations. A relation is a set of tuples. A tuple is an unordered set of attribute values. A table is a two-dimensional representation of a relation in the form of rows (tuples) and columns (attributes). Each row in a table has the same set of columns. A relational database is a database that stores data in relations (tables). For example, a relational database could store information about company employees in an employee table, a department table, and a salary table. Related Topics Parent topic: About Relational Databases The relational model is the basis for a relational database management system (RDBMS). 
An RDBMS moves data into a database, stores the data, and retrieves it so that applications can manipulate it. An RDBMS distinguishes between the following types of operations: Logical operations In this case, an application specifies what content is required. For example, an application requests an employee name or adds an employee record to a" }, { "data": "Physical operations In this case, the RDBMS determines how things should be done and carries out the operation. For example, after an application queries a table, the database may use an index to find the requested rows, read the data into memory, and perform many other steps before returning a result to the user. The RDBMS stores and retrieves data so that physical operations are transparent to database applications. Oracle Database is an RDBMS. An RDBMS that implements object-oriented features such as user-defined types, inheritance, and polymorphism is called an object-relational database management system (ORDBMS). Oracle Database has extended the relational model to an object-relational model, making it possible to store complex business models in a relational database. Parent topic: About Relational Databases The current version of Oracle Database is the result of over 40 years of innovative development. Highlights in the evolution of Oracle Database include the following: Founding of Oracle Corporation In 1977, Larry Ellison, Bob Miner, and Ed Oates started the consultancy Software Development Laboratories, which became Relational Software, Inc. (RSI). In 1983, RSI became Oracle Systems Corporation and then later Oracle Corporation. First commercially available RDBMS In 1979, RSI introduced Oracle V2 (Version 2) as the first commercially available SQL-based RDBMS, a landmark event in the history of relational databases. Portable version of Oracle Database Oracle Version 3, released in 1983, was the first relational database to run on mainframes, minicomputers, and personal computers. The database was written in C, enabling the database to be ported to multiple platforms. Enhancements to concurrency control, data distribution, and scalability Version 4 introduced multiversion read consistency. Version 5, released in 1985, supported client/server computing and distributed database systems. Version 6 brought enhancements to disk I/O, row locking, scalability, and backup and recovery. Also, Version 6 introduced the first version of the PL/SQL language, a proprietary procedural extension to SQL. PL/SQL stored program units Oracle7, released in 1992, introduced PL/SQL stored procedures and triggers. Objects and partitioning Oracle8 was released in 1997 as the object-relational database, supporting many new data types. Additionally, Oracle8 supported partitioning of large tables. Internet computing Oracle8i Database, released in 1999, provided native support for internet protocols and server-side support for Java. Oracle8i was designed for internet computing, enabling the database to be deployed in a multitier environment. Oracle Real Application Clusters (Oracle RAC) Oracle9i Database introduced Oracle RAC in 2001, enabling multiple instances to access a single database simultaneously. Additionally, Oracle XML Database (Oracle XML DB) introduced the ability to store and query XML. Grid computing Oracle Database 10g introduced grid computing in 2003. This release enabled organizations to virtualize computing resources by building a grid infrastructure based on low-cost commodity servers. 
A key goal was to make the database self-managing and self-tuning. Oracle Automatic Storage Management (Oracle ASM) helped achieve this goal by virtualizing and simplifying database storage management. Manageability, diagnosability, and availability Oracle Database 11g, released in 2007, introduced a host of new features that enabled administrators and developers to adapt quickly to changing business requirements. The key to adaptability is simplifying the information infrastructure by consolidating information and using automation wherever possible. Plugging In to the Cloud Oracle Database 12c, released in 2013, was designed for the Cloud, featuring a new Multitenant architecture, In-Memory Column Store (IM column store), and support for JSON documents. Oracle Database 12c helped DBAs make more efficient use of their IT resources, while continuing to reduce costs and improve service levels for end" }, { "data": "Integration and memory performance Oracle Database 18c simplified integration with directory services such as Microsoft Active Directory. It also introduced functionality to exploit memory for columnar data models and high-speed row access. Enhanced stability Oracle Database 19c was the long-support version of the Oracle Database 12c (Release 12.2) family of products. A major focus of this release was stability. Oracle Database 19c also introduced several small but significant improvements to features such as JSON and Active Data Guard. Improved developer experience Oracle Database 21c improves the developer experience with features such as Oracle Blockchain Tables and native JSON data types. Enhancements to Automatic In-Memory make the IM column store largely self-managing. Parent topic: About Relational Databases One characteristic of an RDBMS is the independence of physical data storage from logical data structures. In Oracle Database, a database schema is a collection of logical data structures, or schema objects. A database user owns a database schema, which has the same name as the user name. Schema objects are user-created structures that directly refer to the data in the database. The database supports many types of schema objects, the most important of which are tables and indexes. A schema object is one type of database object. Some database objects, such as profiles and roles, do not reside in schemas. Related Topics Parent topic: Introduction to Oracle Database A table describes an entity such as employees. You define a table with a table name, such as employees, and set of columns. In general, you give each column a name, a data type, and a width when you create the table. A table is a set of rows. A column identifies an attribute of the entity described by the table, whereas a row identifies an instance of the entity. For example, attributes of the employees entity correspond to columns for employee ID and last name. A row identifies a specific employee. You can optionally specify a rule, called an integrity constraint, for a column. One example is a NOT NULL integrity constraint. This constraint forces the column to contain a value in every row. Related Topics Parent topic: Schema Objects An index is an optional data structure that you can create on one or more columns of a table. Indexes can increase the performance of data retrieval. When processing a request, the database can use available indexes to locate the requested rows efficiently. Indexes are useful when applications often query a specific row or range of rows. 
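As a brief, illustrative sketch of the schema objects just described (a table with typed columns, a NOT NULL integrity constraint, and an optional index), the following SQL uses hypothetical table and column names loosely modeled on the employees example above; it is not a schema mandated by Oracle:

```sql
-- Illustrative only: a table describing the "employees" entity, with
-- NOT NULL integrity constraints forcing values in every row.
CREATE TABLE employees (
  employee_id   NUMBER        NOT NULL,
  last_name     VARCHAR2(25)  NOT NULL,
  first_name    VARCHAR2(20),
  salary        NUMBER(8,2),
  department_id NUMBER
);

-- An optional index on a frequently queried column. It can be dropped
-- and re-created later without affecting the table or the applications
-- that query it.
CREATE INDEX emp_last_name_ix ON employees (last_name);
```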
Indexes are logically and physically independent of the data. Thus, you can drop and create indexes with no effect on the tables or other indexes. All applications continue to function after you drop an index. Related Topics Parent topic: Schema Objects A general requirement for a DBMS is to adhere to accepted industry standards for a data access language. Note: Parent topic: Introduction to Oracle Database SQL is a set-based declarative language that provides an interface to an RDBMS such as Oracle Database. Procedural languages such as C describe how things should be done. SQL is nonprocedural and describes what should be done. SQL is the ANSI standard language for relational databases. All operations on the data in an Oracle database are performed using SQL statements. For example, you use SQL to create tables and query and modify data in tables. A SQL statement can be thought of as a very simple, but powerful, computer program or instruction. Users specify the result that they want (for example, the names of employees), not how to derive" }, { "data": "A SQL statement is a string of SQL text such as the following: ``` SELECT firstname, lastname FROM employees; ``` SQL statements enable you to perform the following tasks: Query data Insert, update, and delete rows in a table Create, replace, alter, and drop objects Control access to the database and its objects Guarantee database consistency and integrity SQL unifies the preceding tasks in one consistent language. Oracle SQL is an implementation of the ANSI standard. Oracle SQL supports numerous features that extend beyond standard SQL. Related Topics Parent topic: Data Access PL/SQL is a procedural extension to Oracle SQL. Java and JavaScript are additional options that you can use to store business logic in the database. PL/SQL is integrated with Oracle Database, enabling you to use all of the Oracle Database SQL statements, functions, and data types. You can use PL/SQL to control the flow of a SQL program, use variables, and write error-handling procedures. A primary benefit of PL/SQL is the ability to store application logic in the database itself. A PL/SQL procedure or function is a schema object that consists of a set of SQL statements and other PL/SQL constructs, grouped together, stored in the database, and run as a unit to solve a specific problem or to perform a set of related tasks. The principal benefit of server-side programming is that built-in functionality can be deployed anywhere. Oracle Database can also store program units written in Java and JavaScript. A Java stored procedure is a Java method published to SQL and stored in the database for general use. You can call existing PL/SQL programs from Java and JavaScript, and Java and JavaScript programs from PL/SQL. Multilingual Engine (MLE) offers you the ability to write business logic in JavaScript and store the code in the database as MLE Modules. Functions exported by MLE Modules can be exposed to SQL and PL/SQL by means of Call Specifications. These call specifications are PL/SQL units (functions, procedures, and packages ) and can be called anywhere PL/SQL is called. Related Topics Parent topic: Data Access Oracle Database is designed as a multiuser database. The database must ensure that multiple users can work concurrently without corrupting one another's data. Parent topic: Introduction to Oracle Database A transaction is a logical, atomic unit of work that contains one or more SQL statements. 
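Before the transaction discussion continues, here is a minimal sketch of the kind of server-side PL/SQL program unit described above. The procedure name and logic are hypothetical, and it assumes the illustrative employees table from the earlier sketch:

```sql
-- A minimal PL/SQL stored procedure: application logic stored and run
-- inside the database. Assumes the illustrative employees table above.
CREATE OR REPLACE PROCEDURE raise_salary (
  p_employee_id IN NUMBER,
  p_amount      IN NUMBER
) AS
BEGIN
  UPDATE employees
     SET salary = salary + p_amount
   WHERE employee_id = p_employee_id;

  -- Simple error handling: signal a failure if no row was updated.
  IF SQL%ROWCOUNT = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'No employee with id ' || p_employee_id);
  END IF;
END raise_salary;
/
```

Because the procedure lives in the database, any client can invoke it with a short anonymous block such as BEGIN raise_salary(101, 500); END;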
An RDBMS must be able to group SQL statements so that they are either all committed, which means they are applied to the database, or all rolled back, which means they are undone. An illustration of the need for transactions is a funds transfer from a savings account to a checking account. The transfer consists of the following separate operations: Decrease the savings account. Increase the checking account. Record the transaction in the transaction journal. Oracle Database guarantees that all three operations succeed or fail as a unit. For example, if a hardware failure prevents a statement in the transaction from executing, then the other statements must be rolled back. Transactions are one feature that set Oracle Database apart from a file system. If you perform an atomic operation that updates several files, and if the system fails halfway through, then the files will not be consistent. In contrast, a transaction moves an Oracle database from one consistent state to another. The basic principle of a transaction is \"all or nothing\": an atomic operation succeeds or fails as a whole. Related Topics Parent topic: Transaction Management A requirement of a multiuser RDBMS is the control of data concurrency, which is the simultaneous access of the same data by multiple" }, { "data": "Without concurrency controls, users could change data improperly, compromising data integrity. For example, one user could update a row while a different user simultaneously updates it. If multiple users access the same data, then one way of managing concurrency is to make users wait. However, the goal of a DBMS is to reduce wait time so it is either nonexistent or negligible. All SQL statements that modify data must proceed with as little interference as possible. Destructive interactions, which are interactions that incorrectly update data or alter underlying data structures, must be avoided. Oracle Database uses locks to control concurrent access to data. A lock is a mechanism that prevents destructive interaction between transactions accessing a shared resource. Locks help ensure data integrity while allowing maximum concurrent access to data. Related Topics Parent topic: Transaction Management In Oracle Database, each user must see a consistent view of the data, including visible changes made by a user's own transactions and committed transactions of other users. For example, the database must prevent the dirty read problem, which occurs when one transaction sees uncommitted changes made by another concurrent transaction. Oracle Database always enforces statement-level read consistency, which guarantees that the data that a single query returns is committed and consistent for a single point in time. Depending on the transaction isolation level, this point is the time at which the statement was opened or the time the transaction began. The Oracle Flashback Query feature enables you to specify this point in time explicitly. The database can also provide read consistency to all queries in a transaction, known as transaction-level read consistency. In this case, each statement in a transaction sees data from the same point in time, which is the time at which the transaction began. Related Topics Parent topic: Transaction Management A database server is the key to information management. In general, a server reliably manages a large amount of data in a multiuser environment so that users can concurrently access the same data. 
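Returning to the funds-transfer scenario described above, the three operations might be grouped into one transaction roughly as follows; the account and journal tables, column names, and values are placeholders:

```sql
-- All or nothing: the changes below commit together or are rolled back
-- together. Table and column names are illustrative.
UPDATE savings_accounts
   SET balance = balance - 500
 WHERE account_id = 3209;

UPDATE checking_accounts
   SET balance = balance + 500
 WHERE account_id = 3208;

INSERT INTO transaction_journal (from_account, to_account, amount)
VALUES (3209, 3208, 500);

COMMIT;   -- apply all three changes as a unit
-- On any failure before the COMMIT, a ROLLBACK undoes every statement
-- in the transaction, leaving the accounts consistent.
```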
A database server also prevents unauthorized access and provides efficient solutions for failure recovery. Parent topic: Introduction to Oracle Database An Oracle database server consists of a database and at least one database instance, commonly referred to as simply an instance. Because an instance and a database are so closely connected, the term Oracle database sometimes refers to both instance and database. In the strictest sense, the terms have the following meanings: Database A database is a set of files, located on disk, that store user data. These data files can exist independently of a database instance. Starting in Oracle Database 21c, \"database\" refers specifically to the data files of a multitenant container database (CDB), pluggable database (PDB), or application container. Database instance An instance is a named set of memory structures that manage database files. A database instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files. Parent topic: Oracle Database Architecture The multitenant architecture enables an Oracle database to be a CDB. Every Oracle database must contain or be able to be contained by another database. For example, a CDB contains PDBs, and an application container contains application PDBs. A PDB is contained by a CDB or application container, and an application container is contained by a CDB. Starting in Oracle Database 21c, a multitenant container database is the only supported architecture. In previous releases, Oracle supported non-container databases" }, { "data": "Parent topic: Database and Instance A CDB contains one or more user-created PDBs and application containers. At the physical level, a CDB is a set of files: control file, online redo log files, and data files. The database instance manages the files that make up the CDB. The following figure shows a CDB and an associated database instance. Figure 1-1 Database Instance and CDB Parent topic: Multitenant Architecture A PDB is a portable collection of schemas, schema objects, and nonschema objects that appears to an application as a separate database. At the physical level, each PDB has its own set of data files that store the data for the PDB. The CDB includes all the data files for the PDBs contained within it, and a set of system data files that store metadata for the CDB itself. To move or archive a PDB, you can unplug it. An unplugged PDB consists of the PDB data files and a metadata file. An unplugged PDB is not usable until it is plugged in to a CDB. The following figure shows a CDB named MYCDB. Figure 1-2 PDBs in a CDB Physically, MYCDB is an Oracle database, in the sense of a set of data files associated with an instance. MYCDB has one database instance, although multiple instances are possible in Oracle Real Application Clusters, and one set of database files. MYCDB contains two PDBs: hrpdb and salespdb. As shown in Figure 1-2, these PDBs appear to their respective applications as separate, independent databases. An application has no knowledge of whether it is connecting to a CDB or PDB. To administer the CDB itself or any PDB within it, you can connect to the CDB root. The root is a collection of schemas, schema objects, and nonschema objects to which all PDBs and application containers belong. Parent topic: Multitenant Architecture An application container is an optional, user-created container within a CDB that stores data and metadata for one or more applications. 
In this context, an application (also called the master application definition) is a named, versioned set of common data and metadata stored in the application root. For example, the application might include definitions of tables, views, user accounts, and PL/SQL packages that are common to a set of PDBs. In some ways, an application container functions as an application-specific CDB within a CDB. An application container, like the CDB itself, can include multiple application PDBs, and enables these PDBs to share metadata and data. At the physical level, an application container has its own set of data files, just like a PDB. For example, a SaaS deployment can use multiple application PDBs, each for a separate customer, which share application metadata and data. For example, in the following figure, salesapp is the application model in the application root. The application PDB named cust1pdb contains sales data only for customer 1, whereas the application PDB named cust2_pdb contains sales data only for customer 2. Plugging, unplugging, cloning, and other PDB-level operations are available for individual customer PDBs. Figure 1-3 SaaS Use Case Parent topic: Multitenant Architecture Oracle Sharding is a database scaling technique based on horizontal partitioning of data across multiple PDBs. Applications perceive the pool of PDBs as a single logical database. Key benefits of sharding for OLTP applications include linear scalability, fault containment, and geographical data distribution. Sharding is well suited to deployment in the Oracle Cloud. Unlike NoSQL data stores that implement sharding, Oracle Sharding provides the benefits of sharding without sacrificing the capabilities of an enterprise" }, { "data": "In a sharding architecture, each CDB is hosted on a dedicated server with its own local resources: CPU, memory, flash, or disk. You can designate a PDB as a shard. PDB shards from different CDBs make up a single logical database, which is referred to as a sharded database. Two shards in the same CDB cannot be members of the same sharded database. However, within the same CDB, one PDB could be in one sharded database, and another PDB could be in a separate sharded database. Horizontal partitioning involves splitting a database table across shards so that each shard contains the table with the same columns but a different subset of rows. A table split up in this manner is also known as a sharded table. The following figure shows a sharded table horizontally partitioned across three shards, each of which is a PDB in a separate CDB. Figure 1-4 Horizontal Partitioning of a Table Across Shards A use case is distributing customer account data across multiple CDBs. For example, a customer with ID 28459361 may look up his records. The following figure shows a possible architecture. The customer request is routed through a connection pool, where sharding directors (network listeners) direct the request to the appropriate PDB shard, which contains all the customer rows. Figure 1-5 Oracle Sharding Architecture Related Topics Parent topic: Database and Instance A database can be considered from both a physical and logical perspective. Physical data is data viewable at the operating system level. For example, operating system utilities such as the Linux ls and ps can list database files and processes. Logical data such as a table is meaningful only for the database. A SQL statement can list the tables in an Oracle database, but an operating system utility cannot. 
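Picking up the CDB and PDB discussion above, basic PDB lifecycle operations might look like the following sketch. It assumes a privileged connection to the CDB root and default data file placement (for example, Oracle Managed Files); the PDB name, admin user, and password are placeholders:

```sql
-- Create, open, and switch into a new pluggable database.
CREATE PLUGGABLE DATABASE sales_pdb
  ADMIN USER sales_admin IDENTIFIED BY placeholder_password;

ALTER PLUGGABLE DATABASE sales_pdb OPEN;

-- From here on, the session sees sales_pdb as if it were a separate,
-- independent database.
ALTER SESSION SET CONTAINER = sales_pdb;
```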
The database has physical structures and logical structures. Because the physical and logical structures are separate, you can manage the physical storage of data without affecting access to logical storage structures. For example, renaming a physical database file does not rename the tables whose data is stored in this file. Parent topic: Oracle Database Architecture The physical database structures are the files that store the data. When you execute a CREATE DATABASE command, you create a CDB. The following files are created: Data files Every CDB has one or more physical data files, which contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the data files. Control files Every CDB has a control file. A control file contains metadata specifying the physical structure of the database, including the database name and the names and locations of the database files. Online redo log files Every CDB has an online redo log, which is a set of two or more online redo log files. An online redo log is made up of redo entries (also called redo log records), which record all changes made to data. When you execute a CREATE PLUGGABLE DATABASE command within a CDB, you create a PDB. The PDB contains a dedicated set of data files within the CDB. A PDB does not have a separate, dedicated control file and online redo log: these files are shared by the PDBs. Many other files are important for the functioning of a CDB. These include parameter files and networking files. Backup files and archived redo log files are offline files important for backup and" }, { "data": "Related Topics See Also: \"Physical Storage Structures\" Parent topic: Database Storage Structures Logical storage structures enable Oracle Database to have fine-grained control of disk space use. This topic discusses logical storage structures: Data blocks At the finest level of granularity, Oracle Database data is stored in data blocks. One data block corresponds to a specific number of bytes on disk. Extents An extent is a specific number of logically contiguous data blocks, obtained in a single allocation, used to store a specific type of information. Segments A segment is a set of extents allocated for a user object (for example, a table or index), undo data, or temporary data. Tablespaces A database is divided into logical storage units called tablespaces. A tablespace is the logical container for segments. Each tablespace consists of at least one data file. Related Topics Parent topic: Database Storage Structures An Oracle database uses memory structures and processes to manage and access the CDB. All memory structures exist in the main memory of the computers that constitute the RDBMS. When applications connect to a CDB or PDB, they connect to a database instance. The instance services applications by allocating other memory areas in addition to the SGA, and starting other processes in addition to background processes. Parent topic: Oracle Database Architecture A process is a mechanism in an operating system that can run a series of steps. Some operating systems use the terms job, task, or thread. For the purposes of this topic, a thread is equivalent to a process. An Oracle database instance has the following types of processes: Client processes These processes are created and maintained to run the software code of an application program or an Oracle tool. Most environments have separate computers for client processes. 
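One way to observe the separation of logical and physical storage described above is to query the data dictionary from a suitably privileged account. This is only an illustrative sketch; the output naturally depends on your database:

```sql
-- Logical storage units (tablespaces) ...
SELECT tablespace_name, status, contents
  FROM dba_tablespaces;

-- ... and the physical data files that back them.
SELECT file_name, tablespace_name, bytes / 1024 / 1024 AS size_mb
  FROM dba_data_files;
```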
Background processes These processes consolidate functions that would otherwise be handled by multiple Oracle Database programs running for each client process. Background processes asynchronously perform I/O and monitor other Oracle Database processes to provide increased parallelism for better performance and reliability. Server processes These processes communicate with client processes and interact with Oracle Database to fulfill requests. Oracle processes include server processes and background processes. In most environments, Oracle processes and client processes run on separate computers. Related Topics Parent topic: Database Instance Structures Oracle Database creates and uses memory structures for program code, data shared among users, and private data areas for each connected user. The following memory structures are associated with a database instance: System Global Area (SGA) The SGA is a group of shared memory structures that contain data and control information for one database instance. Examples of SGA components include the database buffer cache and shared SQL areas. The SGA can contain an optional In-Memory Column Store (IM column store), which enables data to be populated in memory in a columnar format. Program Global Areas (PGA) A PGA is a memory region that contains data and control information for a server or background process. Access to the PGA is exclusive to the process. Each server process and background process has its own PGA. Related Topics Parent topic: Database Instance Structures To take full advantage of a given computer system or network, Oracle Database enables processing to be split between the database server and the client programs. The computer running the RDBMS handles the database server responsibilities while the computers running the applications handle the interpretation and display of data. Parent topic: Oracle Database Architecture The application architecture is the computing environment in which a database application connects to an Oracle database. The two most common database architectures are client/server and" }, { "data": "Client-Server Architecture In a client/server architecture, the client application initiates a request for an operation to be performed on the database server. The server runs Oracle Database software and handles the functions required for concurrent, shared data access. The server receives and processes requests that originate from clients. Multitier Architecture In a multitier architecture, one or more application servers perform parts of the operation. An application server contains a large part of the application logic, provides access to the data for the client, and performs some query processing. In this way, the load on the database decreases. The application server can serve as an interface between clients and multiple databases and provide an additional level of security. A service-oriented architecture (SOA) is a multitier architecture in which application functionality is encapsulated in services. SOA services are usually implemented as Web services. Web services are accessible through HTTP and are based on XML-based standards such as Web Services Description Language (WSDL) and SOAP. Oracle Database can act as a Web service provider in a traditional multitier or SOA environment. Simple Oracle Document Access (SODA) is an adaption of SOA that enables you to access to data stored in the database. 
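Looking back at the instance memory structures described above, a privileged session can inspect the SGA and PGA through dynamic performance views. This is only an illustrative sketch; the exact statistics available vary by release:

```sql
-- Component sizes of the System Global Area (SGA).
SELECT name, value
  FROM v$sga;

-- A couple of aggregate Program Global Area (PGA) statistics.
SELECT name, value / 1024 / 1024 AS mb
  FROM v$pgastat
 WHERE name IN ('total PGA allocated', 'aggregate PGA target parameter');
```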
SODA is designed for schemaless application development without knowledge of relational database features or languages such as SQL and PL/SQL. You can create and store collections of documents in Oracle Database, retrieve them, and query them, without needing to know how the documents are stored. SODA for REST uses the representational state transfer (REST) architectural style to implement SODA. Related Topics Parent topic: Application and Networking Architecture Oracle Net Services is the interface between the database and the network communication protocols that facilitate distributed processing and distributed databases. Communication protocols define the way that data is transmitted and received on a network. Oracle Net Services supports communications on all major network protocols, including TCP/IP, HTTP, FTP, and WebDAV. Oracle Net, a component of Oracle Net Services, establishes and maintains a network session from a client application to a database server. After a network session is established, Oracle Net acts as the data courier for both the client application and the database server, exchanging messages between them. Oracle Net can perform these jobs because it is located on each computer in the network. An important component of Net Services is the Oracle Net Listener (called the listener), which is a process that runs on the database or elsewhere in the network. Client applications send connection requests to the listener, which manages the traffic of these requests to the database. When a connection is established, the client and database communicate directly. The most common ways to configure an Oracle database to service client requests are: Dedicated server architecture Each client process connects to a dedicated server process. The server process is not shared by any other client for the duration of the client's session. Each new session is assigned a dedicated server process. Shared server architecture The database uses a pool of shared server processes for multiple sessions. A client process communicates with a dispatcher, which is a process that enables many clients to connect to the same database instance without the need for a dedicated server process for each client. Related Topics Parent topic: Application and Networking Architecture The documentation set is designed with specific access paths to ensure that users are able to find the information they need as efficiently as possible. The documentation set is divided into three layers or groups: basic, intermediate, and" }, { "data": "Users begin with the manuals in the basic group, proceed to the manuals in the intermediate group (the 2 Day + series), and finally to the advanced manuals, which include the remainder of the documentation. You can find the documentation for supported releases of Oracle Database at https://docs.oracle.com/en/database/oracle/oracle-database/. Parent topic: Introduction to Oracle Database Technical users who are new to Oracle Database begin by reading one or more manuals in the basic group from cover to cover. Each manual in this group is designed to be read in two days. In addition to this manual, the basic group includes the manuals shown in the following table. Table 1-1 Basic Group | Manual | Description | |:--|:--| | Oracle Database Get Started with Oracle Database Development | This task-based quick start guide explains how to use the basic features of Oracle Database through SQL and PL/SQL. 
| Oracle Database Get Started with Oracle Database Development This task-based quick start guide explains how to use the basic features of Oracle Database through SQL and PL/SQL. The manuals in the basic group are closely related, which is reflected in the number of cross-references. For example, Oracle Database Concepts frequently sends users to a 2 Day manual to learn how to perform a task based on a concept. The 2 Day manuals frequently reference Oracle Database Concepts for conceptual background about a task. Parent topic: Oracle Database Documentation Roadmap The next step up from the basic group is the intermediate group. Manuals in the intermediate group are prefixed with the word 2 Day + because they expand on and assume information contained in the 2 Day manuals. The 2 Day + manuals cover topics in more depth than is possible in the basic manuals, or cover topics of special interest. The manuals are intended for different audiences: Database administrators Oracle Database Get Started with Performance Tuning is a quick start guide that describes how to perform day-to-day database performance tuning tasks using features provided by Oracle Diagnostics Pack, Oracle Tuning Pack, and Oracle Enterprise Manager Cloud Control (Cloud Control). Database developers Oracle Database Get Started with Java Development helps you understand all Java products used to build a Java application. The manual explains how to use Oracle JDBC Thin driver, Universal Connection Pool (UCP), and Java in the Database (OJVM) in a sample Web application. Parent topic: Oracle Database Documentation Roadmap The advanced group manuals are intended for expert users who require more detailed information about a particular topic than can be provided by the 2 Day + manuals. The following table lists essential reference manuals in the advanced group. Table 1-2 Essential Reference Manuals | Manual | Description | |:--|:--| | Oracle Database SQL Language Reference | Provides a complete description of the Structured Query Language (SQL) used to manage information in an Oracle Database. | | Oracle Database Reference | Describes database initialization parameters, data dictionary views, dynamic performance views, wait events, and background processes. | | Oracle Database PL/SQL Packages and Types Reference | Describes the PL/SQL packages provided with the Oracle database server. You can use the supplied packages when creating your applications or for ideas in creating your own stored procedures. | Oracle Database SQL Language Reference Provides a complete description of the Structured Query Language (SQL) used to manage information in an Oracle Database. Oracle Database Reference Describes database initialization parameters, data dictionary views, dynamic performance views, wait events, and background processes. Describes the PL/SQL packages provided with the Oracle database server. You can use the supplied packages when creating your applications or for ideas in creating your own stored" }, { "data": "The advanced guides are too numerous to list in this section. The following table lists guides that the majority of expert Oracle DBAs use. Table 1-3 Advanced Group for DBAs | Manual | Description | |:-|:| | Oracle Database Administrators Guide | Explains how to perform tasks such as creating and configuring databases, maintaining and monitoring databases, creating schema objects, scheduling jobs, and diagnosing problems. 
| | Oracle Database Security Guide | Describes how to configure security for Oracle Database by using the default database features. | | Oracle Database Performance Tuning Guide | Describes how to use Oracle Database tools to optimize database performance. This guide also describes performance best practices for creating a database and includes performance-related reference information. | | Oracle Database SQL Tuning Guide | Describes SQL processing, the optimizer, execution plans, SQL operators, optimizer statistics, application tracing, and SQL advisors. | | Oracle Database Backup and Recovery Users Guide | Explains how to back up, restore, and recover Oracle databases, perform maintenance on backups of database files, and transfer data between storage systems. | | Oracle Real Application Clusters Administration and Deployment Guide | Explains how to install, configure, manage, and troubleshoot an Oracle RAC database. | Oracle Database Administrators Guide Explains how to perform tasks such as creating and configuring databases, maintaining and monitoring databases, creating schema objects, scheduling jobs, and diagnosing problems. Describes how to configure security for Oracle Database by using the default database features. Oracle Database Performance Tuning Guide Describes how to use Oracle Database tools to optimize database performance. This guide also describes performance best practices for creating a database and includes performance-related reference information. Oracle Database SQL Tuning Guide Describes SQL processing, the optimizer, execution plans, SQL operators, optimizer statistics, application tracing, and SQL advisors. Oracle Database Backup and Recovery Users Guide Explains how to back up, restore, and recover Oracle databases, perform maintenance on backups of database files, and transfer data between storage systems. Oracle Real Application Clusters Administration and Deployment Guide Explains how to install, configure, manage, and troubleshoot an Oracle RAC database. The following table lists guides that the majority of expert Oracle developers use. Table 1-4 Advanced Group for Developers | Manual | Description | |:|:--| | Oracle Database Development Guide | Explains how to develop applications or convert existing applications to run in the Oracle Database environment. The manual explains fundamentals of application design, and describes essential concepts for developing in SQL and PL/SQL. | | Oracle Database PL/SQL Language Reference | Describes all aspects of the PL/SQL language, including data types, control statements, collections, triggers, packages, and error handling. | | Oracle Database Java Developers Guide | Describes how to develop, load, and run Java applications in Oracle Database. | | Oracle Database SecureFiles and Large Objects Developer's Guide | Explains how to develop new applications using Large Objects (LOBs), SecureFiles LOBs, and Database File System (DBFS). | Oracle Database Development Guide Explains how to develop applications or convert existing applications to run in the Oracle Database environment. The manual explains fundamentals of application design, and describes essential concepts for developing in SQL and PL/SQL. Oracle Database PL/SQL Language Reference Describes all aspects of the PL/SQL language, including data types, control statements, collections, triggers, packages, and error handling. Describes how to develop, load, and run Java applications in Oracle Database. 
Explains how to develop new applications using Large Objects (LOBs), SecureFiles LOBs, and Database File System (DBFS). Other advanced guides required by a particular user depend on the area of responsibility of this user. Parent topic: Oracle Database Documentation Roadmap" } ]
{ "category": "App Definition and Development", "file_name": "3.0.x.md", "project_name": "OrientDB", "subcategory": "Database" }
[ { "data": "Welcome to OrientDB - the first Multi-Model Open Source NoSQL DBMS that brings together the power of graphs and the flexibility of documents into one scalable high-performance operational database. | Getting Started | Main Topics | Developers | |:-|:|:| | Introduction to OrientDB | Basic Concepts | SQL | | Installation | Supported Data Types | Gremlin | | First Steps | Inheritance | HTTP API | | Troubleshooting | Security | Java API | | Enterprise Edition | Indexes | NodeJS | | Security Guide | ACID Transactions | PHP | | nan | Functions | Python | | nan | Caching Levels | .NET | | nan | Common Use Cases | Other Drivers | | nan | nan | Network Binary Protocol | | nan | nan | Javadocs | Check out our Get in Touch page for different ways of getting in touch with us. Every effort has been made to ensure the accuracy of this manual. However, OrientDB, LTD. makes no warranties with respect to this documentation and disclaims any implied warranties of merchantability and fitness for a particular purpose. The information in this document is subject to change without notice." } ]
{ "category": "App Definition and Development", "file_name": "sql.html#GUID-CBD8FE77-BA6F-4241-A71C-2ADDDF43EA7F.md", "project_name": "Oracle Database", "subcategory": "Database" }
[ { "data": "This chapter provides an overview of the Structured Query Language (SQL) and how Oracle Database processes SQL statements. Parent topic: Oracle Data Access SQL (pronounced sequel) is the set-based, high-level declarative computer language with which all programs and users access data in an Oracle database. Although some Oracle tools and applications mask SQL use, all database tasks are performed using SQL. Any other data access method circumvents the security built into Oracle Database and potentially compromises data security and integrity. SQL provides an interface to a relational database such as Oracle Database. SQL unifies tasks such as the following in one consistent language: Creating, replacing, altering, and dropping objects Inserting, updating, and deleting table rows Querying data Controlling access to the database and its objects Guaranteeing database consistency and integrity SQL can be used interactively, which means that statements are entered manually into a program. SQL statements can also be embedded within a program written in a different language such as C or Java. See Also: Introduction to Server-Side Programming Oracle Database Development Guide to learn how to choose a programming environment Oracle Database SQL Language Reference for an introduction to SQL Parent topic: SQL There are two broad families of computer languages: declarative languages that are nonprocedural and describe what should be done, and procedural languages such as C++ and Java that describe how things should be done. SQL is declarative in the sense that users specify the result that they want, not how to derive it. For example, the following statement queries records for employees whose last name begins with K: The database performs the work of generating a procedure to navigate the data and retrieve the requested results. The declarative nature of SQL enables you to work with data at the logical level. You need be concerned with implementation details only when you manipulate the data. ``` SELECT lastname, firstname FROM hr.employees WHERE last_name LIKE 'K%' ORDER BY lastname, firstname; ``` The database retrieves all rows satisfying the WHERE condition, also called the predicate, in a single step. The database can pass these rows as a unit to the user, to another SQL statement, or to an application. The application does not need to process the rows one by one, nor does the developer need to know how the rows are physically stored or retrieved. All SQL statements use the optimizer, a component of the database that determines the most efficient means of accessing the requested data. Oracle Database also supports techniques that you can use to make the optimizer perform its job better. See Also: Oracle Database SQL Language Reference for detailed information about SQL statements and other parts of SQL (such as operators, functions, and format models) Parent topic: Introduction to SQL Oracle strives to follow industry-accepted standards and participates actively in SQL standards committees. Industry-accepted committees are the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO). Both ANSI and the ISO/IEC have accepted SQL as the standard language for relational databases. The SQL standard consists of ten parts. One part (SQL/RPR:2012) is new in 2102. Five other parts were revised in 2011. For the other four parts, the 2008 version remains in place. 
Oracle SQL includes many extensions to the ANSI/ISO standard SQL language, and Oracle Database tools and applications provide additional" }, { "data": "The tools SQL*Plus, SQL Developer, and Oracle Enterprise Manager enable you to run any ANSI/ISO standard SQL statement against an Oracle database and any additional statements or functions available for those tools. See Also: Oracle Database Get Started with Oracle Database Development Oracle Database SQL Language Reference for an explanation of the differences between Oracle SQL and standard SQL SQL*Plus User's Guide and Reference for SQL*Plus commands, including their distinction from SQL statements Parent topic: Introduction to SQL All operations performed on the information in an Oracle database are run using SQL statements. A SQL statement is a computer program or instruction that consists of identifiers, parameters, variables, names, data types, and SQL reserved words. Note: SQL reserved words have special meaning in SQL and should not be used for any other purpose. For example, SELECT and UPDATE are reserved words and should not be used as table names. A SQL statement must be the equivalent of a complete SQL sentence, such as: ``` SELECT lastname, departmentid FROM employees ``` Oracle Database only runs complete SQL statements. A fragment such as the following generates an error indicating that more text is required: ``` SELECT last_name; ``` Oracle SQL statements are divided into the following categories: Data Definition Language (DDL) Statements Data Manipulation Language (DML) Statements Transaction Control Statements Session Control Statements System Control Statement Embedded SQL Statements Parent topic: SQL Data definition language (DLL) statements define, structurally change, and drop schema objects. DDL enables you to alter attributes of an object without altering the applications that access the object. For example, you can add a column to a table accessed by a human resources application without rewriting the application. You can also use DDL to alter the structure of objects while database users are performing work in the database. More specifically, DDL statements enable you to: Create, alter, and drop schema objects and other database structures, including the database itself and database users. Most DDL statements start with the keywords CREATE, ALTER, or DROP. Delete all the data in schema objects without removing the structure of these objects (TRUNCATE). Note: Unlike DELETE, TRUNCATE generates no undo data, which makes it faster than DELETE. Also, TRUNCATE does not invoke delete triggers Grant and revoke privileges and roles (GRANT, REVOKE). Turn auditing options on and off (AUDIT, NOAUDIT). Add a comment to the data dictionary (COMMENT). Example 10-1 DDL Statements The following example uses DDL statements to create the plants table and then uses DML to insert two rows in the table. The example then uses DDL to alter the table structure, grant and revoke read privileges on this table to a user, and then drop the table. ``` CREATE TABLE plants ( plant_id NUMBER PRIMARY KEY, common_name VARCHAR2(15) ); INSERT INTO plants VALUES (1, 'African Violet'); # DML statement INSERT INTO plants VALUES (2, 'Amaryllis'); # DML statement ALTER TABLE plants ADD ( latin_name VARCHAR2(40) ); GRANT READ ON plants TO scott; REVOKE READ ON plants FROM scott; DROP TABLE plants; ``` An implicit COMMIT occurs immediately before the database executes a DDL statement and a COMMIT or ROLLBACK occurs immediately afterward. 
In the preceding example, two INSERT statements are followed by an ALTER TABLE statement, so the database commits the two INSERT statements. If the ALTER TABLE statement succeeds, then the database commits this statement; otherwise, the database rolls back this statement. In either case, the two INSERT statements have already been" }, { "data": "See Also: Oracle Database Security Guide to learn about privileges and roles Oracle Database Get Started with Oracle Database Development and Oracle Database Administrators Guide to learn how to create schema objects Oracle Database Development Guide to learn about the difference between blocking and nonblocking DDL Oracle Database SQL Language Reference for a list of DDL statements Parent topic: Overview of SQL Statements Data manipulation language (DML) statements query or manipulate data in existing schema objects. Whereas DDL statements change the structure of the database, DML statements query or change the contents. For example, ALTER TABLE changes the structure of a table, whereas INSERT adds one or more rows to the table. DML statements are the most frequently used SQL statements and enable you to: Retrieve or fetch data from one or more tables or views (SELECT). Add new rows of data into a table or view (INSERT) by specifying a list of column values or using a subquery to select and manipulate existing data. Change column values in existing rows of a table or view (UPDATE). Update or insert rows conditionally into a table or view (MERGE). Remove rows from tables or views (DELETE). View the execution plan for a SQL statement (EXPLAIN PLAN). Lock a table or view, temporarily limiting access by other users (LOCK TABLE). The following example uses DML to query the employees table. The example uses DML to insert a row into employees, update this row, and then delete it: ``` SELECT * FROM employees; INSERT INTO employees (employeeid, lastname, email, jobid, hiredate, salary) VALUES (1234, 'Mascis', 'JMASCIS', 'IT_PROG', '14-FEB-2008', 9000); UPDATE employees SET salary=9100 WHERE employee_id=1234; DELETE FROM employees WHERE employee_id=1234; ``` A collection of DML statements that forms a logical unit of work is called a transaction. For example, a transaction to transfer money could involve three discrete operations: decreasing the savings account balance, increasing the checking account balance, and recording the transfer in an account history table. Unlike DDL statements, DML statements do not implicitly commit the current transaction. See Also: Differences Between DML and DDL Processing Introduction to Transactions Oracle Database Get Started with Oracle Database Development to learn how to query and manipulate data Oracle Database SQL Language Reference for a list of DML statements Parent topic: Overview of SQL Statements A query is an operation that retrieves data from a table or view. SELECT is the only SQL statement that you can use to query data. The set of data retrieved from execution of a SELECT statement is known as a result set. The following table shows two required keywords and two keywords that are commonly found in a SELECT statement. The table also associates capabilities of a SELECT statement with the keywords. Table 10-1 Keywords in a SQL Statement | Keyword | Required? | Description | Capability | |:-|:|:--|:-| | SELECT | Yes | Specifies which columns should be shown in the result. Projection produces a subset of the columns in the table. 
An expression is a combination of one or more values, operators, and SQL functions that resolves to a value. The list of expressions that appears after the SELECT keyword and before the FROM clause is called the select list. | Projection | | FROM | No | Specifies the tables or views from which the data should be" }, { "data": "| Joining | | WHERE | No | Specifies a condition to filter rows, producing a subset of the rows in the table. A condition specifies a combination of one or more expressions and logical (Boolean) operators and returns a value of TRUE, FALSE, or UNKNOWN. | Selection | | ORDER BY | No | Specifies the order in which the rows should be shown. | nan | SELECT Yes Specifies which columns should be shown in the result. Projection produces a subset of the columns in the table. An expression is a combination of one or more values, operators, and SQL functions that resolves to a value. The list of expressions that appears after the SELECT keyword and before the FROM clause is called the select list. Projection FROM No Specifies the tables or views from which the data should be retrieved. Joining WHERE No Specifies a condition to filter rows, producing a subset of the rows in the table. A condition specifies a combination of one or more expressions and logical (Boolean) operators and returns a value of TRUE, FALSE, or UNKNOWN. Selection ORDER BY No Specifies the order in which the rows should be shown. See Also: Oracle Database SQL Language Reference for SELECT syntax and semantics Parent topic: Data Manipulation Language (DML) Statements A join is a query that combines rows from two or more tables, views, or materialized views. The following example joins the employees and departments tables (FROM clause), selects only rows that meet specified criteria (WHERE clause), and uses projection to retrieve data from two columns (SELECT). Sample output follows the SQL statement. ``` SELECT email, department_name FROM employees JOIN departments ON employees.departmentid = departments.departmentid WHERE employee_id IN (100,103) ORDER BY email; EMAIL DEPARTMENT_NAME AHUNOLD IT SKING Executive ``` The following graphic represents the operations of projection and selection in the join shown in the preceding query. Figure 10-1 Projection and Selection Most joins have at least one join condition, either in the FROM clause or in the WHERE clause, that compares two columns, each from a different table. The database combines pairs of rows, each containing one row from each table, for which the join condition evaluates to TRUE. The optimizer determines the order in which the database joins tables based on the join conditions, indexes, and any available statistics for the tables. Join types include the following: Inner joins An inner join is a join of two or more tables that returns only rows that satisfy the join condition. For example, if the join condition is employees.departmentid=departments.departmentid, then rows that do not satisfy this condition are not returned. Outer joins An outer join returns all rows that satisfy the join condition and also returns rows from one table for which no rows from the other table satisfy the condition. The result of a left outer join for table A and B always contains all records of the left table A, even if the join condition does not match a record in the right table B. If no matching row from B exists, then B columns contain nulls for rows that have no match in B. 
For example, if not all employees are in departments, then a left outer join of employees (left table) and departments (right table) retrieves all rows in employees even if no rows in departments satisfy the join condition (employees.department_id is" }, { "data": "The result of a right outer join for table A and B contains all records of the right table B, even if the join condition does not match a row in the left table A. If no matching row from A exists, then A columns contain nulls for rows that have no match in A. For example, if not all departments have employees, a right outer join of employees (left table) and departments (right table) retrieves all rows in departments even if no rows in employees satisfy the join condition. A full outer join is the combination of a left outer join and a right outer join. Cartesian products If two tables in a join query have no join condition, then the database performs a Cartesian join. Each row of one table combines with each row of the other. For example, if employees has 107 rows and departments has 27, then the Cartesian product contains 107*27 rows. A Cartesian product is rarely useful. See Also: Oracle Database SQL Tuning Guide to learn about joins Oracle Database SQL Language Reference for detailed descriptions and examples of joins Parent topic: Data Manipulation Language (DML) Statements A subquery is a SELECT statement nested within another SQL statement. Subqueries are useful when you must execute multiple queries to solve a single problem. Each query portion of a statement is called a query block. In the following query, the subquery in parentheses is the inner query block: ``` SELECT firstname, lastname FROM employees WHERE department_id IN ( SELECT department_id FROM departments WHERE location_id = 1800 ); ``` The inner SELECT statement retrieves the IDs of departments with location ID 1800. These department IDs are needed by the outer query block, which retrieves names of employees in the departments whose IDs were supplied by the subquery. The structure of the SQL statement does not force the database to execute the inner query first. For example, the database could rewrite the entire query as a join of employees and departments, so that the subquery never executes by itself. As another example, the Virtual Private Database (VPD) feature could restrict the query of employees using a WHERE clause, so that the database queries the employees first and then obtains the department IDs. The optimizer determines the best sequence of steps to retrieve the requested rows. See Also: Oracle Database Security Guide to learn more about VPD Parent topic: Data Manipulation Language (DML) Statements Transaction control statements manage the changes made by DML statements and group DML statements into transactions. These statements enable you to: Make changes to a transaction permanent (COMMIT). Undo the changes in a transaction, since the transaction started (ROLLBACK) or since a savepoint (ROLLBACK TO SAVEPOINT). A savepoint is a user-declared intermediate marker within the context of a transaction. Note: The ROLLBACK statement ends a transaction, but ROLLBACK TO SAVEPOINT does not. Set a point to which you can roll back (SAVEPOINT). Establish properties for a transaction (SET TRANSACTION). Specify whether a deferrable integrity constraint is checked following each DML statement or when the transaction is committed (SET CONSTRAINT). The following example starts a transaction named Update salaries. 
The example creates a savepoint, updates an employee salary, and then rolls back the transaction to the savepoint. The example updates the salary to a different value and commits. ``` SET TRANSACTION NAME 'Update salaries'; SAVEPOINT beforesalaryupdate; UPDATE employees SET salary=9100 WHERE employee_id=1234 # DML ROLLBACK TO SAVEPOINT beforesalaryupdate; UPDATE employees SET salary=9200 WHERE" }, { "data": "# DML COMMIT COMMENT 'Updated salaries';``` See Also: Introduction to Transactions When the Database Checks Constraints for Validity Oracle Database SQL Language Reference to learn about transaction control statements Parent topic: Overview of SQL Statements Session control statements dynamically manage the properties of a user session. A session is a logical entity in the database instance memory that represents the state of a current user login to a database. A session lasts from the time the user is authenticated by the database until the user disconnects or exits the database application. Session control statements enable you to: Alter the current session by performing a specialized function, such as setting the default date format (ALTER SESSION). Enable and disable roles, which are groups of privileges, for the current session (SET ROLE). The following statement dynamically changes the default date format for your session to 'YYYY MM DD-HH24:MI:SS': ``` ALTER SESSION SET NLSDATEFORMAT = 'YYYY MM DD HH24:MI:SS'; ``` Session control statements do not implicitly commit the current transaction. See Also: Connections and Sessions Oracle Database SQL Language Reference for ALTER SESSION syntax and semantics Parent topic: Overview of SQL Statements A system control statement changes the properties of the database instance. The only system control statement is ALTER SYSTEM. It enables you to change settings such as the minimum number of shared servers, terminate a session, and perform other system-level tasks. Examples of the system control statement include: ``` ALTER SYSTEM SWITCH LOGFILE; ALTER SYSTEM KILL SESSION '39, 23'; ``` The ALTER SYSTEM statement does not implicitly commit the current transaction. See Also: Oracle Database SQL Language Reference for ALTER SYSTEM syntax and semantics Parent topic: Overview of SQL Statements Embedded SQL statements incorporate DDL, DML, and transaction control statements within a procedural language program. Embedded statements are used with the Oracle precompilers. Embedded SQL is one approach to incorporating SQL in your procedural language applications. Another approach is to use a procedural API such as Open Database Connectivity (ODBC) or Java Database Connectivity (JDBC). Embedded SQL statements enable you to: Define, allocate, and release a cursor (DECLARE CURSOR, OPEN, CLOSE). Specify a database and connect to it (DECLARE DATABASE, CONNECT). Assign variable names (DECLARE STATEMENT). Initialize descriptors (DESCRIBE). Specify how error and warning conditions are handled (WHENEVER). Parse and run SQL statements (PREPARE, EXECUTE, EXECUTE IMMEDIATE). Retrieve data from the database (FETCH). See Also: Introduction to Server-Side Programming Oracle Database Development Guide Parent topic: Overview of SQL Statements To understand how Oracle Database processes SQL statements, it is necessary to understand the part of the database called the optimizer (also known as the query optimizer or cost-based optimizer). All SQL statements use the optimizer to determine the most efficient means of accessing the specified data. 
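To make the optimizer discussion that follows concrete, here is a minimal sketch of how an execution plan can be inspected without running the statement. It assumes the HR sample schema used elsewhere in this chapter and the standard EXPLAIN PLAN / DBMS_XPLAN tooling; the exact plan produced depends on your data, indexes, and statistics.

```
-- Ask the optimizer for a plan without executing the query (HR sample schema assumed).
EXPLAIN PLAN FOR
  SELECT last_name, department_id
  FROM   hr.employees
  WHERE  department_id > 50;

-- Display the plan that was just generated.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```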
Parent topic: SQL The optimizer generates execution plans describing possible methods of execution. The optimizer determines which execution plan is most efficient by considering several sources of information. For example, the optimizer considers query conditions, available access paths, statistics gathered for the system, and hints. To execute a DML statement, Oracle Database may have to perform many steps. Each step either retrieves rows of data physically from the database or prepares them for the user issuing the statement. The steps that the database uses to execute a statement greatly affect how quickly the statement runs. Many different ways of processing a DML statement are often possible. For example, the order in which tables or indexes are accessed can" }, { "data": "When determining the best execution plan for a SQL statement, the optimizer performs the following operations: Evaluation of expressions and conditions Inspection of integrity constraints to learn more about the data and optimize based on this metadata Statement transformation Choice of optimizer goals Choice of access paths Choice of join orders The optimizer generates most of the possible ways of processing a query and assigns a cost to each step in the generated execution plan. The plan with the lowest cost is chosen as the query plan to be executed. Note: You can obtain an execution plan for a SQL statement without executing the plan. Only an execution plan that the database actually uses to execute a query is correctly termed a query plan. You can influence optimizer choices by setting the optimizer goal and by gathering representative statistics for the optimizer. For example, you may set the optimizer goal to either of the following: Total throughput The ALL_ROWS hint instructs the optimizer to get the last row of the result to the client application as fast as possible. Initial response time The FIRST_ROWS hint instructs the optimizer to get the first row to the client as fast as possible. A typical end-user, interactive application would benefit from initial response time optimization, whereas a batch-mode, non-interactive application would benefit from total throughput optimization. See Also: Oracle Database PL/SQL Packages and Types Reference for information about using DBMS_STATS Oracle Database SQL Tuning Guide for more information about the optimizer and using hints Parent topic: Overview of the Optimizer The optimizer contains three main components: the transformer, estimator, and plan generator. The following diagram depicts the components: Figure 10-2 Optimizer Components The input to the optimizer is a parsed query. The optimizer performs the following operations: The optimizer receives the parsed query and generates a set of potential plans for the SQL statement based on available access paths and hints. The optimizer estimates the cost of each plan based on statistics in the data dictionary. The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer compares the costs of plans and chooses the lowest-cost plan, known as the query plan, to pass to the row source generator. See Also: SQL Parsing SQL Row Source Generation Parent topic: Overview of the Optimizer The query transformer determines whether it is helpful to change the form of the query so that the optimizer can generate a better execution plan. The input to the query transformer is a parsed query, which the optimizer represents as a set of query blocks. 
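As an illustration of the kind of rewrite the query transformer may consider, the two statements below are semantically equivalent; whether the optimizer actually unnests the subquery into a join depends on cost. This is a sketch against the HR sample schema, not a statement of what the transformer will do in every case.

```
-- Original form: a subquery in the WHERE clause.
SELECT e.last_name
FROM   hr.employees e
WHERE  e.department_id IN (SELECT d.department_id
                           FROM   hr.departments d
                           WHERE  d.location_id = 1800);

-- One equivalent form the transformer may consider: the subquery unnested into a join.
SELECT e.last_name
FROM   hr.employees e
       JOIN hr.departments d ON d.department_id = e.department_id
WHERE  d.location_id = 1800;
```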
See Also: Query Rewrite Parent topic: Optimizer Components The estimator determines the overall cost of a given execution plan. The estimator generates three different types of measures to achieve this goal: Selectivity This measure represents a fraction of rows from a row set. The selectivity is tied to a query predicate, such as last_name='Smith', or a combination of predicates. Cardinality This measure represents the number of rows in a row set. Cost This measure represents units of work or resource used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work. If statistics are available, then the estimator uses them to compute the measures. The statistics improve the degree of accuracy of the" }, { "data": "Parent topic: Optimizer Components The plan generator tries out different plans for a submitted query. The optimizer chooses the plan with the lowest cost. For each nested subquery and unmerged view, the optimizer generates a subplan. The optimizer represents each subplan as a separate query block. The plan generator explores various plans for a query block by trying out different access paths, join methods, and join orders. The adaptive query optimization capability changes plans based on statistics collected during statement execution. All adaptive mechanisms can execute a final plan for a statement that differs from the default plan. Adaptive optimization uses either dynamic plans, which choose among subplans during statement execution, or reoptimization, which changes a plan on executions after the current execution. See Also: Oracle Database Get Started with Performance Tuning for an introduction to SQL tuning Oracle Database SQL Tuning Guide to learn about the optimizer components and adaptive optimization Parent topic: Optimizer Components An access path is the technique that a query uses to retrieve rows. For example, a query that uses an index has a different access path from a query that does not. In general, index access paths are best for statements that retrieve a small subset of table rows. Full scans are more efficient for accessing a large portion of a table. The database can use several different access paths to retrieve data from a table. The following is a representative list: Full table scans This type of scan reads all rows from a table and filters out those that do not meet the selection criteria. The database sequentially scans all data blocks in the segment, including those under the high water mark (HWM) that separates used from unused space (see \"Segment Space and the High Water Mark\"). Rowid scans The rowid of a row specifies the data file and data block containing the row and the location of the row in that block. The database first obtains the rowids of the selected rows, either from the statement WHERE clause or through an index scan, and then locates each selected row based on its rowid. Index scans This scan searches an index for the indexed column values accessed by the SQL statement (see \"Index Scans\"). If the statement accesses only columns of the index, then Oracle Database reads the indexed column values directly from the index. Cluster scans A cluster scan retrieves data from a table stored in an indexed table cluster, where all rows with the same cluster key value are stored in the same data block (see \"Overview of Indexed Clusters\"). The database first obtains the rowid of a selected row by scanning the cluster index. Oracle Database locates the rows based on this rowid. 
Hash scans A hash scan locates rows in a hash cluster, where all rows with the same hash value are stored in the same data block (see \"Overview of Hash Clusters\"). The database first obtains the hash value by applying a hash function to a cluster key value specified by the statement. Oracle Database then scans the data blocks containing rows with this hash value. The optimizer chooses an access path based on the available access paths for the statement and the estimated cost of using each access path or combination of" }, { "data": "See Also: Oracle Database Get Started with Performance Tuning and Oracle Database SQL Tuning Guide to learn about access paths Parent topic: Overview of the Optimizer The optimizer statistics are a collection of data that describe details about the database and the objects in the database. The statistics provide a statistically correct picture of data storage and distribution usable by the optimizer when evaluating access paths. Optimizer statistics include the following: Table statistics These include the number of rows, number of blocks, and average row length. Column statistics These include the number of distinct values and nulls in a column and the distribution of data. Index statistics These include the number of leaf blocks and index levels. System statistics These include CPU and I/O performance and utilization. Oracle Database gathers optimizer statistics on all database objects automatically and maintains these statistics as an automated maintenance task. You can also gather statistics manually using the DBMS_STATS package. This PL/SQL package can modify, view, export, import, and delete statistics. Note: Optimizer Statistics Advisor is built-in diagnostic software that analyzes how you are currently gathering statistics, the effectiveness of existing statistics gathering jobs, and the quality of the gathered statistics. Optimizer Statistics Advisor maintains rules, which embody Oracle best practices based on the current feature set. In this way, the advisor always provides the most up-to-date recommendations for statistics gathering. See Also: Oracle Database Get Started with Performance Tuning and Oracle Database SQL Tuning Guide to learn how to gather and manage statistics Oracle Database PL/SQL Packages and Types Reference to learn about DBMS_STATS Parent topic: Overview of the Optimizer A hint is a comment in a SQL statement that acts as an instruction to the optimizer. Sometimes the application designer, who has more information about a particular application's data than is available to the optimizer, can choose a more effective way to run a SQL statement. The application designer can use hints in SQL statements to specify how the statement should be run. The following examples illustrate the use of hints. Example 10-2 Execution Plan for SELECT with FIRST_ROWS Hint Suppose that your interactive application runs a query that returns 50 rows. This application initially fetches only the first 25 rows of the query to present to the end user. You want the optimizer to generate a plan that gets the first 25 records as quickly as possible so that the user is not forced to wait. 
You can use a hint to pass this instruction to the optimizer as shown in the SELECT statement and AUTOTRACE output in the following example: ``` SELECT /+ FIRST_ROWS(25) / employeeid, departmentid FROM hr.employees WHERE department_id > 50; | Id | Operation | Name | Rows | Bytes | 0 | SELECT STATEMENT | | 26 | 182 | 1 | TABLE ACCESS BY INDEX ROWID | EMPLOYEES | 26 | 182 |* 2 | INDEX RANGE SCAN | EMPDEPARTMENTIX | | ``` In this example, the execution plan shows that the optimizer chooses an index on the employees.department_id column to find the first 25 rows of employees whose department ID is over 50. The optimizer uses the rowid retrieved from the index to retrieve the record from the employees table and return it to the client. Retrieval of the first record is typically almost instantaneous. Example 10-3 Execution Plan for SELECT with No Hint Assume that you execute the same statement, but without the optimizer hint: ``` SELECT employeeid, departmentid FROM" }, { "data": "WHERE department_id > 50; | Id | Operation | Name | Rows | Bytes | Cos | 0 | SELECT STATEMENT | | 50 | 350 | |* 1 | VIEW | index$join$001 | 50 | 350 | |* 2 | HASH JOIN | | | | |* 3 | INDEX RANGE SCAN | EMPDEPARTMENTIX | 50 | 350 | | 4 | INDEX FAST FULL SCAN| EMPEMPID_PK | 50 | 350 | ``` In this case, the execution plan joins two indexes to return the requested records as fast as possible. Rather than repeatedly going from index to table as in Example 10-2, the optimizer chooses a range scan of EMPDEPARTMENTIX to find all rows where the department ID is over 50 and place these rows in a hash table. The optimizer then chooses to read the EMPEMPID_PK index. For each row in this index, it probes the hash table to find the department ID. In this case, the database cannot return the first row to the client until the index range scan of EMPDEPARTMENTIX completes. Thus, this generated plan would take longer to return the first record. Unlike the plan in Example 10-2, which accesses the table by index rowid, the plan uses multiblock I/O, resulting in large reads. The reads enable the last row of the entire result set to be returned more rapidly. See Also: Oracle Database SQL Tuning Guide to learn how to use optimizer hints Parent topic: Overview of the Optimizer This section explains how Oracle Database processes SQL statements. Specifically, the section explains the way in which the database processes DDL statements to create objects, DML to modify data, and queries to retrieve data. Parent topic: SQL The general stages of SQL processing are parsing, optimization, row source generation, and execution. Depending on the statement, the database may omit some of these steps. The following figure depicts the general stages: Figure 10-3 Stages of SQL Processing Parent topic: Overview of SQL Processing The first stage of SQL processing is SQL parsing. This stage involves separating the pieces of a SQL statement into a data structure that can be processed by other routines. When an application issues a SQL statement, the application makes a parse call to the database to prepare the statement for execution. The parse call opens or creates a cursor, which is a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information. The cursor and private SQL area are in the PGA. During the parse call, the database performs the following checks: Syntax check Semantic check Shared pool check The preceding checks identify the errors that can be found before statement execution. 
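A small sketch of the difference between the syntax and semantic checks described above. The misspelled keyword and the nonexistent table are deliberate, and the error numbers noted in the comments are the ones these statements typically raise.

```
-- Syntax check failure: the parser cannot make sense of the statement text.
SELECT * FORM hr.employees;        -- typically ORA-00923: FROM keyword not found where expected

-- Semantic check failure: the syntax is valid, but the referenced object does not exist.
SELECT * FROM hr.no_such_table;    -- typically ORA-00942: table or view does not exist
```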
Some errors cannot be caught by parsing. For example, the database can encounter a deadlock or errors in data conversion only during statement execution. The database attempts automatic error mitigation for SQL statements that fail with an ORA-00600 error during the parse phase. An ORA-00600 is a severe error. It indicates that a process has encountered a low-level, unexpected condition. When a SQL statement fails with this error during the parse phase, automatic error mitigation traps it and attempts to resolve the condition. If a resolution is found, the database generates a SQL patch in order to adjust the SQL execution plan. If this patch enables the parse to complete successfully, then the ORA-00600 error is not raised. No exception is seen by the" }, { "data": "See Also: Locks and Deadlocks About Automatic SQL Error Mitigation Parent topic: Stages of SQL Processing Query optimization is the process of choosing the most efficient means of executing a SQL statement. The database optimizes queries based on statistics collected about the actual data being accessed. The optimizer uses the number of rows, the size of the data set, and other factors to generate possible execution plans, assigning a numeric cost to each plan. The database uses the plan with the lowest cost. The database must perform a hard parse at least once for every unique DML statement and performs optimization during this parse. DDL is never optimized unless it includes a DML component such as a subquery that requires optimization. See Also: Overview of the Optimizer Oracle Database SQL Tuning Guide for detailed information about the query optimizer Parent topic: Stages of SQL Processing The row source generator is software that receives the optimal execution plan from the optimizer and produces an iterative plan, called the query plan, that is usable by the rest of the database. The query plan takes the form of a combination of steps. Each step returns a row set. The rows in this set are either used by the next step or, in the last step, are returned to the application issuing the SQL statement. A row source is a row set returned by a step in the execution plan along with a control structure that can iteratively process the rows. The row source can be a table, view, or result of a join or grouping operation. Parent topic: Stages of SQL Processing During execution, the SQL engine executes each row source in the tree produced by the row source generator. This is the only mandatory step in DML processing. During execution, if the data is not in memory, then the database reads the data from disk into memory. The database also takes out any locks and latches necessary to ensure data integrity and logs any changes made during the SQL execution. The final stage of processing a SQL statement is closing the cursor. If the database is configured to use In-Memory Column Store (IM column store), then the database transparently routes queries to the IM column store when possible, and to disk and the database buffer cache otherwise. A single query can also use the IM column store, disk, and the buffer cache. For example, a query might join two tables, only one of which is cached in the IM column store. See Also: In-Memory Area Oracle Database SQL Tuning Guide for detailed information about execution plans and the EXPLAIN PLAN statement Parent topic: Stages of SQL Processing Oracle Database processes DDL differently from DML. For example, when you create a table, the database does not optimize the CREATE TABLE statement. 
Instead, Oracle Database parses the DDL statement and carries out the command. In contrast to DDL, most DML statements have a query component. In a query, execution of a cursor places the row generated by the query into the result set. The database can fetch result set rows either one row at a time or in groups. In the fetch, the database selects rows and, if requested by the query, sorts the rows. Each successive fetch retrieves another row of the result until the last row has been fetched. See Also: Oracle Database Development Guide to learn about processing DDL, transaction control, and other types of statements Parent topic: Overview of SQL Processing" } ]
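To make the fetch behavior described in the last paragraph visible, here is a minimal PL/SQL sketch that opens a cursor and retrieves the result one row at a time. It assumes the HR sample schema and server output enabled (for example, SET SERVEROUTPUT ON in SQL*Plus); an explicit cursor is used only so that the individual fetches are easy to see.

```
DECLARE
  CURSOR emp_cur IS
    SELECT last_name
    FROM   hr.employees
    WHERE  department_id = 50;
  v_last_name hr.employees.last_name%TYPE;
BEGIN
  OPEN emp_cur;                      -- execution: the cursor's query is run
  LOOP
    FETCH emp_cur INTO v_last_name;  -- each fetch retrieves another row of the result
    EXIT WHEN emp_cur%NOTFOUND;      -- stop when the last row has been fetched
    DBMS_OUTPUT.PUT_LINE(v_last_name);
  END LOOP;
  CLOSE emp_cur;                     -- closing the cursor ends processing of the statement
END;
/
```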
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Qubole", "subcategory": "Database" }
[ { "data": "Important We would like to publicly and unequivocally acknowledge that a few words and phrases in terminology used in our industry and subsequently adopted by Qubole over the last decade is insensitive, non-inclusive, and harmful. We are committed to inclusivity and correcting these terms and the negative impressions that they have facilitated. Qubole is actively replacing the following terms in our documentation and education materials: Master becomes Coordinator Slave becomes Worker Whitelist becomes Allow List or Allowed Blacklist becomes Deny List or Denied These terms have been pervasive for too long in this industry which is wrong and we will move as fast as we can to make necessary corrections, while we need your patience we will not take it for granted. Please do not hesitate to reach out if you feel there are other areas for improvement, if you feel we are not doing enough or moving fast enough or if you want to discuss anything further in this area. Copyright 2024, Qubole." } ]
{ "category": "App Definition and Development", "file_name": "release-13-15.html.md", "project_name": "PostgreSQL", "subcategory": "Database" }
[ { "data": "| initdb | initdb.1 | initdb.2 | initdb.3 | initdb.4 | |:|:--|:-|:--|:--| | Prev | Up | PostgreSQL Server Applications | Home | Next | initdb create a new PostgreSQL database cluster initdb [option...] [ --pgdata | -D ] directory initdb creates a new PostgreSQL database cluster. Creating a database cluster consists of creating the directories in which the cluster data will live, generating the shared catalog tables (tables that belong to the whole cluster rather than to any particular database), and creating the postgres, template1, and template0 databases. The postgres database is a default database meant for use by users, utilities and third party applications. template1 and template0 are meant as source databases to be copied by later CREATE DATABASE commands. template0 should never be modified, but you can add objects to template1, which by default will be copied into databases created later. See Section23.3 for more details. Although initdb will attempt to create the specified data directory, it might not have permission if the parent directory of the desired data directory is root-owned. To initialize in such a setup, create an empty data directory as root, then use chown to assign ownership of that directory to the database user account, then su to become the database user to run initdb. initdb must be run as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates. Since the server cannot be run as root, you must not run initdb as root either. (It will in fact refuse to do so.) For security reasons the new cluster created by initdb will only be accessible by the cluster owner by default. The --allow-group-access option allows any user in the same group as the cluster owner to read files in the cluster. This is useful for performing backups as a non-privileged user. initdb initializes the database cluster's default locale and character set encoding. These can also be set separately for each database when it is created. initdb determines those settings for the template databases, which will serve as the default for all other databases. By default, initdb uses the locale provider libc (see Section24.1.4). The libc locale provider takes the locale settings from the environment, and determines the encoding from the locale settings. To choose a different locale for the cluster, use the option --locale. There are also individual options --lc-* and --icu-locale (see below) to set values for the individual locale categories. Note that inconsistent settings for different locale categories can give nonsensical results, so this should be used with care. Alternatively, initdb can use the ICU library to provide locale services by specifying --locale-provider=icu. The server must be built with ICU support. To choose the specific ICU locale ID to apply, use the option --icu-locale. Note that for implementation reasons and to support legacy code, initdb will still select and initialize libc locale settings when the ICU locale provider is used. When initdb runs, it will print out the locale settings it has chosen. If you have complex requirements or specified multiple options, it is advisable to check that the result matches what was intended. More details about locale settings can be found in Section24.1. To alter the default encoding, use the --encoding. More details can be found in" }, { "data": "This option specifies the default authentication method for local users used in pg_hba.conf (host and local lines). 
See Section21.1 for an overview of valid values. initdb will prepopulate pg_hba.conf entries using the specified authentication method for non-replication as well as replication connections. Do not use trust unless you trust all local users on your system. trust is the default for ease of installation. This option specifies the authentication method for local users via TCP/IP connections used in pg_hba.conf (host lines). This option specifies the authentication method for local users via Unix-domain socket connections used in pg_hba.conf (local lines). This option specifies the directory where the database cluster should be stored. This is the only information required by initdb, but you can avoid writing it by setting the PGDATA environment variable, which can be convenient since the database server (postgres) can find the data directory later by the same variable. Selects the encoding of the template databases. This will also be the default encoding of any database you create later, unless you override it then. The character sets supported by the PostgreSQL server are described in Section24.3.1. By default, the template database encoding is derived from the locale. If --no-locale is specified (or equivalently, if the locale is C or POSIX), then the default is UTF8 for the ICU provider and SQL_ASCII for the libc provider. Allows users in the same group as the cluster owner to read all cluster files created by initdb. This option is ignored on Windows as it does not support POSIX-style group permissions. Specifies the ICU locale when the ICU provider is used. Locale support is described in Section24.1. Specifies additional collation rules to customize the behavior of the default collation. This is supported for ICU only. Use checksums on data pages to help detect corruption by the I/O system that would otherwise be silent. Enabling checksums may incur a noticeable performance penalty. If set, checksums are calculated for all objects, in all databases. All checksum failures will be reported in the pgstatdatabase view. See Section30.2 for details. Sets the default locale for the database cluster. If this option is not specified, the locale is inherited from the environment that initdb runs in. Locale support is described in Section24.1. Like --locale, but only sets the locale in the specified category. Equivalent to --locale=C. This option sets the locale provider for databases created in the new cluster. It can be overridden in the CREATE DATABASE command when new databases are subsequently created. The default is libc (see Section24.1.4). By default, initdb will wait for all files to be written safely to disk. This option causes initdb to return without waiting, which is faster, but means that a subsequent operating system crash can leave the data directory corrupt. Generally, this option is useful for testing, but should not be used when creating a production installation. By default, initdb will write instructions for how to start the cluster at the end of its output. This option causes those instructions to be left out. This is primarily intended for use by tools that wrap initdb in platform-specific behavior, where those instructions are likely to be incorrect. Makes initdb read the bootstrap superuser's password from a file. The first line of the file is taken as the password. Safely write all database files to disk and exit. 
This does not perform any of the normal initdb" }, { "data": "Generally, this option is useful for ensuring reliable recovery after changing fsync from off to on. Sets the default text search configuration. See defaulttextsearch_config for further information. Sets the user name of the bootstrap superuser. This defaults to the name of the operating-system user running initdb. Makes initdb prompt for a password to give the bootstrap superuser. If you don't plan on using password authentication, this is not important. Otherwise you won't be able to use password authentication until you have a password set up. This option specifies the directory where the write-ahead log should be stored. Set the WAL segment size, in megabytes. This is the size of each individual file in the WAL log. The default size is 16 megabytes. The value must be a power of 2 between 1 and 1024 (megabytes). This option can only be set during initialization, and cannot be changed later. It may be useful to adjust this size to control the granularity of WAL log shipping or archiving. Also, in databases with a high volume of WAL, the sheer number of WAL files per directory can become a performance and management problem. Increasing the WAL file size will reduce the number of WAL files. Other, less commonly used, options are also available: Forcibly set the server parameter name to value during initdb, and also install that setting in the generated postgresql.conf file, so that it will apply during future server runs. This option can be given more than once to set several parameters. It is primarily useful when the environment is such that the server will not start at all using the default parameters. Print debugging output from the bootstrap backend and a few other messages of lesser interest for the general public. The bootstrap backend is the program initdb uses to create the catalog tables. This option generates a tremendous amount of extremely boring output. Run the bootstrap backend with the debugdiscardcaches=1 option. This takes a very long time and is only of use for deep debugging. Specifies where initdb should find its input files to initialize the database cluster. This is normally not necessary. You will be told if you need to specify their location explicitly. By default, when initdb determines that an error prevented it from completely creating the database cluster, it removes any files it might have created before discovering that it cannot finish the job. This option inhibits tidying-up and is thus useful for debugging. Other options: Print the initdb version and exit. Show help about initdb command line arguments, and exit. Specifies the directory where the database cluster is to be stored; can be overridden using the -D option. Specifies whether to use color in diagnostic messages. Possible values are always, auto and never. Specifies the default time zone of the created database cluster. The value should be a full time zone name (see Section8.5.3). This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section34.15). initdb can also be invoked via pg_ctl initdb. | 0 | 1 | 2 | |:-|:--|:| | Prev | Up | Next | | PostgreSQL Server Applications | Home | pg_archivecleanup | If you see anything in the documentation that is not correct, does not match your experience with the particular feature or requires further clarification, please use this form to report a documentation issue. Copyright 1996-2024 The PostgreSQL Global Development Group" } ]
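A hedged example of how the options described on this page are typically combined into a single invocation. The data directory path, locale, encoding, and authentication choices are illustrative only; adjust them to your environment. The long option spellings shown are the usual ones for the options discussed above.

```
# Run as the operating-system user that will own the server process (not root).
initdb -D /usr/local/pgsql/data \
       --locale=en_US.UTF-8 \
       --encoding=UTF8 \
       --data-checksums \
       --auth-local=peer \
       --auth-host=scram-sha-256
```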
{ "category": "App Definition and Development", "file_name": "__ga=2.206060435.1678453019.1688754812-253506922.1688754810.md", "project_name": "Qubole", "subcategory": "Database" }
[ { "data": "Important We would like to publicly and unequivocally acknowledge that a few words and phrases in terminology used in our industry and subsequently adopted by Qubole over the last decade is insensitive, non-inclusive, and harmful. We are committed to inclusivity and correcting these terms and the negative impressions that they have facilitated. Qubole is actively replacing the following terms in our documentation and education materials: Master becomes Coordinator Slave becomes Worker Whitelist becomes Allow List or Allowed Blacklist becomes Deny List or Denied These terms have been pervasive for too long in this industry which is wrong and we will move as fast as we can to make necessary corrections, while we need your patience we will not take it for granted. Please do not hesitate to reach out if you feel there are other areas for improvement, if you feel we are not doing enough or moving fast enough or if you want to discuss anything further in this area. Copyright 2024, Qubole." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "RethinkDB", "subcategory": "Database" }
[ { "data": "JavaScript Ruby Python Java These drivers have been updated to use the JSON driver protocol and at least RethinkDB 2.0 ReQL. C# bchavez C# mfenniak C++ Clojure Common Lisp Dart Delphi Elixir Erlang Go Haskell JS neumino Lua Nim Perl PHP R Rust Swift Crystal These drivers may not support all of RethinkDB 2.0's ReQL. However, if you wish to work with one of these languages, they may provide a good starting point. Objective-C Scala Note that both the official JavaScript driver and neuminos rethinkdbdash driver are designed to work with Node.js. Havent installed the server yet? Go do that first! Help make RethinkDB available on more platformscontribute a driver for another language, or join one of the existing projects. To get started with driver development: Thanks to all our amazing driver contributors! Install in seconds on Linux or OS X. Star this project on GitHub." } ]
{ "category": "App Definition and Development", "file_name": "cookbook.md", "project_name": "RethinkDB", "subcategory": "Database" }
[ { "data": "ReQL is the RethinkDB query language. It offers a very powerful and convenient way to manipulate JSON documents. This document is a gentle introduction to ReQL concepts. You dont have to read it to be productive with RethinkDB, but it helps to understand some basics. Want to write useful queries right away? Check out the ten-minute guide. ReQL is different from other NoSQL query languages. Its built on three key principles: Lets look at these concepts in more detail. Note: the following examples use the Python driver, but most of them also apply to RethinkDB drivers for other languages. You start using ReQL in your program similarly to how youd use other databases: ``` from rethinkdb import RethinkDB # import the RethinkDB package r = RethinkDB() # create a RethinkDB object conn = r.connect() # connect to the server on localhost and default port ``` But this is where the similarity ends. Instead of constructing strings and passing them to the database server, you access ReQL by using methods from the rethinkdb package: ``` r.table_create('users').run(conn) # create a table `users` r.table('users').run(conn) # get an iterable cursor to the `users` table ``` Every ReQL query, from filters, to updates, to table joins is done by calling appropriate methods. This design has the following advantages: In ReQL, you can chain commands at the end of other commands using the . operator: ``` r.table('users').run(conn) r.table('users').pluck('last_name').run(conn) r.table('users').pluck('last_name').distinct().run(conn) r.table('users').pluck('last_name').distinct().count().run(conn) ``` Almost all ReQL operations are chainable. You can think of the . operator similarly to how youd think of a Unix pipe. You select the data from the table and pipe it into a command that transforms it. You can continue chaining transformers until your query is done. In ReQL, data flows from left to right. Even if you have a cluster of RethinkDB nodes, you can send your queries to any node and the cluster will create and execute distributed programs that get the data from relevant nodes, perform the necessary computations, and present you with final results without you ever worrying about it. This design has the following advantages: While queries are built up on the client, theyre only sent to the server once you call the run command. All processing happens on the serverthe queries dont run on the client, and dont require intermediary network round trips between the client and the server. For example, you can store queries in variables, and send them to the server later: ``` distinctlastnamesquery = r.table('users').pluck('last_name').distinct()" }, { "data": "``` Read about how this technology is implemented for more details. ReQL queries are executed lazily: ``` r.table('users').has_fields('age').limit(5).run(conn) ``` For this query RethinkDB will perform enough work to get the five documents, and stop when the query is satisfied. Even if you dont have a limit on the number of queries but use a cursor, RethinkDB will do just enough work to allow you to read the data you request. This allows queries to execute quickly without wasting CPU cycles, network bandwidth, and disk IO. Like most database systems, ReQL supports primary and secondary indexes to allow efficient data access. You can also create compound indexes and indexes based on arbitrary ReQL expressions to speed up complex queries. Learn how to use primary and secondary indexes in RethinkDB. 
All ReQL queries are automatically parallelized on the RethinkDB server as much as possible. Whenever possible, query execution is split across CPU cores, servers in the cluster, and even multiple datacenters. If you have large, complicated queries that require multiple stages of processing, RethinkDB will automatically break them up into stages, execute each stage in parallel, and combine data to return a complete result. While RethinkDB doesn't currently have a fully-featured query optimizer, ReQL is designed with one in mind. For example, the server has enough information to reorder the chain for efficiency, or to use alternative implementation plans to improve performance. This feature will be introduced into future versions of RethinkDB. So far we've seen only simple queries without conditions. ReQL supports a familiar syntax for building more advanced queries: ``` r.table('users').filter(lambda user: user['age'] > 30).run(conn) r.table('users').filter(r.row['age'] > 30).run(conn) ``` This query looks just like any other Python code you would normally write. Note that RethinkDB will execute this query on the server; it doesn't execute native Python code. The client drivers do a lot of work to inspect the code and convert it to an efficient ReQL query that will be executed on the server. Read about how this technology is implemented for more details. This technology has limitations. While most operations allow you to write familiar code, you can't use native language operations that have side effects (such as print) or control blocks (such as if and for). Instead, you have to use alternative ReQL commands: ``` r.table('users').filter(lambda user: True if user['age'] > 30 else False).run(conn) r.table('users').filter(lambda user: r.branch(user['age'] > 30, True," }, { "data": "False)).run(conn) ``` You can combine multiple ReQL queries to build more complex ones. Let's start with a simple example. RethinkDB supports server-side JavaScript evaluation using the embedded V8 engine (sandboxed within outside processes, of course): ``` r.js('1 + 1').run(conn) ``` Because ReQL is composable you can combine the r.js command with any other query. For example, let's use it as an alternative to get all users older than 30: ``` r.table('users').filter(lambda user: user['age'] > 30).run(conn) r.table('users').filter(r.js('(function (user) { return user.age > 30; })')).run(conn) ``` RethinkDB will seamlessly evaluate the js command by calling into the V8 engine during the evaluation of the filter query. You can combine most queries this way into progressively more complex ones. Let's say we have another table authors, and we'd like to get a list of authors whose last names are also in the users table we've seen before. We can do it by combining two queries: ``` r.table('authors').filter(lambda author: r.table('users').pluck('last_name').contains(author.pluck('last_name'))).run(conn) ``` Here, we use the r.table('users').pluck('last_name') query as the inner query in filter, combining the two queries to build a more sophisticated one. Even if you have a cluster of servers and both the authors table and the users table are sharded, RethinkDB will do the right thing and evaluate relevant parts of the query above on the appropriate shards, combine bits of data as necessary, and return the complete result. Composing queries isn't limited to simple commands and inner queries. 
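ReQL also ships join commands that compose the same way; as a rough sketch (it assumes a last_name secondary index on the users table, which this page does not create), the authors/users match above could alternatively be pushed onto that index:
```
# pair each author with the users that share its last_name, via the index
r.table('authors').eq_join('last_name', r.table('users'), index='last_name').zip().run(conn)
```
eq_join matches on the indexed field instead of re-running an inner query for every document, and zip() merges each matched pair into a single result document.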
You can also use expressions to perform complex operations. For example, suppose we'd like to find all users whose salary and bonus don't exceed $90,000, and increase their salary by 10%: ``` r.table('users').filter(lambda user: user['salary'] + user['bonus'] < 90000) .update(lambda user: {'salary': user['salary'] + user['salary'] * 0.1}).run(conn) ``` In addition to the commands described here, ReQL supports a number of more sophisticated commands that are composable in the same way. See the documentation for more details. Just in case you needed another calculator, ReQL can do that too! ``` (r.expr(2) + r.expr(2)).run(conn) (r.expr(2) + 2).run(conn) (r.expr(2) + 2 / 2).run(conn) (r.expr(2) > 3).run(conn) r.branch(r.expr(2) > 3, 1, # if True, return 1 2 # otherwise, return 2 ).run(conn) r.table_create('fib').run(conn) r.table('fib').insert([{'id': 0, 'value': 0}, {'id': 1, 'value': 1}]).run(conn) r.expr([2, 3, 4, 5, 6, 7, 8, 9, 10, 11]).for_each(lambda x: r.table('fib').insert({'id': x, 'value': (r.table('fib').order_by('id').nth(x - 1)['value'] + r.table('fib').order_by('id').nth(x - 2)['value']) })).run(conn) r.table('fib').order_by('id')['value'].run(conn) ``` Browse the ReQL documentation to learn more." } ]
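As one more illustration of how those composable commands chain (a sketch only; the age and salary fields are carried over from the earlier users examples, and the grouping itself is an assumption rather than something shown on this page), server-side aggregation reads the same way as the queries above:
```
r.table('users').group('age').avg('salary').run(conn)  # average salary per age group, computed on the server
r.table('users').count().run(conn)                     # total number of user documents
```
Like every other ReQL query, nothing is evaluated until run(conn) sends the chain to the cluster.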
{ "category": "App Definition and Development", "file_name": "quickstart.md", "project_name": "RethinkDB", "subcategory": "Database" }
[ { "data": "JavaScript Ruby Python Java These drivers have been updated to use the JSON driver protocol and at least RethinkDB 2.0 ReQL. C# bchavez C# mfenniak C++ Clojure Common Lisp Dart Delphi Elixir Erlang Go Haskell JS neumino Lua Nim Perl PHP R Rust Swift Crystal These drivers may not support all of RethinkDB 2.0's ReQL. However, if you wish to work with one of these languages, they may provide a good starting point. Objective-C Scala Note that both the official JavaScript driver and neuminos rethinkdbdash driver are designed to work with Node.js. Havent installed the server yet? Go do that first! Help make RethinkDB available on more platformscontribute a driver for another language, or join one of the existing projects. To get started with driver development: Thanks to all our amazing driver contributors! Install in seconds on Linux or OS X. Star this project on GitHub." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Redis", "subcategory": "Database" }
[ { "data": "Optimizing Pokmon GO with a Redis Enterprise cluster Learn what you need to go from beginner to GenAI expert Optimizing Pokmon GO with a Redis Enterprise cluster Learn what you need to go from beginner to GenAI expert Understand how to use Redis as a document database This quick start guide shows you how to: The examples in this article refer to a simple bicycle inventory that contains JSON documents with the following structure: ``` { \"brand\": \"brand name\", \"condition\": \"new | used | refurbished\", \"description\": \"description\", \"model\": \"model\", \"price\": 0 } ``` The easiest way to get started with Redis Stack is to use Redis Cloud: Create a free account. Follow the instructions to create a free database. This free Redis Cloud database comes out of the box with all the Redis Stack features. You can alternatively use the installation guides to install Redis Stack on your local machine. The first step is to connect to your Redis Stack database. You can find further details about the connection options in this documentation site's connection section. The following example shows how to connect to a Redis Stack server that runs on localhost (-h 127.0.0.1) and listens on the default port (-p 6379): ``` redis-cli -h 127.0.0.1 -p 6379``` ``` import redis import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.json.path import Path from redis.commands.search.field import NumericField, TagField, TextField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import Query r = redis.Redis(host=\"localhost\", port=6379, db=0, decode_responses=True) bicycle = { \"brand\": \"Velorim\", \"model\": \"Jigger\", \"price\": 270, \"description\": ( \"Small and powerful, the Jigger is the best ride \" \"for the smallest of tikes! This is the tiniest \" \"kids pedal bike on the market available without\" \" a coaster brake, the Jigger is the vehicle of \" \"choice for the rare tenacious little rider \" \"raring to go.\" ), \"condition\": \"new\", } bicycles = [ bicycle, { \"brand\": \"Bicyk\", \"model\": \"Hillcraft\", \"price\": 1200, \"description\": ( \"Kids want to ride with as little weight as possible.\" \" Especially on an incline! They may be at the age \" 'when a 27.5\" wheel bike is just too clumsy coming ' 'off a 24\" bike. The Hillcraft 26 is just the solution' \" they need!\" ), \"condition\": \"used\", }, { \"brand\": \"Nord\", \"model\": \"Chook air 5\", \"price\": 815, \"description\": ( \"The Chook Air 5 gives kids aged six years and older \" \"a durable and uberlight mountain bike for their first\" \" experience on tracks and easy cruising through forests\" \" and fields. The lower top tube makes it easy to mount\" \" and dismount in any situation, giving your kids greater\" \" safety on the trails.\" ), \"condition\": \"used\", }, { \"brand\": \"Eva\", \"model\": \"Eva 291\", \"price\": 3400, \"description\": ( \"The sister company to Nord, Eva launched in 2005 as the\" \" first and only women-dedicated bicycle brand. Designed\" \" by women for women, allEva bikes are optimized for the\" \" feminine physique using analytics from a body metrics\" \" database. If you like 29ers, try the Eva 291. Its a \" \"brand new bike for 2022.. This full-suspension, \" \"cross-country ride has been designed for velocity. 
The\" \" 291 has 100mm of front and rear travel, a superlight \" \"aluminum frame and fast-rolling 29-inch" }, { "data": "Yippee!\" ), \"condition\": \"used\", }, { \"brand\": \"Noka Bikes\", \"model\": \"Kahuna\", \"price\": 3200, \"description\": ( \"Whether you want to try your hand at XC racing or are \" \"looking for a lively trail bike that's just as inspiring\" \" on the climbs as it is over rougher ground, the Wilder\" \" is one heck of a bike built specifically for short women.\" \" Both the frames and components have been tweaked to \" \"include a womens saddle, different bars and unique \" \"colourway.\" ), \"condition\": \"used\", }, { \"brand\": \"Breakout\", \"model\": \"XBN 2.1 Alloy\", \"price\": 810, \"description\": ( \"The XBN 2.1 Alloy is our entry-level road bike but thats\" \" not to say that its a basic machine. With an internal \" \"weld aluminium frame, a full carbon fork, and the slick-shifting\" \" Claris gears from Shimanos, this is a bike which doesnt\" \" break the bank and delivers craved performance.\" ), \"condition\": \"new\", }, { \"brand\": \"ScramBikes\", \"model\": \"WattBike\", \"price\": 2300, \"description\": ( \"The WattBike is the best e-bike for people who still feel young\" \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" \" more than 60 miles on one charge. Its great for tackling hilly\" \" terrain or if you just fancy a more leisurely ride. With three\" \" working modes, you can choose between E-bike, assisted bicycle,\" \" and normal bike modes.\" ), \"condition\": \"new\", }, { \"brand\": \"Peaknetic\", \"model\": \"Secto\", \"price\": 430, \"description\": ( \"If you struggle with stiff fingers or a kinked neck or back after\" \" a few minutes on the road, this lightweight, aluminum bike\" \" alleviates those issues and allows you to enjoy the ride. From\" \" the ergonomic grips to the lumbar-supporting seat position, the\" \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" \" tube facilitates stability by allowing you to put a foot on the\" \" ground to balance at a stop, and the low step-over frame makes it\" \" accessible for all ability and mobility levels. The saddle is\" \" very soft, with a wide back to support your hip joints and a\" \" cutout in the center to redistribute that pressure. Rim brakes\" \" deliver satisfactory braking control, and the wide tires provide\" \" a smooth, stable ride on paved roads and gravel. Rack and fender\" \" mounts facilitate setting up the Roll Low-Entry as your preferred\" \" commuter, and the BMX-like handlebar offers space for mounting a\" \" flashlight, bell, or phone holder.\" ), \"condition\": \"new\", }, { \"brand\": \"nHill\", \"model\": \"Summit\", \"price\": 1200, \"description\": ( \"This budget mountain bike from nHill performs well both on bike\" \" paths and on the trail. The fork with 100mm of travel absorbs\" \" rough terrain. Fat Kenda Booster tires give you grip in corners\" \" and on wet trails. The Shimano Tourney drivetrain offered enough\" \" gears for finding a comfortable pace to ride uphill, and the\" \" Tektro hydraulic disc brakes break smoothly. 
Whether you want an\" \" affordable bike that you can take to work, but also take trail in\" \" mountains on the weekends or youre just after a stable,\" \" comfortable ride for the bike path, the Summit gives a good value\" \" for money.\" ), \"condition\": \"new\", }, { \"model\": \"ThrillCycle\", \"brand\": \"BikeShind\", \"price\": 815, \"description\": ( \"An artsy, retro-inspired bicycle thats as functional as it is\" \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" \" 9-speed drivetrain has enough gears for coasting in the city, but\" \" we wouldnt suggest taking it to the mountains. Fenders protect\" \" you from mud, and a rear basket lets you transport groceries,\" \" flowers and books. The ThrillCycle comes with a limited lifetime\" \" warranty, so this little guy will last you long past" }, { "data": "), \"condition\": \"refurbished\", }, ] schema = ( TextField(\"$.brand\", as_name=\"brand\"), TextField(\"$.model\", as_name=\"model\"), TextField(\"$.description\", as_name=\"description\"), NumericField(\"$.price\", as_name=\"price\"), TagField(\"$.condition\", as_name=\"condition\"), ) index = r.ft(\"idx:bicycle\") index.create_index( schema, definition=IndexDefinition(prefix=[\"bicycle:\"], index_type=IndexType.JSON), ) for bid, bicycle in enumerate(bicycles): r.json().set(f\"bicycle:{bid}\", Path.root_path(), bicycle) res = index.search(Query(\"*\")) print(\"Documents found:\", res.total) res = index.search(Query(\"@model:Jigger\")) print(res) res = index.search( Query(\"@model:Jigger\").returnfield(\"$.price\", asfield=\"price\") ) print(res) res = index.search(Query(\"basic @price:[500 1000]\")) print(res) res = index.search(Query('@brand:\"Noka Bikes\"')) print(res) res = index.search( Query( \"@description:%analitics%\" ).dialect( # Note the typo in the word \"analytics\" 2 ) ) print(res) res = index.search( Query( \"@description:%%analitycs%%\" ).dialect( # Note 2 typos in the word \"analytics\" 2 ) ) print(res) res = index.search(Query(\"@model:hill*\")) print(res) res = index.search(Query(\"@model:*bike\")) print(res) res = index.search(Query(\"w'H?*craft'\").dialect(2)) print(res.docs[0].json) res = index.search(Query(\"mountain\").with_scores()) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") res = index.search(Query(\"mountain\").with_scores().scorer(\"BM25\")) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") req = aggregations.AggregateRequest(\"*\").group_by( \"@condition\", reducers.count().alias(\"count\") ) res = index.aggregate(req).rows print(res) ``` ``` package io.redis.examples; import java.math.BigDecimal; import java.util.*; import redis.clients.jedis.*; import redis.clients.jedis.exceptions.*; import redis.clients.jedis.search.*; import redis.clients.jedis.search.aggr.*; import redis.clients.jedis.search.schemafields.*; class Bicycle { public String brand; public String model; public BigDecimal price; public String description; public String condition; public Bicycle(String brand, String model, BigDecimal price, String condition, String description) { this.brand = brand; this.model = model; this.price = price; this.condition = condition; this.description = description; } } public class SearchQuickstartExample { public void run() { // UnifiedJedis jedis = new UnifiedJedis(\"redis://localhost:6379\"); JedisPooled jedis = new JedisPooled(\"localhost\", 6379); SchemaField[] schema = { TextField.of(\"$.brand\").as(\"brand\"), TextField.of(\"$.model\").as(\"model\"), TextField.of(\"$.description\").as(\"description\"), 
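// Each schema entry maps a JSONPath in the stored JSON document to the alias used in queries:
// TEXT fields are tokenized for full-text search, the NUMERIC field supports range queries
// such as @price:[500 1000], and the TAG field holds exact-match values like the condition.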
NumericField.of(\"$.price\").as(\"price\"), TagField.of(\"$.condition\").as(\"condition\") }; jedis.ftCreate(\"idx:bicycle\", FTCreateParams.createParams() .on(IndexDataType.JSON) .addPrefix(\"bicycle:\"), schema ); Bicycle[] bicycles = { new Bicycle( \"Velorim\", \"Jigger\", new BigDecimal(270), \"new\", \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\" ), new Bicycle( \"Bicyk\", \"Hillcraft\", new BigDecimal(1200), \"used\", \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\" ), new Bicycle( \"Nord\", \"Chook air 5\", new BigDecimal(815), \"used\", \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\" ), new Bicycle( \"Eva\", \"Eva 291\", new BigDecimal(3400), \"used\", \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. It's a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch" }, { "data": "Yippee!\" ), new Bicycle( \"Noka Bikes\", \"Kahuna\", new BigDecimal(3200), \"used\", \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\" ), new Bicycle( \"Breakout\", \"XBN 2.1 Alloy\", new BigDecimal(810), \"new\", \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\" ), new Bicycle( \"ScramBikes\", \"WattBike\", new BigDecimal(2300), \"new\", \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. 
With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\" ), new Bicycle( \"Peaknetic\", \"Secto\", new BigDecimal(430), \"new\", \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\" ), new Bicycle( \"nHill\", \"Summit\", new BigDecimal(1200), \"new\", \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\" ), new Bicycle( \"ThrillCycle\", \"BikeShind\", new BigDecimal(815), \"refurbished\", \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and" }, { "data": "The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\" ), }; for (int i = 0; i < bicycles.length; i++) { jedis.jsonSetWithEscape(String.format(\"bicycle:%d\", i), bicycles[i]); } Query query1 = new Query(\"*\"); List<Document> result1 = jedis.ftSearch(\"idx:bicycle\", query1).getDocuments(); System.out.println(\"Documents found:\" + result1.size()); // Prints: Documents found: 10 Query query2 = new Query(\"@model:Jigger\"); List<Document> result2 = jedis.ftSearch(\"idx:bicycle\", query2).getDocuments(); System.out.println(result2); // Prints: [id:bicycle:0, score: 1.0, payload:null, // properties:[$={\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! 
This is the tiniest kids pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}]] Query query3 = new Query(\"@model:Jigger\").returnFields(\"price\"); List<Document> result3 = jedis.ftSearch(\"idx:bicycle\", query3).getDocuments(); System.out.println(result3); // Prints: [id:bicycle:0, score: 1.0, payload:null, properties:[price=270]] Query query4 = new Query(\"basic @price:[500 1000]\"); List<Document> result4 = jedis.ftSearch(\"idx:bicycle\", query4).getDocuments(); System.out.println(result4); // Prints: [id:bicycle:5, score: 1.0, payload:null, // properties:[$={\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike but thats not to say that its a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimanos, this is a bike which doesnt break the bank and delivers craved performance.\",\"condition\":\"new\"}]] Query query5 = new Query(\"@brand:\\\"Noka Bikes\\\"\"); List<Document> result5 = jedis.ftSearch(\"idx:bicycle\", query5).getDocuments(); System.out.println(result5); // Prints: [id:bicycle:4, score: 1.0, payload:null, // properties:[$={\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a womens saddle, different bars and unique colourway.\",\"condition\":\"used\"}]] AggregationBuilder ab = new AggregationBuilder(\"*\").groupBy(\"@condition\", Reducers.count().as(\"count\")); AggregationResult ar = jedis.ftAggregate(\"idx:bicycle\", ab); for (int i = 0; i < ar.getTotalResults(); i++) { System.out.println(ar.getRow(i).getString(\"condition\") + \" - \" ar.getRow(i).getString(\"count\")); } // Prints: // refurbished - 1 // used - 5 // new - 4 assertEquals(\"Validate aggregation results\", 3, ar.getTotalResults()); jedis.close(); } }``` ``` using NRedisStack.RedisStackCommands; using NRedisStack.Search; using NRedisStack.Search.Aggregation; using NRedisStack.Search.Literals.Enums; using NRedisStack.Tests; using StackExchange.Redis; public class SearchQuickstartExample { [SkipIfRedis(Is.OSSCluster)] public void run() { var redis = ConnectionMultiplexer.Connect(\"localhost:6379\"); var db = redis.GetDatabase(); var ft = db.FT(); var json = db.JSON(); var bike1 = new { Brand = \"Velorim\", Model = \"Jigger\", Price = 270M, Description = \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\", Condition = \"used\" }; var bicycles = new object[] { bike1, new { Brand = \"Bicyk\", Model = \"Hillcraft\", Price = 1200M, Description = \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. 
The Hillcraft 26 is just the solution\" + \" they need!\", Condition = \"used\", }, new { Brand = \"Nord\", Model = \"Chook air 5\", Price = 815M, Description = \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and" }, { "data": "The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\", Condition = \"used\", }, new { Brand = \"Eva\", Model = \"Eva 291\", Price = 3400M, Description = \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. Its a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\", Condition = \"used\", }, new { Brand = \"Noka Bikes\", Model = \"Kahuna\", Price = 3200M, Description = \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\", Condition = \"used\", }, new { Brand = \"Breakout\", Model = \"XBN 2.1 Alloy\", Price = 810M, Description = \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\", Condition = \"new\", }, new { Brand = \"ScramBikes\", Model = \"WattBike\", Price = 2300M, Description = \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\", Condition = \"new\", }, new { Brand = \"Peaknetic\", Model = \"Secto\", Price = 430M, Description = \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. 
Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and" }, { "data": "Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\", Condition = \"new\", }, new { Brand = \"nHill\", Model = \"Summit\", Price = 1200M, Description = \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\", Condition = \"new\", }, new { Model = \"ThrillCycle\", Brand = \"BikeShind\", Price = 815M, Description = \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\", Condition = \"refurbished\", }, }; var schema = new Schema() .AddTextField(new FieldName(\"$.Brand\", \"Brand\")) .AddTextField(new FieldName(\"$.Model\", \"Model\")) .AddTextField(new FieldName(\"$.Description\", \"Description\")) .AddNumericField(new FieldName(\"$.Price\", \"Price\")) .AddTagField(new FieldName(\"$.Condition\", \"Condition\")); ft.Create( \"idx:bicycle\", new FTCreateParams().On(IndexDataType.JSON).Prefix(\"bicycle:\"), schema); for (int i = 0; i < bicycles.Length; i++) { json.Set($\"bicycle:{i}\", \"$\", bicycles[i]); } var query1 = new Query(\"*\"); var res1 = ft.Search(\"idx:bicycle\", query1).Documents; Console.WriteLine(string.Join(\"\\n\", res1.Count())); // Prints: Documents found: 10 var query2 = new Query(\"@Model:Jigger\"); var res2 = ft.Search(\"idx:bicycle\", query2).Documents; Console.WriteLine(string.Join(\"\\n\", res2.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query3 = new Query(\"basic @Price:[500 1000]\"); var res3 = ft.Search(\"idx:bicycle\", query3).Documents; Console.WriteLine(string.Join(\"\\n\", res3.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. 
This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query4 = new Query(\"@Brand:\\\"Noka Bikes\\\"\"); var res4 = ft.Search(\"idx:bicycle\", query4).Documents; Console.WriteLine(string.Join(\"\\n\", res4.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query5 = new Query(\"@Model:Jigger\").ReturnFields(\"Price\"); var res5 = ft.Search(\"idx:bicycle\", query5).Documents; Console.WriteLine(res5.First()[\"Price\"]); // Prints: 270 var request = new AggregationRequest(\"*\").GroupBy( \"@Condition\", Reducers.Count().As(\"Count\")); var result = ft.Aggregate(\"idx:bicycle\", request); for (var i = 0; i < result.TotalResults; i++) { var row = result.GetRow(i); Console.WriteLine($\"{row[\"Condition\"]} - {row[\"Count\"]}\"); } // Prints: // refurbished - 1 // used - 5 // new - 4 } }``` As explained in the in-memory data store quick start guide, Redis allows you to access an item directly via its key. You also learned how to scan the keyspace. Whereby you can use other data structures (e.g., hashes and sorted sets) as secondary indexes, your application would need to maintain those indexes manually. Redis Stack turns Redis into a document database by allowing you to declare which fields are auto-indexed. Redis Stack currently supports secondary index creation on the hashes and JSON documents. The following example shows an" }, { "data": "command that creates an index with some text fields, a numeric field (price), and a tag field (condition). The text fields have a weight of 1.0, meaning they have the same relevancy in the context of full-text searches. The field names follow the JSONPath notion. Each such index field maps to a property within the JSON document. ``` FT.CREATE idx:bicycle ON JSON PREFIX 1 bicycle: SCORE 1.0 SCHEMA $.brand AS brand TEXT WEIGHT 1.0 $.model AS model TEXT WEIGHT 1.0 $.description AS description TEXT WEIGHT 1.0 $.price AS price NUMERIC $.condition AS condition TAG SEPARATOR , OK``` ``` import redis import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.json.path import Path from redis.commands.search.field import NumericField, TagField, TextField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import Query r = redis.Redis(host=\"localhost\", port=6379, db=0, decode_responses=True) bicycle = { \"brand\": \"Velorim\", \"model\": \"Jigger\", \"price\": 270, \"description\": ( \"Small and powerful, the Jigger is the best ride \" \"for the smallest of tikes! This is the tiniest \" \"kids pedal bike on the market available without\" \" a coaster brake, the Jigger is the vehicle of \" \"choice for the rare tenacious little rider \" \"raring to go.\" ), \"condition\": \"new\", } bicycles = [ bicycle, { \"brand\": \"Bicyk\", \"model\": \"Hillcraft\", \"price\": 1200, \"description\": ( \"Kids want to ride with as little weight as possible.\" \" Especially on an incline! They may be at the age \" 'when a 27.5\" wheel bike is just too clumsy coming ' 'off a 24\" bike. 
The Hillcraft 26 is just the solution' \" they need!\" ), \"condition\": \"used\", }, { \"brand\": \"Nord\", \"model\": \"Chook air 5\", \"price\": 815, \"description\": ( \"The Chook Air 5 gives kids aged six years and older \" \"a durable and uberlight mountain bike for their first\" \" experience on tracks and easy cruising through forests\" \" and fields. The lower top tube makes it easy to mount\" \" and dismount in any situation, giving your kids greater\" \" safety on the trails.\" ), \"condition\": \"used\", }, { \"brand\": \"Eva\", \"model\": \"Eva 291\", \"price\": 3400, \"description\": ( \"The sister company to Nord, Eva launched in 2005 as the\" \" first and only women-dedicated bicycle brand. Designed\" \" by women for women, allEva bikes are optimized for the\" \" feminine physique using analytics from a body metrics\" \" database. If you like 29ers, try the Eva 291. Its a \" \"brand new bike for 2022.. This full-suspension, \" \"cross-country ride has been designed for velocity. The\" \" 291 has 100mm of front and rear travel, a superlight \" \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), \"condition\": \"used\", }, { \"brand\": \"Noka Bikes\", \"model\": \"Kahuna\", \"price\": 3200, \"description\": ( \"Whether you want to try your hand at XC racing or are \" \"looking for a lively trail bike that's just as inspiring\" \" on the climbs as it is over rougher ground, the Wilder\" \" is one heck of a bike built specifically for short women.\" \" Both the frames and components have been tweaked to \" \"include a womens saddle, different bars and unique \" \"colourway.\" ), \"condition\": \"used\", }, { \"brand\": \"Breakout\", \"model\": \"XBN 2.1 Alloy\", \"price\": 810, \"description\": ( \"The XBN 2.1 Alloy is our entry-level road bike but thats\" \" not to say that its a basic machine. With an internal \" \"weld aluminium frame, a full carbon fork, and the slick-shifting\" \" Claris gears from Shimanos, this is a bike which doesnt\" \" break the bank and delivers craved" }, { "data": "), \"condition\": \"new\", }, { \"brand\": \"ScramBikes\", \"model\": \"WattBike\", \"price\": 2300, \"description\": ( \"The WattBike is the best e-bike for people who still feel young\" \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" \" more than 60 miles on one charge. Its great for tackling hilly\" \" terrain or if you just fancy a more leisurely ride. With three\" \" working modes, you can choose between E-bike, assisted bicycle,\" \" and normal bike modes.\" ), \"condition\": \"new\", }, { \"brand\": \"Peaknetic\", \"model\": \"Secto\", \"price\": 430, \"description\": ( \"If you struggle with stiff fingers or a kinked neck or back after\" \" a few minutes on the road, this lightweight, aluminum bike\" \" alleviates those issues and allows you to enjoy the ride. From\" \" the ergonomic grips to the lumbar-supporting seat position, the\" \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" \" tube facilitates stability by allowing you to put a foot on the\" \" ground to balance at a stop, and the low step-over frame makes it\" \" accessible for all ability and mobility levels. The saddle is\" \" very soft, with a wide back to support your hip joints and a\" \" cutout in the center to redistribute that pressure. Rim brakes\" \" deliver satisfactory braking control, and the wide tires provide\" \" a smooth, stable ride on paved roads and gravel. 
Rack and fender\" \" mounts facilitate setting up the Roll Low-Entry as your preferred\" \" commuter, and the BMX-like handlebar offers space for mounting a\" \" flashlight, bell, or phone holder.\" ), \"condition\": \"new\", }, { \"brand\": \"nHill\", \"model\": \"Summit\", \"price\": 1200, \"description\": ( \"This budget mountain bike from nHill performs well both on bike\" \" paths and on the trail. The fork with 100mm of travel absorbs\" \" rough terrain. Fat Kenda Booster tires give you grip in corners\" \" and on wet trails. The Shimano Tourney drivetrain offered enough\" \" gears for finding a comfortable pace to ride uphill, and the\" \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" \" affordable bike that you can take to work, but also take trail in\" \" mountains on the weekends or youre just after a stable,\" \" comfortable ride for the bike path, the Summit gives a good value\" \" for money.\" ), \"condition\": \"new\", }, { \"model\": \"ThrillCycle\", \"brand\": \"BikeShind\", \"price\": 815, \"description\": ( \"An artsy, retro-inspired bicycle thats as functional as it is\" \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" \" 9-speed drivetrain has enough gears for coasting in the city, but\" \" we wouldnt suggest taking it to the mountains. Fenders protect\" \" you from mud, and a rear basket lets you transport groceries,\" \" flowers and books. The ThrillCycle comes with a limited lifetime\" \" warranty, so this little guy will last you long past graduation.\" ), \"condition\": \"refurbished\", }, ] schema = ( TextField(\"$.brand\", as_name=\"brand\"), TextField(\"$.model\", as_name=\"model\"), TextField(\"$.description\", as_name=\"description\"), NumericField(\"$.price\", as_name=\"price\"), TagField(\"$.condition\", as_name=\"condition\"), ) index = r.ft(\"idx:bicycle\") index.create_index( schema, definition=IndexDefinition(prefix=[\"bicycle:\"], index_type=IndexType.JSON), ) for bid, bicycle in enumerate(bicycles): r.json().set(f\"bicycle:{bid}\", Path.root_path(), bicycle) res = index.search(Query(\"*\")) print(\"Documents found:\", res.total) res = index.search(Query(\"@model:Jigger\")) print(res) res = index.search( Query(\"@model:Jigger\").returnfield(\"$.price\", asfield=\"price\") ) print(res) res = index.search(Query(\"basic @price:[500 1000]\")) print(res) res = index.search(Query('@brand:\"Noka Bikes\"')) print(res) res = index.search( Query( \"@description:%analitics%\" ).dialect( # Note the typo in the word \"analytics\" 2 ) ) print(res) res = index.search( Query( \"@description:%%analitycs%%\" ).dialect( # Note 2 typos in the word \"analytics\" 2 ) ) print(res) res = index.search(Query(\"@model:hill*\")) print(res) res = index.search(Query(\"@model:*bike\")) print(res) res =" }, { "data": "print(res.docs[0].json) res = index.search(Query(\"mountain\").with_scores()) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") res = index.search(Query(\"mountain\").with_scores().scorer(\"BM25\")) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") req = aggregations.AggregateRequest(\"*\").group_by( \"@condition\", reducers.count().alias(\"count\") ) res = index.aggregate(req).rows print(res) ``` ``` package io.redis.examples; import java.math.BigDecimal; import java.util.*; import redis.clients.jedis.*; import redis.clients.jedis.exceptions.*; import redis.clients.jedis.search.*; import redis.clients.jedis.search.aggr.*; import redis.clients.jedis.search.schemafields.*; class Bicycle { public String brand; public 
String model; public BigDecimal price; public String description; public String condition; public Bicycle(String brand, String model, BigDecimal price, String condition, String description) { this.brand = brand; this.model = model; this.price = price; this.condition = condition; this.description = description; } } public class SearchQuickstartExample { public void run() { // UnifiedJedis jedis = new UnifiedJedis(\"redis://localhost:6379\"); JedisPooled jedis = new JedisPooled(\"localhost\", 6379); SchemaField[] schema = { TextField.of(\"$.brand\").as(\"brand\"), TextField.of(\"$.model\").as(\"model\"), TextField.of(\"$.description\").as(\"description\"), NumericField.of(\"$.price\").as(\"price\"), TagField.of(\"$.condition\").as(\"condition\") }; jedis.ftCreate(\"idx:bicycle\", FTCreateParams.createParams() .on(IndexDataType.JSON) .addPrefix(\"bicycle:\"), schema ); Bicycle[] bicycles = { new Bicycle( \"Velorim\", \"Jigger\", new BigDecimal(270), \"new\", \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\" ), new Bicycle( \"Bicyk\", \"Hillcraft\", new BigDecimal(1200), \"used\", \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\" ), new Bicycle( \"Nord\", \"Chook air 5\", new BigDecimal(815), \"used\", \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\" ), new Bicycle( \"Eva\", \"Eva 291\", new BigDecimal(3400), \"used\", \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. It's a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), new Bicycle( \"Noka Bikes\", \"Kahuna\", new BigDecimal(3200), \"used\", \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\" ), new Bicycle( \"Breakout\", \"XBN 2.1 Alloy\", new BigDecimal(810), \"new\", \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. 
With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved" }, { "data": "), new Bicycle( \"ScramBikes\", \"WattBike\", new BigDecimal(2300), \"new\", \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\" ), new Bicycle( \"Peaknetic\", \"Secto\", new BigDecimal(430), \"new\", \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\" ), new Bicycle( \"nHill\", \"Summit\", new BigDecimal(1200), \"new\", \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\" ), new Bicycle( \"ThrillCycle\", \"BikeShind\", new BigDecimal(815), \"refurbished\", \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. 
The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\" ), }; for (int i = 0; i < bicycles.length; i++) { jedis.jsonSetWithEscape(String.format(\"bicycle:%d\", i), bicycles[i]); } Query query1 = new Query(\"*\"); List<Document> result1 = jedis.ftSearch(\"idx:bicycle\", query1).getDocuments(); System.out.println(\"Documents found:\" + result1.size()); // Prints: Documents found: 10 Query query2 = new Query(\"@model:Jigger\"); List<Document> result2 = jedis.ftSearch(\"idx:bicycle\", query2).getDocuments(); System.out.println(result2); // Prints: [id:bicycle:0, score:" }, { "data": "payload:null, // properties:[$={\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}]] Query query3 = new Query(\"@model:Jigger\").returnFields(\"price\"); List<Document> result3 = jedis.ftSearch(\"idx:bicycle\", query3).getDocuments(); System.out.println(result3); // Prints: [id:bicycle:0, score: 1.0, payload:null, properties:[price=270]] Query query4 = new Query(\"basic @price:[500 1000]\"); List<Document> result4 = jedis.ftSearch(\"idx:bicycle\", query4).getDocuments(); System.out.println(result4); // Prints: [id:bicycle:5, score: 1.0, payload:null, // properties:[$={\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike but thats not to say that its a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimanos, this is a bike which doesnt break the bank and delivers craved performance.\",\"condition\":\"new\"}]] Query query5 = new Query(\"@brand:\\\"Noka Bikes\\\"\"); List<Document> result5 = jedis.ftSearch(\"idx:bicycle\", query5).getDocuments(); System.out.println(result5); // Prints: [id:bicycle:4, score: 1.0, payload:null, // properties:[$={\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. 
Both the frames and components have been tweaked to include a womens saddle, different bars and unique colourway.\",\"condition\":\"used\"}]] AggregationBuilder ab = new AggregationBuilder(\"*\").groupBy(\"@condition\", Reducers.count().as(\"count\")); AggregationResult ar = jedis.ftAggregate(\"idx:bicycle\", ab); for (int i = 0; i < ar.getTotalResults(); i++) { System.out.println(ar.getRow(i).getString(\"condition\") + \" - \" ar.getRow(i).getString(\"count\")); } // Prints: // refurbished - 1 // used - 5 // new - 4 assertEquals(\"Validate aggregation results\", 3, ar.getTotalResults()); jedis.close(); } }``` ``` using NRedisStack.RedisStackCommands; using NRedisStack.Search; using NRedisStack.Search.Aggregation; using NRedisStack.Search.Literals.Enums; using NRedisStack.Tests; using StackExchange.Redis; public class SearchQuickstartExample { [SkipIfRedis(Is.OSSCluster)] public void run() { var redis = ConnectionMultiplexer.Connect(\"localhost:6379\"); var db = redis.GetDatabase(); var ft = db.FT(); var json = db.JSON(); var bike1 = new { Brand = \"Velorim\", Model = \"Jigger\", Price = 270M, Description = \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\", Condition = \"used\" }; var bicycles = new object[] { bike1, new { Brand = \"Bicyk\", Model = \"Hillcraft\", Price = 1200M, Description = \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\", Condition = \"used\", }, new { Brand = \"Nord\", Model = \"Chook air 5\", Price = 815M, Description = \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\", Condition = \"used\", }, new { Brand = \"Eva\", Model = \"Eva 291\", Price = 3400M, Description = \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. Its a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for" }, { "data": "The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. 
Yippee!\", Condition = \"used\", }, new { Brand = \"Noka Bikes\", Model = \"Kahuna\", Price = 3200M, Description = \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\", Condition = \"used\", }, new { Brand = \"Breakout\", Model = \"XBN 2.1 Alloy\", Price = 810M, Description = \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\", Condition = \"new\", }, new { Brand = \"ScramBikes\", Model = \"WattBike\", Price = 2300M, Description = \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\", Condition = \"new\", }, new { Brand = \"Peaknetic\", Model = \"Secto\", Price = 430M, Description = \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\", Condition = \"new\", }, new { Brand = \"nHill\", Model = \"Summit\", Price = 1200M, Description = \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. 
The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break" }, { "data": "Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\", Condition = \"new\", }, new { Model = \"ThrillCycle\", Brand = \"BikeShind\", Price = 815M, Description = \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\", Condition = \"refurbished\", }, }; var schema = new Schema() .AddTextField(new FieldName(\"$.Brand\", \"Brand\")) .AddTextField(new FieldName(\"$.Model\", \"Model\")) .AddTextField(new FieldName(\"$.Description\", \"Description\")) .AddNumericField(new FieldName(\"$.Price\", \"Price\")) .AddTagField(new FieldName(\"$.Condition\", \"Condition\")); ft.Create( \"idx:bicycle\", new FTCreateParams().On(IndexDataType.JSON).Prefix(\"bicycle:\"), schema); for (int i = 0; i < bicycles.Length; i++) { json.Set($\"bicycle:{i}\", \"$\", bicycles[i]); } var query1 = new Query(\"*\"); var res1 = ft.Search(\"idx:bicycle\", query1).Documents; Console.WriteLine(string.Join(\"\\n\", res1.Count())); // Prints: Documents found: 10 var query2 = new Query(\"@Model:Jigger\"); var res2 = ft.Search(\"idx:bicycle\", query2).Documents; Console.WriteLine(string.Join(\"\\n\", res2.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query3 = new Query(\"basic @Price:[500 1000]\"); var res3 = ft.Search(\"idx:bicycle\", query3).Documents; Console.WriteLine(string.Join(\"\\n\", res3.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query4 = new Query(\"@Brand:\\\"Noka Bikes\\\"\"); var res4 = ft.Search(\"idx:bicycle\", query4).Documents; Console.WriteLine(string.Join(\"\\n\", res4.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. 
This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query5 = new Query(\"@Model:Jigger\").ReturnFields(\"Price\"); var res5 = ft.Search(\"idx:bicycle\", query5).Documents; Console.WriteLine(res5.First()[\"Price\"]); // Prints: 270 var request = new AggregationRequest(\"*\").GroupBy( \"@Condition\", Reducers.Count().As(\"Count\")); var result = ft.Aggregate(\"idx:bicycle\", request); for (var i = 0; i < result.TotalResults; i++) { var row = result.GetRow(i); Console.WriteLine($\"{row[\"Condition\"]} - {row[\"Count\"]}\"); } // Prints: // refurbished - 1 // used - 5 // new - 4 } }``` Any pre-existing JSON documents with a key prefix bicycle: are automatically added to the index. Additionally, any JSON documents with that prefix created or modified after index creation are added or re-added to the index. The example below shows you how to use the JSON.SET command to create new JSON documents: ``` JSON.SET \"bicycle:0\" \".\" \"{\\\"brand\\\": \\\"Velorim\\\", \\\"model\\\": \\\"Jigger\\\", \\\"price\\\": 270, \\\"description\\\": \\\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\\\\u2019 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\\\", \\\"condition\\\": \\\"new\\\"}\" OK JSON.SET \"bicycle:1\" \".\" \"{\\\"brand\\\": \\\"Bicyk\\\", \\\"model\\\": \\\"Hillcraft\\\", \\\"price\\\": 1200, \\\"description\\\": \\\"Kids want to ride with as little weight as possible. Especially on an incline! They may be at the age when a 27.5\\\\\\\" wheel bike is just too clumsy coming off a 24\\\\\\\" bike. The Hillcraft 26 is just the solution they need!\\\", \\\"condition\\\": \\\"used\\\"}\" OK JSON.SET \"bicycle:2\"" }, { "data": "\"{\\\"brand\\\": \\\"Nord\\\", \\\"model\\\": \\\"Chook air 5\\\", \\\"price\\\": 815, \\\"description\\\": \\\"The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails.\\\", \\\"condition\\\": \\\"used\\\"}\" OK JSON.SET \"bicycle:3\" \".\" \"{\\\"brand\\\": \\\"Eva\\\", \\\"model\\\": \\\"Eva 291\\\", \\\"price\\\": 3400, \\\"description\\\": \\\"The sister company to Nord, Eva launched in 2005 as the first and only women-dedicated bicycle brand. Designed by women for women, allEva bikes are optimized for the feminine physique using analytics from a body metrics database. If you like 29ers, try the Eva 291. It\\\\u2019s a brand new bike for 2022.. This full-suspension, cross-country ride has been designed for velocity. The 291 has 100mm of front and rear travel, a superlight aluminum frame and fast-rolling 29-inch wheels. Yippee!\\\", \\\"condition\\\": \\\"used\\\"}\" OK JSON.SET \"bicycle:4\" \".\" \"{\\\"brand\\\": \\\"Noka Bikes\\\", \\\"model\\\": \\\"Kahuna\\\", \\\"price\\\": 3200, \\\"description\\\": \\\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. 
Both the frames and components have been tweaked to include a women\\\\u2019s saddle, different bars and unique colourway.\\\", \\\"condition\\\": \\\"used\\\"}\" OK JSON.SET \"bicycle:5\" \".\" \"{\\\"brand\\\": \\\"Breakout\\\", \\\"model\\\": \\\"XBN 2.1 Alloy\\\", \\\"price\\\": 810, \\\"description\\\": \\\"The XBN 2.1 Alloy is our entry-level road bike \\\\u2013 but that\\\\u2019s not to say that it\\\\u2019s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\\\\u2019s, this is a bike which doesn\\\\u2019t break the bank and delivers craved performance.\\\", \\\"condition\\\": \\\"new\\\"}\" OK JSON.SET \"bicycle:6\" \".\" \"{\\\"brand\\\": \\\"ScramBikes\\\", \\\"model\\\": \\\"WattBike\\\", \\\"price\\\": 2300, \\\"description\\\": \\\"The WattBike is the best e-bike for people who still feel young at heart. It has a Bafang 1000W mid-drive system and a 48V 17.5AH Samsung Lithium-Ion battery, allowing you to ride for more than 60 miles on one charge. It\\\\u2019s great for tackling hilly terrain or if you just fancy a more leisurely ride. With three working modes, you can choose between E-bike, assisted bicycle, and normal bike modes.\\\", \\\"condition\\\": \\\"new\\\"}\" OK JSON.SET \"bicycle:7\" \".\" \"{\\\"brand\\\": \\\"Peaknetic\\\", \\\"model\\\": \\\"Secto\\\", \\\"price\\\": 430, \\\"description\\\": \\\"If you struggle with stiff fingers or a kinked neck or back after a few minutes on the road, this lightweight, aluminum bike alleviates those issues and allows you to enjoy the ride. From the ergonomic grips to the lumbar-supporting seat position, the Roll Low-Entry offers incredible comfort. The rear-inclined seat tube facilitates stability by allowing you to put a foot on the ground to balance at a stop, and the low step-over frame makes it accessible for all ability and mobility levels. The saddle is very soft, with a wide back to support your hip joints and a cutout in the center to redistribute that pressure. Rim brakes deliver satisfactory braking control, and the wide tires provide a smooth, stable ride on paved roads and gravel. Rack and fender mounts facilitate setting up the Roll Low-Entry as your preferred commuter, and the BMX-like handlebar offers space for mounting a flashlight, bell, or phone holder.\\\", \\\"condition\\\": \\\"new\\\"}\" OK JSON.SET \"bicycle:8\" \".\" \"{\\\"brand\\\": \\\"nHill\\\", \\\"model\\\": \\\"Summit\\\", \\\"price\\\": 1200, \\\"description\\\": \\\"This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet" }, { "data": "The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. Whether you want an affordable bike that you can take to work, but also take trail in mountains on the weekends or you\\\\u2019re just after a stable, comfortable ride for the bike path, the Summit gives a good value for money.\\\", \\\"condition\\\": \\\"new\\\"}\" OK JSON.SET \"bicycle:9\" \".\" \"{\\\"model\\\": \\\"ThrillCycle\\\", \\\"brand\\\": \\\"BikeShind\\\", \\\"price\\\": 815, \\\"description\\\": \\\"An artsy, retro-inspired bicycle that\\\\u2019s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. 
A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn\\\\u2019t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation.\\\", \\\"condition\\\": \\\"refurbished\\\"}\" OK``` ``` import redis import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.json.path import Path from redis.commands.search.field import NumericField, TagField, TextField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import Query r = redis.Redis(host=\"localhost\", port=6379, db=0, decode_responses=True) bicycle = { \"brand\": \"Velorim\", \"model\": \"Jigger\", \"price\": 270, \"description\": ( \"Small and powerful, the Jigger is the best ride \" \"for the smallest of tikes! This is the tiniest \" \"kids pedal bike on the market available without\" \" a coaster brake, the Jigger is the vehicle of \" \"choice for the rare tenacious little rider \" \"raring to go.\" ), \"condition\": \"new\", } bicycles = [ bicycle, { \"brand\": \"Bicyk\", \"model\": \"Hillcraft\", \"price\": 1200, \"description\": ( \"Kids want to ride with as little weight as possible.\" \" Especially on an incline! They may be at the age \" 'when a 27.5\" wheel bike is just too clumsy coming ' 'off a 24\" bike. The Hillcraft 26 is just the solution' \" they need!\" ), \"condition\": \"used\", }, { \"brand\": \"Nord\", \"model\": \"Chook air 5\", \"price\": 815, \"description\": ( \"The Chook Air 5 gives kids aged six years and older \" \"a durable and uberlight mountain bike for their first\" \" experience on tracks and easy cruising through forests\" \" and fields. The lower top tube makes it easy to mount\" \" and dismount in any situation, giving your kids greater\" \" safety on the trails.\" ), \"condition\": \"used\", }, { \"brand\": \"Eva\", \"model\": \"Eva 291\", \"price\": 3400, \"description\": ( \"The sister company to Nord, Eva launched in 2005 as the\" \" first and only women-dedicated bicycle brand. Designed\" \" by women for women, allEva bikes are optimized for the\" \" feminine physique using analytics from a body metrics\" \" database. If you like 29ers, try the Eva 291. Its a \" \"brand new bike for 2022.. This full-suspension, \" \"cross-country ride has been designed for velocity. The\" \" 291 has 100mm of front and rear travel, a superlight \" \"aluminum frame and fast-rolling 29-inch wheels. 
Yippee!\" ), \"condition\": \"used\", }, { \"brand\": \"Noka Bikes\", \"model\": \"Kahuna\", \"price\": 3200, \"description\": ( \"Whether you want to try your hand at XC racing or are \" \"looking for a lively trail bike that's just as inspiring\" \" on the climbs as it is over rougher ground, the Wilder\" \" is one heck of a bike built specifically for short women.\" \" Both the frames and components have been tweaked to \" \"include a womens saddle, different bars and unique \" \"colourway.\" ), \"condition\": \"used\", }, { \"brand\": \"Breakout\", \"model\": \"XBN 2.1 Alloy\", \"price\": 810, \"description\": ( \"The XBN 2.1 Alloy is our entry-level road bike but thats\" \" not to say that its a basic" }, { "data": "With an internal \" \"weld aluminium frame, a full carbon fork, and the slick-shifting\" \" Claris gears from Shimanos, this is a bike which doesnt\" \" break the bank and delivers craved performance.\" ), \"condition\": \"new\", }, { \"brand\": \"ScramBikes\", \"model\": \"WattBike\", \"price\": 2300, \"description\": ( \"The WattBike is the best e-bike for people who still feel young\" \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" \" more than 60 miles on one charge. Its great for tackling hilly\" \" terrain or if you just fancy a more leisurely ride. With three\" \" working modes, you can choose between E-bike, assisted bicycle,\" \" and normal bike modes.\" ), \"condition\": \"new\", }, { \"brand\": \"Peaknetic\", \"model\": \"Secto\", \"price\": 430, \"description\": ( \"If you struggle with stiff fingers or a kinked neck or back after\" \" a few minutes on the road, this lightweight, aluminum bike\" \" alleviates those issues and allows you to enjoy the ride. From\" \" the ergonomic grips to the lumbar-supporting seat position, the\" \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" \" tube facilitates stability by allowing you to put a foot on the\" \" ground to balance at a stop, and the low step-over frame makes it\" \" accessible for all ability and mobility levels. The saddle is\" \" very soft, with a wide back to support your hip joints and a\" \" cutout in the center to redistribute that pressure. Rim brakes\" \" deliver satisfactory braking control, and the wide tires provide\" \" a smooth, stable ride on paved roads and gravel. Rack and fender\" \" mounts facilitate setting up the Roll Low-Entry as your preferred\" \" commuter, and the BMX-like handlebar offers space for mounting a\" \" flashlight, bell, or phone holder.\" ), \"condition\": \"new\", }, { \"brand\": \"nHill\", \"model\": \"Summit\", \"price\": 1200, \"description\": ( \"This budget mountain bike from nHill performs well both on bike\" \" paths and on the trail. The fork with 100mm of travel absorbs\" \" rough terrain. Fat Kenda Booster tires give you grip in corners\" \" and on wet trails. The Shimano Tourney drivetrain offered enough\" \" gears for finding a comfortable pace to ride uphill, and the\" \" Tektro hydraulic disc brakes break smoothly. 
Whether you want an\" \" affordable bike that you can take to work, but also take trail in\" \" mountains on the weekends or youre just after a stable,\" \" comfortable ride for the bike path, the Summit gives a good value\" \" for money.\" ), \"condition\": \"new\", }, { \"model\": \"ThrillCycle\", \"brand\": \"BikeShind\", \"price\": 815, \"description\": ( \"An artsy, retro-inspired bicycle thats as functional as it is\" \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" \" 9-speed drivetrain has enough gears for coasting in the city, but\" \" we wouldnt suggest taking it to the mountains. Fenders protect\" \" you from mud, and a rear basket lets you transport groceries,\" \" flowers and books. The ThrillCycle comes with a limited lifetime\" \" warranty, so this little guy will last you long past graduation.\" ), \"condition\": \"refurbished\", }, ] schema = ( TextField(\"$.brand\", as_name=\"brand\"), TextField(\"$.model\", as_name=\"model\"), TextField(\"$.description\", as_name=\"description\"), NumericField(\"$.price\", as_name=\"price\"), TagField(\"$.condition\", as_name=\"condition\"), ) index = r.ft(\"idx:bicycle\") index.create_index( schema, definition=IndexDefinition(prefix=[\"bicycle:\"], index_type=IndexType.JSON), ) for bid, bicycle in enumerate(bicycles): r.json().set(f\"bicycle:{bid}\", Path.root_path(), bicycle) res = index.search(Query(\"*\")) print(\"Documents found:\", res.total) res = index.search(Query(\"@model:Jigger\")) print(res) res = index.search( Query(\"@model:Jigger\").returnfield(\"$.price\", asfield=\"price\") ) print(res) res = index.search(Query(\"basic @price:[500 1000]\")) print(res) res = index.search(Query('@brand:\"Noka Bikes\"')) print(res) res = index.search( Query( \"@description:%analitics%\"" }, { "data": "# Note the typo in the word \"analytics\" 2 ) ) print(res) res = index.search( Query( \"@description:%%analitycs%%\" ).dialect( # Note 2 typos in the word \"analytics\" 2 ) ) print(res) res = index.search(Query(\"@model:hill*\")) print(res) res = index.search(Query(\"@model:*bike\")) print(res) res = index.search(Query(\"w'H?*craft'\").dialect(2)) print(res.docs[0].json) res = index.search(Query(\"mountain\").with_scores()) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") res = index.search(Query(\"mountain\").with_scores().scorer(\"BM25\")) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") req = aggregations.AggregateRequest(\"*\").group_by( \"@condition\", reducers.count().alias(\"count\") ) res = index.aggregate(req).rows print(res) ``` ``` package io.redis.examples; import java.math.BigDecimal; import java.util.*; import redis.clients.jedis.*; import redis.clients.jedis.exceptions.*; import redis.clients.jedis.search.*; import redis.clients.jedis.search.aggr.*; import redis.clients.jedis.search.schemafields.*; class Bicycle { public String brand; public String model; public BigDecimal price; public String description; public String condition; public Bicycle(String brand, String model, BigDecimal price, String condition, String description) { this.brand = brand; this.model = model; this.price = price; this.condition = condition; this.description = description; } } public class SearchQuickstartExample { public void run() { // UnifiedJedis jedis = new UnifiedJedis(\"redis://localhost:6379\"); JedisPooled jedis = new JedisPooled(\"localhost\", 6379); SchemaField[] schema = { TextField.of(\"$.brand\").as(\"brand\"), TextField.of(\"$.model\").as(\"model\"), TextField.of(\"$.description\").as(\"description\"), 
NumericField.of(\"$.price\").as(\"price\"), TagField.of(\"$.condition\").as(\"condition\") }; jedis.ftCreate(\"idx:bicycle\", FTCreateParams.createParams() .on(IndexDataType.JSON) .addPrefix(\"bicycle:\"), schema ); Bicycle[] bicycles = { new Bicycle( \"Velorim\", \"Jigger\", new BigDecimal(270), \"new\", \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\" ), new Bicycle( \"Bicyk\", \"Hillcraft\", new BigDecimal(1200), \"used\", \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\" ), new Bicycle( \"Nord\", \"Chook air 5\", new BigDecimal(815), \"used\", \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\" ), new Bicycle( \"Eva\", \"Eva 291\", new BigDecimal(3400), \"used\", \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. It's a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), new Bicycle( \"Noka Bikes\", \"Kahuna\", new BigDecimal(3200), \"used\", \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\" ), new Bicycle( \"Breakout\", \"XBN 2.1 Alloy\", new BigDecimal(810), \"new\", \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic" }, { "data": "With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\" ), new Bicycle( \"ScramBikes\", \"WattBike\", new BigDecimal(2300), \"new\", \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. 
With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\" ), new Bicycle( \"Peaknetic\", \"Secto\", new BigDecimal(430), \"new\", \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\" ), new Bicycle( \"nHill\", \"Summit\", new BigDecimal(1200), \"new\", \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\" ), new Bicycle( \"ThrillCycle\", \"BikeShind\", new BigDecimal(815), \"refurbished\", \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\" ), }; for (int i = 0; i < bicycles.length; i++) { jedis.jsonSetWithEscape(String.format(\"bicycle:%d\", i), bicycles[i]); } Query query1 = new Query(\"*\"); List<Document> result1 = jedis.ftSearch(\"idx:bicycle\", query1).getDocuments(); System.out.println(\"Documents found:\" + result1.size()); // Prints: Documents found: 10 Query query2 = new Query(\"@model:Jigger\"); List<Document> result2 = jedis.ftSearch(\"idx:bicycle\", query2).getDocuments(); System.out.println(result2); // Prints: [id:bicycle:0, score:" }, { "data": "payload:null, // properties:[$={\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! 
This is the tiniest kids pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}]] Query query3 = new Query(\"@model:Jigger\").returnFields(\"price\"); List<Document> result3 = jedis.ftSearch(\"idx:bicycle\", query3).getDocuments(); System.out.println(result3); // Prints: [id:bicycle:0, score: 1.0, payload:null, properties:[price=270]] Query query4 = new Query(\"basic @price:[500 1000]\"); List<Document> result4 = jedis.ftSearch(\"idx:bicycle\", query4).getDocuments(); System.out.println(result4); // Prints: [id:bicycle:5, score: 1.0, payload:null, // properties:[$={\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike but thats not to say that its a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimanos, this is a bike which doesnt break the bank and delivers craved performance.\",\"condition\":\"new\"}]] Query query5 = new Query(\"@brand:\\\"Noka Bikes\\\"\"); List<Document> result5 = jedis.ftSearch(\"idx:bicycle\", query5).getDocuments(); System.out.println(result5); // Prints: [id:bicycle:4, score: 1.0, payload:null, // properties:[$={\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a womens saddle, different bars and unique colourway.\",\"condition\":\"used\"}]] AggregationBuilder ab = new AggregationBuilder(\"*\").groupBy(\"@condition\", Reducers.count().as(\"count\")); AggregationResult ar = jedis.ftAggregate(\"idx:bicycle\", ab); for (int i = 0; i < ar.getTotalResults(); i++) { System.out.println(ar.getRow(i).getString(\"condition\") + \" - \" + ar.getRow(i).getString(\"count\")); } // Prints: // refurbished - 1 // used - 5 // new - 4 assertEquals(\"Validate aggregation results\", 3, ar.getTotalResults()); jedis.close(); } }``` ``` using NRedisStack.RedisStackCommands; using NRedisStack.Search; using NRedisStack.Search.Aggregation; using NRedisStack.Search.Literals.Enums; using NRedisStack.Tests; using StackExchange.Redis; public class SearchQuickstartExample { [SkipIfRedis(Is.OSSCluster)] public void run() { var redis = ConnectionMultiplexer.Connect(\"localhost:6379\"); var db = redis.GetDatabase(); var ft = db.FT(); var json = db.JSON(); var bike1 = new { Brand = \"Velorim\", Model = \"Jigger\", Price = 270M, Description = \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\", Condition = \"used\" }; var bicycles = new object[] { bike1, new { Brand = \"Bicyk\", Model = \"Hillcraft\", Price = 1200M, Description = \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike.
The Hillcraft 26 is just the solution\" + \" they need!\", Condition = \"used\", }, new { Brand = \"Nord\", Model = \"Chook air 5\", Price = 815M, Description = \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\", Condition = \"used\", }, new { Brand = \"Eva\", Model = \"Eva 291\", Price = 3400M, Description = \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle" }, { "data": "Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. Its a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\", Condition = \"used\", }, new { Brand = \"Noka Bikes\", Model = \"Kahuna\", Price = 3200M, Description = \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\", Condition = \"used\", }, new { Brand = \"Breakout\", Model = \"XBN 2.1 Alloy\", Price = 810M, Description = \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\", Condition = \"new\", }, new { Brand = \"ScramBikes\", Model = \"WattBike\", Price = 2300M, Description = \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\", Condition = \"new\", }, new { Brand = \"Peaknetic\", Model = \"Secto\", Price = 430M, Description = \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. 
Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\", Condition = \"new\", }, new { Brand = \"nHill\", Model = \"Summit\", Price = 1200M, Description = \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet" }, { "data": "The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\", Condition = \"new\", }, new { Model = \"ThrillCycle\", Brand = \"BikeShind\", Price = 815M, Description = \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\", Condition = \"refurbished\", }, }; var schema = new Schema() .AddTextField(new FieldName(\"$.Brand\", \"Brand\")) .AddTextField(new FieldName(\"$.Model\", \"Model\")) .AddTextField(new FieldName(\"$.Description\", \"Description\")) .AddNumericField(new FieldName(\"$.Price\", \"Price\")) .AddTagField(new FieldName(\"$.Condition\", \"Condition\")); ft.Create( \"idx:bicycle\", new FTCreateParams().On(IndexDataType.JSON).Prefix(\"bicycle:\"), schema); for (int i = 0; i < bicycles.Length; i++) { json.Set($\"bicycle:{i}\", \"$\", bicycles[i]); } var query1 = new Query(\"*\"); var res1 = ft.Search(\"idx:bicycle\", query1).Documents; Console.WriteLine(string.Join(\"\\n\", res1.Count())); // Prints: Documents found: 10 var query2 = new Query(\"@Model:Jigger\"); var res2 = ft.Search(\"idx:bicycle\", query2).Documents; Console.WriteLine(string.Join(\"\\n\", res2.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query3 = new Query(\"basic @Price:[500 1000]\"); var res3 = ft.Search(\"idx:bicycle\", query3).Documents; Console.WriteLine(string.Join(\"\\n\", res3.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query4 = new Query(\"@Brand:\\\"Noka Bikes\\\"\"); var res4 = ft.Search(\"idx:bicycle\", query4).Documents; Console.WriteLine(string.Join(\"\\n\", res4.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. 
This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query5 = new Query(\"@Model:Jigger\").ReturnFields(\"Price\"); var res5 = ft.Search(\"idx:bicycle\", query5).Documents; Console.WriteLine(res5.First()[\"Price\"]); // Prints: 270 var request = new AggregationRequest(\"*\").GroupBy( \"@Condition\", Reducers.Count().As(\"Count\")); var result = ft.Aggregate(\"idx:bicycle\", request); for (var i = 0; i < result.TotalResults; i++) { var row = result.GetRow(i); Console.WriteLine($\"{row[\"Condition\"]} - {row[\"Count\"]}\"); } // Prints: // refurbished - 1 // used - 5 // new - 4 } }``` You can retrieve all indexed documents using the FT.SEARCH command. Note the LIMIT clause below, which allows result pagination. ``` FT.SEARCH \"idx:bicycle\" \"*\" LIMIT 0 10 1) (integer) 10 2) \"bicycle:1\" 3) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Bicyk\\\",\\\"model\\\":\\\"Hillcraft\\\",\\\"price\\\":1200,\\\"description\\\":\\\"Kids want to ride with as little weight as possible. Especially on an incline! They may be at the age when a 27.5\\\\\\\" wheel bike is just too clumsy coming off a 24\\\\\\\" bike. The Hillcraft 26 is just the solution they need!\\\",\\\"condition\\\":\\\"used\\\"}\" 4) \"bicycle:2\" 5) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Nord\\\",\\\"model\\\":\\\"Chook air 5\\\",\\\"price\\\":815,\\\"description\\\":\\\"The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the" }, { "data": "6) \"bicycle:4\" 7) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Noka Bikes\\\",\\\"model\\\":\\\"Kahuna\\\",\\\"price\\\":3200,\\\"description\\\":\\\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a women\\xe2\\x80\\x99s saddle, different bars and unique colourway.\\\",\\\"condition\\\":\\\"used\\\"}\" 8) \"bicycle:5\" 9) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Breakout\\\",\\\"model\\\":\\\"XBN 2.1 Alloy\\\",\\\"price\\\":810,\\\"description\\\":\\\"The XBN 2.1 Alloy is our entry-level road bike \\xe2\\x80\\x93 but that\\xe2\\x80\\x99s not to say that it\\xe2\\x80\\x99s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano\\xe2\\x80\\x99s, this is a bike which doesn\\xe2\\x80\\x99t break the bank and delivers craved performance.\\\",\\\"condition\\\":\\\"new\\\"}\" 10) \"bicycle:0\" 11) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Velorim\\\",\\\"model\\\":\\\"Jigger\\\",\\\"price\\\":270,\\\"description\\\":\\\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\\xe2\\x80\\x99 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\\\",\\\"condition\\\":\\\"new\\\"}\" 12) \"bicycle:6\" 13) 1) \"$\" 2) \"{\\\"brand\\\":\\\"ScramBikes\\\",\\\"model\\\":\\\"WattBike\\\",\\\"price\\\":2300,\\\"description\\\":\\\"The WattBike is the best e-bike for people who still feel young at heart. It has a Bafang 1000W mid-drive system and a 48V 17.5AH Samsung Lithium-Ion battery, allowing you to ride for more than 60 miles on one charge. 
It\\xe2\\x80\\x99s great for tackling hilly terrain or if you just fancy a more leisurely ride. With three working modes, you can choose between E-bike, assisted bicycle, and normal bike modes.\\\",\\\"condition\\\":\\\"new\\\"}\" 14) \"bicycle:7\" 15) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Peaknetic\\\",\\\"model\\\":\\\"Secto\\\",\\\"price\\\":430,\\\"description\\\":\\\"If you struggle with stiff fingers or a kinked neck or back after a few minutes on the road, this lightweight, aluminum bike alleviates those issues and allows you to enjoy the ride. From the ergonomic grips to the lumbar-supporting seat position, the Roll Low-Entry offers incredible comfort. The rear-inclined seat tube facilitates stability by allowing you to put a foot on the ground to balance at a stop, and the low step-over frame makes it accessible for all ability and mobility levels. The saddle is very soft, with a wide back to support your hip joints and a cutout in the center to redistribute that pressure. Rim brakes deliver satisfactory braking control, and the wide tires provide a smooth, stable ride on paved roads and gravel. Rack and fender mounts facilitate setting up the Roll Low-Entry as your preferred commuter, and the BMX-like handlebar offers space for mounting a flashlight, bell, or phone holder.\\\",\\\"condition\\\":\\\"new\\\"}\" 16) \"bicycle:9\" 17) 1) \"$\" 2) \"{\\\"model\\\":\\\"ThrillCycle\\\",\\\"brand\\\":\\\"BikeShind\\\",\\\"price\\\":815,\\\"description\\\":\\\"An artsy, retro-inspired bicycle that\\xe2\\x80\\x99s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn\\xe2\\x80\\x99t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation.\\\",\\\"condition\\\":\\\"refurbished\\\"}\" 18) \"bicycle:3\" 19) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Eva\\\",\\\"model\\\":\\\"Eva 291\\\",\\\"price\\\":3400,\\\"description\\\":\\\"The sister company to Nord, Eva launched in 2005 as the first and only women-dedicated bicycle brand. Designed by women for women, allEva bikes are optimized for the feminine physique using analytics from a body metrics database. If you like 29ers, try the Eva 291. It\\xe2\\x80\\x99s a brand new bike for 2022.. This full-suspension, cross-country ride has been designed for velocity. The 291 has 100mm of front and rear travel, a superlight aluminum frame and fast-rolling 29-inch wheels. Yippee!\\\",\\\"condition\\\":\\\"used\\\"}\" 20) \"bicycle:8\" 21) 1) \"$\" 2) \"{\\\"brand\\\":\\\"nHill\\\",\\\"model\\\":\\\"Summit\\\",\\\"price\\\":1200,\\\"description\\\":\\\"This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough" }, { "data": "Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offered enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes break smoothly. 
Whether you want an affordable bike that you can take to work, but also take trail in mountains on the weekends or you\\xe2\\x80\\x99re just after a stable, comfortable ride for the bike path, the Summit gives a good value for money.\\\",\\\"condition\\\":\\\"new\\\"}\"``` ``` import redis import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.json.path import Path from redis.commands.search.field import NumericField, TagField, TextField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import Query r = redis.Redis(host=\"localhost\", port=6379, db=0, decode_responses=True) bicycle = { \"brand\": \"Velorim\", \"model\": \"Jigger\", \"price\": 270, \"description\": ( \"Small and powerful, the Jigger is the best ride \" \"for the smallest of tikes! This is the tiniest \" \"kids pedal bike on the market available without\" \" a coaster brake, the Jigger is the vehicle of \" \"choice for the rare tenacious little rider \" \"raring to go.\" ), \"condition\": \"new\", } bicycles = [ bicycle, { \"brand\": \"Bicyk\", \"model\": \"Hillcraft\", \"price\": 1200, \"description\": ( \"Kids want to ride with as little weight as possible.\" \" Especially on an incline! They may be at the age \" 'when a 27.5\" wheel bike is just too clumsy coming ' 'off a 24\" bike. The Hillcraft 26 is just the solution' \" they need!\" ), \"condition\": \"used\", }, { \"brand\": \"Nord\", \"model\": \"Chook air 5\", \"price\": 815, \"description\": ( \"The Chook Air 5 gives kids aged six years and older \" \"a durable and uberlight mountain bike for their first\" \" experience on tracks and easy cruising through forests\" \" and fields. The lower top tube makes it easy to mount\" \" and dismount in any situation, giving your kids greater\" \" safety on the trails.\" ), \"condition\": \"used\", }, { \"brand\": \"Eva\", \"model\": \"Eva 291\", \"price\": 3400, \"description\": ( \"The sister company to Nord, Eva launched in 2005 as the\" \" first and only women-dedicated bicycle brand. Designed\" \" by women for women, allEva bikes are optimized for the\" \" feminine physique using analytics from a body metrics\" \" database. If you like 29ers, try the Eva 291. Its a \" \"brand new bike for 2022.. This full-suspension, \" \"cross-country ride has been designed for velocity. The\" \" 291 has 100mm of front and rear travel, a superlight \" \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), \"condition\": \"used\", }, { \"brand\": \"Noka Bikes\", \"model\": \"Kahuna\", \"price\": 3200, \"description\": ( \"Whether you want to try your hand at XC racing or are \" \"looking for a lively trail bike that's just as inspiring\" \" on the climbs as it is over rougher ground, the Wilder\" \" is one heck of a bike built specifically for short women.\" \" Both the frames and components have been tweaked to \" \"include a womens saddle, different bars and unique \" \"colourway.\" ), \"condition\": \"used\", }, { \"brand\": \"Breakout\", \"model\": \"XBN 2.1 Alloy\", \"price\": 810, \"description\": ( \"The XBN 2.1 Alloy is our entry-level road bike but thats\" \" not to say that its a basic machine. 
With an internal \" \"weld aluminium frame, a full carbon fork, and the slick-shifting\" \" Claris gears from Shimanos, this is a bike which doesnt\" \" break the bank and delivers craved performance.\" ), \"condition\": \"new\", }, { \"brand\": \"ScramBikes\", \"model\": \"WattBike\", \"price\": 2300, \"description\": ( \"The WattBike is the best e-bike for people who still feel young\" \" at" }, { "data": "It has a Bafang 1000W mid-drive system and a 48V\" \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" \" more than 60 miles on one charge. Its great for tackling hilly\" \" terrain or if you just fancy a more leisurely ride. With three\" \" working modes, you can choose between E-bike, assisted bicycle,\" \" and normal bike modes.\" ), \"condition\": \"new\", }, { \"brand\": \"Peaknetic\", \"model\": \"Secto\", \"price\": 430, \"description\": ( \"If you struggle with stiff fingers or a kinked neck or back after\" \" a few minutes on the road, this lightweight, aluminum bike\" \" alleviates those issues and allows you to enjoy the ride. From\" \" the ergonomic grips to the lumbar-supporting seat position, the\" \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" \" tube facilitates stability by allowing you to put a foot on the\" \" ground to balance at a stop, and the low step-over frame makes it\" \" accessible for all ability and mobility levels. The saddle is\" \" very soft, with a wide back to support your hip joints and a\" \" cutout in the center to redistribute that pressure. Rim brakes\" \" deliver satisfactory braking control, and the wide tires provide\" \" a smooth, stable ride on paved roads and gravel. Rack and fender\" \" mounts facilitate setting up the Roll Low-Entry as your preferred\" \" commuter, and the BMX-like handlebar offers space for mounting a\" \" flashlight, bell, or phone holder.\" ), \"condition\": \"new\", }, { \"brand\": \"nHill\", \"model\": \"Summit\", \"price\": 1200, \"description\": ( \"This budget mountain bike from nHill performs well both on bike\" \" paths and on the trail. The fork with 100mm of travel absorbs\" \" rough terrain. Fat Kenda Booster tires give you grip in corners\" \" and on wet trails. The Shimano Tourney drivetrain offered enough\" \" gears for finding a comfortable pace to ride uphill, and the\" \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" \" affordable bike that you can take to work, but also take trail in\" \" mountains on the weekends or youre just after a stable,\" \" comfortable ride for the bike path, the Summit gives a good value\" \" for money.\" ), \"condition\": \"new\", }, { \"model\": \"ThrillCycle\", \"brand\": \"BikeShind\", \"price\": 815, \"description\": ( \"An artsy, retro-inspired bicycle thats as functional as it is\" \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" \" 9-speed drivetrain has enough gears for coasting in the city, but\" \" we wouldnt suggest taking it to the mountains. Fenders protect\" \" you from mud, and a rear basket lets you transport groceries,\" \" flowers and books. 
The ThrillCycle comes with a limited lifetime\" \" warranty, so this little guy will last you long past graduation.\" ), \"condition\": \"refurbished\", }, ] schema = ( TextField(\"$.brand\", as_name=\"brand\"), TextField(\"$.model\", as_name=\"model\"), TextField(\"$.description\", as_name=\"description\"), NumericField(\"$.price\", as_name=\"price\"), TagField(\"$.condition\", as_name=\"condition\"), ) index = r.ft(\"idx:bicycle\") index.create_index( schema, definition=IndexDefinition(prefix=[\"bicycle:\"], index_type=IndexType.JSON), ) for bid, bicycle in enumerate(bicycles): r.json().set(f\"bicycle:{bid}\", Path.root_path(), bicycle) res = index.search(Query(\"*\")) print(\"Documents found:\", res.total) res = index.search(Query(\"@model:Jigger\")) print(res) res = index.search( Query(\"@model:Jigger\").returnfield(\"$.price\", asfield=\"price\") ) print(res) res = index.search(Query(\"basic @price:[500 1000]\")) print(res) res = index.search(Query('@brand:\"Noka Bikes\"')) print(res) res = index.search( Query( \"@description:%analitics%\" ).dialect( # Note the typo in the word \"analytics\" 2 ) ) print(res) res = index.search( Query( \"@description:%%analitycs%%\" ).dialect( # Note 2 typos in the word \"analytics\" 2 ) ) print(res) res = index.search(Query(\"@model:hill*\")) print(res) res = index.search(Query(\"@model:*bike\")) print(res) res = index.search(Query(\"w'H?*craft'\").dialect(2)) print(res.docs[0].json) res = index.search(Query(\"mountain\").with_scores()) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") res = index.search(Query(\"mountain\").with_scores().scorer(\"BM25\")) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") req = aggregations.AggregateRequest(\"*\").group_by( \"@condition\"," }, { "data": ") res = index.aggregate(req).rows print(res) ``` ``` package io.redis.examples; import java.math.BigDecimal; import java.util.*; import redis.clients.jedis.*; import redis.clients.jedis.exceptions.*; import redis.clients.jedis.search.*; import redis.clients.jedis.search.aggr.*; import redis.clients.jedis.search.schemafields.*; class Bicycle { public String brand; public String model; public BigDecimal price; public String description; public String condition; public Bicycle(String brand, String model, BigDecimal price, String condition, String description) { this.brand = brand; this.model = model; this.price = price; this.condition = condition; this.description = description; } } public class SearchQuickstartExample { public void run() { // UnifiedJedis jedis = new UnifiedJedis(\"redis://localhost:6379\"); JedisPooled jedis = new JedisPooled(\"localhost\", 6379); SchemaField[] schema = { TextField.of(\"$.brand\").as(\"brand\"), TextField.of(\"$.model\").as(\"model\"), TextField.of(\"$.description\").as(\"description\"), NumericField.of(\"$.price\").as(\"price\"), TagField.of(\"$.condition\").as(\"condition\") }; jedis.ftCreate(\"idx:bicycle\", FTCreateParams.createParams() .on(IndexDataType.JSON) .addPrefix(\"bicycle:\"), schema ); Bicycle[] bicycles = { new Bicycle( \"Velorim\", \"Jigger\", new BigDecimal(270), \"new\", \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! 
This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\" ), new Bicycle( \"Bicyk\", \"Hillcraft\", new BigDecimal(1200), \"used\", \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\" ), new Bicycle( \"Nord\", \"Chook air 5\", new BigDecimal(815), \"used\", \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\" ), new Bicycle( \"Eva\", \"Eva 291\", new BigDecimal(3400), \"used\", \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. It's a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), new Bicycle( \"Noka Bikes\", \"Kahuna\", new BigDecimal(3200), \"used\", \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\" ), new Bicycle( \"Breakout\", \"XBN 2.1 Alloy\", new BigDecimal(810), \"new\", \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\" ), new Bicycle( \"ScramBikes\", \"WattBike\", new BigDecimal(2300), \"new\", \"The WattBike is the best e-bike for people who still feel young\" + \" at" }, { "data": "It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\" ), new Bicycle( \"Peaknetic\", \"Secto\", new BigDecimal(430), \"new\", \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. 
The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\" ), new Bicycle( \"nHill\", \"Summit\", new BigDecimal(1200), \"new\", \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\" ), new Bicycle( \"ThrillCycle\", \"BikeShind\", new BigDecimal(815), \"refurbished\", \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\" ), }; for (int i = 0; i < bicycles.length; i++) { jedis.jsonSetWithEscape(String.format(\"bicycle:%d\", i), bicycles[i]); } Query query1 = new Query(\"*\"); List<Document> result1 = jedis.ftSearch(\"idx:bicycle\", query1).getDocuments(); System.out.println(\"Documents found:\" + result1.size()); // Prints: Documents found: 10 Query query2 = new Query(\"@model:Jigger\"); List<Document> result2 = jedis.ftSearch(\"idx:bicycle\", query2).getDocuments(); System.out.println(result2); // Prints: [id:bicycle:0, score: 1.0, payload:null, // properties:[$={\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}]] Query query3 = new Query(\"@model:Jigger\").returnFields(\"price\"); List<Document> result3 = jedis.ftSearch(\"idx:bicycle\", query3).getDocuments(); System.out.println(result3); // Prints: [id:bicycle:0, score:" }, { "data": "payload:null, properties:[price=270]] Query query4 = new Query(\"basic @price:[500 1000]\"); List<Document> result4 = jedis.ftSearch(\"idx:bicycle\", query4).getDocuments(); System.out.println(result4); // Prints: [id:bicycle:5, score: 1.0, payload:null, // properties:[$={\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike but thats not to say that its a basic machine. 
With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimanos, this is a bike which doesnt break the bank and delivers craved performance.\",\"condition\":\"new\"}]] Query query5 = new Query(\"@brand:\\\"Noka Bikes\\\"\"); List<Document> result5 = jedis.ftSearch(\"idx:bicycle\", query5).getDocuments(); System.out.println(result5); // Prints: [id:bicycle:4, score: 1.0, payload:null, // properties:[$={\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a womens saddle, different bars and unique colourway.\",\"condition\":\"used\"}]] AggregationBuilder ab = new AggregationBuilder(\"*\").groupBy(\"@condition\", Reducers.count().as(\"count\")); AggregationResult ar = jedis.ftAggregate(\"idx:bicycle\", ab); for (int i = 0; i < ar.getTotalResults(); i++) { System.out.println(ar.getRow(i).getString(\"condition\") + \" - \" ar.getRow(i).getString(\"count\")); } // Prints: // refurbished - 1 // used - 5 // new - 4 assertEquals(\"Validate aggregation results\", 3, ar.getTotalResults()); jedis.close(); } }``` ``` using NRedisStack.RedisStackCommands; using NRedisStack.Search; using NRedisStack.Search.Aggregation; using NRedisStack.Search.Literals.Enums; using NRedisStack.Tests; using StackExchange.Redis; public class SearchQuickstartExample { [SkipIfRedis(Is.OSSCluster)] public void run() { var redis = ConnectionMultiplexer.Connect(\"localhost:6379\"); var db = redis.GetDatabase(); var ft = db.FT(); var json = db.JSON(); var bike1 = new { Brand = \"Velorim\", Model = \"Jigger\", Price = 270M, Description = \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\", Condition = \"used\" }; var bicycles = new object[] { bike1, new { Brand = \"Bicyk\", Model = \"Hillcraft\", Price = 1200M, Description = \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\", Condition = \"used\", }, new { Brand = \"Nord\", Model = \"Chook air 5\", Price = 815M, Description = \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\", Condition = \"used\", }, new { Brand = \"Eva\", Model = \"Eva 291\", Price = 3400M, Description = \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. Its a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. 
The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch" }, { "data": "Yippee!\", Condition = \"used\", }, new { Brand = \"Noka Bikes\", Model = \"Kahuna\", Price = 3200M, Description = \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\", Condition = \"used\", }, new { Brand = \"Breakout\", Model = \"XBN 2.1 Alloy\", Price = 810M, Description = \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\", Condition = \"new\", }, new { Brand = \"ScramBikes\", Model = \"WattBike\", Price = 2300M, Description = \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\", Condition = \"new\", }, new { Brand = \"Peaknetic\", Model = \"Secto\", Price = 430M, Description = \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\", Condition = \"new\", }, new { Brand = \"nHill\", Model = \"Summit\", Price = 1200M, Description = \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. 
The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break" }, { "data": "Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\", Condition = \"new\", }, new { Model = \"ThrillCycle\", Brand = \"BikeShind\", Price = 815M, Description = \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\", Condition = \"refurbished\", }, }; var schema = new Schema() .AddTextField(new FieldName(\"$.Brand\", \"Brand\")) .AddTextField(new FieldName(\"$.Model\", \"Model\")) .AddTextField(new FieldName(\"$.Description\", \"Description\")) .AddNumericField(new FieldName(\"$.Price\", \"Price\")) .AddTagField(new FieldName(\"$.Condition\", \"Condition\")); ft.Create( \"idx:bicycle\", new FTCreateParams().On(IndexDataType.JSON).Prefix(\"bicycle:\"), schema); for (int i = 0; i < bicycles.Length; i++) { json.Set($\"bicycle:{i}\", \"$\", bicycles[i]); } var query1 = new Query(\"*\"); var res1 = ft.Search(\"idx:bicycle\", query1).Documents; Console.WriteLine(string.Join(\"\\n\", res1.Count())); // Prints: Documents found: 10 var query2 = new Query(\"@Model:Jigger\"); var res2 = ft.Search(\"idx:bicycle\", query2).Documents; Console.WriteLine(string.Join(\"\\n\", res2.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query3 = new Query(\"basic @Price:[500 1000]\"); var res3 = ft.Search(\"idx:bicycle\", query3).Documents; Console.WriteLine(string.Join(\"\\n\", res3.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query4 = new Query(\"@Brand:\\\"Noka Bikes\\\"\"); var res4 = ft.Search(\"idx:bicycle\", query4).Documents; Console.WriteLine(string.Join(\"\\n\", res4.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. 
This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query5 = new Query(\"@Model:Jigger\").ReturnFields(\"Price\"); var res5 = ft.Search(\"idx:bicycle\", query5).Documents; Console.WriteLine(res5.First()[\"Price\"]); // Prints: 270 var request = new AggregationRequest(\"*\").GroupBy( \"@Condition\", Reducers.Count().As(\"Count\")); var result = ft.Aggregate(\"idx:bicycle\", request); for (var i = 0; i < result.TotalResults; i++) { var row = result.GetRow(i); Console.WriteLine($\"{row[\"Condition\"]} - {row[\"Count\"]}\"); } // Prints: // refurbished - 1 // used - 5 // new - 4 } }``` The following command shows a simple single-term query for finding all bicycles with a specific model: ``` FT.SEARCH \"idx:bicycle\" \"@model:Jigger\" LIMIT 0 10 1) (integer) 1 2) \"bicycle:0\" 3) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Velorim\\\",\\\"model\\\":\\\"Jigger\\\",\\\"price\\\":270,\\\"description\\\":\\\"Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids\\xe2\\x80\\x99 pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\\\",\\\"condition\\\":\\\"new\\\"}\"``` ``` import redis import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.json.path import Path from redis.commands.search.field import NumericField, TagField, TextField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import Query r = redis.Redis(host=\"localhost\", port=6379, db=0, decode_responses=True) bicycle = { \"brand\": \"Velorim\", \"model\": \"Jigger\", \"price\": 270, \"description\": ( \"Small and powerful, the Jigger is the best ride \" \"for the smallest of tikes! This is the tiniest \" \"kids pedal bike on the market available without\" \" a coaster brake, the Jigger is the vehicle of \" \"choice for the rare tenacious little rider \" \"raring to go.\" ), \"condition\": \"new\", } bicycles = [ bicycle, { \"brand\": \"Bicyk\", \"model\": \"Hillcraft\", \"price\": 1200, \"description\": ( \"Kids want to ride with as little weight as possible.\" \" Especially on an incline! They may be at the age \" 'when a" }, { "data": "wheel bike is just too clumsy coming ' 'off a 24\" bike. The Hillcraft 26 is just the solution' \" they need!\" ), \"condition\": \"used\", }, { \"brand\": \"Nord\", \"model\": \"Chook air 5\", \"price\": 815, \"description\": ( \"The Chook Air 5 gives kids aged six years and older \" \"a durable and uberlight mountain bike for their first\" \" experience on tracks and easy cruising through forests\" \" and fields. The lower top tube makes it easy to mount\" \" and dismount in any situation, giving your kids greater\" \" safety on the trails.\" ), \"condition\": \"used\", }, { \"brand\": \"Eva\", \"model\": \"Eva 291\", \"price\": 3400, \"description\": ( \"The sister company to Nord, Eva launched in 2005 as the\" \" first and only women-dedicated bicycle brand. Designed\" \" by women for women, allEva bikes are optimized for the\" \" feminine physique using analytics from a body metrics\" \" database. If you like 29ers, try the Eva 291. Its a \" \"brand new bike for 2022.. This full-suspension, \" \"cross-country ride has been designed for velocity. The\" \" 291 has 100mm of front and rear travel, a superlight \" \"aluminum frame and fast-rolling 29-inch wheels. 
Yippee!\" ), \"condition\": \"used\", }, { \"brand\": \"Noka Bikes\", \"model\": \"Kahuna\", \"price\": 3200, \"description\": ( \"Whether you want to try your hand at XC racing or are \" \"looking for a lively trail bike that's just as inspiring\" \" on the climbs as it is over rougher ground, the Wilder\" \" is one heck of a bike built specifically for short women.\" \" Both the frames and components have been tweaked to \" \"include a womens saddle, different bars and unique \" \"colourway.\" ), \"condition\": \"used\", }, { \"brand\": \"Breakout\", \"model\": \"XBN 2.1 Alloy\", \"price\": 810, \"description\": ( \"The XBN 2.1 Alloy is our entry-level road bike but thats\" \" not to say that its a basic machine. With an internal \" \"weld aluminium frame, a full carbon fork, and the slick-shifting\" \" Claris gears from Shimanos, this is a bike which doesnt\" \" break the bank and delivers craved performance.\" ), \"condition\": \"new\", }, { \"brand\": \"ScramBikes\", \"model\": \"WattBike\", \"price\": 2300, \"description\": ( \"The WattBike is the best e-bike for people who still feel young\" \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" \" more than 60 miles on one charge. Its great for tackling hilly\" \" terrain or if you just fancy a more leisurely ride. With three\" \" working modes, you can choose between E-bike, assisted bicycle,\" \" and normal bike modes.\" ), \"condition\": \"new\", }, { \"brand\": \"Peaknetic\", \"model\": \"Secto\", \"price\": 430, \"description\": ( \"If you struggle with stiff fingers or a kinked neck or back after\" \" a few minutes on the road, this lightweight, aluminum bike\" \" alleviates those issues and allows you to enjoy the ride. From\" \" the ergonomic grips to the lumbar-supporting seat position, the\" \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" \" tube facilitates stability by allowing you to put a foot on the\" \" ground to balance at a stop, and the low step-over frame makes it\" \" accessible for all ability and mobility levels. The saddle is\" \" very soft, with a wide back to support your hip joints and a\" \" cutout in the center to redistribute that pressure. Rim brakes\" \" deliver satisfactory braking control, and the wide tires provide\" \" a smooth, stable ride on paved roads and" }, { "data": "Rack and fender\" \" mounts facilitate setting up the Roll Low-Entry as your preferred\" \" commuter, and the BMX-like handlebar offers space for mounting a\" \" flashlight, bell, or phone holder.\" ), \"condition\": \"new\", }, { \"brand\": \"nHill\", \"model\": \"Summit\", \"price\": 1200, \"description\": ( \"This budget mountain bike from nHill performs well both on bike\" \" paths and on the trail. The fork with 100mm of travel absorbs\" \" rough terrain. Fat Kenda Booster tires give you grip in corners\" \" and on wet trails. The Shimano Tourney drivetrain offered enough\" \" gears for finding a comfortable pace to ride uphill, and the\" \" Tektro hydraulic disc brakes break smoothly. 
Whether you want an\" \" affordable bike that you can take to work, but also take trail in\" \" mountains on the weekends or youre just after a stable,\" \" comfortable ride for the bike path, the Summit gives a good value\" \" for money.\" ), \"condition\": \"new\", }, { \"model\": \"ThrillCycle\", \"brand\": \"BikeShind\", \"price\": 815, \"description\": ( \"An artsy, retro-inspired bicycle thats as functional as it is\" \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" \" 9-speed drivetrain has enough gears for coasting in the city, but\" \" we wouldnt suggest taking it to the mountains. Fenders protect\" \" you from mud, and a rear basket lets you transport groceries,\" \" flowers and books. The ThrillCycle comes with a limited lifetime\" \" warranty, so this little guy will last you long past graduation.\" ), \"condition\": \"refurbished\", }, ] schema = ( TextField(\"$.brand\", as_name=\"brand\"), TextField(\"$.model\", as_name=\"model\"), TextField(\"$.description\", as_name=\"description\"), NumericField(\"$.price\", as_name=\"price\"), TagField(\"$.condition\", as_name=\"condition\"), ) index = r.ft(\"idx:bicycle\") index.create_index( schema, definition=IndexDefinition(prefix=[\"bicycle:\"], index_type=IndexType.JSON), ) for bid, bicycle in enumerate(bicycles): r.json().set(f\"bicycle:{bid}\", Path.root_path(), bicycle) res = index.search(Query(\"*\")) print(\"Documents found:\", res.total) res = index.search(Query(\"@model:Jigger\")) print(res) res = index.search( Query(\"@model:Jigger\").returnfield(\"$.price\", asfield=\"price\") ) print(res) res = index.search(Query(\"basic @price:[500 1000]\")) print(res) res = index.search(Query('@brand:\"Noka Bikes\"')) print(res) res = index.search( Query( \"@description:%analitics%\" ).dialect( # Note the typo in the word \"analytics\" 2 ) ) print(res) res = index.search( Query( \"@description:%%analitycs%%\" ).dialect( # Note 2 typos in the word \"analytics\" 2 ) ) print(res) res = index.search(Query(\"@model:hill*\")) print(res) res = index.search(Query(\"@model:*bike\")) print(res) res = index.search(Query(\"w'H?*craft'\").dialect(2)) print(res.docs[0].json) res = index.search(Query(\"mountain\").with_scores()) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") res = index.search(Query(\"mountain\").with_scores().scorer(\"BM25\")) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") req = aggregations.AggregateRequest(\"*\").group_by( \"@condition\", reducers.count().alias(\"count\") ) res = index.aggregate(req).rows print(res) ``` ``` package io.redis.examples; import java.math.BigDecimal; import java.util.*; import redis.clients.jedis.*; import redis.clients.jedis.exceptions.*; import redis.clients.jedis.search.*; import redis.clients.jedis.search.aggr.*; import redis.clients.jedis.search.schemafields.*; class Bicycle { public String brand; public String model; public BigDecimal price; public String description; public String condition; public Bicycle(String brand, String model, BigDecimal price, String condition, String description) { this.brand = brand; this.model = model; this.price = price; this.condition = condition; this.description = description; } } public class SearchQuickstartExample { public void run() { // UnifiedJedis jedis = new UnifiedJedis(\"redis://localhost:6379\"); JedisPooled jedis = new JedisPooled(\"localhost\", 6379); SchemaField[] schema = { TextField.of(\"$.brand\").as(\"brand\"), TextField.of(\"$.model\").as(\"model\"), TextField.of(\"$.description\").as(\"description\"), 
NumericField.of(\"$.price\").as(\"price\"), TagField.of(\"$.condition\").as(\"condition\") }; jedis.ftCreate(\"idx:bicycle\", FTCreateParams.createParams() .on(IndexDataType.JSON) .addPrefix(\"bicycle:\"), schema ); Bicycle[] bicycles = { new Bicycle( \"Velorim\", \"Jigger\", new BigDecimal(270), \"new\", \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\" ), new Bicycle( \"Bicyk\", \"Hillcraft\", new BigDecimal(1200), \"used\", \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a" }, { "data": "inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\" ), new Bicycle( \"Nord\", \"Chook air 5\", new BigDecimal(815), \"used\", \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\" ), new Bicycle( \"Eva\", \"Eva 291\", new BigDecimal(3400), \"used\", \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. It's a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), new Bicycle( \"Noka Bikes\", \"Kahuna\", new BigDecimal(3200), \"used\", \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\" ), new Bicycle( \"Breakout\", \"XBN 2.1 Alloy\", new BigDecimal(810), \"new\", \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\" ), new Bicycle( \"ScramBikes\", \"WattBike\", new BigDecimal(2300), \"new\", \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. 
With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\" ), new Bicycle( \"Peaknetic\", \"Secto\", new BigDecimal(430), \"new\", \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and" }, { "data": "Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\" ), new Bicycle( \"nHill\", \"Summit\", new BigDecimal(1200), \"new\", \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\" ), new Bicycle( \"ThrillCycle\", \"BikeShind\", new BigDecimal(815), \"refurbished\", \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\" ), }; for (int i = 0; i < bicycles.length; i++) { jedis.jsonSetWithEscape(String.format(\"bicycle:%d\", i), bicycles[i]); } Query query1 = new Query(\"*\"); List<Document> result1 = jedis.ftSearch(\"idx:bicycle\", query1).getDocuments(); System.out.println(\"Documents found:\" + result1.size()); // Prints: Documents found: 10 Query query2 = new Query(\"@model:Jigger\"); List<Document> result2 = jedis.ftSearch(\"idx:bicycle\", query2).getDocuments(); System.out.println(result2); // Prints: [id:bicycle:0, score: 1.0, payload:null, // properties:[$={\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! 
This is the tiniest kids pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}]] Query query3 = new Query(\"@model:Jigger\").returnFields(\"price\"); List<Document> result3 = jedis.ftSearch(\"idx:bicycle\", query3).getDocuments(); System.out.println(result3); // Prints: [id:bicycle:0, score: 1.0, payload:null, properties:[price=270]] Query query4 = new Query(\"basic @price:[500 1000]\"); List<Document> result4 = jedis.ftSearch(\"idx:bicycle\", query4).getDocuments(); System.out.println(result4); // Prints: [id:bicycle:5, score: 1.0, payload:null, // properties:[$={\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike but thats not to say that its a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimanos, this is a bike which doesnt break the bank and delivers craved performance.\",\"condition\":\"new\"}]] Query query5 = new Query(\"@brand:\\\"Noka Bikes\\\"\"); List<Document> result5 = jedis.ftSearch(\"idx:bicycle\", query5).getDocuments(); System.out.println(result5); // Prints: [id:bicycle:4, score: 1.0, payload:null, // properties:[$={\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a womens saddle, different bars and unique colourway.\",\"condition\":\"used\"}]] AggregationBuilder ab = new AggregationBuilder(\"*\").groupBy(\"@condition\", Reducers.count().as(\"count\")); AggregationResult ar = jedis.ftAggregate(\"idx:bicycle\", ab); for (int i = 0; i < ar.getTotalResults(); i++) { System.out.println(ar.getRow(i).getString(\"condition\") + \" - \" ar.getRow(i).getString(\"count\")); } // Prints: // refurbished - 1 // used - 5 // new - 4 assertEquals(\"Validate aggregation results\", 3, ar.getTotalResults()); jedis.close(); } }``` ``` using NRedisStack.RedisStackCommands; using NRedisStack.Search; using NRedisStack.Search.Aggregation; using NRedisStack.Search.Literals.Enums; using NRedisStack.Tests; using StackExchange.Redis; public class SearchQuickstartExample {" }, { "data": "public void run() { var redis = ConnectionMultiplexer.Connect(\"localhost:6379\"); var db = redis.GetDatabase(); var ft = db.FT(); var json = db.JSON(); var bike1 = new { Brand = \"Velorim\", Model = \"Jigger\", Price = 270M, Description = \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\", Condition = \"used\" }; var bicycles = new object[] { bike1, new { Brand = \"Bicyk\", Model = \"Hillcraft\", Price = 1200M, Description = \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. 
The Hillcraft 26 is just the solution\" + \" they need!\", Condition = \"used\", }, new { Brand = \"Nord\", Model = \"Chook air 5\", Price = 815M, Description = \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\", Condition = \"used\", }, new { Brand = \"Eva\", Model = \"Eva 291\", Price = 3400M, Description = \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. Its a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\", Condition = \"used\", }, new { Brand = \"Noka Bikes\", Model = \"Kahuna\", Price = 3200M, Description = \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\", Condition = \"used\", }, new { Brand = \"Breakout\", Model = \"XBN 2.1 Alloy\", Price = 810M, Description = \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\", Condition = \"new\", }, new { Brand = \"ScramBikes\", Model = \"WattBike\", Price = 2300M, Description = \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \"" }, { "data": "Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\", Condition = \"new\", }, new { Brand = \"Peaknetic\", Model = \"Secto\", Price = 430M, Description = \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. 
Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\", Condition = \"new\", }, new { Brand = \"nHill\", Model = \"Summit\", Price = 1200M, Description = \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\", Condition = \"new\", }, new { Model = \"ThrillCycle\", Brand = \"BikeShind\", Price = 815M, Description = \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\", Condition = \"refurbished\", }, }; var schema = new Schema() .AddTextField(new FieldName(\"$.Brand\", \"Brand\")) .AddTextField(new FieldName(\"$.Model\", \"Model\")) .AddTextField(new FieldName(\"$.Description\", \"Description\")) .AddNumericField(new FieldName(\"$.Price\", \"Price\")) .AddTagField(new FieldName(\"$.Condition\", \"Condition\")); ft.Create( \"idx:bicycle\", new FTCreateParams().On(IndexDataType.JSON).Prefix(\"bicycle:\"), schema); for (int i = 0; i < bicycles.Length; i++) { json.Set($\"bicycle:{i}\", \"$\", bicycles[i]); } var query1 = new Query(\"*\"); var res1 = ft.Search(\"idx:bicycle\", query1).Documents; Console.WriteLine(string.Join(\"\\n\", res1.Count())); // Prints: Documents found: 10 var query2 = new Query(\"@Model:Jigger\"); var res2 = ft.Search(\"idx:bicycle\", query2).Documents; Console.WriteLine(string.Join(\"\\n\", res2.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and" }, { "data": "var query3 = new Query(\"basic @Price:[500 1000]\"); var res3 = ft.Search(\"idx:bicycle\", query3).Documents; Console.WriteLine(string.Join(\"\\n\", res3.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query4 = new Query(\"@Brand:\\\"Noka Bikes\\\"\"); var res4 = ft.Search(\"idx:bicycle\", query4).Documents; Console.WriteLine(string.Join(\"\\n\", res4.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. 
This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query5 = new Query(\"@Model:Jigger\").ReturnFields(\"Price\"); var res5 = ft.Search(\"idx:bicycle\", query5).Documents; Console.WriteLine(res5.First()[\"Price\"]); // Prints: 270 var request = new AggregationRequest(\"*\").GroupBy( \"@Condition\", Reducers.Count().As(\"Count\")); var result = ft.Aggregate(\"idx:bicycle\", request); for (var i = 0; i < result.TotalResults; i++) { var row = result.GetRow(i); Console.WriteLine($\"{row[\"Condition\"]} - {row[\"Count\"]}\"); } // Prints: // refurbished - 1 // used - 5 // new - 4 } }``` Below is a command to perform an exact match query that finds all bicycles with the brand name Noka Bikes. You must use double quotes around the search term when constructing an exact match query on a text field. ``` FT.SEARCH \"idx:bicycle\" \"@brand:\\\"Noka Bikes\\\"\" LIMIT 0 10 1) (integer) 1 2) \"bicycle:4\" 3) 1) \"$\" 2) \"{\\\"brand\\\":\\\"Noka Bikes\\\",\\\"model\\\":\\\"Kahuna\\\",\\\"price\\\":3200,\\\"description\\\":\\\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a women\\xe2\\x80\\x99s saddle, different bars and unique colourway.\\\",\\\"condition\\\":\\\"used\\\"}\"``` ``` import redis import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.json.path import Path from redis.commands.search.field import NumericField, TagField, TextField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import Query r = redis.Redis(host=\"localhost\", port=6379, db=0, decode_responses=True) bicycle = { \"brand\": \"Velorim\", \"model\": \"Jigger\", \"price\": 270, \"description\": ( \"Small and powerful, the Jigger is the best ride \" \"for the smallest of tikes! This is the tiniest \" \"kids pedal bike on the market available without\" \" a coaster brake, the Jigger is the vehicle of \" \"choice for the rare tenacious little rider \" \"raring to go.\" ), \"condition\": \"new\", } bicycles = [ bicycle, { \"brand\": \"Bicyk\", \"model\": \"Hillcraft\", \"price\": 1200, \"description\": ( \"Kids want to ride with as little weight as possible.\" \" Especially on an incline! They may be at the age \" 'when a 27.5\" wheel bike is just too clumsy coming ' 'off a 24\" bike. The Hillcraft 26 is just the solution' \" they need!\" ), \"condition\": \"used\", }, { \"brand\": \"Nord\", \"model\": \"Chook air 5\", \"price\": 815, \"description\": ( \"The Chook Air 5 gives kids aged six years and older \" \"a durable and uberlight mountain bike for their first\" \" experience on tracks and easy cruising through forests\" \" and fields. The lower top tube makes it easy to mount\" \" and dismount in any situation, giving your kids greater\" \" safety on the trails.\" ), \"condition\": \"used\", }, { \"brand\": \"Eva\", \"model\": \"Eva 291\", \"price\": 3400, \"description\": ( \"The sister company to Nord, Eva launched in 2005 as the\" \" first and only women-dedicated bicycle brand. Designed\" \" by women for women, allEva bikes are optimized for the\" \" feminine physique using analytics from a body metrics\" \" database. If you like 29ers, try the Eva 291. Its a \" \"brand new bike for 2022.. 
This full-suspension, \" \"cross-country ride has been designed for" }, { "data": "The\" \" 291 has 100mm of front and rear travel, a superlight \" \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), \"condition\": \"used\", }, { \"brand\": \"Noka Bikes\", \"model\": \"Kahuna\", \"price\": 3200, \"description\": ( \"Whether you want to try your hand at XC racing or are \" \"looking for a lively trail bike that's just as inspiring\" \" on the climbs as it is over rougher ground, the Wilder\" \" is one heck of a bike built specifically for short women.\" \" Both the frames and components have been tweaked to \" \"include a womens saddle, different bars and unique \" \"colourway.\" ), \"condition\": \"used\", }, { \"brand\": \"Breakout\", \"model\": \"XBN 2.1 Alloy\", \"price\": 810, \"description\": ( \"The XBN 2.1 Alloy is our entry-level road bike but thats\" \" not to say that its a basic machine. With an internal \" \"weld aluminium frame, a full carbon fork, and the slick-shifting\" \" Claris gears from Shimanos, this is a bike which doesnt\" \" break the bank and delivers craved performance.\" ), \"condition\": \"new\", }, { \"brand\": \"ScramBikes\", \"model\": \"WattBike\", \"price\": 2300, \"description\": ( \"The WattBike is the best e-bike for people who still feel young\" \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" \" more than 60 miles on one charge. Its great for tackling hilly\" \" terrain or if you just fancy a more leisurely ride. With three\" \" working modes, you can choose between E-bike, assisted bicycle,\" \" and normal bike modes.\" ), \"condition\": \"new\", }, { \"brand\": \"Peaknetic\", \"model\": \"Secto\", \"price\": 430, \"description\": ( \"If you struggle with stiff fingers or a kinked neck or back after\" \" a few minutes on the road, this lightweight, aluminum bike\" \" alleviates those issues and allows you to enjoy the ride. From\" \" the ergonomic grips to the lumbar-supporting seat position, the\" \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" \" tube facilitates stability by allowing you to put a foot on the\" \" ground to balance at a stop, and the low step-over frame makes it\" \" accessible for all ability and mobility levels. The saddle is\" \" very soft, with a wide back to support your hip joints and a\" \" cutout in the center to redistribute that pressure. Rim brakes\" \" deliver satisfactory braking control, and the wide tires provide\" \" a smooth, stable ride on paved roads and gravel. Rack and fender\" \" mounts facilitate setting up the Roll Low-Entry as your preferred\" \" commuter, and the BMX-like handlebar offers space for mounting a\" \" flashlight, bell, or phone holder.\" ), \"condition\": \"new\", }, { \"brand\": \"nHill\", \"model\": \"Summit\", \"price\": 1200, \"description\": ( \"This budget mountain bike from nHill performs well both on bike\" \" paths and on the trail. The fork with 100mm of travel absorbs\" \" rough terrain. Fat Kenda Booster tires give you grip in corners\" \" and on wet trails. The Shimano Tourney drivetrain offered enough\" \" gears for finding a comfortable pace to ride uphill, and the\" \" Tektro hydraulic disc brakes break smoothly. 
Whether you want an\" \" affordable bike that you can take to work, but also take trail in\" \" mountains on the weekends or youre just after a stable,\" \" comfortable ride for the bike path, the Summit gives a good value\" \" for money.\" ), \"condition\": \"new\", }, { \"model\": \"ThrillCycle\", \"brand\": \"BikeShind\", \"price\": 815, \"description\": ( \"An artsy, retro-inspired bicycle thats as functional as it is\" \" pretty: The ThrillCycle steel frame offers a smooth" }, { "data": "A\" \" 9-speed drivetrain has enough gears for coasting in the city, but\" \" we wouldnt suggest taking it to the mountains. Fenders protect\" \" you from mud, and a rear basket lets you transport groceries,\" \" flowers and books. The ThrillCycle comes with a limited lifetime\" \" warranty, so this little guy will last you long past graduation.\" ), \"condition\": \"refurbished\", }, ] schema = ( TextField(\"$.brand\", as_name=\"brand\"), TextField(\"$.model\", as_name=\"model\"), TextField(\"$.description\", as_name=\"description\"), NumericField(\"$.price\", as_name=\"price\"), TagField(\"$.condition\", as_name=\"condition\"), ) index = r.ft(\"idx:bicycle\") index.create_index( schema, definition=IndexDefinition(prefix=[\"bicycle:\"], index_type=IndexType.JSON), ) for bid, bicycle in enumerate(bicycles): r.json().set(f\"bicycle:{bid}\", Path.root_path(), bicycle) res = index.search(Query(\"*\")) print(\"Documents found:\", res.total) res = index.search(Query(\"@model:Jigger\")) print(res) res = index.search( Query(\"@model:Jigger\").returnfield(\"$.price\", asfield=\"price\") ) print(res) res = index.search(Query(\"basic @price:[500 1000]\")) print(res) res = index.search(Query('@brand:\"Noka Bikes\"')) print(res) res = index.search( Query( \"@description:%analitics%\" ).dialect( # Note the typo in the word \"analytics\" 2 ) ) print(res) res = index.search( Query( \"@description:%%analitycs%%\" ).dialect( # Note 2 typos in the word \"analytics\" 2 ) ) print(res) res = index.search(Query(\"@model:hill*\")) print(res) res = index.search(Query(\"@model:*bike\")) print(res) res = index.search(Query(\"w'H?*craft'\").dialect(2)) print(res.docs[0].json) res = index.search(Query(\"mountain\").with_scores()) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") res = index.search(Query(\"mountain\").with_scores().scorer(\"BM25\")) for sr in res.docs: print(f\"{sr.id}: score={sr.score}\") req = aggregations.AggregateRequest(\"*\").group_by( \"@condition\", reducers.count().alias(\"count\") ) res = index.aggregate(req).rows print(res) ``` ``` package io.redis.examples; import java.math.BigDecimal; import java.util.*; import redis.clients.jedis.*; import redis.clients.jedis.exceptions.*; import redis.clients.jedis.search.*; import redis.clients.jedis.search.aggr.*; import redis.clients.jedis.search.schemafields.*; class Bicycle { public String brand; public String model; public BigDecimal price; public String description; public String condition; public Bicycle(String brand, String model, BigDecimal price, String condition, String description) { this.brand = brand; this.model = model; this.price = price; this.condition = condition; this.description = description; } } public class SearchQuickstartExample { public void run() { // UnifiedJedis jedis = new UnifiedJedis(\"redis://localhost:6379\"); JedisPooled jedis = new JedisPooled(\"localhost\", 6379); SchemaField[] schema = { TextField.of(\"$.brand\").as(\"brand\"), TextField.of(\"$.model\").as(\"model\"), 
TextField.of(\"$.description\").as(\"description\"), NumericField.of(\"$.price\").as(\"price\"), TagField.of(\"$.condition\").as(\"condition\") }; jedis.ftCreate(\"idx:bicycle\", FTCreateParams.createParams() .on(IndexDataType.JSON) .addPrefix(\"bicycle:\"), schema ); Bicycle[] bicycles = { new Bicycle( \"Velorim\", \"Jigger\", new BigDecimal(270), \"new\", \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\" ), new Bicycle( \"Bicyk\", \"Hillcraft\", new BigDecimal(1200), \"used\", \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch bike. The Hillcraft 26 is just the solution\" + \" they need!\" ), new Bicycle( \"Nord\", \"Chook air 5\", new BigDecimal(815), \"used\", \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\" ), new Bicycle( \"Eva\", \"Eva 291\", new BigDecimal(3400), \"used\", \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. It's a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for" }, { "data": "The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\" ), new Bicycle( \"Noka Bikes\", \"Kahuna\", new BigDecimal(3200), \"used\", \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\" ), new Bicycle( \"Breakout\", \"XBN 2.1 Alloy\", new BigDecimal(810), \"new\", \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\" ), new Bicycle( \"ScramBikes\", \"WattBike\", new BigDecimal(2300), \"new\", \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. 
With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\" ), new Bicycle( \"Peaknetic\", \"Secto\", new BigDecimal(430), \"new\", \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility levels. The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\" ), new Bicycle( \"nHill\", \"Summit\", new BigDecimal(1200), \"new\", \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for" }, { "data": "), new Bicycle( \"ThrillCycle\", \"BikeShind\", new BigDecimal(815), \"refurbished\", \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\" ), }; for (int i = 0; i < bicycles.length; i++) { jedis.jsonSetWithEscape(String.format(\"bicycle:%d\", i), bicycles[i]); } Query query1 = new Query(\"*\"); List<Document> result1 = jedis.ftSearch(\"idx:bicycle\", query1).getDocuments(); System.out.println(\"Documents found:\" + result1.size()); // Prints: Documents found: 10 Query query2 = new Query(\"@model:Jigger\"); List<Document> result2 = jedis.ftSearch(\"idx:bicycle\", query2).getDocuments(); System.out.println(result2); // Prints: [id:bicycle:0, score: 1.0, payload:null, // properties:[$={\"brand\":\"Velorim\",\"model\":\"Jigger\",\"price\":270,\"description\":\"Small and powerful, the Jigger is the best ride for the smallest of tikes! 
This is the tiniest kids pedal bike on the market available without a coaster brake, the Jigger is the vehicle of choice for the rare tenacious little rider raring to go.\",\"condition\":\"new\"}]] Query query3 = new Query(\"@model:Jigger\").returnFields(\"price\"); List<Document> result3 = jedis.ftSearch(\"idx:bicycle\", query3).getDocuments(); System.out.println(result3); // Prints: [id:bicycle:0, score: 1.0, payload:null, properties:[price=270]] Query query4 = new Query(\"basic @price:[500 1000]\"); List<Document> result4 = jedis.ftSearch(\"idx:bicycle\", query4).getDocuments(); System.out.println(result4); // Prints: [id:bicycle:5, score: 1.0, payload:null, // properties:[$={\"brand\":\"Breakout\",\"model\":\"XBN 2.1 Alloy\",\"price\":810,\"description\":\"The XBN 2.1 Alloy is our entry-level road bike but thats not to say that its a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimanos, this is a bike which doesnt break the bank and delivers craved performance.\",\"condition\":\"new\"}]] Query query5 = new Query(\"@brand:\\\"Noka Bikes\\\"\"); List<Document> result5 = jedis.ftSearch(\"idx:bicycle\", query5).getDocuments(); System.out.println(result5); // Prints: [id:bicycle:4, score: 1.0, payload:null, // properties:[$={\"brand\":\"Noka Bikes\",\"model\":\"Kahuna\",\"price\":3200,\"description\":\"Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a womens saddle, different bars and unique colourway.\",\"condition\":\"used\"}]] AggregationBuilder ab = new AggregationBuilder(\"*\").groupBy(\"@condition\", Reducers.count().as(\"count\")); AggregationResult ar = jedis.ftAggregate(\"idx:bicycle\", ab); for (int i = 0; i < ar.getTotalResults(); i++) { System.out.println(ar.getRow(i).getString(\"condition\") + \" - \" ar.getRow(i).getString(\"count\")); } // Prints: // refurbished - 1 // used - 5 // new - 4 assertEquals(\"Validate aggregation results\", 3, ar.getTotalResults()); jedis.close(); } }``` ``` using NRedisStack.RedisStackCommands; using NRedisStack.Search; using NRedisStack.Search.Aggregation; using NRedisStack.Search.Literals.Enums; using NRedisStack.Tests; using StackExchange.Redis; public class SearchQuickstartExample { [SkipIfRedis(Is.OSSCluster)] public void run() { var redis = ConnectionMultiplexer.Connect(\"localhost:6379\"); var db = redis.GetDatabase(); var ft = db.FT(); var json = db.JSON(); var bike1 = new { Brand = \"Velorim\", Model = \"Jigger\", Price = 270M, Description = \"Small and powerful, the Jigger is the best ride \" + \"for the smallest of tikes! This is the tiniest \" + \"kids pedal bike on the market available without\" + \" a coaster brake, the Jigger is the vehicle of \" + \"choice for the rare tenacious little rider \" + \"raring to go.\", Condition = \"used\" }; var bicycles = new object[] { bike1, new { Brand = \"Bicyk\", Model = \"Hillcraft\", Price = 1200M, Description = \"Kids want to ride with as little weight as possible.\" + \" Especially on an incline! 
They may be at the age \" + \"when a 27.5 inch wheel bike is just too clumsy coming \" + \"off a 24 inch" }, { "data": "The Hillcraft 26 is just the solution\" + \" they need!\", Condition = \"used\", }, new { Brand = \"Nord\", Model = \"Chook air 5\", Price = 815M, Description = \"The Chook Air 5 gives kids aged six years and older \" + \"a durable and uberlight mountain bike for their first\" + \" experience on tracks and easy cruising through forests\" + \" and fields. The lower top tube makes it easy to mount\" + \" and dismount in any situation, giving your kids greater\" + \" safety on the trails.\", Condition = \"used\", }, new { Brand = \"Eva\", Model = \"Eva 291\", Price = 3400M, Description = \"The sister company to Nord, Eva launched in 2005 as the\" + \" first and only women-dedicated bicycle brand. Designed\" + \" by women for women, allEva bikes are optimized for the\" + \" feminine physique using analytics from a body metrics\" + \" database. If you like 29ers, try the Eva 291. Its a \" + \"brand new bike for 2022.. This full-suspension, \" + \"cross-country ride has been designed for velocity. The\" + \" 291 has 100mm of front and rear travel, a superlight \" + \"aluminum frame and fast-rolling 29-inch wheels. Yippee!\", Condition = \"used\", }, new { Brand = \"Noka Bikes\", Model = \"Kahuna\", Price = 3200M, Description = \"Whether you want to try your hand at XC racing or are \" + \"looking for a lively trail bike that's just as inspiring\" + \" on the climbs as it is over rougher ground, the Wilder\" + \" is one heck of a bike built specifically for short women.\" + \" Both the frames and components have been tweaked to \" + \"include a womens saddle, different bars and unique \" + \"colourway.\", Condition = \"used\", }, new { Brand = \"Breakout\", Model = \"XBN 2.1 Alloy\", Price = 810M, Description = \"The XBN 2.1 Alloy is our entry-level road bike but thats\" + \" not to say that its a basic machine. With an internal \" + \"weld aluminium frame, a full carbon fork, and the slick-shifting\" + \" Claris gears from Shimanos, this is a bike which doesnt\" + \" break the bank and delivers craved performance.\", Condition = \"new\", }, new { Brand = \"ScramBikes\", Model = \"WattBike\", Price = 2300M, Description = \"The WattBike is the best e-bike for people who still feel young\" + \" at heart. It has a Bafang 1000W mid-drive system and a 48V\" + \" 17.5AH Samsung Lithium-Ion battery, allowing you to ride for\" + \" more than 60 miles on one charge. Its great for tackling hilly\" + \" terrain or if you just fancy a more leisurely ride. With three\" + \" working modes, you can choose between E-bike, assisted bicycle,\" + \" and normal bike modes.\", Condition = \"new\", }, new { Brand = \"Peaknetic\", Model = \"Secto\", Price = 430M, Description = \"If you struggle with stiff fingers or a kinked neck or back after\" + \" a few minutes on the road, this lightweight, aluminum bike\" + \" alleviates those issues and allows you to enjoy the ride. From\" + \" the ergonomic grips to the lumbar-supporting seat position, the\" + \" Roll Low-Entry offers incredible comfort. The rear-inclined seat\" + \" tube facilitates stability by allowing you to put a foot on the\" + \" ground to balance at a stop, and the low step-over frame makes it\" + \" accessible for all ability and mobility" }, { "data": "The saddle is\" + \" very soft, with a wide back to support your hip joints and a\" + \" cutout in the center to redistribute that pressure. 
Rim brakes\" + \" deliver satisfactory braking control, and the wide tires provide\" + \" a smooth, stable ride on paved roads and gravel. Rack and fender\" + \" mounts facilitate setting up the Roll Low-Entry as your preferred\" + \" commuter, and the BMX-like handlebar offers space for mounting a\" + \" flashlight, bell, or phone holder.\", Condition = \"new\", }, new { Brand = \"nHill\", Model = \"Summit\", Price = 1200M, Description = \"This budget mountain bike from nHill performs well both on bike\" + \" paths and on the trail. The fork with 100mm of travel absorbs\" + \" rough terrain. Fat Kenda Booster tires give you grip in corners\" + \" and on wet trails. The Shimano Tourney drivetrain offered enough\" + \" gears for finding a comfortable pace to ride uphill, and the\" + \" Tektro hydraulic disc brakes break smoothly. Whether you want an\" + \" affordable bike that you can take to work, but also take trail in\" + \" mountains on the weekends or youre just after a stable,\" + \" comfortable ride for the bike path, the Summit gives a good value\" + \" for money.\", Condition = \"new\", }, new { Model = \"ThrillCycle\", Brand = \"BikeShind\", Price = 815M, Description = \"An artsy, retro-inspired bicycle thats as functional as it is\" + \" pretty: The ThrillCycle steel frame offers a smooth ride. A\" + \" 9-speed drivetrain has enough gears for coasting in the city, but\" + \" we wouldnt suggest taking it to the mountains. Fenders protect\" + \" you from mud, and a rear basket lets you transport groceries,\" + \" flowers and books. The ThrillCycle comes with a limited lifetime\" + \" warranty, so this little guy will last you long past graduation.\", Condition = \"refurbished\", }, }; var schema = new Schema() .AddTextField(new FieldName(\"$.Brand\", \"Brand\")) .AddTextField(new FieldName(\"$.Model\", \"Model\")) .AddTextField(new FieldName(\"$.Description\", \"Description\")) .AddNumericField(new FieldName(\"$.Price\", \"Price\")) .AddTagField(new FieldName(\"$.Condition\", \"Condition\")); ft.Create( \"idx:bicycle\", new FTCreateParams().On(IndexDataType.JSON).Prefix(\"bicycle:\"), schema); for (int i = 0; i < bicycles.Length; i++) { json.Set($\"bicycle:{i}\", \"$\", bicycles[i]); } var query1 = new Query(\"*\"); var res1 = ft.Search(\"idx:bicycle\", query1).Documents; Console.WriteLine(string.Join(\"\\n\", res1.Count())); // Prints: Documents found: 10 var query2 = new Query(\"@Model:Jigger\"); var res2 = ft.Search(\"idx:bicycle\", query2).Documents; Console.WriteLine(string.Join(\"\\n\", res2.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query3 = new Query(\"basic @Price:[500 1000]\"); var res3 = ft.Search(\"idx:bicycle\", query3).Documents; Console.WriteLine(string.Join(\"\\n\", res3.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. 
This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query4 = new Query(\"@Brand:\\\"Noka Bikes\\\"\"); var res4 = ft.Search(\"idx:bicycle\", query4).Documents; Console.WriteLine(string.Join(\"\\n\", res4.Select(x => x[\"json\"]))); // Prints: {\"Brand\":\"Moore PLC\",\"Model\":\"Award Race\",\"Price\":3790.76, // \"Description\":\"This olive folding bike features a carbon frame // and 27.5 inch wheels. This folding bike is perfect for compact // storage and transportation.\",\"Condition\":\"new\"} var query5 = new Query(\"@Model:Jigger\").ReturnFields(\"Price\"); var res5 = ft.Search(\"idx:bicycle\", query5).Documents; Console.WriteLine(res5.First()[\"Price\"]); // Prints: 270 var request = new AggregationRequest(\"*\").GroupBy( \"@Condition\", Reducers.Count().As(\"Count\")); var result = ft.Aggregate(\"idx:bicycle\", request); for (var i = 0; i < result.TotalResults; i++) { var row = result.GetRow(i); Console.WriteLine($\"{row[\"Condition\"]} - {row[\"Count\"]}\"); } // Prints: // refurbished - 1 // used - 5 // new - 4 } }``` Please see the query documentation to learn how to make more advanced queries. You can learn more about how" } ]
{ "category": "App Definition and Development", "file_name": "kubectl.md", "project_name": "SchemaHero", "subcategory": "Database" }
[ { "data": "SchemaHero has 2 different components: an in-cluster Kubernetes Operator and a client side kubectl plugin that you can use to interact with the operator. The best way to get started is to install the kubectl plugin: The SchemaHero client component is packaged as a kubectl plugin, and distributed through the krew package manager. If you don't already have krew installed, head over to the krew installation guide, follow the steps there and then come back here. Install the SchemaHero client component using: ``` kubectl krew install schemahero ``` Note: This will not install anything to your cluster, it only places a single binary named kubectl-schemahero on your path. Verify the installation by checking the version: ``` kubectl schemahero version ``` You should see the version of SchemaHero installed on your workstation (0.12.1 or similar). SchemaHero relies on an in-cluster operator. The next step in the installation is the operator components: It's easy to install the operator using the built-in command: ``` kubectl schemahero install ``` The above command will create a schemahero-system namespace, and install 3 new Custom Resource Definitions to your cluster. An alternative approach is to let the kubectl plugin generate the YAML that can be checked in, commited, and deployed using another tool: ``` kubectl schemahero install --yaml ``` This will create the necessary YAML to install the in-cluster SchemaHero operator. After inspection, you can use kubectl to apply this YAML to your cluster. To verify the deployment, you can run: ``` kubectl get pods -n schemahero-system ``` There should be 1 pod running in this namespace: ``` $ kubectl get pods -n schemahero-system NAME READY STATUS RESTARTS AGE schemahero-0 1/1 Running 0 66s ``` We sign the official container images that are published on each release. These are signed using cosign. To verify the container image, you download our public key into a file named schemahero.pub and then: ``` cosign verify -key schemahero.pub schemahero/schemahero:0.12.3 ``` If the container image was properly signed, you will see output similar to: ``` Verification for schemahero/schemahero:0.12.3 -- The following checks were performed on each of these signatures: The cosign claims were validated The signatures were verified against the specified public key Any certificates were verified against the Fulcio roots. {\"critical\":{\"identity\":{\"docker-reference\":\"index.docker.io/schemahero/schemahero\"},\"image\":{\"docker-manifest-digest\":\"sha256:d8f2a52b42d80917f4de89f254c5bdfd55edc5a866fe97e2703259405315bc8b\"},\"type\":\"cosign container image signature\"},\"optional\":null} ``` We also publish a SBOM (Software Bill of Materials) in SPDX format for each release. To download the SBOM for a specific version, use the cosign tool and run: ``` cosign download sbom schemahero/schemahero:0.12.3 ``` It's sometimes useful to save the SBOM to a file: ``` cosign download sbom schemahero/schemahero:0.12.3 > sbom.txt Found SBOM of media type: text/spdx ```" } ]
{ "category": "App Definition and Development", "file_name": "docs.singlestore.com.md", "project_name": "SingleStore", "subcategory": "Database" }
[ { "data": "Getting Started with SingleStore Helios Connect to Your Workspace Create a Database Integrate with SingleStore Helios Load Data Query Data Manage Data Developer Resources User and Workspace Administration Security Reference Release Notes Support Glossary Learn to develop for SingleStore Helios, the distributed SQL database built to power data-intensive applications. This is the documentation for SingleStore Helios. You may also visit the SingleStore Self-Managed documentation. Ask SQrL, our AI assistant, trained on all things SingleStore Step 1 Step 2 Step 3 Step 4 Learn more about what makes SingleStore Helios tick. About SingleStore Helios Editions Glossary FAQs Use SingleStore to efficiently store, index, and search your vector data. Working with Vector Data Vector Type Vector Indexing Hybrid Search - Re-ranking and Blending Searches Connect to SingleStore Helios using a variety of SQL clients, drivers, programming languages, and frameworks. SQL Developer SingleStore JDBC Driver SingleStore Client All SQL Clients & Drivers All Languages & Frameworks Load your data from a multitude of data sources. Amazon S3 Kafka MongoDB Spark All Data Sources Explore SingleStores code and examples. Code Repository Sample Data-Intensive App Build a Full-Stack App Basic Query Examples Explore other ways to learn and participate Engineering Blog Training Events and Meetups Already an Expert? Troubleshoot issues and get help from the community. Troubleshooting Reference Solve MySQL Out-of-Memory Error SingleStore Forums Support FAQ This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. | | |" } ]
{ "category": "App Definition and Development", "file_name": "what-is-seata.md", "project_name": "Seata", "subcategory": "Database" }
[ { "data": "Seata is an open source distributed transaction solution dedicated to providing high performance and easy to use distributed transaction services. Seata will provide users with AT, TCC, SAGA, and XA transaction models to create a one-stop distributed solution for users. Evolution from the two phases commit protocol: For example: Two transactions tx1 and tx2 are trying to update field m of table a. The original value of m is 1000. tx1 starts first, begins a local transaction, acquires the local lock, do the update operation: m = 1000 - 100 = 900. tx1 must acquire the global lock before committing the local transaction, after that, commit local transaction and release local lock. next, tx2 begins local transaction, acquires local lock, do the update operation: m = 900 - 100 = 800. Before tx2 can commit local transaction, it must acquire the global lock, but the global lock may be hold by tx1, so tx2 will do retry. After tx1 does the global commit and releases the global lock, tx2 can acquire the global lock, then it can commit local transaction and release local lock. See the figure above, tx1 does the global commit in phase 2 and release the global lock, tx2 acquires the global lock and commits local transaction. See the figure above, if tx1 wants to do the global rollback, it must acquire local lock to revert the update operation of phase 1. However, now the local lock is held by tx2 which hopes to acquire the global lock, so tx1 fails to rollback, but it would try it many times until it's timeout for tx2 to acquire the global lock, then tx2 rollbacks local transaction and releases local lock, after that, tx1 can acquire the local lock, and do the branch rollback successfully. Because the global lock is held by tx1 during the whole process, there isn't no problem of dirty write. The isolation level of local database is read committed or above, so the default isolation level of the global transaction is read uncommitted. If it needs the isolation level of the global transaction is read committed, currently, Seata implements it via SELECT FOR UPDATE statement. The global lock is be applied during the execution of SELECT FOR UPDATE statement, if the global lock is held by other transactions, the transaction will release local lock retry execute the SELECT FOR UPDATE statement. During the whole process, the query is blocked until the global lock is acquired, if the lock is acquired, it means the other global transaction has committed, so the isolation level of global transaction is read committed. For the performance consideration, Seata only does proxy work for SELECT FOR UPDATE. For the general SELECT statement, do nothing. 
Take an example to illustrate" }, { "data": "A business table:product | Field | Type | Key | |:--|:-|:| | id | bigint(20) | PRI | | name | varchar(100) | nan | | since | varchar(100) | nan | The sql of branch transaction in AT mode: ``` update product set name = 'GTS' where name = 'TXC';``` Process: ``` select id, name, since from product where name = 'TXC';``` Got the \"before image\" | id | name | since | |--:|:-|--:| | 1 | TXC | 2014 | ``` select id, name, since from product where id = 1;``` Got the after image: | id | name | since | |--:|:-|--:| | 1 | GTS | 2014 | ``` { \"branchId\": 641789253, \"undoItems\": [{ \"afterImage\": { \"rows\": [{ \"fields\": [{ \"name\": \"id\", \"type\": 4, \"value\": 1 }, { \"name\": \"name\", \"type\": 12, \"value\": \"GTS\" }, { \"name\": \"since\", \"type\": 12, \"value\": \"2014\" }] }], \"tableName\": \"product\" }, \"beforeImage\": { \"rows\": [{ \"fields\": [{ \"name\": \"id\", \"type\": 4, \"value\": 1 }, { \"name\": \"name\", \"type\": 12, \"value\": \"TXC\" }, { \"name\": \"since\", \"type\": 12, \"value\": \"2014\" }] }], \"tableName\": \"product\" }, \"sqlType\": \"UPDATE\" }], \"xid\": \"xid:xxx\"}``` ``` update product set name = 'TXC' where id = 1;``` UNDO_LOG Tablethere is a little bit difference on the data type for different databases. For MySQL example: | Field | Type | |:--|:-| | branch_id | bigint PK | | xid | varchar(100) | | context | varchar(128) | | rollback_info | longblob | | log_status | tinyint | | log_created | datetime | | log_modified | datetime | ``` -- Note that 0.7.0+ adds the field contextCREATE TABLE `undolog` ( `id` bigint(20) NOT NULL AUTOINCREMENT, `branchid` bigint(20) NOT NULL, `xid` varchar(100) NOT NULL, `context` varchar(128) NOT NULL, `rollbackinfo` longblob NOT NULL, `logstatus` int(11) NOT NULL, `logcreated` datetime NOT NULL, `logmodified` datetime NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `uxundolog` (`xid`,`branchid`)) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;``` Review the description in the overview: A distributed global transaction, the whole is a two-phase commit model. The global transaction is composed of several branch transactions. The branch transaction must meet the requirements of the two-phase commit model, that is, each branch transaction must have its own: According to the two-phase behavior mode, we divide branch transactions into Automatic (Branch) Transaction Mode and TCC (Branch) Transaction Mode. The AT mode is based on a relational database that supports local ACID transactions: Correspondingly, the TCC mode does not rely on transaction support of the underlying data resources: The so-called TCC mode refers to the support of customized's branch transactions into the management of global transactions. The Saga model is a long transaction solution provided by SEATA. In the Saga model, each participant in the business process submits a local transaction. When a participant fails, the previous successful participant is compensated. One stage is positive serving and The two-stage compensation services are implemented by business development. Theoretical basis: Hector & Kenneth Post a comment Sagas 1987" } ]
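To make the read-committed behaviour described above concrete, here is a minimal SQL sketch using the product table from the example. Only the FOR UPDATE form is proxied by Seata, so only it acquires the global lock:

```sql
-- Plain SELECT: not proxied by Seata, so it may observe data that is not yet
-- globally committed (read uncommitted with respect to the global transaction).
SELECT id, name, since FROM product WHERE name = 'GTS';

-- SELECT ... FOR UPDATE: proxied by Seata; it blocks until the global lock is
-- free, so the row it returns is globally committed (read committed).
SELECT id, name, since FROM product WHERE name = 'GTS' FOR UPDATE;
```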
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "SpiceDB", "subcategory": "Database" }
[ { "data": "On This Page Welcome to the official documentation for the SpiceDB ecosystem. SpiceDB is an open source, Google Zanzibar (opens in a new tab)-inspired database system for real-time, security-critical application permissions. Developers create and apply a schema (opens in a new tab) that models their application's resources and permissions. From their applications, client libraries (opens in a new tab) are used to insert relationships or check permissions in their applications. Building modern authorization from scratch is non-trivial and requires years of development from domain experts. Until SpiceDB, the only developers with access to these workflows were employed by massive tech companies that could invest in building mature, but proprietary solutions. Now we have a community organized around sharing this technology so that the entire industry can benefit. In some scenarios, SpiceDB can be challenging to operate because it is a critical, low-latency, distirbuted system. For folks interested in a managed SpiceDB services and enterprise functionality, there are AuthZed's products. In August 2020, the founders of AuthZed left Red Hat (opens in a new tab), who had acquired their previous company CoreOS (opens in a new tab). In the following month, they would write the first API-complete implementation of Zanzibar; project Arrakis was written in lazily-evaluated, type-annotated Python. In September, Arrakis was demoed as a part of their YCombinator (opens in a new tab) application. In March 2021, Arrakis was rewritten in Go, a project code-named Caladan. This rewrite would eventually be open-sourced in September 2021 under the name SpiceDB (opens in a new tab). You can read also read the history of Google's Zanzibar project, which is the spirtual predecessor and inspiration for SpiceDB. Features that distinguish SpiceDB from other systems include: SpiceDB developers and community members have recorded videos explaining concepts, modeling familiar applications, and deep diving on the tech powering everything! Thousands of community members chat interactively in our Discord (opens in a new tab). Why not ask them a question or two? SpiceDB and Zed run on Linux, macOS, and Windows on both AMD64 and ARM64 architectures. Follow the instructions below install to your development machine: We've documented the concepts SpiceDB users should understand: After these, we recommend these concepts for running SpiceDB: Finally, there are some more advanced concepts that are still fundamental: You can experiment with and share schema and data snippets on the Playground (opens in a new tab). When you're done, you can easily import these into a real SpiceDB instance using zed import. Here's a very example to toy with: Once you're ready to take things into production, you can reference our guides or explore a managed solution with AuthZed. Even if you aren't interested in paid products, you can still schedule a call or reach out on Discord." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "SingleStore", "subcategory": "Database" }
[ { "data": "Getting Started with SingleStore Helios Connect to Your Workspace Create a Database Integrate with SingleStore Helios Load Data Query Data Manage Data Developer Resources User and Workspace Administration Security Reference Release Notes SingleStore Helios Release Notes 8.7 Release Notes 8.5 Release Notes 8.1 Release Notes 8.0 Release Notes 7.9 Maintenance Release Changelog (SingleStore Helios Only) 7.8 Release Notes 7.6 Release Notes 7.5 Release Notes 7.3 Release Notes 7.1 Release Notes 7.0 Release Notes Documentation Changelog Support Glossary June 2024 May 2024 April 2024 March 2024 February 2024 January 2024 December 2023 November 2023 October 2023 September 2023 August 2023 July 2023 June 2023 May 2023 April 2023 March 2023 February 2023 January 2023 December 2022 June 2022 April 2022 November 2021 New features added to SingleStore Helios listed by month. For general engine update release notes, see the engine version specific release notes. A major update was released this month. Please see the associated release notes for a complete list of new features, enhancements, and bugfixes. In addition the the large update, the following updates have been made this month: Enhancement: Added the enableuseofstaleincremental_stats engine variable which allows the optimizer to use stale stats instead of querying for stats during optimization. Enhancement: Added thethrottlereplaybehindtailmbengine variable which controls how far the system allows a child aggregator to lag behind replicating the transaction log of it's master aggregator before throttling is applied to the master aggregator. Bugfix: Addressed a crash issue with REGEX_REPLACE. Bugfix: Fixed an issue in ALTER TABLE when adding a computed column containing JSONBUILDOBJECT. Bugfix: Reduced stack usage during compilation for some CASE expressions with many WHEN-THEN statements. Bugfix: Fixed a potential deadlock issue between background garbage collector threads and foreground query execution threads. Bugfix: Fixed a collation issue in the JSONCOLUMNSCHEMA information schema table. Enhancement: SingleStore now supports module eviction ofidletables with an ALTER history. This means that, prior to the 8.5.20 release, tables that had been altered could not be evicted. Altered here includes CREATE INDEX, ALTER TABLE ADD COLUMN, and some others. Bugfix: Fixed a bug with Wasm UDFs where REPLACE FUNCTION did not correctly evict from the plancache, causing continued usage of the old UDF after replacement. Bugfix: Fixed an upgrade issue present in version 8.5.17. Bugfix: Fixed a BSON issue that could occur when large documents are read and mistakenly reported as corrupted. Enhancement: Upgraded OpenSSL 1.0.2u to 1.0.2zj Bugfix: Fixed a crash issue that could occur under a rare race condition in some distributed join queries. Bugfix: Fixed statement atomicity in transactions writing to tables that have projections on them. Enhancement: Allow Kerberos users to skip password validation for users who are authenticated with plugins. Enhancement: Added support for loading data from compressed Avro datafiles. Bugfix: Fixed an issue that occurred when primary key is NULLABLE when copying a table using \"Create table as select\" statement. Bugfix: Addressed a crash issue occurring while executing queries with vector builtins on clusters running via Docker containers on Mac machines equipped with Apple silicon processors (M1). Bugfix: No longer show information_schema pipeline metadata of databases that are not attached to the workspace. 
Enhancement: Use SeekableString encoding by default for storing VECTOR datatype columns in columnstore tables. Bugfix: Fixed an issue that caused rare \"Arrangement already exists in namespace\" errors in recursive CTE queries. Bugfix: Improved the hashGroupBy spilling condition to trigger (spill) when" }, { "data": "Enhancement: Added a new engine variable costingmaxdjo_tables which sets the maximum amount of tables we allow in full DQO estimate search when filling table row estimates for distributed join hints. Enhancement: Made the following engine variables configurable on SingleStore Helios: multiinserttuplecount, columnstorerowvaluetablelockthreshold, internalcolumnstoremaxuncompressedblobsize, internalcolumnstoremaxuncompressedrowindexblobsize, columnstorevalidateblobbeforemerge, defaultmaxhistogrambuckets, enablevarbufferdictionarycompression, enablealiasspacetrim, skipsegelimwithinlistthreshold, defaultautostatshistogrammode, defaultautostatscolumnstorecardinalitymode, defaultautostatsrowstorecardinalitymode, defaultautostatscolumnstoresampling, experimentaldisablemultipartitionparallelread, internalenableparallelquerythrottling, enablespilling, spillingnodememorythresholdratio, spillingqueryoperatormemorythreshold, regexpcompilememmb, selectivityestimationminsamplesize, repartitionminimumbuffersize, useseekablejson, jsondocumentmaxchildren, jsondocumentmaxleaves, jsondocumentsparsechildrencheckthreshold, jsondocumentsparsechildrencheckratio, jsondocumentabsolutesparsekeycheckratio, jsondocumentpagedatasoftlimit, ignoreinsertintocomputedcolumn, maxsubselectaggregatorrowcount, leafpushdownenablerowcount, reshufflegroupbybasecost, maxbroadcasttreerowcount, enablebroadcastleftjoin, displayfullestimationstats, forcebushyjoins, forceheuristicrewrites, forcetablepushdown, forcebloomfilters, explainjoinplancosts, statisticswarnings, optimizerwarnings, optimizeconstants, optimizehugeexpressions, optimizeexpressionslargerthan, optimizestmtthreshold, quadraticrewritesizelimit, batchexternalfunctions, batchexternalfunctionssize, optimizerenablejsontextmatching, optimizerenableorderbylimitselfjoin, distributedoptimizerbroadcastmult, distributedoptimizeroldselectivitytablethreshold, distributedoptimizerselectivityfallbackthreshold, distributedoptimizerverbose, oldlocaljoinoptimizer, optimizerhashjoincost, optimizermergejoincost, optimizernestedjoincost, optimizerdisablerightjoin, interpretermodesamplingthreshold, hashgroupbysegmentdistinctvaluesthreshold, samplingestimatesforcomplexfilters, enablehistogramtounwrapliterals, estimatezerorowswhensamplingdataismissing, excludescalarsubselectsfromfilters, leafpushdowndefault, distributedoptimizernodes, optimizernumpartitions, enablebinaryprotocol, enablemultipartitionqueries, enablelocalshufflegroupby, enableskiplistsamplingforselectivity, columnstoresampleperpartitionlimit, disablesamplingestimation, disablehistogramestimation, inlistprecisionlimit, allowmaterializectewithunion, optimizercrossjoincost, distributedoptimizerrunlegacyheuristic, distributedoptimizerminjoinsizeruninitialheuristics, distributedoptimizerunrestrictedsearchthreshold, singleboxoptimizercostbasedthreshold, distributedoptimizerestimatedrestrictedsearchcostbound, disablereferencetableleftjoinwherepushdown, disablesamplingestimationwithhistograms, disablesubquerymergewithstraightjoins, defaultcolumnstoretablelockthreshold, defaultspilldependentoutputters, queryrewriteloopiterations, verifyfieldsintransitivity, optimizerminreferencetablesforgatheredjoin, 
optimizerminreferencerowsforgatheredjoin, maxexpressionquerylimit, maxexpressionitemlimit, optimizeremptytableslimit, optimizerbeamwidth, optimizerdisablesubselecttojoin, disableremoveredundantgbyrewrite, subquerymergewithouterjoins, optimizerdisableoperatortreenullabilitycheck, clamphistogramdateestimates, varcharcolumnstringoptimizationlength, histogramcolumncorrelation, usecolumncorrelation, considersecondaryprojection, optimizerdisablesemijoinreduction, optimizergbplacementtablelimit, optimizerdisabletransitive_predicates Enhancement: Added more config information about pipelines to memsql_exporter. Enhancement: Improved error handling for VECTOR type. Enhancement: Added support for aggregate functions (FIRST, LAST) with VECTOR datatype. Enhancement: Added support for Kafka key (producer) in the SELECT ... INTO KAFKA command. Enhancement: Improved Kafka error messages to make them more specific. Bugfix: Fixed a case where a killed query could incorrectly return a JSON error rather than query interrupted message. Bugfix: MVDISKUSAGE queries can no longer block garbage collection. Bugfix: Fixed a crash-on-startup issue on Centos9. Bugfix: Fixed potential deadlock between table module eviction and GC manager. Bugfix: Fixed a crash that could occur when a scalar subselect is passed as an argument to an external TVF. Bugfix: Fixed an issue with restoring from public backups. Bugfix: Backups now throttle after snapshot. New Feature: Added ability to create a Projection which is a copy of some or all columns in a table, and may be sharded and sorted differently than the primary table. Projections can be used to speed up several types of queries including range filters, ORDER BY/LIMIT, GROUP BY, count(DISTINCT...), DISTINCT, joins, and row lookups on secondary keys. Projections depend on the table and are updated in real time when you update the table. Related information schema table: PROJECTIONS. Bugfix: Removed an unnecessary metadata copy to improve JSON performance. Enhancement: Multi-insert queries now respect lower columnstoresegmentrows settings. Enhancement: Subprocess and S3 API tracing are now also enabled via the enablesubprocesstracing engine variable. Enhancement: Added the optional IF NOT EXISTS clause to the CREATE TABLE ... AS INFER PIPELINE statement. Refer to Replicate MongoDB Collections to SingleStore for more information. Bugfix: LASTINSERTID will now correctly retrieve the last inserted ID for a forwarded INSERT query. Bugfix: The rule based optimization warning message will only be shown when cost based optimization is meaningful. Bugfix: Fixed a minor performance issue where a table module could be evicted prematurely during database recovery. Bugfix: Fixed a very rare deadlock between ALTER of a table and a DROP of a database. Bugfix: Fixed a JSON insertion failure to old utf8 sample tables that contain utf8mb4 characters by encoding it with base64 format beforehand for backward compatibility. Enhancement: Disk Spilling now takes QUERYMEMORYPERCENTAGE resource pool setting into consideration. Bugfix: Fixed an issue with a blob file leak in a rare crash scenario. Bugfix: Fixed a potential deadlock in a scenario involving DDL, a 2PC transaction, and a concurrent failover. Bugfix: Fixed a deadlock involving BACKUP and ALTER in a rare case. Enhancement: Added support for utf8mb4 symbols in column comments, table comments, and user comments. 
Enhancement: Enabled additional query hint for non-distributed queries so that first run of those queries can be" }, { "data": "Enhancement: Updated timezone data, now using IANA's tzdata version 2024a; new timezones Kyiv, Ciudad_Juarez, and Kanton supported. Enhancement: Idle Table Eviction improvement. In the 8.5 GA version, an idle table's objects cannot be evicted from memory if the table has one or more UNIQUE key columns (both PRIMARY KEYs and any other keys with a UNIQUE constraint). In the 8.5.11 patch, idle tables' code objectscanbe evicted from memory if the table has one or more UNIQUE key columns. Bugfix: Fixed an issue in background merger that allowed concurrent deletes of empty segments. Bugfix: Fixed an issue to ensure preserving a field of a table correctly. Enhancement: Added numInferredPaths and schemaSize as new columns to JSONCOLUMNSCHEMA table. numInferredPaths is the number of key paths inferred for the segment. schemaSize is the size of schema_json in bytes. Bugfix: Added safety checks around dropping databases without exclusive access to storage. Bugfix: Fixed an issue where explicitly defining a JSON type in the RETURNS of a Wasm TVF can cause an error when it is run. Bugfix: Fixed a rare crash scenario. Bugfix: Fixed an issue that could lead to the risk of undefined behavior when running DROP EXTENSION IF EXISTS. Enhancement: IN-lists will now use hashmap optimization for matching parameters in *MATCHANY statements. Enhancement: Made defaultdistributedddl_timeout a sync variable on all nodes. Enhancement: Modified the conversion logic for converting from VECTOR to BSON types. Now, when casting VECTOR(F32), it will generate a BSON array of doubles, rather than a combination of numeric types. Bugfix: Addressed a family of issues relating to optimal execution of JSON extracts on schema'd data. Bugfix: Fixed an issue with regression in replay performance for databases with many tables. Enhancement: Added support for usage of vector built-ins with string/JSON arguments without requiring an explicit typecast to VECTOR (e.g., 'SELECT @vec<*> '[1,2,3]'). Enhancement: Added spilling metrics for TopSort (ORDER BY with LIMIT). Enhancement: Added support to the VECTOR built-ins for all VECTOR elements types (e.g., F32, F64, I8, I16, I32, and I64). Enhancement: Added support for creating numeric histograms based on JSON values. (Feature flag gated, contact your SingleStore representative) Enhancement: Added new metrics to memsqlexporter based on the informationschema.mvsysinfodisk columns. Enhancement: Creation of computed columns with VECTOR data type is now allowed. Enhancement: Added ability for informationschema.optimizerstatistics to display JSON keys. Enhancement: Added ability to skip eviction when log replaying hits the blob cache space limit, providing a greater chance to succeed. Enhancement: Improved JSON histograms to support numeric histograms, analyze command, infoschema.optimizerstatistics, and displays JSON histograms. Enhancement: Added support for warming blob cache with table's column data. Syntax is: OPTIMIZE TABLE <tablename> WARM BLOB CACHE FOR COLUMN <columnnames>: ``` OPTIMIZE TABLE t WARM BLOB CACHE FOR COLUMN c1, c2; OPTIMIZE TABLE t WARM BLOB CACHE FOR COLUMN *;``` Enhancement: Disabled the default semi join reduction rewrite. Enhancement: Now recognize more EXTRACT/MATCH functions as candidates for pushdown/computed column matching. Enhancement: Replaced expression pushdown (EPD) approach with more complete version that does not over project rows. 
Bugfix: Fixed an issue where the existing websocket connection would close when variables are updated in global scope. Bugfix: Removed the hard-coded 'collation_server' from constant.StrictModeParams. Bugfix: Fixed an issue with a sensitive information leak inside of out-of-memory reports. Bugfix: Fixed the result collation from VECTOR built-ins to be utf8mb4_bin instead of binary. Bugfix: Fixed an issue with nested extracts in the JSONMATCHANY() statement. Bugfix: Fixed an issue with backup restore when the backup has a corrupted GV timestamp in the about snapshot" }, { "data": "Enhancement: The new engine variable parametrizejsonkeys allows JSON keys to be parametrized and plans can be reused. Bugfix: Fixed an issue that causes CREATE TABLES AS INFER PIPELINE to fail with an Out Of Memory error. Bugfix: Fixed an issue with exporter not working with Public-Key Cryptography Standards #8(PKCS#8) pem format. Bugfix: Fixed an issue with potential crashes when specifying a vector index in index hint. Bugfix: Fixed a crash that occurred when trying to grant access to the table with no context database in per-privilege mode. Bugfix: Fixed a memory issue, when calling a Table-Valued Function (TVF) that has arguments with default values. Bugfix: Fixed an issue with the RG pool selector function return type check. Enhancement: Added ability for informationschema.mvconnectionattributes to show tlsversion and tls_cipher for connections where SSL is enabled. Enhancement: Added ability for predicate pushdown with NOW() and user defined functions (UDF). Enhancement.: Reduced amount of memory used by unlimited storage file listing, including point-in-time recovery (PITR). Enhancement. Added an engine variable, innodblockwait_timeout. This variable exists for backwards compatibility with MySQL and is non-operational in SingleStore Helios. Enhancement. Optimized performance when using VECTOR built-ins with the VECTOR data type. Bugfix: Fixed an issue where a backup would inadvertently carry metadata about the retention period of the backed-up unlimited storage database. Bugfix: Fixed the change to aggregator activity queries in memsql_exporter not being applied properly. Bugfix: Fixed an issue withSTART PIPELINE FOREGROUNDskipping files in some circumstances. Enhancement: Added ability to memsql_exporter to always return all sub-activities of an aggregator activity. Enhancement: Improved ability to configure Kafka extract/egress by allowing additional options. Enhancement: Improved performance for certain query shapes used by Kai. Bugfix: Fixed the query-events endpoint link on the memsql_exporter landing page. Bugfix: Fixed an issue where a segmentation fault (crash) appeared after terminating a query on the remote node. Bugfix: Fixed an issue with crashes occurring when using certain JSONEXTRACTs in the predicate expression of JSONMATCH_ANY(). Bugfix: Fixed an issue where the number of rows affected with SELECT INTO object storage was erroneously reported as 0 rows. SmartDR SmartDR creates and manages a continuous replication of data to a geographically separate secondary region thereby allowing you to failover to the secondary region with minimal downtime. For more information, refer to Smart Disaster Recovery (DR): SmartDR Database Branching Database branching enables you to quickly create private, independent copies of your database for development, testing and other scenarios. 
For more information, refer to Database Branching Extensions Extensions in SingleStore allow you to combine user-defined objects, such as UDFs or UDAFs, into a packaged archive (the extension) and then create, manage, and deploy these objects and other resources using a single command. Extensions support both Wasm-based and PSQL functions. For more information, refer to Extensions. Trace Events and Query History Added the ability to trace query completions as events, which is the initial installment of the larger event tracing framework. The Query History feature relies on query event tracing, and can be used to display query trace events over time. The Query History feature can therefore be used to troubleshoot and optimize query performance, including, but not limited to, tracing and recording expensive queries, resolving unexpected slowdowns, and viewing and optimizing workloads in real time. Refer to Query History for more information. Improved Memory Management for Resource Pools Added the QUERYMEMORYPERCENTAGE option for resource pools, which restricts memory usage in the pool on a per-individual query" }, { "data": "This in contrast to MEMORY_PERCENTAGE which restricts usage based on total memory used within the current pool. For example, when creating or altering a resource pool, setting MEMORYPERCENTAGE to 60% and QUERYMEMORY_PERCENTAGE to 50% would configure the system so that all queries running within the specified resource pool should together use a maximum of 60% of system memory, and any single query running within the pool should use, at most, 50% of system memory. Example syntax: ``` CREATE RESOURCE POOL rpoolmain WITH MEMORYPERCENTAGE = 60, QUERYMEMORYPERCENTAGE = 50, SOFTCPULIMITPERCENTAGE = 65, MAX_CONCURRENCY = 40;``` Load Data Updates SingleStore now supports loading data using the Change Data Capture (CDC) pipelines from the following data sources: MongoDB and MySQL. Refer to Replicate MongoDB Collections to SingleStore or Load Data from MySQL for information on loading data from the respective data source. Enhancement: Other Performance Enhancements SingleStore now supports creating shallow copies of tables. The WITH SHALLOW COPY feature copies an existing table and creates a new table that will have the same structure as the original table. The data is not physically copied to the new table, but referenced against the original table. SingleStore now supports sorted scan query plan operators for queries containing ORDER BY/LIMIT clauses when utilizing flexible parallelism. Before this enhancement, there could be performance regressions for this query shape using flexible parallelism. Improved performance when completing large sets of security operations (creating a lot of groups/users/roles, etc.). Added the ability to use named argument notation when calling a PSQL SP or function. Can reduce total lines of code and make code more readable. Added reduction of memory pre-allocation during columnstore JSON reads. Added ability to check if all leaf node partitions are available, before processing new batches. Addressed a table resolution issue for embedded recursive Common Table Expressions (CTEs). SingleStore now natively supports the BSON data type. Enhancement: Query Optimization Enhancements Added support for Row Count and Selectivity hints in views. Added new join logic to recognize when a non-reference table is being joined exclusively to reference tables and then gather the non-reference table to avoid duplicating work across every partition. 
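The CREATE RESOURCE POOL example quoted earlier in this entry is easier to read with the option names the surrounding prose spells out (MEMORY_PERCENTAGE and QUERY_MEMORY_PERCENTAGE, alongside the existing SOFT_CPU_LIMIT_PERCENTAGE and MAX_CONCURRENCY options); the pool name below is a guess at the original:

```sql
-- Pool-wide ceiling of 60% of memory; any single query in the pool capped at 50%.
CREATE RESOURCE POOL rpool_main WITH
    MEMORY_PERCENTAGE = 60,
    QUERY_MEMORY_PERCENTAGE = 50,
    SOFT_CPU_LIMIT_PERCENTAGE = 65,
    MAX_CONCURRENCY = 40;
```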
Remove redundant aggregation functions and GROUP BY statements. JSON expressions are properly pushed down. Allow columnstore optimization for JSONMATCHANY with JSON_EXTRACT in predicate. Added support for hash joins on null-accepting expressions in the ON condition of outer joins. Automatically rewrite A=B OR (A IS NULL AND B IS NULL) to null safe equal (A<=>B) so that many important optimizations (e.g. shard key joins, hash joins) will work. Perform a subselect to join rewrite in an UPDATE statement when there are multiple columns in the subselect. Removed some query shape lockdowns. Added support for flipping join order for full outer join. Improved performance by not executing query optimization procedures for read queries during the process of persistent plan cache lookup. This optimization strategy has resulted in improved lookup performance, leading to faster data retrieval operations. Added support for statistics on correlations between columns in cases where highly correlated filters are used. The Data API now supports HTTPS for connections where sslkey is encrypted with sslkey_passphrase. Removed parametrization of LIMIT 0 and LIMIT 1 to unlock more rewrites, especially for subselects. Modified computed column matching to accurately evaluate JSON expressions containing equality and non-safe equality comparisons. Fixed an issue where filtering with a JSONEXTRACT<type> function performs inconsistently. The query optimizer now considers more LEFT JOIN elimination cases. Enhancement: New Information Schema Views and Columns Added a new view, correlatedcolumnstatistics, to provide metadata on correlated" }, { "data": "Added a new view, RESOURCEPOOLPRIVILEGES, to provide information about resource pool grants and privileges. Added the following new columns to MVBACKUPHISTORY: error_code: Error code for failed backups. error_message: Error message for failed backups. Added the following new columns to MVSYSINFODISK: readoperationscumulativeperdevice: Number of read operations performed by the device since start up. writeoperationscumulativeperdevice: Number of write operations performed by the device since start up. devicename: Name of the device to which the values in readoperationscumulativeperdevice and writeoperationscumulativeper_device are associated. Added the following to support trace events that are used by the Query History feature: MVTRACEEVENTS: A snapshot of all trace events, the size of which is dictated by the traceeventsqueue_size variable MVTRACEEVENTS_STATUS: A view that reflects the status of current trace events LMVTRACEEVENTS: A snapshot of each node's trace events Added the following new column to ADVANCEDHISTOGRAMS, L/MVQUERYPROSPECTIVEHISTOGRAMS, and L/MVPROSPECTIVEHISTOGRAMS: JSONKEY: an entry for each (column, jsonkey) pair. For non-json columns JSON_KEY is NULL. Added the blobcachemissb and blobcachewaittimems columns to the following information schema views: informationschema.plancache, informationschema.MVACTIVITIES, informationschema.MVACTIVITIESCUMULATIVE, informationschema.MVTASKS, informationschema.MVFINISHEDTASKS, and informationschema.MVQUERYACTIVITIESEXTENDED_CUMULATIVE. Bugfix: Updated theinformation_schema.USERSview to reflect the account status for locked users. Enhancement: New Commands and Functions Added support for the REGEXP_MATCH() function. This function returns a JSON array of matching substring(s) within the first match of a regular expression pattern to a string. 
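A short sketch of the null-safe-equality rewrite mentioned above; the table and column names are illustrative:

```sql
-- As written by the application: the OR form blocks shard-key and hash joins.
SELECT * FROM t1 JOIN t2
    ON t1.a = t2.a OR (t1.a IS NULL AND t2.a IS NULL);

-- What the optimizer now rewrites it to: the null-safe equality operator,
-- which the join optimizations can match.
SELECT * FROM t1 JOIN t2
    ON t1.a <=> t2.a;
```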
Added support for CUBE and ROLLUP grouping operations to Wasm-based user-defined aggregate functions (UDAFs) in SingleStore. For more information, refer to CREATE AGGREGATE. Added the following to support trace events that are used by the Query History feature: CREATE EVENT TRACE to create a trace event DROP EVENT to drop a trace event Added ability to use DELETE on identical keys with the ON DUPLICATE KEY clause. This is in addition to existing \"upsert\" support with ON DUPLICATE KEY UPDATE. This allows new scenarios such as the ability to manage streaming aggregation with INSERT ON DUPLICATE KEY UPDATE ELSE DELETE . INFER PIPELINE for MongoDB CDC-in now generates tables with BSON column types. Enhancement: New or Modified Engine Variables Added enableidletableoptimizations and enableidletableeviction, which are used to reduce table memory overhead for idle tables on a cluster. The feature is enabled by default on all new and existing clusters. The variable enableidletableoptimizations needs to be set at runtime and requires a restart for changes to take effect. It can be set to OFF or ON (default). The other related variable, enableidletableeviction, can be set during a session (though a very small amount of overhead will remain until the server is restarted) and can be set to Full, SkipListsOnly, and Off. It defaults to SkipListsOnly, which means SingleStore will only evict skiplist indexes for idle tables on the cluster. Full means it will evict skiplists and table modules, and Off means no eviction. Added privilegecachesupdate_mode, which can be used to address some performance issues that occur when performing large sets of security operations (creating a lot of groups/users/roles, etc.). Added the optimizeruseaverage_rowsize engine variable which can be used to now allow row size estimations in query optimization costing. The queryparallelism engine variable (which was deprecated in 8.1) now is non-functional. To modify Flexible Parallelism settings, use queryparallelismperleaf_core instead. Added the useuserprovidedindextypesinshow engine variable which controls what will be displayed via the DESCRIBE <table_name> or SHOW COLUMNS syntaxes for backward compatibility. Added the throttlereplaybehindtailmb engine variable which controls how far the system allows a child aggregator to lag behind replicating the transaction log of it's master aggregator before throttling is applied to the master" }, { "data": "Added the ability to use the ANALYZE command with JSON keys to create histograms when the new engine variable enablejsonstatistics is enabled. The engine variable enablejsonstatistics is disabled by default. Added traceeventsqueue_size to capture trace events, the first of which is query event tracing. This engine variable is enabled by default (set to a value of 16 MB, where the value must be provided in bytes). Refer to Query History for more information. Added the optimizerdisabletransitive_predicatesengine variable which disables predicate transitivity on query rewrites if set to TRUE. This engine variable defaults to FALSE. Added a new engine variable pipelinescdcjavaheapsize to specify the JVM heap size limit for CDC-in pipelines. Added the bottomlessexperimentalblobstore_mode engine variable, which when enabled, completes additional verification of persisted files immediately after upload. This mode is experimental and may reduce upload speed. Please use with caution. 
The new engine variable maxtablememoryroommb sets the maximum amount of memory required when creating or attaching a database. Configuring this engine variable allows more control over whether a detached database can be reattached. The engine variable backupmultipartupload_concurrency maximum value has been increased to 15. Miscellaneous Enhancements and Bugfixes: Enhancement: Introduced a new VECTOR data type that is recommended over the BLOB data type for vector operations and use with ANN indexes. The VECTOR type improves code readability, error checking, and reduces total lines of code. Enhancement: Added support for DDL forwarding for CLEAR BACKUPHISTORY.CLEAR BACKUPHISTORY works on the DML endpoint now. Enhancement: Improved retry logic for connection reset: write errors. Enhancement: SingleStore now natively supports the BSON data type. Enhancement: Added support for collection.exclude.list, database.include.list, and database.exclude.list parameters to the CONFIG/CREDENTIAL clause of the CREATE AGGREGATOR PIPELINE ... AS LOAD DATA MONGODB statement. Refer to Replicate MongoDB Collections to SingleStore for more information. Enhancement: Added support for Approximate Nearest Neighbor (ANN) vector search using inverted file (IVF) and hierarchical navigable small world (HNSW) indexes, and variants of them based on product quantization (PQ). Enables support of larger-scale semantic search and generative AI applications. Enhancement: Increased name length limit to 256 characters for tables, views, table columns, view columns, procedures, functions, and aliases. Enhancement: Added the ability to truncate plancache file names if they exceed the operating system's specified limit (255 bytes). Bugfix: Both SHOW PIPELINES and SELECT * FROM information_schema.pipelines now show consistent pipeline state information across master and child aggregators. Bugfix: Specific pipeline built-ins like pipelinesourcefile() and pipelinebatchId() should not be used in UPSERT clause when creating a pipeline. Bugfix: Improved the ability to terminate expressions containing JSON built-ins. Enhancement: Added error-handling details for pipelines, including state, error, and performance-related metrics through monitoring solutions. Enhancement: MemSQL Procedural SQL (MPSQL) has been renamed to simply Procedural SQL (PSQL). The name change will only show in some SHOW command output and information schema views. For example, SHOW FUNCTIONS output changed. Enhancement: Added support for %ROWTYPE and %TYPE for use in declaring scalar type variables and parameters. Employing these abbreviations in Procedural SQL (PSQL) can lead to a reduction in the required lines of code. Enhancement: Introduced blob cache profiling metrics for columnstore tables on unlimited storage databases. Refer to the PROFILE page for more information on what blob cache metrics are available. Enhancement: Added infix operators for dotproduct (<*>) and euclideandistance (<->). Enhancement: Added ability to delay retry attempts for pipeline retries. Bugfix: Updated the output for the Keyname and Indextype columns in the SHOW INDEX, SHOW INDEXES, and SHOW KEYS commands for primary keys on columnstore" }, { "data": "Refer to the SHOW INDEX, SHOW INDEXES, SHOW KEYS page for more information. Bugfix: Improved the error message displayed when trying to create a primary key on an existing table. Bugfix: Improved the error message displayed when a GRANT command fails due to missing permissions. 
The error message will now show the missing permissions: ``` GRANT SELECT, UPDATE, DELETE, EXECUTE ON . TO test2;``` ``` ERROR 1045 (28000): Current user is missing UPDATE, DELETE permission(s) for this GRANT``` Bugfix: Fixed a case when UNIX_TIMESTAMP() was incorrectly returning 999999999.999999 for DATETIME data types with precision. Bugfix: Fixed an issue where a user could be erroneously marked as deleted. Bugfix: Fixed an issue where a child aggregator could crash if it ran out of memory during query forwarding. Bugfix: Fixed an issue with blob cache LRU2 eviction that could occur when a query fetches a blob, evict it, and fetches it again. Bugfix: Fixed an issue that could cause information for a blob to be missing from an Information Schema table. Bugfix: Disk Spilling now takes the resource pool settings into consideration. Enhancement: Auto user creation is deprecated and the NOAUTOCREATE_USER variable is enabled by default. Bugfix: Fixed an erroneous access denied issue to views selecting from shard tables with computed columns. Bugfix: Fixed a rare issue where the incorrect timezone could be used in logging. Bugfix: Fixed an issue where using user-defined variables inside ORDER/GROUP BY statements could cause a crash. Enhancement: Added column name to error messages when invalid date/time is inserted into a column. Enhancement: Added BSON columnstore functionality over Parquet storage. Enhancement: Added support for SELECT ... INTO KAFKA using OAUTH credentials. Bugfix: Prevent the ability to create a Kafka Pipeline using Parquet Format. Bugfix: Queued Time is now excluded from the cost estimate for workload management leaf memory. Bugfix: Specific error messages are now logged for GCS subprocess failures. Bugfix: Fixed a network communication error that occurred after query rewrites. Bugfix: Fixed an error that occurred when a user attempts to access a view that is based on a table that has no privileges granted on it. Enhancement: Improved the ability to kill queries containing JSON built-in functions. Bugfix: JSON_KEY escape characters were not working as expected. Bugfix: ALTER PIPELINE setting maxpartitionsper_batch to use a default 0 value is now allowed. Enhancement: Improved performance for JSONEXTRACT<type> built-ins in ORDER BY clauses. Enhancement: Now suppressing a harmless traceSuspiciousClockUpdate trace message during recovery. Bugfix: Fixed an issue where INSERT...SELECT queries with a partition_id() filter generating an error. Bugfix: Fixed an issue with memory crashing when using REGEXPMATCH, JSONINCLUDEMASK, or JSONEXCLUDE_MASK built-ins. Enhancement: Improved performance by optimizing joins on TABLE(JSONTOARRAY()) queries. Bugfix: Fixed an allocation issue that caused poor performance on high load insertion queries. Enhancement: Enabled support for external UDFs used in INSERT statements with multiple VALUE clauses. Enhancement: Added BSON fundamentals and column type support for SingleStore Kai. Bugfix: Fixed a bug that could result in unrecoverable databases if the database had 1024 or more tables. Bugfix: Fixed an optimization out-of-memory issue cause by operators generated for wide user tables. Bugfix: Fixed a crash that occurs in rare scenarios involving ALTER TABLE and failovers. Bugfix: Fixed ineffective search options that change in subsequent vector search queries. Bugfix: Resolved an issue related to FROMBASE64 and TOBASE64 builtins when processing large string inputs thereby preventing potential errors in reading communication packets. 
Bugfix: Fixed the code involved in backups to improve download error" }, { "data": "Bugfix: Fixed an bug that could cause scans using non-unique indexes which could return incorrect results or cause crashes. Enhancement: Added support for creating a DEEP COPY of tables with computed columns. Enhancement: Added additional config and credential option validation while creating pipelines. Bugfix: Fixed an issue where valid LINK config and credential parameters were not supported for both reading from and writing to a datasource. Enhancement: Notebooks now have autosave (currently saves every 5 seconds). Enhancement: Added additional node metrics to the /cluster-metrics endpoint of the memsql_exporter. Added three new fields to MVSYSINFODISK: READOPERATIONSCUMULATIVEPERDEVICE: Number of read operations performed by the device since start up. WRITEOPERATIONSCUMULATIVEPERDEVICE: Number of write operations performed by the device since start up. DEVICENAME: Name of the device to which the values in readoperationscumulativeperdevice and writeoperationscumulativeper_device are associated. Enhancement: Changed the SingleStore Helios workspace default and range for the pipelinescdcrowemitdelay_us engine variable. Throttling default value is set to 1. The supported range is from 0 to 1000000. Enhancement: Addressed some performance issues that occur when performing large sets of security operations (creating a lot of groups/users/roles, etc.) via the new privilegecachesupdate_mode engine variable. Enhancement: Improved performance during snapshotting for CDC-in pipelines. Enhancement: Added ability to infer CSV data with text boolean values. Enhancement: Added support for simple multi-column update with sub-query. Enhancement: Added ability to use SSL keys with a password in the HTTP API. Bugfix: Fixed an issue preventing nodes from attaching during upgrade. Bugfix: Fixed an upgrade issue where some databases could temporarily become unrecoverable if snapshots were skipped during a pre-upgrade on a recently attached unlimited database. Enhancement: The singlestore_bundle.pem file, which SQL clients can use to connect to SingleStore Helios, will be updated as of October 20, 2023. As a consequence, connecting to SingleStore Helios may not be possible until this file has been (re-)downloaded. Refer to Connect to SingleStore Helios using TLS/SSL for more information. New Feature: SingleStore Spaces - Find a gallery of notebooks to learn about scenarios that SingleStore covers at: https://www.singlestore.com/spaces/ Enhancement: Notebooks have been improved with the addition of the following features: SQL Notebooks Hints to connect to external sources for the notebook firewall settings Performance improvements around loading time Jupyterlab 4.0 New Feature: Datadog integration. Monitor the health and performance of your SingleStore Helios workspaces in Datadog. Enhancement: Enhanced the performance of DDL statements for role manipulation. Enhancement: Added two engine variables, jwksusernamefield and jwksrequireaudience to add flexibility and improve security. Enhancement: Added two engine variables: maxexpressionquerylimit which sets a limit on the number of expressions within an entire query and maxexpressionitemlimit which sets a limit on the number of expressions within a query item. Both can be set to a range between 100 and the maximum unsigned INT value. Setting these engine variables to the maximum unsigned INT value disables both features. 
Enhancement: Added support for materializing CTEs without recomputing them when the query contains UNION, UNION ALL, and other SET operations. To enable the feature, set the engine variableallowmaterializectewithunion to TRUE. Bugfix: Fixed several issues causing slow compilation for queries over wide tables. New Feature: Persistent Cache/Disk Monitoring - Monitoring dashboard to help explain \"What's consuming Persistent Cache\" as well as the Blob Cache downloaded/evicted rate. Bugfix: Fixed an issue where a crash occurs where the engine improperly rewrites queries with a UNION in an EXCEPT clause. New Feature: Added support for the REGEXP_MATCH() function. This function returns a JSON array of matching substring(s) within the first match of a regular expression pattern to a string. Enhancement: Improved performance of multi-part GCS" }, { "data": "Enhancement: Improved memory consumption in json decoding. Enhancement: Introduced a new global engine variable, jsondocumentmax_leaves which limits the number of JSON key paths inferred within a segment. The default value is 10000. Enhancement: Introduced a new global variable, drminconnectiontimeoutms, which allows users to adjust the minimum timeout period in Disaster Recovery (DR) replication. Enhancement: Added support for multiple uncorrelated IN-subselects in more query shapes. Enhancement: SKIP PARSER ERRORS is now supported for Kafka. Additionally, a new related engine variable, pipelinesparseerrors_threshold, has been added. Enhancement: SingleStore automatically rewrites A=B OR (A IS NULL AND B IS NULL) to null safe equal (A<=>B) to enable hash joins. Enhancement: Added support for hash joins on null-accepting expressions in the ON condition of outer joins. Bugfix: Fixed an issue with significant memory reduction from spilling for hash join queries with variable length strings (varchar, text, etc.) involved. Bugfix: Fixed an issue where BACKUP DATABASE WITH INIT would fail under out-of-memory conditions. Bugfix: Fixed a potential crash in NFS backup when encountering an IO error. Enhancement: Added support for Control Group v2 (cgroup v2). Bugfix: Fixed an issue where a large spike in the query memory consumption on a any SingleStore node could cause replicas on the same node to become unrecoverable. Bugfix: Fixed a pipeline wrong state issue caused by an error in a table with a computed column. Bugfix: Fixed a potential issue where REBALANCE PARTITION would not stabilize to a partition placement on a read replicas workspace. Bugfix: Fixed a potential crash in some edge cases when using Parquet pipelines. Enhancement: The SingleStore Python Client is now the standard for our notebooks. This upgrade supports the ingestion of dataframes with specialized data types, including geospatial or vector data, into the database. Additionally, it incorporates the Ibis component, enabling Python to directly interact with the database. This allows dataframes to be executed within the database itself, greatly enhancing performance. Enhancement: Default values for BLOB/TEXT and JSON columns are allowed, as well as NULL and empty strings. Bugfix: Fixed a crash that may occur when using SELECT FOR UPDATE LIMIT <n> in a multi-statement transaction when twophasecommit is ON. Bugfix: Fixed a bug where the permissions do not clear for Data Manipulation Language (DML) queries and then leak to subsequent Data Definition Language (DDL) queries. Bugfix: Fixed a bug where the mvdiskusage table would show incorrect results for the plancache directory. 
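To make the new REGEXP_MATCH() function above concrete, a minimal sketch follows (the sample string and pattern are invented for illustration, and the argument order of string first, pattern second is an assumption):
```
-- Returns a JSON array with the matching substring(s) of the first match, e.g. ["1234"].
SELECT REGEXP_MATCH('order-1234-shipped', '[0-9]+') AS first_match;
```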
Bugfix: Fixed a crash that may occur when a function calls more than one external function. In the following example get_embedding() is an external function: ``` SELECT DOTPRODUCT(JSONARRAYPACK(getembedding('house')), ``` Enhancement: Added an opt-in optimization to Kerberos for HDFS pipelines to reduce the amount of requests. Bugfix: Fixed a JSON formatting issue for the PARTITIONUNRECOVERABLE event in MVEVENTS details column. Bugfix: Fixed the dependency on PIPESASCONCAT sqlmode inside user defined functions. The sqlmode state is now stored when a user defined function is created and used whenever the function is executed. Enhancement: Introduced the 8.1 dataconversioncompatibility_level which provides additional out-of-range checks for the TIMESTAMP data type. Bugfix: Fixed a bug where a HashJoin on a materialized CTE gets executed as a NestedLoopJoin in some query shapes causing slow join performance. Bugfix: Fixed the SPLITstring function that was not detecting the correct argument type when used inside a table-valued function (TVF) or a user-defined scalar-valued function (UDF). Bugfix: Reduced memory usage in situations where tables are repeatedly created and dropped. Enhancement: Added mvdiskusage and mvdatadisk_usage information schema tables that report the breakdown of disk utilization by SingleStore" }, { "data": "Enhancement: Added the maxexecutiontimeengine variable. It is unused and setting this variable has no effect. It exists so certain MySQL-compatible apps that have this variable set will not fail when connected to SingleStore Helios. Bugfix: Fixed an issue on sample tables that can be created with different collations on different partitions. Bugfix: Fixed an issue where JSON computed column matching incorrectly triggers a JSON function. Bugfix: Fixed a rare case where the query optimizer re-wrote a query into a non-equivalent query which produced more result rows. Bugfix: Fixed unlimited storage S3 multipart uploads to properly retry on 200 responses which contain an embedded \"SlowDown\" error, instead of failing immediately. Bugfix: Fixed an issue where memory accumulates over time if reshuffle or repartition operators are executed for many unique query shapes. Bugfix: Fixed an issue with crashes that occur when SSL is enabled on universal storage databases. SingleStore Kai (Preview) SingleStore Kai allows you to run MongoDB queries natively in a SingleStore Helios workspace. This feature is currently available in these regions. You can enable this feature while creating a workspace. Each SingleStore Kai-enabled workspace has an additional mongodb:// endpoint, that can be used to connect from supported MongoDB tools/applications to run MongoDB queries. For more information, refer to SingleStore Kai. Here's a few additional references: Migrate from MongoDB to SingleStore MongoDB to SQL Mapping SingleStore Extension Commands Supported MongoDB Commands, Data Types, and Operators New Feature: Introduced a columnstore row data structure (COLUMN GROUP) that will create a materialized copy of individual rows as a separate structure in a columnstore table. This index will speed up full row retrieval and updates. New Feature: SingleStore Helios now supports creation of Wasm-based user-defined aggregate functions (UDAFs). Refer to CREATE AGGREGATE for more information. Enhancement: Introduced the autostatsflushinterval_secs engine variable. It determines when autostats are flushed to disk if they are not used within the specified time. The default value is 600 seconds. 
If the engine variable is set to 0, autostats will always stay in memory. Enhancement: Introduced two new sync variables that control the dynamic reuse of WM queues: workloadmanagementdynamicresourceallocation and workloadmanagementqueuesizeallowupgrade. The default value for workloadmanagementqueuesizeallowupgrade is 1. This means we can upgrade a medium queue to a large queue until the large queue becomes equal to 1. Enhancement: Added two new monitoring dashboards: Pipeline Summary and Pipeline Performance. Currently, both dashboards are in preview mode. Enhancement: Improved JSON encoding speed for sparse schemas (JSON schemas with a very large number of distinct property names across the full set of documents, with most properties missing in each document). This applies only to universal storage. Enhancement: A disk manager now brokers the usage of disk space between competing uses like data, log, cache, and spilling. Enhancement: Added capability to create Wasm functions in SingleStore Helios workspaces with heterogeneous hardware, e.g., with different levels of SIMD support. New Feature: Added a METADATA_ONLY argument for shard keys which will prevent an index from being created on shard keys, thereby saving memory. It can cause queries that would have used that index to run slower. Enhancement: Reduced processing costs for queries containing an ORDER BY clause by minimizing the amount of data that needs to be processed. This is achieved by only projecting the primary key at the lower levels of the query plan and then joining the primary key with the original data to retrieve the remaining columns. This can significantly improve performance when dealing with large amounts of data and when only a small portion of the data needs to be retrieved. New Feature: Added support for query profiling that is efficient enough to be enabled all the time by introducing the autoprofiletype engine variable with FULL and LITE options for automatic profiling. The enableautoprofile engine variable must be set to ON for the FULL or LITE options to work. Enhancement: The updated full-text search tokenizer in version 8.1 supports utf8mb4. The tokenizer properly tokenizes emojis and other extended 4-byte characters. In addition, certain emoji and glyph characters in utf8mb3 are also recognized as their own token rather than being treated as blanks. Enhancement: Enhanced support for recursive common table expressions (CTEs) by expanding the range of query shapes allowed, improving column type checking and resolution between the base and recursive branches, and adding cross database support. Also, resolved the issue that caused the \"ERUNSUPPORTEDRECURSIVECTESHAPE\" error with the accompanying message about dependency on a streaming result table outside the cycle. New Feature: Added two Information Schema views for tracking connection attributes: LMVCONNECTIONATTRIBUTES and MVCONNECTIONATTRIBUTES. Enhancement: Added support to optimize table scans with an Order-By-Limit by (a) doing the order and limit first, with only the minimum needed columns, and then (b) using a self-join to retrieve the additional necessary columns for only the qualifying rows. This optimization can be enabled/disabled via the optimizerenableorderbylimitself_join session variable, which is ON by default. New Feature: Added ATTRIBUTE and COMMENT fields for users. These can be set via the CREATE USER and ALTER USER commands. The values are shown in the INFORMATION_SCHEMA.USERS view.
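A sketch of how the new ATTRIBUTE and COMMENT fields for users might be set and inspected (the user name, password, and JSON payload are illustrative only, and the exact clause placement assumes MySQL-style syntax):
```
-- Create a user with a comment, attach a JSON attribute, then check the USERS view.
CREATE USER 'report_user'@'%' IDENTIFIED BY 'example-password' COMMENT 'read-only reporting account';
ALTER USER 'report_user'@'%' ATTRIBUTE '{"team": "analytics"}';
SELECT * FROM INFORMATION_SCHEMA.USERS;
```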
New Feature: Notebooks feature (Preview) is available to all customers through our Cloud service. It gives users the ability to marry SQL and Python interoperably and in a secure way. This is available in the Develop section within the Cloud Portal. Bugfix: The sqlmode session variable now shows up in SHOW VARIABLES and the mvglobal_variables view. Bugfix: Fixed an issue that could cause an incorrect error message to be generated on an out of memory error. Bugfix: Fixed an issue where a NULL could be pushed down to a union which could cause the type-cast/collation to be lost, resulting in an error: \"Operation 'Scanning a hash join build' is not allowed\". Bugfix: Fixed an issue with DATETIME columns with ON UPDATE CURRENT_TIMESTAMP defaults that could cause incorrect results. Enhancement: Removed EXECUTE privilege requirement for Kai internal UDFs. Enhancement: Changed system information built-ins to use utf8generalci collation for the collation_server. Enhancement: Removed the restrictions on the names of JWT users. Customers may now create users with names that look like UUIDs and emails. Enhancement: Improved the performance of a left join of a reference table with a sharded table by pushing the reference table clauses to the leaves. The engine variable disablereferencetableleftjoinwhere_pushdown must be set to \"OFF\" to enable this operation. Bugfix: Fixed an issue with optimizing queries containing no-operation predicates such as 1=1. Bugfix: Fixed rare issue with sync durability and unlimited storage that could cause log file corruption in the blob storage. Enhancement: Added more information about blob files that need to be downloaded from unlimited storage to the blob cache when running PROFILE. Bugfix: Fixed an issue where the query optimizer could choose incorrectly between using the HashGroupBy or the StreamingGroupBy operators. Bugfix: Disabled semi-join operator reduction optimization. Bugfix: Fixed an issue where errors could occur during backup when large files are" }, { "data": "Bugfix: Fixed an issue where S3 pipelines could stop loading data from the bucket if files are deleted from the bucket while the pipeline is active. Bugfix: Fixed an issue where, in rare cases, an UPDATE statement with a left join in the filter clauses could fail to execute. Bugfix: Fixed an issue where replica blobs could be loaded into the blob cache with an incorrect LRU timestamp. New Feature: Added support for LOAD DATA from S3 for Avro and Parquet data. New Feature: Added JSONINCLUDE/EXCLUDEMASK function. When applied to a JSON document it will return a subset of the document based on the mask. Bugfix: Fixed a potential crash issue inUPDATEqueries that involve joins and that have scalar subselects inSETclauses. Bugfix: Fixed an issue where runningDROP PLAN FROM PLANCACHE ... FOR QUERYon a query with invalid syntax could cause a crash. Enhancement: Updated timezone metadata to include Mexico's latest timezone change. Enhancement: Added a new information schema viewinternaltablestatisticswhich shows memory use of SingleStoreinternal metadata tables. The columns displayed are the same as those shown fortable_statistics. Bugfix: Fixed a column name resolution issue for recursive CTEs when the column type is the same across the base branch and the recursive branch. Bugfix: Fixed an issue where CLEAR ORPHAN DATABASE could cause data loss if run when the master aggregator has a database in an unrecoverable state. Bugfix: Added the option to use HTTPS with memsqlexporter. 
To use HTTPS, enable the engine variablesexporterusehttps,exportersslcert, andexporterssl_key. Bugfix: Fixed an issue with missing row counts during optimization when sampling is disabled. Bugfix: Fixed an issue where background mergers were not stopping quickly enough, resulting in delayed DDL operations or REBALANCE commands. Bugfix: Fixed an issue in information schema view JSONCOLUMNSCHEMA where incorrect details were being shown for the leaf columns. Bugfix: Fixed an issue where the memory used by an external function was not being freed quickly enough. Enhancement: The defaultpartitionsper_leaf global variable will no longer be user-settable in SingleStore Helios, ensuring that it's always defined for the most optimal performance, according to the current resource size. Bugfix: Fixed an issue where filters using the rangestats column in the informationschema.optimizer_statistics table were not allowed. Bugfix: Fixed an issue where incorrect results could be returned when segment-level flexible parallelism is used inside a subquery containing both a shard key filter and either an aggregate, a window function, or the limit clause. Bugfix: Fixed an issue where an internal result table was not created for a recursive CTE involving data across databases. New Feature: Added self-service historical monitoring that allows you to quickly and easily understand your application workloads and debug performance-related issues. Enhancement: Improved column type resolution for base and recursive branches in recursive common table expressions (CTEs). Bugfix: Fixed an error that could occur when attaching databases with a snapshot file of greater than 5 GB. Bugfix: Fixed a bug where too many rows are sampled from columnstore tables with more than 1 billion rows. Bugfix: Fixed an issue with histogram estimation in columns with a negative value TIME type. Bugfix: Fixed \"table partition count mismatch errors\" that occur due to the following conditions: the system variable enableworkspaceha is set and there is an upgraded workspace with an attached read replica. Bugfix: Fixed an issue with DDL endpoint queries using a lower-than-specified query parallelism setting when workspaces are enabled. Bugfix: Fixed a bug that prevents GROUP BY Push-Down optimization if the join filter contains a mismatched column type. Bugfix: Fixed a data conversion compatibility level 8.0 error that may occur when sampling columnstore" }, { "data": "Bugfix: Fixed a possible deadlock that may occur between the blob cache and the Rowstore Garbage Collection (GC) when the blob cache encounters an out-of-memory error. Bugfix: Fixed an error caused by setting the collation_server global variable to a non-default value when performing a REBALANCE PARTITIONS resource availability check. Enhancement: Improved the parsing performance of queries that contain several tables. Enhancement: The readadvancedcounters, snapshottriggersize, and snapshottokeep engine variables can now be set on SingleStore Helios. New Feature: Added new session variable disableremoveredundantgbyrewrite to prevent the GROUP BY columns from being removed when used in an ORDER BY clause. Enhancement: Introduced disk and memory availability checks that run before a database is allowed to be attached to a workspace. Enhancement: Added the ability to cache histogram results during optimization to reduce the work performed by the histograms. Enhancement: Improved the performance of S3 pipelines when Garbage Collection (GC) is enabled. 
Enhancement: Added the ability to backup a database to an HTTPS S3 target with an unverified SSL certificate when using the option: verify_ssl: false. New feature: ORDER BY SELF JOIN, it creates a self join on ORDER BY LIMIT queries to take advantage of differences in bandwidth. Bugfix: Fixed an issue with column type checking on base and recursive case of a recursive common table expression. Bugfix: Fixed an issue that may cause a \"Table doesn't exist\" error when a multi-insert contains expressions and the target table has a computed column as its shard key. Enhancement: Expanded existing Unicode characters to support Private Use Area (PUA) code points. Including one in the Basic Multilingual Plane (U+E000U+F8FF) and one in each plane 15 and 16 (U+F0000U+FFFFD, U+100000U+10FFFD). Bugfix: Fixed a crash that could occur when a computed column definition refers to a table name. New Feature: Added the ability to set the maximumblobcachesizepercent global variable for workspaces. Bugfix: Fixed an issue with promote lock timeout errors that may occur during a rebalance due to a heavy ingest workload, which causes the merger to be slow to pause. New Feature: Introduced a new global variable subprocessmaxretries, which is used for retrying on retry-able connection failures during select into/backup queries for S3 and GCS. New feature: ORDER BY SELF JOIN, it creates a self join on ORDER BY LIMIT queries to take advantage of differences in bandwidth. Enhancement: Background snapshots are now allowed to run during BACKUP DATABASE commands. This prevents increased disk usage by logs during a long-running backup. Bugfix: The CREATETIME, CREATEUSER, ALTERTIME, and ALTERUSER columns in the information_schema.TABLESview are now properly set for views and TVFs (table-valued functions). Bugfix: Fixed an issue that occurred when the MATCHPARAM<type> argument of the JSONMATCHANYfunction was not in a predicate expression. Enhancement: Improved the performance of the JSONMATCHANY_EXISTSfunction over columnstore tables. Bugfix: Fixed a profiling issue specific to non-collocated hash joins where the memory usage and disk spilling are missing under the join operators. Enhancement: Background snapshots are now allowed to run during BACKUP commands.This prevents increased disk use by logs during a long running BACKUP. Bugfix: Attaching a database that exceeds available workspace memory or persistent cache is now automatically blocked. Enhancement: Workspace creation and resume times are reduced by running operations in parallel. Bugfix: Fixed an issue where the database engine locks up on certain out-of-memory errors. Bugfix: Fixed a parsing issue for queries containing multi-line comments near GROUP BY or ORDER BY clauses. Enhancement: The numbackgroundmerger_threadsengine variable is now settable on Cloud. Enhancement: The ORDER BY ALL [DESC|ASC] (or ORDER BY *) syntax is now" }, { "data": "Enhancement: The GROUP BY ALL [DESC|ASC] (or GROUP BY *) syntax is now supported. Enhancement: Improved the query execution performance of JSON columns under a higher level of parallelism. Enhancement: Expanded support for encoded GROUP BY query shapes containing expressions in aggregates. Bugfix: Fixed an issue where extra CPU was used when a read-only database is attached to a workspace without any writable mount for the read-only database. Enhancement: Sampling will no longer be used for table size estimation when statistics are present. 
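The GROUP BY ALL and ORDER BY ALL shorthand noted above can be sketched as follows (the orders table is hypothetical, and the reading of ALL as "every non-aggregated expression in the select list" is the conventional interpretation, not something these notes spell out):
```
-- Group on region and status, then order by the same columns, without repeating them.
SELECT region, status, COUNT(*) AS cnt
FROM orders            -- hypothetical table, for illustration only
GROUP BY ALL
ORDER BY ALL;
```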
Enhancement: Added the /api/v2/jwkssetup endpoint to Data API to allow users to enable JWT Auth in Data API on Cloud. See jwkssetup for more information. Enhancement: Improved the code generation performance of tables with a large number of indexes. Bugfix: Fixed an issue causing incorrect trace messages in master logs where clocks were incorrectly advancing from \"0\". Enhancement: Added the DATETIMEPRECISION column to both PARAMETER and ROUTINESinformationschema views. Also, the DATETIMEPRECISION column will include TIME and TIMESTAMP data types in the COLUMNSinformationschema view. Enhancement: Added the REVERSE() built-in string function that reverses the target string. Bugfix: Fixed some error handling issues with unlimited storage download and upload processes. Enhancement: The SHOW TABLE STATUS command now displays the memory usage by GLOBAL TEMPORARY tables. Bugfix: Fixed a crash when parsing certain Parquet data into a pipeline. Enhancement: Added support for using a connection link for inserting data with the FORMAT PARQUET option. Bugfix: Fixed an issue with aggregate functions using incorrect enum/set types that may result in inaccurate output in the operator tree. Bugfix: Fixed an issue with a transaction leak on the master aggregator when running CREATE TABLE AS SELECT on a child aggregator using autocommit=0 on the connection. Bugfix: Fixed a bug that may cause a query to hang when comparing an utf8 column with an utf8mb4 constant. This issue occurs when collation_server is set to one of the utf8mb4 collations. Bugfix: Improved the accuracy of network time reporting in query profiles regarding the time spent sending the results back to the user connection. Bugfix: Fixed an edge case issue causing a potential memory leak when running an UPSERT statement against a columnstore table. Bugfix: Fixed an issue that could cause the engine to crash from info_schema query submissions. Enhancement: Improved the performance of bushy join rewrites. Bugfix: Fixed an edge case issue where the engine could crash when performing multi-inserts. Bugfix: The avro schema registry URL portion of the CREATE PIPELINE syntax is now redacted in processlist. Bugfix: Fixed an issue where the engine could crash during recursive set operations. Bugfix: The information_schema.statistics \"collation\" column now correctly indicates whether an index is ascending (\"A\") or descending (\"D\"). Enhancement: Improved performance of comparing utf8mb4 strings. Bugfix: Fixed an edge case issue which could cause the engine to hang during shutdown. Enhancement: Added the skipsegelimwithinlistthreshold engine variable, which will skip segment elimination with the IN list if its size is larger than threshold (default is 1000 elements). Bugfix: informationschema.tablestatistics now correctly shows information about global temporary tables. The following features may require you to enable them manually. New Feature: Improved Seekability in Universal Storage Tables delivers large performance gains for transactional workloads on universal storage tables. Added support for fast seeking into JSON columns in a universal storage table using subsegment access. Improved seek performance for string data types for universal storage for LZ4 and run-length encoded (RLE) data. New Feature: Recursive common table expressions (CTE) are now supported by" }, { "data": "Previously, complex operations including temporary tables within a stored procedure would be needed to perform the actions that a simple recursive CTE query can handle. 
For more information, see WITH (Common Table Expressions). Enhancement: New Information Schema Views Added the MVRECOVERYSTATUS view which includes information about the status of the current recovery process. Added several Replication Management views. Enhancement: Subselect lockdown messages are now more informative and they indicate the line number and character offset of the subselect that caused the error. In addition, up to 100 bytes of text from the beginning of the referred subselect is also displayed. ``` SELECT (SELECT DISTINCT t1.a FROM t ORDER BY a) FROM t t1;``` Old output: \"Feature 'subselect containing dependent field inside group by' is not supported by SingleStore.\" New output: \"Feature 'subselect containing dependent field inside group by' is not supported by SingleStore.Near '(SELECT DISTINCT t1.a FROM t ORDER BY a) FROM t t1' at line 1, character 7.\" Enhancement: Decreased the memory overhead for columnstore cardinality statistics by 25% as the first phase of an overall project to improve memory for auto-stats in general. Enhancement: Improved performance for user-defined functions (UDFs) and Stored Procedures that take JSON arguments, and the JSONTOARRAY command. Enhancement: Updated the supported syntax for DROP FROM PLANCACHE so plans on a specified node and plans from all aggregators based on the query text can be dropped. ``` DROP planid FROM PLANCACHE ON NODE nodeid;DROP PLAN FROM PLANCACHE [ON AGGREGATORS] FOR QUERY <query_text>;``` Enhancement: Setting Collation for String Literals You can set the collation for string literals explicitly: ``` SELECT \"My string\" COLLATE utf8mb4unicodeci;``` Enhancement: Created the ALTER USER permission. Users must have this permission or the GRANT permission to be able to execute the ALTER USER command. Enhancement: Added ALTER USER ... ACCOUNT LOCK to manually lock accounts: ``` ALTER USER 'test'@'%' ACCOUNT LOCK; ALTER USER 'test'@'%' ACCOUNT UNLOCK;``` Enhancement: Added sampling (a small portion of the rows in the table are used for analysis) for Reference tables as part of query optimization. Enhancement: Improved the performance of the PROFILE functionality such as lower memory overheads, lower performance impacts to OLAP queries, and better statistics collecting. Enhancement: Added support for improved segment elimination in queries with WHERE clauses containing DATE and TIME functions. The functions that are supported for segment elimination are DATE, DATETRUNC, TIMESTAMP, UNIXTIMESTAMP, and YEAR. Enhancement: The dataconversioncompatibilitylevel engine variable can now be set to '8.0' for stricter data type conversions. This will now be the default value. This new dataconversioncompatibilitylevel setting additionally flags invalid string-to-number conversion in INSERT statements. Enhancement: The sync_permissions engine variable default value is now ON. The default value only impacts newly installed clusters. Existing clusters must be manually updated to the variable. Enhancement: The enableautoprofile engine variable now has a third value: LITE. LITE is the new default value for new customers. It has a lower memory overhead that ON. The default value for existing customers is ON. Enhancement: The columnstoresmallblobcombinationthreshold engine variable default value has been changed to 5242880 bytes. Prior to the 8.0 release, the default value was 33554432 bytes. Enhancement: Added support for encoded GROUP BY clauses in queries containing conditional and character expressions in aggregate functions. 
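Because recursive common table expressions come up repeatedly in these notes, here is a minimal self-contained example of the now-supported shape (the sequence generator is purely illustrative):
```
-- Generate the integers 1 through 5 with a recursive CTE.
WITH RECURSIVE seq(n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 5
)
SELECT n FROM seq ORDER BY n;
```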
Enhancement: Expanded the type of query execution operations (hash joins, window functions, and sort operations) to offload memory to disk using spilling to allow a large memory footprint query to succeed at the cost of query execution times in a memory-constrained environment. Enhancement: Added support for ? and [ ] glob patterns to FS pipelines. Enhancement: Added the optional parameter DEFINER for CREATE PROCEDURE, FUNCTION, and AGGREGATE. Enhancement: Added ability for a JSON computed column to be returned in a query instead of the entire document. Enhancement: Added ability to use the ORDER BY clause with the JSON_AGG function. Enhancement: Expressions can be assigned to system variables. System variables, literals, or any combination of these can be referenced using built-ins like CONCAT as a variant of complex expressions. Enhancement: For unlimited storage databases, SingleStore caches data within the workspace. It uses a modified least-recently-used (LRU(2)) replacement policy. Information is retained to indicate if objects are frequently-accessed. This reduces the chance that a single large query will flush frequently-accessed data from the cache. Enhancement: Added support for the AUTO option in the computed column definition clause of a CREATE TABLE statement to automatically infer the data type of a computed column expression. For more information, see CREATE TABLE. Enhancement: Added two Workload Management engine variables: workloadmanagementqueuesizeallowupgrade and workloadmanagementdynamicresource_allocation. These variables work together to dynamically move queries to another queue if the original queue is saturated. Enhancement: Storage of CHAR(<length>) as VARCHAR(<length>): For a column defined as type CHAR of length len, SingleStore will store the column as a VARCHAR of length len if len is greater than or equal to the value of the new engine variable varcharcolumnstringoptimizationlength. If the value of the variable is 0, the column is not stored as a VARCHAR. Enhancement: After scaling a workspace, the persistent cache on the workspace is warmed with copies of blobs before new resources begin handling queries. It is fully automatic. Enhancement: The BACKUP command no longer blocks the ALTER TABLE and several other commands for the duration of the backup. This allows you to run commands like TRUNCATE on your tables even during the backup of a very large deployment. Enhancement: Added the ability to use JSONMATCH<ANY>. Returns true if, in the JSON, there is a value at the specified filter path which evaluates the optional filter predicate as true. If no filter predicate is provided, it returns true if the filter path exists. Enhancement: Made the following Selectivity Estimation improvements: Enabled sampling for reference tables. Improved date/time histogram estimates by utilizing a heuristic when the current date/time is outside of the histogram range. Added selectivity estimation for filters containing uncorrelated scalar subselects. This behavior can be controlled by the engine variable excludescalarsubselectsfromfilters. This change has the side-effect of enabling bloom filters more often. Changed the estimation source to heuristics when sampling is turned on but the total sampled rows are zero. Added ability to use histogram estimation for filtering predicates that use a stored procedure parameter.
Increased the default value for engine variable optimizercrossjoin_cost to reduce the chance of Cartesian Joins being included when there are incorrect estimations. Improved the GROUP BY cardinality estimates for predicates using OR expressions. Enabled ability to combine histogram and sampling selectivity estimates by default. Enhancement: Made the following Query Optimization enhancements: Moved sub-queries for some outer joins from the ON clause to a WHERE clause to enable subselects to be rewritten as joins. Enabled repartition on expressions. Added ability to use GROUP BY push down for outer joins. Enhanced column pruning by eliminating derived duplicate columns. Removed redundant GROUP BY clauses that are implied by equi-joins. Bugfix: Fixed an issue where REGEXP and RLIKE were case-insensitive. They are now case-sensitive. New Feature: Management API now supports Workspaces. For more information see, Management API" }, { "data": "The following features may require you to enable them manually. Code Engine - Powered by Wasm The Code Engine feature allows you to create UDFs using code compiled to WebAssembly (Wasm). This feature supports any language that can compile to the Wasm core specification. For more information, see Code Engine - Powered by Wasm. Workspaces The Workspace feature allows you to spin up compute resources and size them up or down on-demand independent of storage. Workspaces also provide greater flexibility than workspaces by allowing databases to be shared across multiple workspaces thereby eliminating the need of maintaining data across multiple workloads. See What is a Workspace for more information. The SingleStore Management API now supports Workspaces. For more information see, Management API Reference. OUTBOUND privilege The OUTBOUND privilege can be used to mitigate security risks. The privilege can be assigned to users who are allowed to create outbound internet connectivity. For more information, see GRANT. The following features may require you to enable them manually. New Feature: Flexible Parallelism allows multiple cores on the same node to access the same database partition. With Flexible Parallelism, as database partitions are created they are divided into sub-partitions. As a query runs on a leaf node, multiple cores working on behalf of the query can process different sub-partitions of a partition in parallel. As an example, if you are currently at one partition per core with Flexible Parallelism, doubling the size of your workspace and then rebalancing will result in two cores for each partition. As a result, a simple query that scans and aggregates all the data in a single partition will now execute more quickly than it did before. Added new engine variables used for enabling and configuring Flexible Parallelism: subtophysicalpartitionratio, queryparallelismperleafcore, and expectedleafcorecount. The existing engine variable nodedegreeofparallelism is deprecated. For more information, see Flexible Parallelism. New Function: Added the ISNUMERIC function, used to determine whether the provided expression is a valid numeric type. New Function: Added the SESSION_USER function, used to return the user name you specified when connecting to the server, and the client host from which you connected. New Function: Added the SET function, used to initialize a user-defined session variable. New Function: Added new vector functions, namely VECTORELEMENTSSUM, VECTORKTHELEMENT,VECTORNUMELEMENTS, VECTORSORT, and VECTORSUBVECTOR. 
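A brief sketch of two of the vector built-ins listed above, combined with JSON_ARRAY_PACK to build the input vector (the underscore spellings VECTOR_ELEMENTS_SUM and VECTOR_KTH_ELEMENT are assumptions, since this page renders identifiers without underscores, as is the zero-based element index):
```
-- Sum the elements of a packed vector and read one element back out of it.
SELECT VECTOR_ELEMENTS_SUM(JSON_ARRAY_PACK('[1, 2, 3]')) AS total,
       VECTOR_KTH_ELEMENT(JSON_ARRAY_PACK('[1, 2, 3]'), 1) AS kth_element;
```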
Enhancement: Added support for TRIM string function. TRIM is now multi-byte safe which means the result of an operation using TRIM is either a valid string or an unmodified string. Enhancement: Unlimited storage databases now support the BACKUP WITH SPLIT PARTITIONS command. Enhancement: The DROP MILESTONE command is used to delete a milestone of a currently attached database. Enhancement: Improved performance for columnstore seeks into string columns now it is no longer necessary to scan an entire segment to look up the data for a string value for a row when seeking to find that one row. New Function: A new clause \"AS newdbname\" has been added to the RESTORE DATABASE command which allows the use of the full original backup path if trying to restore to a new database name. New Function: SECRET - Added the ability to hide credentials from queries. Passing credentials in queries can leave them exposed in plain text during parameterization and they can be seen in logs and the process list. To counter this, you can use the SECRET function (similar in function to NOPARAM). SECRET takes a string (such as a password or other sensitive information) and replaces it with the literal string \"<password>\" during" }, { "data": "The string is unchanged for the query however. ``` CALL db.dosomethinguseful('root', SECRET('super-secret-password'));``` See SECRET for more information. Enhancement: Added per privilege transferability from one user to another via the new TRANSFERABLE clause and SYSTEMVARIABLESADMIN grant in the REVOKE security management command. A new engine variable, privilegetransfermode, must be set to perprivilege for this functionality to work as expected. Also, this new functionality will affect the results of the SHOW GRANTS command. If the privilegetransfermode engine variable remains on the default value of grantoption, then the output is one row and can include the WITH GRANT OPTION privilege. If the value of privilegetransfermode is per_privilege, then the output can be two rows. The first row will display the non-transferable privileges. The second row will display the transferable privileges. Enhancement: Added new EXPLAIN and PROFILE reproduction clause syntax. EXPLAIN REPRO outputs the explain information in JSON format and provides important debugging information. EXPLAIN REPRO will work for SELECT queries only. The PROFILE REPRO syntax will replace the need to set the engine variable setprofilefor_debug to on. The engine variable will continue to be supported for backward compatibility. Enhancement: Added ability to match a computed column expression and the same expression appearing in a query, to improve query performance, especially for indexed computed JSON fields. The enhancement promotes data independence between the physical and application layer. Enhancement: Spilling for GROUP BY statements is enabled by default starting in engine version 7.8. Added an additional engine variable, spillingminimaldiskspace. If a node has less disk space than spillingminimaldiskspace (default is 500MB), queries on that node that require spilling will fail instead of spilling to disk. Enhancement: Materialized CTEs are now on by default and no longer considered a preview feature.Reduced the memory usage of approxcountdistinct by using a more compact representation. Enhancement: Existing queries are no longer recompiled on minor upgrades (from 7.8.x to 7.8.y for example). 
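To show the intent of the new reproduction clauses described above, a short sketch (t1 is an arbitrary placeholder table; how the JSON output is rendered depends on the client):
```
-- Emit plan information in JSON form for debugging, then collect a repro-friendly profile.
EXPLAIN REPRO SELECT COUNT(*) FROM t1;
PROFILE REPRO SELECT COUNT(*) FROM t1;
```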
New Function: Added ALTERTIME, ALTERUSER, and CREATEUSER to the informationschema.TABLES table, to show the time of the latest update the table, the user who made the change, and the user who created the table. For existing tables, the ALTERTIME value will be NULL until the table is altered. For new tables, ALTERTIME will be the same as CREATE_TIME. New Function: Added FLAGS column to the informationschema.VIEWS table, to indicate whether a view is a Table Valued Function (TVF). A value of ISTABLEVALUEDFUNCTION indicates a TVF. Enhancement: An internal component, the LLVM code generation framework, was updated to version 10 from version 3.8. This improves performance of query compilation for DELETES on tables with a very large number of columns. New Feature: Point-in-time recovery (PITR) has moved from preview to production status, and is now supported for production use cases. Enhancement: By default, all backups are now lock-free. Distributed write transactions no longer have to wait when a backup starts. New feature: Introduced row-level decompression for the string data type which will increase performance on reads against columnstore tables. Before this improvement, decompression occurred on the order of blocks of data which consists of 4096 rows. New Feature: Added support for cross-database INSERT...SELECT into columnstore temporary tables. Enhancement: Allow spilling hash GROUP BY operator. New Feature: Added support for SELECT ... INTO AZURE. This command supports the WITH COMPRESSION option, which is described in the next release note. New Feature: Added the WITH COMPRESSION option to SELECT INTO FS, SELECT INTO GCS, and SELECT INTO S3. WITH COMPRESSION writes the SELECT query results, in compressed" }, { "data": "files, to an object store. New Feature: Added support for new vector functions, namely JSONARRAYUNPACK, SCALARVECTORMUL, VECTORADD, and VECTORMUL. New Feature: Added support for the current user security model in stored procedures. In this model, when the current user executes a stored procedure, the stored procedure is executed using the security permissions of that user. New Feature: Added support for external functions, as a preview feature. An external function calls code that is executed outside of a SingleStore database. For more information, see CREATE [OR REPLACE] EXTERNAL FUNCTION. Enhancement: Improved full-text filter performance when used with other secondary hash index filters. For highly selective full-text filters, the improvement in execution speed can be 10 times faster. Enhancement: Introduced new logic to determine when to evict a compiled image of a query plan. The logic will sort on the oldest number of plans while considering the explicitly set memory limit usage of each plan. The feature is disabled by default. To enable the logic, the engine variable enablecompiledimageseviction must be set to ON. The engine variable compiledimagesevictionmemorylimitmb is used to set the memory limit. New Feature: Added a new function, JSON_KEYS, which returns the top-level keys of a JSON object in the form of a JSON array. Optionally, if a keypath is defined, returns the top-level keys from the keypath. Enhancement: Added support for more query shapes with FULL JOIN or correlated subselects when reference tables are involved.Prior to this release, these query shapes would hit a lockdown error. New Feature: Added support for query shapes that include repartitioned subqueries containing SELECT statements with aggregated column(s) without a GROUP BY clause. 
Prior to this release, these query shapes would hit a lockdown error. New Feature: Added support for LEFT JOIN when the left table is a reference table without a primary key.Prior to this release, this query shape would hit a lockdown error. Enhancement: Improved query execution for repartition DELETE FROM ...LIMIT and broadcast LEFT JOIN. Enhancement: Improved selectivity estimate for RIGHT JOIN query shapes when doing BloomFilter decision. See Query Plan Operations for a detailed explanation of BloomFilter and other filtering methods. Enhancement: Improved query performance using SORT KEY() and KEY () WITH CLUSTERED COLUMNSTORE columns with integer data types. Enhancement: Added passwordexpiration column to the informationschema.USERS table. If the passwordexpirationseconds engine variable is not enabled, the passwordexpiration column will be NULL. If the passwordexpirationseconds engine variable is enabled, the passwordexpiration column will display the number of seconds remaining for the password to expire. Enhancement: Improved the performance of selective filters using string columns in columnstore tables. New Feature: Added support for UNION between reference and sharded tables. Prior to this, this query shape would hit a lockdown error. Enhancement: Improved EXPLAIN output to clarify a result table for a broadcast LEFT JOIN or for a MATERIALIZE_CTE as they can have the same result table name. For broadcast LEFT JOIN, a branch operator is added so that the branching operation on the shared result table is reflected. See Query Plan Operations for a detailed explanation of broadcasts and other distributed data movement. New Feature: Ingest, Added support for transactions in Kafka pipelines. Last modified: June 10, 2024" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "TDengine", "subcategory": "Database" }
[ { "data": "TDengine.Connector is the C# language connector provided by TDengine. C# developers can use it to develop C# application software that accesses TDengine cluster data. TDengine.Connector provides 2 connection types. For a detailed introduction of the connection types, please refer to: Establish Connection The supported platforms are the same as those supported by the TDengine client driver. TDengine no longer supports 32-bit Windows platforms. | Connector version | TDengine version | major features | |:--|:-|:-| | 3.1.3 | 3.2.1.0/3.1.1.18 | support WebSocket reconnect | | 3.1.2 | 3.2.1.0/3.1.1.18 | fix schemaless result release | | 3.1.1 | 3.2.1.0/3.1.1.18 | support varbinary and geometry | | 3.1.0 | 3.2.1.0/3.1.1.18 | WebSocket uses native implementation | TDengine.Connector will throw an exception and the application needs to handle the exception. The taosc exception type TDengineError contains error code and error information, and the application can handle it based on the error code and error information. | TDengine DataType | C# Type | |:--|:| | TIMESTAMP | DateTime | | TINYINT | sbyte | | SMALLINT | short | | INT | int | | BIGINT | long | | TINYINT UNSIGNED | byte | | SMALLINT UNSIGNED | ushort | | INT UNSIGNED | uint | | BIGINT UNSIGNED | ulong | | FLOAT | float | | DOUBLE | double | | BOOL | bool | | BINARY | byte[] | | NCHAR | string (utf-8 encoding) | | JSON | byte[] | | VARBINARY | byte[] | | GEOMETRY | byte[] | Note: JSON type is only supported in tag. Nuget package TDengine.Connector can be added to the current project through dotnet CLI under the path of the current .NET project. ``` dotnet add package TDengine.Connector``` You can also modify the .csproj file of the current project and add the following ItemGroup. ``` <ItemGroup> <PackageReference Include=\"TDengine.Connector\" Version=\"3.1.*\" /> </ItemGroup>``` ``` var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\");using (var client = DbDriver.Open(builder)){ Console.WriteLine(\"connected\")}``` ``` var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\");using (var client = DbDriver.Open(builder)){ Console.WriteLine(\"connected\")}``` The parameters supported by ConnectionStringBuilder are as follows: NoteEnabling automatic reconnection is only effective for simple SQL statement execution, schemaless writing, and data subscription. It is not effective for parameter binding. Automatic reconnection is only effective for the database specified by parameters when the connection is established, and it is not effective for the use db statement to switch databases later. 
The C# connector does not support this feature The C# connector does not support this feature ``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace NativeQuery{ internal class Query { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"create database power\"); client.Exec(\"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace WSQuery{ internal class Query { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"create database power\"); client.Exec(\"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace NativeQuery{ internal class Query { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client =" }, { "data": "{ try { string insertQuery = \"INSERT INTO \" + \"power.d1001 USING power.meters TAGS(2,'California.SanFrancisco') \" + \"VALUES \" + \"('2023-10-03 14:38:05.000', 10.30000, 219, 0.31000) \" + \"('2023-10-03 14:38:15.000', 12.60000, 218, 0.33000) \" + \"('2023-10-03 14:38:16.800', 12.30000, 221, 0.31000) \" + \"power.d1002 USING power.meters TAGS(3, 'California.SanFrancisco') \" + \"VALUES \" + \"('2023-10-03 14:38:16.650', 10.30000, 218, 0.25000) \" + \"power.d1003 USING power.meters TAGS(2,'California.LosAngeles') \" + \"VALUES \" + \"('2023-10-03 14:38:05.500', 11.80000, 221, 0.28000) \" + \"('2023-10-03 14:38:16.600', 13.40000, 223, 0.29000) \" + \"power.d1004 USING power.meters TAGS(3,'California.LosAngeles') \" + \"VALUES \" + \"('2023-10-03 14:38:05.000', 10.80000, 223, 0.29000) \" + \"('2023-10-03 14:38:06.500', 11.50000, 221, 0.35000)\"; client.Exec(insertQuery); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace WSQuery{ internal class Query { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { string insertQuery = \"INSERT INTO \" + \"power.d1001 USING power.meters TAGS(2,'California.SanFrancisco') \" + \"VALUES \" + \"('2023-10-03 14:38:05.000', 10.30000, 219, 0.31000) \" + \"('2023-10-03 14:38:15.000', 12.60000, 218, 0.33000) \" + \"('2023-10-03 14:38:16.800', 12.30000, 221, 0.31000) \" + \"power.d1002 USING power.meters TAGS(3, 'California.SanFrancisco') \" + \"VALUES \" + \"('2023-10-03 14:38:16.650', 10.30000, 218, 0.25000) \" + \"power.d1003 USING power.meters TAGS(2,'California.LosAngeles') \" + \"VALUES \" + \"('2023-10-03 14:38:05.500', 11.80000, 221, 0.28000) \" + \"('2023-10-03 14:38:16.600', 
13.40000, 223, 0.29000) \" + \"power.d1004 USING power.meters TAGS(3,'California.LosAngeles') \" + \"VALUES \" + \"('2023-10-03 14:38:05.000', 10.80000, 223, 0.29000) \" + \"('2023-10-03 14:38:06.500', 11.50000, 221, 0.35000)\"; client.Exec(insertQuery); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace NativeQuery{ internal class Query { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"use power\"); string query = \"SELECT * FROM meters\"; using (var rows = client.Query(query)) { while (rows.Read()) { Console.WriteLine($\"{((DateTime)rows.GetValue(0)):yyyy-MM-dd HH:mm:ss.fff}, {rows.GetValue(1)}, {rows.GetValue(2)}, {rows.GetValue(3)}, {rows.GetValue(4)}, {Encoding.UTF8.GetString((byte[])rows.GetValue(5))}\"); } } } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace WSQuery{ internal class Query { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"use power\"); string query = \"SELECT * FROM meters\"; using (var rows = client.Query(query)) { while (rows.Read()) { Console.WriteLine($\"{((DateTime)rows.GetValue(0)):yyyy-MM-dd HH:mm:ss.fff}, {rows.GetValue(1)}, {rows.GetValue(2)}, {rows.GetValue(3)}, {rows.GetValue(4)}, {Encoding.UTF8.GetString((byte[])rows.GetValue(5))}\"); } } } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` The reqId is very similar to TraceID in distributed tracing systems. In a distributed system, a request may need to pass through multiple services or modules to be completed. The reqId is used to identify and associate all related operations of this request, allowing us to track and understand the complete execution path of the request. Here are some primary usage of reqId: If the user does not set a reqId, the client library will generate one randomly internally, but it is still recommended for the user to set it, as it can better associate with the user's request. 
``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace NativeQueryWithReqID{ internal abstract class QueryWithReqID { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec($\"create database if not exists test_db\",ReqId.GetReqId()); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using System.Text;using TDengine.Driver;using" }, { "data": "WSQueryWithReqID{ internal abstract class QueryWithReqID { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec($\"create database if not exists test_db\",ReqId.GetReqId()); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using TDengine.Driver;using TDengine.Driver.Client;namespace NativeStmt{ internal abstract class NativeStmt { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"create database power\"); client.Exec( \"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); using (var stmt = client.StmtInit()) { stmt.Prepare( \"Insert into power.d1001 using power.meters tags(2,'California.SanFrancisco') values(?,?,?,?)\"); var ts = new DateTime(2023, 10, 03, 14, 38, 05, 000); stmt.BindRow(new object[] { ts, (float)10.30000, (int)219, (float)0.31000 }); stmt.AddBatch(); stmt.Exec(); var affected = stmt.Affected(); Console.WriteLine($\"affected rows: {affected}\"); } } catch (Exception e) { Console.WriteLine(e); throw; } } } }}``` ``` using System;using TDengine.Driver;using TDengine.Driver.Client;namespace WSStmt{ internal abstract class WSStmt { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"create database power\"); client.Exec( \"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); using (var stmt = client.StmtInit()) { stmt.Prepare( \"Insert into power.d1001 using power.meters tags(2,'California.SanFrancisco') values(?,?,?,?)\"); var ts = new DateTime(2023, 10, 03, 14, 38, 05, 000); stmt.BindRow(new object[] { ts, (float)10.30000, (int)219, (float)0.31000 }); stmt.AddBatch(); stmt.Exec(); var affected = stmt.Affected(); Console.WriteLine($\"affected rows: {affected}\"); } } catch (Exception e) { Console.WriteLine(e); throw; } } } }}``` Note: When using BindRow, you need to pay attention to the one-to-one correspondence between the original C# column type and the TDengine column type. For the specific correspondence, please refer to TDengine DataType and C# DataType. 
``` using TDengine.Driver;using TDengine.Driver.Client;namespace NativeSchemaless{ internal class Program { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { client.Exec(\"create database sml\"); client.Exec(\"use sml\"); var influxDBData = \"st,t1=3i64,t2=4f64,t3=\\\"t3\\\" c1=3i64,c3=L\\\"passit\\\",c2=false,c4=4f64 1626006833639000000\"; client.SchemalessInsert(new string[] { influxDBData }, TDengineSchemalessProtocol.TSDBSMLLINEPROTOCOL, TDengineSchemalessPrecision.TSDBSMLTIMESTAMPNANOSECONDS, 0, ReqId.GetReqId()); var telnetData = \"stb00 1626006833 4 host=host0 interface=eth0\"; client.SchemalessInsert(new string[] { telnetData }, TDengineSchemalessProtocol.TSDBSMLTELNETPROTOCOL, TDengineSchemalessPrecision.TSDBSMLTIMESTAMPMILLISECONDS, 0, ReqId.GetReqId()); var jsonData = \"{\\\"metric\\\": \\\"metercurrent\\\",\\\"timestamp\\\": 1626846400,\\\"value\\\": 10.3, \\\"tags\\\": {\\\"groupid\\\": 2, \\\"location\\\": \\\"California.SanFrancisco\\\", \\\"id\\\": \\\"d1001\\\"}}\"; client.SchemalessInsert(new string[] { jsonData }, TDengineSchemalessProtocol.TSDBSMLJSONPROTOCOL, TDengineSchemalessPrecision.TSDBSMLTIMESTAMPMILLI_SECONDS, 0, ReqId.GetReqId()); } } }}``` ``` using TDengine.Driver;using TDengine.Driver.Client;namespace WSSchemaless{ internal class Program { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { client.Exec(\"create database sml\"); client.Exec(\"use sml\"); var influxDBData = \"st,t1=3i64,t2=4f64,t3=\\\"t3\\\" c1=3i64,c3=L\\\"passit\\\",c2=false,c4=4f64 1626006833639000000\"; client.SchemalessInsert(new string[] { influxDBData }, TDengineSchemalessProtocol.TSDBSMLLINEPROTOCOL, TDengineSchemalessPrecision.TSDBSMLTIMESTAMPNANOSECONDS, 0, ReqId.GetReqId()); var telnetData = \"stb00 1626006833 4 host=host0 interface=eth0\"; client.SchemalessInsert(new string[] { telnetData }, TDengineSchemalessProtocol.TSDBSMLTELNETPROTOCOL, TDengineSchemalessPrecision.TSDBSMLTIMESTAMPMILLISECONDS, 0, ReqId.GetReqId()); var jsonData = \"{\\\"metric\\\": \\\"metercurrent\\\",\\\"timestamp\\\": 1626846400,\\\"value\\\": 10.3, \\\"tags\\\": {\\\"groupid\\\": 2, \\\"location\\\": \\\"California.SanFrancisco\\\", \\\"id\\\": \\\"d1001\\\"}}\"; client.SchemalessInsert(new string[] { jsonData }, TDengineSchemalessProtocol.TSDBSMLJSONPROTOCOL, TDengineSchemalessPrecision.TSDBSMLTIMESTAMPMILLI_SECONDS, 0, ReqId.GetReqId()); } } }}``` ``` public void SchemalessInsert(string[] lines, TDengineSchemalessProtocol protocol, TDengineSchemalessPrecision precision, int ttl, long reqId)``` ``` using System;using System.Text;using TDengine.Driver;using TDengine.Driver.Client;namespace NativeSubscription{ internal class Program { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"create database power\"); client.Exec(\"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); client.Exec(\"CREATE TOPIC topic_meters as SELECT * from power.meters\"); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` using System;using System.Text;using 
TDengine.Driver;using" }, { "data": "WSSubscription{ internal class Program { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"create database power\"); client.Exec(\"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); client.Exec(\"CREATE TOPIC topic_meters as SELECT * from power.meters\"); } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } }}``` ``` var cfg = new Dictionary<string, string>(){ { \"group.id\", \"group1\" }, { \"auto.offset.reset\", \"latest\" }, { \"td.connect.ip\", \"127.0.0.1\" }, { \"td.connect.user\", \"root\" }, { \"td.connect.pass\", \"taosdata\" }, { \"td.connect.port\", \"6030\" }, { \"client.id\", \"tmq_example\" }, { \"enable.auto.commit\", \"true\" }, { \"msg.with.table.name\", \"false\" },};var consumer = new ConsumerBuilder<Dictionary<string, object>>(cfg).Build();``` ``` var cfg = new Dictionary<string, string>(){ { \"td.connect.type\", \"WebSocket\" }, { \"group.id\", \"group1\" }, { \"auto.offset.reset\", \"latest\" }, { \"td.connect.ip\", \"localhost\" }, { \"td.connect.port\",\"6041\"}, { \"useSSL\", \"false\" }, { \"td.connect.user\", \"root\" }, { \"td.connect.pass\", \"taosdata\" }, { \"client.id\", \"tmq_example\" }, { \"enable.auto.commit\", \"true\" }, { \"msg.with.table.name\", \"false\" },};var consumer = new ConsumerBuilder<Dictionary<string, object>>(cfg).Build();``` The configuration parameters supported by consumer are as follows: Supports subscribing to the result set Dictionary<string, object> where the key is the column name and the value is the column value. If you use object to receive column values, you need to pay attention to: An example is as follows Result class ``` class Result { public DateTime ts { get; set; } public float current { get; set; } public int voltage { get; set; } public float phase { get; set; } }``` Set up parser ``` var tmqBuilder = new ConsumerBuilder<Result>(cfg);tmqBuilder.SetValueDeserializer(new ReferenceDeserializer<Result>());var consumer = tmqBuilder.Build();``` You can also implement a custom deserializer, implement the IDeserializer<T> interface and pass it in through the ConsumerBuilder.SetValueDeserializer method. 
``` public interface IDeserializer<T> { T Deserialize(ITMQRows data, bool isNull, SerializationContext context); }``` ``` consumer.Subscribe(new List<string>() { \"topic_meters\" });while (true){ using (var cr = consumer.Consume(500)) { if (cr == null) continue; foreach (var message in cr.Message) { Console.WriteLine( $\"message {{{((DateTime)message.Value[\"ts\"]).ToString(\"yyyy-MM-dd HH:mm:ss.fff\")}, \" + $\"{message.Value[\"current\"]}, {message.Value[\"voltage\"]}, {message.Value[\"phase\"]}}}\"); } }}``` ``` consumer.Assignment.ForEach(a =>{ Console.WriteLine($\"{a}, seek to 0\"); consumer.Seek(new TopicPartitionOffset(a.Topic, a.Partition, 0)); Thread.Sleep(TimeSpan.FromSeconds(1));});``` ``` public void Commit(ConsumeResult<TValue> consumerResult)public List<TopicPartitionOffset> Commit()public void Commit(IEnumerable<TopicPartitionOffset> offsets)``` ``` consumer.Unsubscribe();consumer.Close();``` ``` using System;using System.Collections.Generic;using System.Threading.Tasks;using TDengine.Driver;using TDengine.Driver.Client;using TDengine.TMQ;namespace NativeSubscription{ internal class Program { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"CREATE DATABASE power\"); client.Exec(\"USE power\"); client.Exec( \"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); client.Exec(\"CREATE TOPIC topicmeters as SELECT * from power.meters\"); var cfg = new Dictionary<string, string>() { { \"group.id\", \"group1\" }, { \"auto.offset.reset\", \"latest\" }, { \"td.connect.ip\", \"127.0.0.1\" }, { \"td.connect.user\", \"root\" }, { \"td.connect.pass\", \"taosdata\" }, { \"td.connect.port\", \"6030\" }, { \"client.id\", \"tmqexample\" }, { \"enable.auto.commit\", \"true\" }, { \"msg.with.table.name\", \"false\" }, }; var consumer = new ConsumerBuilder<Dictionary<string, object>>(cfg).Build(); consumer.Subscribe(new List<string>() { \"topic_meters\" }); Task.Run(InsertData); while (true) { using (var cr = consumer.Consume(500)) { if (cr == null) continue; foreach (var message in cr.Message) { Console.WriteLine( $\"message {{{((DateTime)message.Value[\"ts\"]).ToString(\"yyyy-MM-dd HH:mm:ss.fff\")}, \" + $\"{message.Value[\"current\"]}, {message.Value[\"voltage\"]}, {message.Value[\"phase\"]}}}\"); } consumer.Commit(); } } } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } static void InsertData() { var builder = new ConnectionStringBuilder(\"host=localhost;port=6030;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { while (true) { client.Exec(\"INSERT into power.d1001 using power.meters tags(2,'California.SanFrancisco') values(now,11.5,219,0.30)\"); Task.Delay(1000).Wait(); } } } }}``` ``` using System;using System.Collections.Generic;using System.Threading.Tasks;using TDengine.Driver;using TDengine.Driver.Client;using" }, { "data": "WSSubscription{ internal class Program { public static void Main(string[] args) { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { try { client.Exec(\"CREATE DATABASE power\"); client.Exec(\"USE power\"); client.Exec( \"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"); 
client.Exec(\"CREATE TOPIC topicmeters as SELECT * from power.meters\"); var cfg = new Dictionary<string, string>() { { \"td.connect.type\", \"WebSocket\" }, { \"group.id\", \"group1\" }, { \"auto.offset.reset\", \"latest\" }, { \"td.connect.ip\", \"localhost\" }, { \"td.connect.port\",\"6041\"}, { \"useSSL\", \"false\" }, { \"td.connect.user\", \"root\" }, { \"td.connect.pass\", \"taosdata\" }, { \"client.id\", \"tmqexample\" }, { \"enable.auto.commit\", \"true\" }, { \"msg.with.table.name\", \"false\" }, }; var consumer = new ConsumerBuilder<Dictionary<string, object>>(cfg).Build(); consumer.Subscribe(new List<string>() { \"topic_meters\" }); Task.Run(InsertData); while (true) { using (var cr = consumer.Consume(500)) { if (cr == null) continue; foreach (var message in cr.Message) { Console.WriteLine( $\"message {{{((DateTime)message.Value[\"ts\"]).ToString(\"yyyy-MM-dd HH:mm:ss.fff\")}, \" + $\"{message.Value[\"current\"]}, {message.Value[\"voltage\"]}, {message.Value[\"phase\"]}}}\"); } consumer.Commit(); } } } catch (Exception e) { Console.WriteLine(e.ToString()); throw; } } } static void InsertData() { var builder = new ConnectionStringBuilder(\"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"); using (var client = DbDriver.Open(builder)) { while (true) { client.Exec(\"INSERT into power.d1001 using power.meters tags(2,'California.SanFrancisco') values(now,11.5,219,0.30)\"); Task.Delay(1000).Wait(); } } } }}``` The C# connector supports the ADO.NET interface, and you can connect to the TDengine running instance through the ADO.NET interface to perform operations such as data writing and querying. ``` using System;using TDengine.Data.Client;namespace NativeADO{ internal class Program { public static void Main(string[] args) { const string connectionString = \"host=localhost;port=6030;username=root;password=taosdata\"; using (var connection = new TDengineConnection(connectionString)) { try { connection.Open(); using (var command = new TDengineCommand(connection)) { command.CommandText = \"create database power\"; command.ExecuteNonQuery(); connection.ChangeDatabase(\"power\"); command.CommandText = \"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"; command.ExecuteNonQuery(); command.CommandText = \"INSERT INTO \" + \"power.d1001 USING power.meters TAGS(2,'California.SanFrancisco') \" + \"VALUES \" + \"(?,?,?,?)\"; var parameters = command.Parameters; parameters.Add(new TDengineParameter(\"@0\", new DateTime(2023,10,03,14,38,05,000))); parameters.Add(new TDengineParameter(\"@1\", (float)10.30000)); parameters.Add(new TDengineParameter(\"@2\", (int)219)); parameters.Add(new TDengineParameter(\"@3\", (float)0.31000)); command.ExecuteNonQuery(); command.Parameters.Clear(); command.CommandText = \"SELECT * FROM meters\"; using (var reader = command.ExecuteReader()) { while (reader.Read()) { Console.WriteLine( $\"{((DateTime) reader.GetValue(0)):yyyy-MM-dd HH:mm:ss.fff}, {reader.GetValue(1)}, {reader.GetValue(2)}, {reader.GetValue(3)}, {reader.GetValue(4)}, {System.Text.Encoding.UTF8.GetString((byte[]) reader.GetValue(5))}\"); } } } } catch (Exception e) { Console.WriteLine(e); throw; } } } }}``` ``` using System;using TDengine.Data.Client;namespace WSADO{ internal class Program { public static void Main(string[] args) { const string connectionString = \"protocol=WebSocket;host=localhost;port=6041;useSSL=false;username=root;password=taosdata\"; using (var connection = new 
TDengineConnection(connectionString)) { try { connection.Open(); using (var command = new TDengineCommand(connection)) { command.CommandText = \"create database power\"; command.ExecuteNonQuery(); connection.ChangeDatabase(\"power\"); command.CommandText = \"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))\"; command.ExecuteNonQuery(); command.CommandText = \"INSERT INTO \" + \"power.d1001 USING power.meters TAGS(2,'California.SanFrancisco') \" + \"VALUES \" + \"(?,?,?,?)\"; var parameters = command.Parameters; parameters.Add(new TDengineParameter(\"@0\", new DateTime(2023,10,03,14,38,05,000))); parameters.Add(new TDengineParameter(\"@1\", (float)10.30000)); parameters.Add(new TDengineParameter(\"@2\", (int)219)); parameters.Add(new TDengineParameter(\"@3\", (float)0.31000)); command.ExecuteNonQuery(); command.Parameters.Clear(); command.CommandText = \"SELECT * FROM meters\"; using (var reader = command.ExecuteReader()) { while (reader.Read()) { Console.WriteLine( $\"{((DateTime) reader.GetValue(0)):yyyy-MM-dd HH:mm:ss.fff}, {reader.GetValue(1)}, {reader.GetValue(2)}, {reader.GetValue(3)}, {reader.GetValue(4)}, {System.Text.Encoding.UTF8.GetString((byte[]) reader.GetValue(5))}\"); } } } } catch (Exception e) { Console.WriteLine(e); throw; } } } }}``` sample program Thank you for being part of our community! Your feedback and bug reports for TDengine and its documentation are highly appreciated. TDengine is a next generation data historian purpose-built for Industry 4.0 and Industrial IoT. It enables real-time data ingestion, storage, analysis, and distribution of petabytes per day, generated by billions of sensors and data collectors. With TDengine making big data affordable and accessible, digital transformation has never been easier. 20222024 TDengine" } ]
{ "category": "App Definition and Development", "file_name": "docs.tdengine.com.md", "project_name": "TDengine", "subcategory": "Database" }
[ { "data": "TDengine is an open-source, cloud-native time-series database optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators. To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the Introduction section. TDengine greatly improves the efficiency of data ingestion, querying, and storage by exploiting the characteristics of time series data, introducing the novel concepts of \"one table for one data collection point\" and \"super table\", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read Concepts thoroughly. If you are a developer, please read the Developer Guide carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, and make a few changes to accommodate your application, and it will work. We live in the era of big data, and scale-up is unable to meet the growing needs of the business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster please refer to Cluster" }, { "data": "TDengine uses ubiquitous SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll-up, interpolation, and time-weighted average, among many others. The SQL Reference chapter describes the SQL syntax in detail and lists the various supported commands and functions. If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer to, and thoroughly read the Administration section. If you want to know more about TDengine tools and the REST API, please see the Reference chapter. For information about connecting to TDengine with different programming languages, see Client Libraries. If you are very interested in the internal design of TDengine, please read the chapter Inside TDengine, which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully. To get more general introduction about time series database, please read through a series of articles. 
To learn more about the competitive advantages of TDengine, please read through a series of blogs. TDengine is an open-source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation or see parts where more clarity or elaboration is needed, please click \"Edit this page\" at the bottom of each page to edit it directly. Together, we make a difference! Thank you for being part of our community! Your feedback and bug reports for TDengine and its documentation are highly appreciated. TDengine is a next-generation data historian purpose-built for Industry 4.0 and Industrial IoT. It enables real-time data ingestion, storage, analysis, and distribution of petabytes per day, generated by billions of sensors and data collectors. With TDengine making big data affordable and accessible, digital transformation has never been easier. 2022-2024 TDengine" } ]
{ "category": "App Definition and Development", "file_name": "glossary#new-enemy-problem.md", "project_name": "SpiceDB", "subcategory": "Database" }
[ { "data": "On This Page SpiceDB is developed as an Apache 2.0-licensed (opens in a new tab) open-source, community-first effort. Large contributions must follow a proposal and feedback process regardless of whether the authors are maintainers, AuthZed employees, or brand new to the community. Other AuthZed open source projects are typically licensed Apache 2.0 (opens in a new tab) unless they are a fork of another codebase. Example code is MIT-licensed (opens in a new tab) so that they can be modified and adopted into any codebase. Not all code produced at AuthZed is open source. There are two conditions under which code is kept proprietary: SpiceDB is primarily accessed by a gRPC (opens in a new tab) API and thus client libraries can be generated for any programming language. AuthZed builds and maintains client libraries for the following languages: AuthZed also develops zed (opens in a new tab), a command-line client for interacting with the SpiceDB API. You can find more languages and integrations maintained by the community in the Clients section (opens in a new tab) of the Awesome SpiceDB (opens in a new tab) repository. SpiceDB is a database designed to be integrated into applications. There are some organizations with homegrown IT use-cases that use SpiceDB. However, for most IT use cases, this is probably more low-level than what you need. We recommend looking into tools designed around specific IT workflows such as auditing (Orca (opens in a new tab), PrismaCloud (opens in a new tab)), goverance, access management (Indent (opens in a new tab), ConductorOne (opens in a new tab)). SpiceDB is not a policy engine. SpiceDB was inspired by Zanzibar, which popularized the concept of Relationship-based access control (ReBAC). ReBAC systems offer correctness, performance, and scaling guarantees that are not possible in systems designed purely around" }, { "data": "Notably, policy engines cannot implement Reverse Indices. However, there are some scenarios where ReBAC systems can benefit from dynamic enforcement. For these scenarios, SpiceDB supports Caveats as a light-weight form of policy that avoids pitfalls present in many other systems. The best first step is to join Discord (opens in a new tab). Discord is a great place to chat with other community members and the maintainers of the software. If you're looking to contribute code, you can read CONTRIBUTING.md (opens in a new tab) in our open source projects for details how to contribute, good first issues, and common development workflows. Reverse-index expand answers the question \"what does this employee have access to?\", which most organizations validate as part of meeting those compliance obligations. But, even more critically, organizations use this information to debug access issues and as baseline data to ensure careful data handling. Lea Kissner, Zanzibar Coauthor In SpiceDB, reverse indices often refer to the LookupResources (opens in a new tab) and LookupSubjects (opens in a new tab) APIs which are designed to answer the following questions, respectively: At a high-level, SpiceDB attempts to remain true to Zanzibar's design principles, but without any assumptions around Google's internal infrastructure and use cases. As a result, many things in SpiceDB are more flexible to accomodate different kinds of users with different software stacks. For example, modeling complex user systems is possible in SpiceDB, but in Zanzibar all users must be a uint64 identifier. 
Because SpiceDB is not forced on developers as a company-wide requirement, the project also values developer experience and making the tooling pleasant to work with. You can see this in our Schema Language and Playground, which vastly improve on the experience of directly manipulating Protocol Buffers at Google. For more specific details, see the documentation on Zanzibar." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Timescale", "subcategory": "Database" }
[ { "data": "Timescale is a database platform engineered to deliver speed and scale to resource-intensive workloads, which makes it great for things like time series, event, and analytics data. Timescale is built on PostgreSQL, so you have access to the entire PostgreSQL ecosystem, with a user-friendly interface that simplifies database deployment and management." } ]
{ "category": "App Definition and Development", "file_name": "release-notes.md", "project_name": "Timescale", "subcategory": "Database" }
[ { "data": "This page contains release notes for TimescaleDB2.10.0 and newer. For release notes for older versions, see the past releases section. Want to stay up-to-date with new releases? You can subscribe to new releases on GitHub and be notified by email whenever a new release is available. On the Github page, click Watch, select Custom and then check Releases. This release contains the bug fixes introduced since TimescaleDB v2.15.1. Best practice is to upgrade at the next available opportunity. After you run ALTER EXTENSION, you must run this SQL script. For more details, see the following pull request #6797. If you are migrating from TimescaleDB v2.15.0 or v2.15.1, no changes are required. This release contains the performance improvements and bug fixes introduced since TimescaleDB v2.15.0. Best practice is to upgrade at the next available opportunity. After you run ALTER EXTENSION, you must run this SQL script. For more details, see the following pull request #6797. If you are migrating from TimescaleDB v2.15.0, no changes are required. This release contains the performance improvements and bug fixes introduced since TimescaleDB v2.14.2. Best practice is to upgrade at the next available opportunity. After you run ALTER EXTENSION, you must run this SQL script. For more details, see the following pull requests #6797. This release contains bug fixes since the 2.14.1 release. We recommend that you upgrade at the next available opportunity. This release contains bug fixes since the 2.14.0 release. We recommend that you upgrade at the next available opportunity. This release contains performance improvements and bug fixes since the 2.13.1 release. We recommend that you upgrade at the next available opportunity. For this release only, you will need to restart the database before running ALTER EXTENSION Following the deprecation announcement for Multi-node in TimescaleDB 2.13, Multi-node is no longer supported starting with TimescaleDB 2.14. TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it here. If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the migration documentation. TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its functionality will be replaced by the compress_chunk function, which, starting on TimescaleDB 2.14, works on both uncompressed and partially compressed chunks. The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter). These release notes are for the release of TimescaleDB2.13.1 on 2024-01-08. This release contains bug fixes since the 2.13.0 release. It is recommended that you upgrade at the next available opportunity. This release contains performance improvements, an improved hypertable DDL API and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next available opportunity. TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it here. If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB read the migration" }, { "data": "We will continue supporting PostgreSQL 13 until April 2024. Sooner to that time, we will announce the specific version of TimescaleDB in which PostgreSQL 13 support will not be included going forward. 
These release notes are for the release of TimescaleDB2.12.2 on 2023-10-20. This release contains bug fixes since the 2.12.1 release. It is recommended that you upgrade at the next available opportunity. These release notes are for the release of TimescaleDB2.12.1 on 2023-10-12. This release contains bug fixes since the 2.12.0 release. It is recommended that you upgrade at the next available opportunity. Timescale thanks: These release notes are for the release of TimescaleDB2.12.0 on 2023-09-27. This release contains performance improvements for compressed hypertables and continuous aggregates and bug fixes since the 2.11.2 release. It is recommended that you upgrade at the next available opportunity. This release moves all internal functions from the timescaledbinternal schema into the timescaledbfunctions schema. This separates code from internal data objects and improves security by allowing more restrictive permissions for the code schema. If you are calling any of those internal functions you should adjust your code as soon as possible. This version also includes a compatibility layer that allows calling them in the old location but that layer will be removed in 2.14.0. Following the deprecation announcement for PostgreSQL 12 in TimescaleDB2.10, PostgreSQL 12 is not supported starting with TimescaleDB2.12. Currently supported PostgreSQL major versions are 13, 14 and 15. PostgreSQL 16 support will be added with a following TimescaleDB release. Timescale thanks: These release notes are for the release of TimescaleDB2.11.2 on 2023-08-17. This release contains bug fixes since the 2.11.1 release. It is recommended that you upgrade at the next available opportunity. Timescale thanks: These release notes are for the release of TimescaleDB2.11.1 on 2023-06-29. This release contains bug fixes since the last release. It is considered low priority for upgrading. Upgrade your TimescaleDB installation at your next opportunity. Timescale thanks: These release notes are for the release of TimescaleDB2.11.0 on 2023-05-22. This release contains new features and bug fixes since the 2.10.3 release. We deem it moderate priority for upgrading. This release includes these new features: Timescale thanks: These release notes are for the release of TimescaleDB2.10.2 on 2023-04-20. This release contains bug fixes since the last release. It is considered low priority for upgrading. Upgrade your TimescaleDB installation at your next opportunity. Timescale thanks: These release notes are for the release of Timescale2.10.1 on 2023-03-07. This release contains bug fixes since the last release. It is considered low priority for upgrading. Upgrade your TimescaleDB installation at your next opportunity. Timescale thanks: These release notes are for the release of TimescaleDB2.10.1 on 2023-03-07. This release contains new features and bug fixes since the last release. It is considered moderate priority for upgrading. Upgrade your TimescaleDB installation as soon as possible. This release includes these new features: This release deprecates these features: Timescale thanks: For release notes for older TimescaleDB versions, see the past releases section. Keywords Found an issue on this page?Report an issueor Edit this page in GitHub." } ]
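As a rough illustration of the upgrade step referenced throughout these notes, the sketch below runs ALTER EXTENSION from a .NET client using Npgsql. It is not part of the release notes: the connection string is a placeholder, and the additional SQL script linked above (pull request #6797) still has to be run separately where the notes require it.
```
using System;
using Npgsql;

namespace TimescaleUpgradeSketch
{
    internal class Program
    {
        public static void Main(string[] args)
        {
            // Placeholder connection string; point it at your own database.
            const string connString = "Host=localhost;Port=5432;Username=postgres;Password=password;Database=tsdb";
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Open();
                // ALTER EXTENSION should be the first statement in a fresh session,
                // before any other query has loaded the extension.
                using (var cmd = new NpgsqlCommand("ALTER EXTENSION timescaledb UPDATE;", conn))
                {
                    cmd.ExecuteNonQuery();
                }
                // Confirm the installed version afterwards.
                using (var check = new NpgsqlCommand("SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';", conn))
                {
                    Console.WriteLine($"timescaledb version: {check.ExecuteScalar()}");
                }
            }
        }
    }
}
```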
{ "category": "App Definition and Development", "file_name": "about-continuous-aggregates.md", "project_name": "Timescale", "subcategory": "Database" }
[ { "data": "Time-series data usually grows very quickly. And that means that aggregating the data into useful summaries can become very slow. Continuous aggregates makes aggregating data lightning fast. If you are collecting data very frequently, you might want to aggregate your data into minutes or hours instead. For example, if you have a table of temperature readings taken every second, you can find the average temperature for each hour. Every time you run this query, the database needs to scan the entire table and recalculate the average every time. Continuous aggregates are a kind of hypertable that is refreshed automatically in the background as new data is added, or old data is modified. Changes to your dataset are tracked, and the hypertable behind the continuous aggregate is automatically updated in the background. You don't need to manually refresh your continuous aggregates, they are continuously and incrementally updated in the background. Continuous aggregates also have a much lower maintenance burden than regular PostgreSQL materialized views, because the whole view is not created from scratch on each refresh. This means that you can get on with working your data instead of maintaining your database. Because continuous aggregates are based on hypertables, you can query them in exactly the same way as your other tables, and enable compression or tiered storage on your continuous aggregates. You can even create continuous aggregates on top of your continuous aggregates. By default, querying continuous aggregates provides you with real-time data. Pre-aggregated data from the materialized view is combined with recent data that hasn't been aggregated yet. This gives you up-to-date results on every query. There are three main ways to make aggregation easier: materialized views, continuous aggregates, and real time aggregates. Materialized views are a standard PostgreSQL function. They are used to cache the result of a complex query so that you can reuse it later on. Materialized views do not update regularly, although you can manually refresh them as required. Continuous aggregates are a Timescale only feature. They work in a similar way to a materialized view, but they are updated automatically in the background, as new data is added to your database. Continuous aggregates are updated continuously and incrementally, which means they are less resource intensive to maintain than materialized views. Continuous aggregates are based on hypertables, and you can query them in the same way as you do your other tables. Real time aggregates are a Timescale only feature. They are the same as continuous aggregates, but they add the most recent raw data to the previously aggregated data to provide accurate and up to date results, without needing to aggregate data as it is being written. You can create a continuous aggregate on top of another continuous aggregate. This allows you to summarize data at different granularities. For example, you might have a raw hypertable that contains second-by-second data. Create a continuous aggregate on the hypertable to calculate hourly data. To calculate daily data, create a continuous aggregate on top of your hourly continuous aggregate. For more information, see the documentation about continuous aggregates on continuous aggregates. In TimescaleDB" }, { "data": "and later, continuous aggregates support JOINS, as long as they meet these conditions: This section includes some examples of JOIN conditions that work with continuous aggregates. 
For these to work, either table1 or table2 must be a hypertable. It does not matter which is the hypertable and which is a standard PostgreSQL table. INNER JOIN on a single equality condition, using the ON clause: ``` CREATE MATERIALIZED VIEW myview WITH (timescaledb.continuous) ASSELECT ...FROM table1 t1JOIN table2 t2 ON t1.t2id = t2.idGROUP BY ...``` INNER JOIN on a single equality condition, using the ON clause, with a further condition added in the WHERE clause: ``` CREATE MATERIALIZED VIEW myview WITH (timescaledb.continuous) ASSELECT ...FROM table1 t1JOIN table2 t2 ON t1.t2id = t2.idWHERE t1.id IN (1, 2, 3, 4)GROUP BY ...``` INNER JOIN on a single equality condition specified in WHERE clause, this is allowed but not recommended: ``` CREATE MATERIALIZED VIEW myview WITH (timescaledb.continuous) ASSELECT ...FROM table1 t1, table2 t2WHERE t1.t2id = t2.idGROUP BY ...``` These are examples of JOIN conditions won't work with continuous aggregates: An INNER JOIN on multiple equality conditions is not allowed. ``` CREATE MATERIALIZED VIEW myview WITH (timescaledb.continuous) ASSELECT ...FROM table1 t1JOIN table2 t2 ON t1.t2id = t2.id AND t1.t2id2 = t2.idGROUP BY ...``` A JOIN with a single equality condition specified in WHERE clause cannot be combined with further conditions in the WHERE clause. ``` CREATE MATERIALIZED VIEW myview WITH (timescaledb.continuous) ASSELECT ...FROM table1 t1, table2 t2WHERE t1.t2id = t2.idAND t1.id IN (1, 2, 3, 4)GROUP BY ...``` In TimescaleDB 2.7 and later, continuous aggregates support all PostgreSQL aggregate functions. This includes both parallelizable aggregates, such as SUM and AVG, and non-parallelizable aggregates, such as RANK. In TimescaleDB2.10.0 and later, the FROM clause supports JOINS, with some restrictions. For more information, see the JOIN support section. In older versions of Timescale, continuous aggregates only support aggregate functions that can be parallelized by PostgreSQL. You can work around this by aggregating the other parts of your query in the continuous aggregate, then using the window function to query the aggregate. This table summarizes aggregate function support in continuous aggregates: | Function, clause, or feature | TimescaleDB 2.6 and earlier | TimescaleDB 2.7, 2.8, and 2.9 | TimescaleDB 2.10 and later | |:|:|:--|:--| | Parallelizable aggregate functions | | | | | Non-parallelizable aggregate functions | | | | | ORDER BY | | | | | Ordered-set aggregates | | | | | Hypothetical-set aggregates | | | | | DISTINCT in aggregate functions | | | | | FILTER in aggregate functions | | | | | FROM clause supports JOINS | | | | If you want the old behavior in later versions of TimescaleDB, set the timescaledb.finalized parameter to false when you create your continuous aggregate. Continuous aggregates consist of: Continuous aggregates take raw data from the original hypertable, aggregate it, and store the intermediate state in a materialization hypertable. When you query the continuous aggregate view, the state is returned to you as" }, { "data": "Using the same temperature example, the materialization table looks like this: | day | location | chunk | avg temperature partial | |:--|:--|--:|:--| | 2021/01/01 | New York | 1 | {3, 219} | | 2021/01/01 | Stockholm | 1 | {4, 280} | | 2021/01/02 | New York | 2 | nan | | 2021/01/02 | Stockholm | 2 | {5, 345} | The materialization table is stored as a Timescale hypertable, to take advantage of the scaling and query optimizations that hypertables offer. 
Materialization tables contain a column for each group-by clause in the query, a chunk column identifying which chunk in the raw data this entry came from, and a partial aggregate column for each aggregate in the query. The partial column is used internally to calculate the output. In this example, because the query looks for an average, the partial column contains the number of rows seen, and the sum of all their values. The most important thing to know about partials is that they can be combined to create new partials spanning all of the old partials' rows. This is important if you combine groups that span multiple chunks. For more information, see materialization hypertables. The materialization engine performs two transactions. The first transaction blocks all INSERTs, UPDATEs, and DELETEs, determines the time range to materialize, and updates the invalidation threshold. The second transaction unblocks other transactions, and materializes the aggregates. The first transaction is very quick, and most of the work happens during the second transaction, to ensure that the work does not interfere with other operations. When you query the continuous aggregate view, the materialization engine combines the aggregate partials into a single partial for each time range, and calculates the value that is returned. For example, to compute an average, each partial sum is added up to a total sum, and each partial count is added up to a total count, then the average is computed as the total sum divided by the total count. Any change to the data in a hypertable could potentially invalidate some materialized rows. The invalidation engine checks to ensure that the system does not become swamped with invalidations. Fortunately, time-series data means that nearly all INSERTs and UPDATEs have a recent timestamp, so the invalidation engine does not materialize all the data, but to a set point in time called the materialization threshold. This threshold is set so that the vast majority of INSERTs contain more recent timestamps. These data points have never been materialized by the continuous aggregate, so there is no additional work needed to notify the continuous aggregate that they have been added. When the materializer next runs, it is responsible for determining how much new data can be materialized without invalidating the continuous aggregate. It then materializes the more recent data and moves the materialization threshold forward in time. This ensures that the threshold lags behind the point-in-time where data changes are common, and that most INSERTs do not require any extra writes. When data older than the invalidation threshold is changed, the maximum and minimum timestamps of the changed rows is logged, and the values are used to determine which rows in the aggregation table need to be recalculated. This logging does cause some write load, but because the threshold lags behind the area of data that is currently changing, the writes are small and rare. Keywords Found an issue on this page?Report an issueor Edit this page in GitHub." } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "This document will introduce you to the example of what Vald can do. Vald is a highly scalable distributed fast approximate nearest neighbor dense vector search engine, which uses NGT as the core engine of Vald, and Vald manages to integrate with Kubernetes. You cannot generally search your unstructured data using the inverted index, like images and videos. Applying a model like BERT or VGG can convert your unstructured data into vectors. After converting them into vectors, you can insert them into the Vald cluster and process them in the Vald cluster. Here are some general use cases of Vald or vector search engines. You can use Vald as the image/video processing engine to search the similar images/videos or analyze the image/video for your use case. Vald is capable of processing a huge number of images at the same time, so its case fits with your use case. Here are some examples of what you can do with images and videos using Vald. Audio processing is important for personal assistant implementation. Vald can act as a brain of the personal assistant function, conversation interpreter, and natural language generation. Here are some examples of what you can process using Vald. Using a text vectorizing model like BERT, you can process your text data in Vald. Here are some examples of the use case of text processing using Vald. Vald can process the vector data, you can analyze every data you can vectorize. Here are some examples of the use case of data analysis. AI malware detection To detect the malware using Vald, you need to vectorize the binary file and insert it into Vald first. You can analyze your binary by searching for a similar binary in Vald. If your binary closely resembles the malware binary, you can trigger the alert for users. Price optimization By applying the price optimization technique using Vald, you can find the most optimized price for your business. You can apply models like GLMs to achieve it and use Vald as a machine learning engine for your business. Social analysis To analyze the social relationship of users, you can suggest to them their related friends, page recommendations, or other use cases. You can apply different models to analyze social data and use Vald as a recommendation engine for your business. Besides the general use case of Vald or vector search engine, Vald supports a user-defined filter that the user can customize the filter to filter the specific result. For example when the user chose a mans t-shirt and the recommended product is going to be searched in Vald. Without the filtering functionality, the womens t-shirt may be searched in Vald and displayed because womens t-shirt is similar to the mens t-shirt and it is very hard to differentiate the image of mens and womens t-shirt. By implementing the custom filter, you can filter only the mans t-shirt based on your criteria and needs. 2019-2024 vald.vdaas.org Vald team" } ]
{ "category": "App Definition and Development", "file_name": "about-vald.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "This document will introduce you to the example of what Vald can do. Vald is a highly scalable distributed fast approximate nearest neighbor dense vector search engine, which uses NGT as the core engine of Vald, and Vald manages to integrate with Kubernetes. You cannot generally search your unstructured data using the inverted index, like images and videos. Applying a model like BERT or VGG can convert your unstructured data into vectors. After converting them into vectors, you can insert them into the Vald cluster and process them in the Vald cluster. Here are some general use cases of Vald or vector search engines. You can use Vald as the image/video processing engine to search the similar images/videos or analyze the image/video for your use case. Vald is capable of processing a huge number of images at the same time, so its case fits with your use case. Here are some examples of what you can do with images and videos using Vald. Audio processing is important for personal assistant implementation. Vald can act as a brain of the personal assistant function, conversation interpreter, and natural language generation. Here are some examples of what you can process using Vald. Using a text vectorizing model like BERT, you can process your text data in Vald. Here are some examples of the use case of text processing using Vald. Vald can process the vector data, you can analyze every data you can vectorize. Here are some examples of the use case of data analysis. AI malware detection To detect the malware using Vald, you need to vectorize the binary file and insert it into Vald first. You can analyze your binary by searching for a similar binary in Vald. If your binary closely resembles the malware binary, you can trigger the alert for users. Price optimization By applying the price optimization technique using Vald, you can find the most optimized price for your business. You can apply models like GLMs to achieve it and use Vald as a machine learning engine for your business. Social analysis To analyze the social relationship of users, you can suggest to them their related friends, page recommendations, or other use cases. You can apply different models to analyze social data and use Vald as a recommendation engine for your business. Besides the general use case of Vald or vector search engine, Vald supports a user-defined filter that the user can customize the filter to filter the specific result. For example when the user chose a mans t-shirt and the recommended product is going to be searched in Vald. Without the filtering functionality, the womens t-shirt may be searched in Vald and displayed because womens t-shirt is similar to the mens t-shirt and it is very hard to differentiate the image of mens and womens t-shirt. By implementing the custom filter, you can filter only the mans t-shirt based on your criteria and needs. 2019-2024 vald.vdaas.org Vald team" } ]
{ "category": "App Definition and Development", "file_name": "configuration.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "This page introduces best practices for setting up values for the Vald Helm Chart. Before reading, please read the overview of Vald Helm Chart in its README. It is highly recommended to specify the Vald version. You can specify the image version by setting image.tag field in each component ([component].image.tag) or defaults section. ``` defaults: image: tag: v1.5.6 ``` or you can use the older image only for a target component, e.g., the agent, ``` agent: image: tag: v1.5.5 ``` The default logging levels and formats are configured in defaults.logging.level and defaults.logging.format. You can also specify them in each component section ([component].logging). ``` defaults: logging: level: info format: raw ``` You can specify log level debug and JSON format for lb-gateway by the followings: ``` gateway: lb: logging: level: debug format: json ``` The logging level is defined in the Coding Style Guide. Each Vald component has several types of servers. They can be configured by specifying the values in defaults.server_config. Examples: ``` defaults: server_config: servers: grpc: enabled: true host: 0.0.0.0 port: 8081 servicePort: 8081 server: mode: GRPC ... ``` In addition, they can be overwritten by setting each [component].serverconfig, e.g., gateway.lb.serverconfig is following. ``` gateway: lb: server_config: servers: rest: enabled: true host: 0.0.0.0 port: 8080 servicePort: 8080 server: mode: REST ... ``` gRPC server should be enabled because all Vald components use gRPC to communicate with others. The API specs are placed in Vald APIs. REST server is optional. The swagger specs are placed in Vald APIs Swagger. There are two built-in health check servers: liveness and readiness. They are used as servers for Kubernetes liveness and readiness probe. The liveness health server is disabled by default due to the liveness probe may accidentally kill the Vald Agent component. ``` agent: server_config: healths: liveness: enabled: false ``` The metrics server enables easier debugging and monitoring of Vald components. There are two types of metrics servers: pprof and Prometheus. pprof server is implemented using Gos net/http/pprof package. You can use googles pprof to analyze the exported profile result. Prometheus server is a Prometheus exporter. It is required to set the observability section on each Vald component to enable the monitoring using Prometheus. Please refer to the next section. The observability features are useful for monitoring Vald components. These settings can be enabled by setting the defaults.observability.enabled field to the value true or by overriding it in each component ([component].observability.enabled). And also, enable each feature by setting the value true on its enabled field. If observability features are enabled, the metrics will be collected periodically. The duration can be set on observability.collector.duration. Please refer to the Vald operation guide for more" }, { "data": "Vald Agent NGT uses yahoojapan/NGT as a core library for searching vectors. The behaviors of NGT can be configured by setting agent.ngt field object. The important parameters are the followings: Users should configure these parameters first to fit their use case. For further details, please read the NGT wiki. Vald Agent NGT has a feature to start indexing automatically. The behavior of this feature can be configured with these parameters: Vald Agent Faiss uses facebookresearch/faiss as a core library for searching vectors. 
The behaviors of Faiss can be configured by setting agent.faiss field object. The important parameters are the followings: Users should configure these parameters first to fit their use case. For further details, please read the Fiass wiki. Vald Agent Faiss has a feature to start indexing automatically. The behavior of this feature can be configured with these parameters: Because the Vald Agent pod places indexes on memory, termination of agent pods causes loss of indexes. It is important to set the resource requests and limits appropriately to avoid terminating the Vald Agent pods. Requesting 40% of cluster memory for agent pods is highly recommended. Also, it is highly recommended not to set the resource limits for the Vald Agent pods. Pod priorities are also useful for saving agent pods from eviction. By default, very high priority is set to agent pods in the Chart. The capacity planning page helps to estimate the resources. It is recommended to schedule agent pods on different nodes as much as possible. To achieve this, the following podAntiAffinity is set by default. ``` agent: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: topologyKey: kubernetes.io/hostname labelSelector: matchExpressions: key: app operator: In values: vald-agent-ngt ``` It can also be achieved by using pod topology spread constraints. ``` agent: topologySpreadConstraints: topologyKey: node maxSkew: 1 whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app: vald-agent-ngt affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [] # to disable default settings ``` Ingress for gateways can be configured by gateway.{filter,lb}.ingress field object. It is important to set your host to gateway.{filter,lb}.ingress.host field. gateway.{filter,lb}.ingress.servicePort should be grpc or rest. ``` gateway: lb: ingress: enabled: true host: vald.vdaas.org # Set correct hostname here servicePort: grpc ``` gateway.lb.gatewayconfig.indexreplica represents how many Vald Agent pods a vector will be inserted into. The maximum value of the index replica should be 30% of the Vald Agent pods deployed. ``` gateway: lb: gateway_config: index_replica: 3 // By setting the index replica to 3, the number of Vald Agent pods deployed should be more than 9 (3 / 0.3). ``` The gateways resource requests and limits depend on the request traffic and available resources. If the request traffic varies largely, enabling HPA for the gateway and adjusting the resource requests is recommended." }, { "data": "represents the frequency of sending requests to the discoverer. If the discoverers CPU utilization is too high, make this value longer or reduce the number of LB gateway pods. ``` gateway: lb: gateway_config: discoverer: duration: 2s ``` Vald Discoverer gets the Node and Pod metrics from kube-apiserver as described in Vald Discoverer. Valds Helm deployment supports RBAC as default, and the default configuration is the following. ``` discoverer: clusterRole: enabled: true name: discoverer clusterRoleBinding: enabled: true name: discoverer serviceAccount: enabled: true name: vald ``` When RBAC is unavailable in your environment, or you would like to put some restrictions, please modify it and grant the permissions to the user executing the discoverer. Each configuration file is the following: The number of discoverer pods and resource limits can be estimated by the configurations of your LB gateways and index managers because its APIs are called by them. 
Discoverer CPU loads almost depend on API request traffic. ``` (the number of LB gateways x its request frequency) + (the number of index managers x its request frequency). ``` Vald Index Manager controls the indexing timing for all Vald Agent pods in the Vald cluster. These parameters are related to the control process. ``` manager: index: indexer: agent_namespace: vald # namespace of agent pods to manage autoindexcheck_duration: \"1m\" autoindexduration_limit: \"30m\" autoindexlength: 100 autosaveindexdurationlimit: \"3h\" autosaveindexwaitduration: \"10m\" concurrency: 1 creationpoolsize: 10000 ``` Same as LB gateway, manager.index.indexer.discoverer.duration represents the frequency of sending requests to the discoverer. For further details, there are references to the Helm values in the Vald GitHub repository. Backup Configuration Applies backup feature for saving and restoring indexes Filtering Configuration Applies filtering feature to the Vald cluster Mirroring Configuration Applies mirror gateway for running multi Vald clusters Cluster Role Binding Configures cluster role for Vald cluster 2019-2024 vald.vdaas.org Vald team glg.Fatal(err) } if i%10 == 0 { glg.Infof(\"Removed %d\", i) } } ``` Flush Remove all remaining training datasets from the Vald agent. ``` , err := client.Flush(ctx, &payload.FlushRequest{}) if err != nil { glg.Fatal(err) } ``` In the last, you can remove the deployed Vald Cluster by executing the below command. ``` helm uninstall vald ``` Congratulation! You completely entered the Vald World! If you want, you can try other tutorials such as: For more information, we recommend you to check: Get Started With Faiss Agent Running Vald cluster with faiss Agent on Kubernetes and execute client codes Vald Agent Standalone on K8s Running only Vald Agent on Kubernetes and execute client codes Vald Agent Standalone on Docker Running Vald Agent on Docker and execute client codes Vald Multicluster on K8s Running Multi Vald Clusters with Mirror Gateway on Kubernetes and execute client codes 2019-2024 vald.vdaas.org Vald team" } ]
{ "category": "App Definition and Development", "file_name": "tutorials.md", "project_name": "Timescale", "subcategory": "Database" }
[ { "data": "Timescale tutorials are designed to help you get up and running with Timescale fast. They walk you through a variety of scenarios using example datasets, to teach you how to construct interesting queries, find out what information your database has hidden in it, and even gives you options for visualizing and graphing your results. | Cryptocurrency | Energy | Finance | Transport | |:|:|:-|:--| | Part 1Do your own research on the Bitcoin blockchain | Part 1Optimize your energy consumption for a rooftop solar PV system | Part 1Chart the trading highs and lows for your favorite stock | Part 1Find out about taxi rides taken in and around NYC | | Part 2 Discover the relationship between transactions, blocks, fees, and miner revenue | Coming Soon! | Part 2Use a websocket connection to visualize the trading highs and lows for your favorite stock | Part 2Map the longest taxi rides in NYC | Found an issue on this page?Report an issueor Edit this page in GitHub." } ]
{ "category": "App Definition and Development", "file_name": "local.md", "project_name": "Vitess", "subcategory": "Database" }
[ { "data": "Local Install Instructions for using Vitess on your machine for testing purposes This guide covers installing Vitess locally for testing purposes, from pre-compiled binaries. We will launch multiple copies of mysqld, so it is recommended to have greater than 4GB RAM, as well as 20GB of available disk space. A docker setup is also available, which requires no dependencies on your local host. Vitess supports the databases listed here. We recommend MySQL 8.0 if your installation method provides that option: ``` sudo apt install -y mysql-server etcd-server etcd-client curl sudo apt install -y default-mysql-server default-mysql-client etcd curl sudo yum -y localinstall https://dev.mysql.com/get/mysql80-community-release-el8-3.noarch.rpm sudo yum -y install mysql-community-server etcd curl ``` On apt-based distributions the services mysqld and etcd will need to be shutdown, since etcd will conflict with the etcd started in the examples, and mysqlctl will start its own copies of mysqld: ``` sudo service mysql stop sudo service etcd stop sudo systemctl disable mysql sudo systemctl disable etcd ``` ``` curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash ``` Ensure the following is in your bashrc/zshrc or similar. nvm automatically attempts to add them: ``` export NVM_DIR=\"$HOME/.nvm\" [ -s \"$NVMDIR/nvm.sh\" ] && \\. \"$NVMDIR/nvm.sh\" # This loads nvm [ -s \"$NVMDIR/bashcompletion\" ] && \\. \"$NVMDIR/bashcompletion\" ``` Finally, install node: ``` nvm install 18 nvm use 18 ``` See the vtadmin README for more details. AppArmor/SELinux will not allow Vitess to launch MySQL in any data directory by default. You will need to disable it: AppArmor: ``` sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld sudo aa-status | grep mysqld ``` SELinux: ``` sudo setenforce 0 ``` Download the latest binary release for Vitess on Linux. For example with Vitess 19: Notes: ``` version=19.0.4 file=vitess-${version}-2c56724.tar.gz wget https://github.com/vitessio/vitess/releases/download/v${version}/${file} tar -xzf ${file} cd ${file/.tar.gz/} sudo mkdir -p /usr/local/vitess sudo cp -r * /usr/local/vitess/ ``` Make sure to add /usr/local/vitess/bin to the PATH environment variable. You can do this by adding the following to your $HOME/.bashrc file: ``` export PATH=/usr/local/vitess/bin:${PATH} ``` You are now ready to start your first cluster! Open a new terminal window to ensure your .bashrc file changes take effect. Start by copying the local examples included with Vitess to your preferred location. For our first example we will deploy a single unsharded keyspace. The file 101initialcluster.sh is for example 1 phase 01. Lets execute it now: ``` vitess_path=/usr/local/vitess mkdir ~/my-vitess-example cp -r ${vitess_path}/{examples,web} ~/my-vitess-example cd ~/my-vitess-example/examples/local ./101initialcluster.sh ``` You should see an output similar to the following: ``` $ ./101initialcluster.sh add /vitess/global add /vitess/zone1 add zone1 CellInfo Created cell: zone1 etcd start done... Starting vtctld... vtctld is running! Successfully created keyspace commerce. Result: { \"name\": \"commerce\", \"keyspace\": { \"served_froms\": [], \"keyspace_type\": 0, \"base_keyspace\": \"\", \"snapshot_time\": null, \"durabilitypolicy\": \"semisync\", \"throttler_config\": null, \"sidecardbname\": \"_vt\" } } Starting MySQL for tablet zone1-0000000100... Starting vttablet for zone1-0000000100... 
HTTP/1.1 200 OK Date: Mon, 26 Jun 2023 19:21:51 GMT Content-Type: text/html; charset=utf-8 Starting MySQL for tablet zone1-0000000101... Starting vttablet for zone1-0000000101... HTTP/1.1 200 OK Date: Mon, 26 Jun 2023 19:21:54 GMT Content-Type: text/html; charset=utf-8 Starting MySQL for tablet zone1-0000000102... Starting vttablet for zone1-0000000102..." }, { "data": "200 OK Date: Mon, 26 Jun 2023 19:21:56 GMT Content-Type: text/html; charset=utf-8 vtorc is running! UI: http://localhost:16000 Logs: /Users/florentpoinsard/Code/vitess/vtdataroot/tmp/vtorc.out PID: 49556 New VSchema object: { \"sharded\": false, \"vindexes\": {}, \"tables\": { \"corder\": { \"type\": \"\", \"column_vindexes\": [], \"auto_increment\": null, \"columns\": [], \"pinned\": \"\", \"columnlistauthoritative\": false, \"source\": \"\" }, \"customer\": { \"type\": \"\", \"column_vindexes\": [], \"auto_increment\": null, \"columns\": [], \"pinned\": \"\", \"columnlistauthoritative\": false, \"source\": \"\" }, \"product\": { \"type\": \"\", \"column_vindexes\": [], \"auto_increment\": null, \"columns\": [], \"pinned\": \"\", \"columnlistauthoritative\": false, \"source\": \"\" } }, \"requireexplicitrouting\": false } If this is not what you expected, check the input data (as JSON parsing will skip unexpected fields). Waiting for vtgate to be up... vtgate is up! Access vtgate at http://Florents-MacBook-Pro-2.local:15001/debug/status vtadmin-api is running! API: http://Florents-MacBook-Pro-2.local:14200 Logs: /Users/florentpoinsard/Code/vitess/vtdataroot/tmp/vtadmin-api.out PID: 49695 vtadmin-web is running! Browser: http://Florents-MacBook-Pro-2.local:14201 Logs: /Users/florentpoinsard/Code/vitess/vtdataroot/tmp/vtadmin-web.out PID: 49698 ``` You can also verify that the processes have started with pgrep: ``` $ pgrep -fl vitess 14119 etcd 14176 vtctld 14251 mysqld_safe 14720 mysqld 14787 vttablet 14885 mysqld_safe 15352 mysqld 15396 vttablet 15492 mysqld_safe 15959 mysqld 16006 vttablet 16112 vtgate 16788 vtorc ``` The exact list of processes will vary. For example, you may not see mysqld_safe listed. If you encounter any errors, such as ports already in use, you can kill the processes and start over: ``` pkill -9 -f '(vtdataroot|VTDATAROOT|vitess|vtadmin)' # kill Vitess processes rm -rf vtdataroot ``` For ease-of-use, Vitess provides aliases for mysql and vtcltdclient: ``` source ../common/env.sh ``` Setting up aliases changes mysql to always connect to Vitess for your current session. To revert this, type unalias mysql && unalias vtctldclient or close your session. You should now be able to connect to the VTGate server that was started in 101initialcluster.sh: ``` $ mysql Welcome to the MySQL monitor. Commands end with ; or \\g. Your MySQL connection id is 2 Server version: 8.0.31-Vitess (Ubuntu) Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\\h' for help. Type '\\c' to clear the current input statement. mysql> show tables; +--+ | Tablesinvt_commerce | +--+ | corder | | customer | | product | +--+ 3 rows in set (0.00 sec) ``` You can also now browse and administer your new Vitess cluster using the VTAdmin UI at the following URL: ``` http://localhost:14201 ``` VTOrc is also setup as part of the initialization. 
You can look at its user interface at: ``` http://localhost:16000 ``` In this example, we deployed a single unsharded keyspace named commerce. Unsharded keyspaces have a single shard named 0. The following schema reflects a common ecommerce scenario that was created by the script: ``` create table product ( sku varbinary(128), description varbinary(128), price bigint, primary key(sku) ); create table customer ( customer_id bigint not null auto_increment, email varbinary(128), primary key(customer_id) ); create table corder ( order_id bigint not null auto_increment, customer_id bigint, sku varbinary(128), price bigint, primary key(order_id) ); ``` The schema has been simplified to include only those fields that are significant to the example: You can now proceed with MoveTables. Or alternatively, if you would like to tear down your example: ``` ./401_teardown.sh rm -rf vtdataroot ``` Last updated May 8, 2024" } ]
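As a quick sanity check of the commerce schema above, you can run a few statements through the vtgate connection (the mysql alias set up by ../common/env.sh). This is a minimal sketch: the table and column names come from the schema shown above, while the sample values — and the assumption that the first customer row receives customer_id = 1 — are illustrative only.

```
-- Run via the `mysql` alias after `source ../common/env.sh`.
INSERT INTO product (sku, description, price) VALUES ('SKU-1001', 'keyboard', 30);
INSERT INTO customer (email) VALUES ('alice@example.com');        -- customer_id is auto-generated
INSERT INTO corder (customer_id, sku, price) VALUES (1, 'SKU-1001', 30);

-- Join the order back to its customer and product description.
SELECT c.email, o.sku, p.description, o.price
FROM corder o
JOIN customer c ON c.customer_id = o.customer_id
JOIN product p ON p.sku = o.sku;
```

Because commerce is unsharded at this point, everything lands on the single shard 0; routing through vtgate is what lets the same SQL keep working as the cluster grows.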
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Vitess", "subcategory": "Database" }
[ { "data": "Documentation Get involved with Vitess development Debug common issues with Vitess Frequently Asked Questions about Vitess Learn about how Vitess releases work Additional resources including Presentations and Roadmap Collection of Vitess design docs Under construction, development release. Everything you need to know about scaling MySQL with Vitess. Latest stable release. Everything you need to know about scaling MySQL with Vitess. Everything you need to know about scaling MySQL with Vitess. Everything you need to know about scaling MySQL with Vitess. Last updated June 30, 2022" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "JSON is a lightweight data-interchange format. In YQL, it's represented by the Json type. Unlike relational tables, JSON can store data with no schema defined. Here is an example of a valid JSON object: ``` [ { \"name\": \"Jim Holden\", \"age\": 30 }, { \"name\": \"Naomi Nagata\", \"age\": \"twenty years old\" } ] ``` Despite the fact that the age field in the first object is of the Number type (\"age\": 21) and in the second object its type is String (\"age\": \"twenty years old\"), this is a fully valid JSON object. To work with JSON, YQL implements a subset of the SQL support for JavaScript Object Notation (JSON) standard, which is part of the common ANSI SQL standard. Values inside JSON objects are accessed using a query language called JsonPath. All functions for JSON accept a JsonPath query as an argument. Let's look at an example. Suppose we have a JSON object like: ``` { \"comments\": [ { \"id\": 123, \"text\": \"A whisper will do, if it's all that you can manage.\" }, { \"id\": 456, \"text\": \"My life has become a single, ongoing revelation that I havent been cynical enough.\" } ] } ``` Then, to get the text of the second comment, we can write the following JsonPath query: ``` $.comments[1].text ``` In this query: | Operation | Example | |:|:-| | Retrieving a JSON object key | $.key | | Retrieving all JSON object keys | $.* | | Accessing an array element | $[25] | | Retrieving an array subsegment | $[2 to 5] | | Accessing the last array element | $[last] | | Accessing all array elements | $[*] | | Unary operations | - 1 | | Binary operations | (12 * 3) % 4 + 8 | | Accessing a variable | $variable | | Logical operations | `(1 > 2) || (3 <= 4) && (\"string\" == \"another\")| | | Matching a regular expression | $.name like_regex \"^[A-Za-z]+$\" | | Checking the string prefix | $.name starts with \"Bobbie\" | | Checking if a path exists | exists ($.profile.name) | | Checking a Boolean expression for null | ($.age > 20) is unknown | | Filtering values | $.friends ? (@.age >= 18 && @.gender == \"male\") | | Getting the value type | $.name.type() | | Getting the array size | $.friends.size() | | Converting a string to a number | $.number.double() | | Rounding up a number | $.number.ceiling() | | Rounding down a number | $.number.floor() | | Returning the absolute value | $.number.abs() | | Getting key-value pairs from an object | $.profile.keyvalue() | The result of executing all JsonPath expressions is a sequence of JSON values. For example: If the input sequence consists of multiple values, some operations are performed for each element (for example, accessing a JSON object key). However, other operations require a sequence of one element as input (for example, binary arithmetic operations). The behavior of a specific operation is described in the corresponding section of the documentation. JsonPath supports two execution modes, lax and strict. Setting the mode is optional. By default, lax. The mode is specified at the beginning of a query. For example, strict $.key. The behavior for each mode is described in the corresponding sections with JsonPath" }, { "data": "When accessing a JSON object key in lax mode, arrays are automatically unpacked. Example: ``` [ { \"key\": 123 }, { \"key\": 456 } ] ``` The lax $.key query is successful and returns 123, 456. As $ is an array, it's automatically unpacked and accessing the key of the $.key JSON object is executed for each element in the array. The strict $.key query returns an error. 
In strict mode, there is no support for auto unpacking of arrays. Since $ is an array and not an object, accessing the $.key object key is impossible. You can fix this by writing strict $[*].key. Unpacking is only 1 level deep. In the event of nested arrays, only the outermost one is unpacked. When accessing an array element in lax mode, JSON values are automatically wrapped in an array. Example: ``` { \"name\": \"Avasarala\" } ``` The lax $[0].name query is successful and returns \"Avasarala\". As $ isn't an array, it's automatically wrapped in an array of length 1. Accessing the first element $[0] returns the source JSON object where the name key is taken. The strict $[0].name query returns an error. In strict mode, values aren't wrapped in an array automatically. Since $ is an object and not an array, accessing the $[0] element is impossible. You can fix this by writing strict $.name. Some errors are converted to an empty result when a query is executed in lax mode. Values of some types can be specified in a JsonPath query using literals: | Type | Example | |:|:-| | Numbers | 42, -1.23e-5 | | Boolean values | false, true | | Null | Null | | Stings | \"Belt\" | JsonPath supports accessing JSON object keys, such as $.session.user.name. Note Accessing keys without quotes is only supported for keys that start with an English letter or underscore and only contain English letters, underscores, numbers, and a dollar sign. Use quotes for all other keys. For example, $.profile.\"this string has spaces\" or $.user.\"42 is the answer\" For each value from the input sequence: The expression execution result is the concatenation of the results for each value from the input sequence. Example: ``` { \"name\": \"Amos\", \"friends\": [ { \"name\": \"Jim\" }, { \"name\": \"Alex\" } ] } ``` | Unnamed: 0 | lax | strict | |:|:--|:| | $.name | \"Amos\" | \"Amos\" | | $.surname | Empty result | Error | | $.friends.name | \"Jim\", \"Alex\" | Error | JsonPath supports accessing all JSON object keys at once: $.*. For each value from the input sequence: The expression execution result is the concatenation of the results for each value from the input sequence. Example: ``` { \"profile\": { \"id\": 123, \"name\": \"Amos\" }, \"friends\": [ { \"name\": \"Jim\" }, { \"name\": \"Alex\" } ] } ``` | Unnamed: 0 | lax | strict | |:-|:--|:| | $.profile.* | 123, \"Amos\" | 123, \"Amos\" | | $.friends.* | \"Jim\", \"Alex\" | Error | JsonPath supports accessing array elements: $.friends[1, 3 to last -" }, { "data": "For each value from the input sequence: Examples: ``` [ { \"name\": \"Camina\", \"surname\": \"Drummer\" }, { \"name\": \"Josephus\", \"surname\": \"Miller\" }, { \"name\": \"Bobbie\", \"surname\": \"Draper\" }, { \"name\": \"Julie\", \"surname\": \"Mao\" } ] ``` | Unnamed: 0 | lax | strict | |:-|:|:| | $[0].name | \"Camina\" | \"Camina\" | | $[1, 2 to 3].name | \"Josephus\", \"Bobbie\", \"Julie\" | \"Josephus\", \"Bobbie\", \"Julie\" | | $[last - 2].name | \"Josephus\" | \"Josephus\" | | $[2, last + 200 to 50].name | \"Bobbie\" | Error | | $[50].name | Empty result | Error | JsonPath supports accessing all array elements at once: $[*]. 
For each value from the input sequence: Examples: ``` [ { \"class\": \"Station\", \"title\": \"Medina\" }, { \"class\": \"Corvette\", \"title\": \"Rocinante\" } ] ``` | Unnamed: 0 | lax | strict | |:|:-|:-| | $[*].title | \"Medina\", \"Rocinante\" | \"Medina\", \"Rocinante\" | | lax $.class | \"Station\" | Error | Let's analyze the last example step by step: Note All arithmetic operations work with numbers as with Double. Calculations are made with potential loss of accuracy. JsonPath supports unary + and -. A unary operation applies to all values from the input sequence. If a unary operation's input is a value that isn't a number, a query fails in both modes. Example: ``` [1, 2, 3, 4] ``` The strict -$[*] query is successful and returns -1, -2, -3, -4. The lax -$ query fails as $ is an array and not a number. JsonPath supports binary arithmetic operations (in descending order of priority): You can change the order of operations using parentheses. If each argument of a binary operation is not a single number or a number is divided by 0, the query fails in both modes. Examples: Unlike some other programming languages, Boolean values in JsonPath are not only true and false, but also null (uncertainty). JsonPath considers any values received from a JSON document to be non-Boolean. For example, a query like ! $.isvaliduser (a logical negation applied to the isvaliduser) field is syntactically invalid because the isvaliduser field value is not Boolean (even when it actually stores true or false). The correct way to write this kind of query is to explicitly use a comparison with a Boolean value, such as $.isvaliduser == false. JsonPath supports some logical operations for Boolean values. The arguments of any logical operation must be a single Boolean value. All logical operations return a Boolean" }, { "data": "Logical negation,! Truth table: | x | !x | |:|:| | true | false | | false | true | | Null | Null | Logical AND, && In the truth table, the first column is the left argument, the first row is the right argument, and each cell is the result of using the Logical AND both with the left and right arguments: | && | true | false | Null | |:|:-|:--|:-| | true | true | False | Null | | false | false | False | false | | Null | Null | False | Null | Logical OR, || In the truth table, the first column is the left argument, the first row is the right argument, and each cell is the result of using the logical OR with both the left and right arguments: | || | true | false | Null | |:|:-|:--|:-| | true | True | true | true | | false | True | false | Null | | Null | True | Null | Null | Examples: JsonPath implements comparison operators for values: All comparison operators return a Boolean value. Both operator arguments support multiple values. If an error occurs when calculating the operator arguments, it returns null. In this case, the JsonPath query execution continues. The arrays of each of the arguments are automatically unpacked. After that, for each pair where the first element is taken from the sequence of the left argument and the second one from the sequence of the right argument: If the pair analysis results in: We can say that this algorithm considers all pairs from the Cartesian product of the left and right arguments, trying to find the pair whose comparison returns true. 
Elements in a pair are compared according to the following rules: Example: Let's take a JSON document as an example ``` { \"left\": [1, 2], \"right\": [4, \"Inaros\"] } ``` and analyze the steps for executing the lax $.left < $.right query: Let's take the same query in a different execution mode: strict $.left < $.right: JsonPath supports predicates which are expressions that return a Boolean value and check a certain condition. You can use them, for example, in filters. The like_regex predicate lets you check if a string matches a regular expression. The syntax of regular expressions is the same as in Hyperscan UDF and REGEXP. Syntax ``` <expression> like_regex <regexp string> [flag <flag string>] ``` Where: Supported flags: Execution Before the check, the input sequence arrays are automatically unpacked. After that, for each element of the input sequence: If the pair analysis results in: Examples The starts with predicate lets you check if one string is a prefix of another. Syntax ``` <string expression> starts with <prefix expression> ``` Where: This means that the predicate will check that the <string expression> starts with the <prefix expression> string. Execution The first argument of the predicate must be a single string. The second argument of the predicate must be a sequence of (possibly, multiple) strings. For each element in a sequence of prefix strings: If the pair analysis results in: Examples The exists predicate lets you check whether a JsonPath expression returns at least one element. Syntax ``` exists (<expression>) ``` Where <expression> is the JsonPath expression to be checked. Parentheses around the expression are required. Execution Examples Let's take a JSON document: ``` { \"profile\": { \"name\": \"Josephus\", \"surname\": \"Miller\" } } ``` The is unknown predicate lets you check if a Boolean value is null. Syntax ``` (<expression>) is unknown ``` Where <expression> is the JsonPath expression to be checked. Only expressions that return a Boolean value are allowed. Parentheses around the expression are required. Execution Examples JsonPath lets you filter values obtained during query execution. An expression in a filter must return a Boolean value. Before filtering, the input sequence arrays are automatically unpacked. For each element of the input sequence: Example: Suppose we have a JSON document describing the user's friends ``` { \"friends\": [ { \"name\": \"James Holden\", \"age\": 35, \"money\": 500 }, { \"name\": \"Naomi Nagata\", \"age\": 30, \"money\": 345 } ] } ``` and we want to get a list of friends who are over 32 years old using a JsonPath query. To do this, you can write the following query: ``` $.friends ?" }, { "data": "> 32) ``` Let's analyze the query in parts: The query only returns the first friend from the array of user's friends. Like many other JsonPath operators, filters can be arranged in chains. Let's take a more complex query that selects the names of friends who are older than 20 and have less than 400 currency units: ``` $.friends ? (@.age > 20) ? (@.money < 400) . name ``` Let's analyze the query in parts: The query returns a sequence of a single element: \"Naomi Nagata\". In practice, it's recommended to combine multiple filters into one if possible. The above query is equivalent to $.friends ? (@.age > 20 && @.money < 400) . name. JsonPath supports methods that are functions converting one sequence of values to another. 
The syntax for calling a method is similar to accessing the object key: ``` $.friends.size() ``` Just like in the case of accessing object keys, method calls can be arranged in chains: ``` $.numbers.double().floor() ``` The type method returns a string with the type of the passed value. For each element of the input sequence, the method adds this string to the output sequence according to the table below: | Value type | String with type | |:--|:-| | Null | \"null\" | | Boolean value | \"boolean\" | | Number | \"number\" | | String | \"string\" | | Array | \"array\" | | Object | \"object\" | Examples The size method returns the size of the array. For each element of the input sequence, the method adds the following to the output sequence: Examples Let's take a JSON document: ``` { \"array\": [1, 2, 3], \"object\": { \"a\": 1, \"b\": 2 }, \"scalar\": \"string\" } ``` And queries to it: The double method converts strings to numbers. Before its execution, the input sequence arrays are automatically unpacked. All elements in the input sequence must be strings that contain decimal numbers. It's allowed to specify the fractional part and exponent. Examples The ceiling method rounds up a number. Before its execution, the input sequence arrays are automatically unpacked. All elements in the input sequence must be numbers. Examples The floor method rounds down a number. Before its execution, the input sequence arrays are automatically unpacked. All elements in the input sequence must be numbers. Examples The abs method calculates the absolute value of a number (removes the sign). Before its execution, the input sequence arrays are automatically unpacked. All elements in the input sequence must be numbers. Examples The keyvalue method converts an object to a sequence of key-value pairs. Before its execution, the input sequence arrays are automatically unpacked. All elements in the input sequence must be objects. For each element of the input sequence: Examples Let's take a JSON document: ``` { \"name\": \"Chrisjen\", \"surname\": \"Avasarala\", \"age\": 70 } ``` The $.keyvalue() query returns the following sequence for it: ``` { \"name\": \"age\", \"value\": 70 }, { \"name\": \"name\", \"value\": \"Chrisjen\" }, { \"name\": \"surname\", \"value\": \"Avasarala\" } ``` Functions using JsonPath can pass values into a query. They are called variables. To access a variable, write the $ character and the variable name: $variable. Example: Let the planet variable be equal to ``` { \"name\": \"Mars\", \"gravity\": 0.376 } ``` Then the strict $planet.name query returns" }, { "data": "Unlike many programming languages, JsonPath doesn't support creating new variables or modifying existing ones. All functions for JSON accept: Lets you pass values to a JsonPath query as variables. Syntax: ``` PASSING <expression 1> AS <variable name 1>, <expression 2> AS <variable name 2>, ... ``` <expression> can have the following types: You can set a <variable name> in several ways: Example: ``` JSON_VALUE( $json, \"$.timestamp - $Now + $Hour\" PASSING 24 * 60 as Hour, CurrentUtcTimestamp() as \"Now\" ) ``` The JSON_EXISTS function checks if a JSON value meets the specified JsonPath. Syntax: ``` JSON_EXISTS( <JSON expression>, <JsonPath query>, [<PASSING clause>] [{TRUE | FALSE | UNKNOWN | ERROR} ON ERROR] ) ``` Return value: Bool? 
Default value: If the ON ERROR section isn't specified, the used section is FALSE ON ERROR Behavior: Examples: ``` $json = CAST(@@{ \"title\": \"Rocinante\", \"crew\": [ \"James Holden\", \"Naomi Nagata\", \"Alex Kamai\", \"Amos Burton\" ] }@@ as Json); SELECT JSON_EXISTS($json, \"$.title\"), -- True JSON_EXISTS($json, \"$.crew[*]\"), -- True JSON_EXISTS($json, \"$.nonexistent\"); -- False, as JsonPath returns an empty result SELECT -- JsonPath error, False is returned because the default section used is FALSE ON ERROR JSON_EXISTS($json, \"strict $.nonexistent\"); SELECT -- JsonPath error, the entire YQL query fails. JSON_EXISTS($json, \"strict $.nonexistent\" ERROR ON ERROR); ``` The JSON_VALUE function retrieves a scalar value from JSON (anything that isn't an array or object). Syntax: ``` JSON_VALUE( <JSON expression>, <JsonPath query>, [<PASSING clause>] [RETURNING <type>] [{ERROR | NULL | DEFAULT <expr>} ON EMPTY] [{ERROR | NULL | DEFAULT <expr>} ON ERROR] ) ``` Return value: <type>? Default values: Behavior: Correlation between JSON and YQL types: Errors executing JSON_VALUE are as follows: The RETURNING section supports such types as numbers, Date, DateTime, Timestamp, Utf8, String, and Bool. Examples: ``` $json = CAST(@@{ \"friends\": [ { \"name\": \"James Holden\", \"age\": 35 }, { \"name\": \"Naomi Nagata\", \"age\": 30 } ] }@@ as Json); SELECT JSON_VALUE($json, \"$.friends[0].age\"), -- \"35\" (type Utf8?) JSON_VALUE($json, \"$.friends[0].age\" RETURNING Uint64), -- 35 (type Uint64?) JSON_VALUE($json, \"$.friends[0].age\" RETURNING Utf8); -- an empty Utf8? due to an error. The JSON's Number type doesn't match the string Utf8 type. SELECT -- \"empty\" (type String?) JSON_VALUE( $json, \"$.friends[50].name\" RETURNING String DEFAULT \"empty\" ON EMPTY ); SELECT -- 20 (type Uint64?). The result of JsonPath execution is empty, but the -- default value from the ON EMPTY section can't be cast to Uint64. -- That's why the value from ON ERROR is used. JSON_VALUE( $json, \"$.friends[50].age\" RETURNING Uint64 DEFAULT -1 ON EMPTY DEFAULT 20 ON ERROR ); ``` The JSON_QUERY function lets you retrieve arrays and objects from JSON. Syntax: ``` JSON_QUERY( <JSON expression>, <JsonPath query>, [<PASSING clause>] [WITHOUT [ARRAY] | WITH [CONDITIONAL | UNCONDITIONAL] [ARRAY] WRAPPER] [{ERROR | NULL | EMPTY ARRAY | EMPTY OBJECT} ON EMPTY] [{ERROR | NULL | EMPTY ARRAY | EMPTY OBJECT} ON ERROR] ) ``` Return value: Json? Default values: Behavior: Note You can't specify the WITH ... WRAPPER and ON EMPTY sections at the same time. Errors running a JSON_QUERY: Examples: ``` $json = CAST(@@{ \"friends\": [ { \"name\": \"James Holden\", \"age\": 35 }, { \"name\": \"Naomi Nagata\", \"age\": 30 } ] }@@ as Json); SELECT JSON_QUERY($json, \"$.friends[0]\"); -- {\"name\": \"James Holden\", \"age\": 35} SELECT JSON_QUERY($json, \"$.friends.name\" WITH UNCONDITIONAL WRAPPER); -- [\"James Holden\", \"Naomi Nagata\"] SELECT JSON_QUERY($json, \"$.friends[0]\" WITH CONDITIONAL WRAPPER), -- {\"name\": \"James Holden\", \"age\": 35} JSON_QUERY($json, \"$.friends.name\" WITH CONDITIONAL WRAPPER); -- [\"James Holden\", \"Naomi Nagata\"] ```" } ]
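To tie the JsonPath operators and the SQL functions together, here is a small self-contained YQL sketch that combines JSON_EXISTS, JSON_VALUE with RETURNING, and JSON_QUERY with a filter plus a PASSING variable. It reuses the friends document from the examples above; the min_age variable name is arbitrary.

```
$json = CAST(@@{
    "friends": [
        { "name": "James Holden", "age": 35 },
        { "name": "Naomi Nagata", "age": 30 }
    ]
}@@ as Json);

SELECT
    -- True: the friends array is not empty
    JSON_EXISTS($json, "$.friends[*]"),

    -- 35 as Uint64? instead of the default Utf8?
    JSON_VALUE($json, "$.friends[0].age" RETURNING Uint64),

    -- Names of friends older than min_age, wrapped into a JSON array: ["James Holden"]
    JSON_QUERY(
        $json,
        "$.friends ? (@.age > $min_age) . name"
        PASSING 32 AS min_age
        WITH UNCONDITIONAL WRAPPER
    );
```

PASSING makes the value 32 available inside the JsonPath query as $min_age, in the same way the Hour and Now variables are used in the PASSING example above.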
{ "category": "App Definition and Development", "file_name": "quickstart.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "In this guide, you will install a single-node local YDB cluster and execute simple queries against your database. Normally, YDB stores data on multiple SSD/NVMe or HDD raw disk devices without any filesystem. However, for simplicity, this guide emulates disks in RAM or using a file in a regular filesystem. Thus, this setup is unsuitable for any production usage or even benchmarks. See the documentation for DevOps Engineers to learn how to run YDB in a production environment. Note The recommended environment to run YDB is x86_64 Linux. If you don't have access to one, feel free to switch to the instructions on the \"Docker\" tab. Create a directory for YDB and use it as the current working directory: ``` mkdir ~/ydbd && cd ~/ydbd ``` Download and run the installation script: ``` curl https://install.ydb.tech | bash ``` This will download and unpack the archive containing the ydbd executable, libraries, configuration files, and scripts needed to start and stop the local cluster. The script is executed entirely with the current user privileges (notice the lack of sudo). Therefore, it can't do much on the system. You can check which exactly commands it runs by opening the same URL in your browser. Start the cluster in one of the following storage modes: In-memory data: ``` ./start.sh ram ``` In this case, all data is stored only in RAM, it will be lost when the cluster is stopped. Data on disk: ``` ./start.sh disk ``` When you run this command an 80GB ydb.data file will be created in the working directory if it weren't there before. Make sure there's enough disk space available to create it. This file will be used to emulate a raw disk device, which would have been used in production environments. Data on a real disk drive: ``` ./start.sh drive \"/dev/$DRIVE_NAME\" ``` Replace /dev/$DRIVE_NAME with an actual device name that is not used for anything else, for example /dev/sdb. The first time you run this command, the specified disk drive will be fully wiped and then used for YDB data storage. It is recommended to use a NVMe or SSD drive with at least 800Gb data volume. Such setup can be used for single-node performance testing or other environments that do not have any fault-tolerance requirements. Result: ``` Starting storage process... Initializing storage ... Registering database ... Starting database process... Database started. Connection options for YDB CLI: -e grpc://localhost:2136 -d /Root/test ``` Create a directory for YDB and use it as the current working directory: ``` mkdir ~/ydbd && cd ~/ydbd mkdir ydb_data mkdir ydb_certs ``` Run the Docker container: ``` docker run -d --rm --name ydb-local -h localhost \\ --platform linux/amd64 \\ -p 2135:2135 -p 2136:2136 -p 8765:8765 \\ -v $(pwd)/ydbcerts:/ydbcerts -v $(pwd)/ydbdata:/ydbdata \\ -e GRPCTLSPORT=2135 -e GRPCPORT=2136 -e MONPORT=8765 \\ -e YDBUSEINMEMORYPDISKS=true \\" }, { "data": "``` If the container starts successfully, you'll see the container's ID. The container might take a few minutes to initialize. The database will not be available until container initialization is complete. The YDBUSEINMEMORYPDISKS setting makes all data volatile, stored only in RAM. Currently, data persistence by turning it off is supported only on x86_64 processors. Install the Kubernetes CLI kubectl and Helm 3 package manager. Install and run Minikube. 
Clone the repository with YDB Kubernetes Operator: ``` git clone https://github.com/ydb-platform/ydb-kubernetes-operator && cd ydb-kubernetes-operator ``` Install the YDB controller in the cluster: ``` helm upgrade --install ydb-operator deploy/ydb-operator --set metrics.enabled=false ``` Apply the manifest for creating a YDB cluster: ``` kubectl apply -f samples/minikube/storage.yaml ``` Wait for kubectl get storages.ydb.tech to become Ready. Apply the manifest for creating a database: ``` kubectl apply -f samples/minikube/database.yaml ``` Wait for kubectl get databases.ydb.tech to become Ready. After processing the manifest, a StatefulSet object that describes a set of dynamic nodes is created. The created database will be accessible from inside the Kubernetes cluster by the database-minikube-sample DNS name on port 2135. To continue, get access to port 8765 from outside Kubernetes using kubectl port-forward database-minikube-sample-0 8765. The simplest way to launch your first YDB query is via the built-in web interface. It is launched by default on port 8765 of the YDB server. If you have launched it locally, open localhost:8765 in your web browser. If not, replace localhost with your server's hostname in this URL or use ssh -L 8765:localhost:8765 my-server-hostname-or-ip.example.com to set up port forwarding and still open localhost:8765. You'll see a page like this: YDB is designed to be a multi-tenant system, with potentially thousands of users working with the same cluster simultaneously. Hence, most logical entities inside a YDB cluster reside in a flexible hierarchical structure more akin to Unix's virtual filesystem rather than a fixed-depth schema you might be familiar with from other database management systems. As you can see, the first level of hierarchy consists of databases running inside a single YDB process that might belong to different tenants. /Root is for system purposes, while /Root/test or /local (depending on the chosen installation method) is a playground created during installation in the previous step. Click on either /Root/test or /local, enter your first query, and hit the \"Run\" button: ``` SELECT \"Hello, world!\"u; ``` The query returns the greeting, as it is supposed to: Note Did you notice the odd u suffix? YDB and its query language, YQL, are strongly typed. Regular strings in YDB can contain any binary data, while this suffix indicates that this string literal is of the Utf8 data type, which can only contain valid UTF-8 sequences. Learn more about YDB's type" }, { "data": "The second simplest way to run a SQL query with YDB is the command line interface (CLI), while most real-world applications will likely communicate with YDB via one of the available software development kits (SDK). Feel free to follow the rest of the guide using either the CLI or one of the SDKs instead of the web UI if you feel comfortable doing so. The main purpose of database management systems is to store data for later retrieval. As an SQL-based system, YDB's primary abstraction for data storage is a table. To create our first one, run the following query: ``` CREATE TABLE example ( key UInt64, value String, PRIMARY KEY (key) ); ``` As you can see, it is a simple key-value table. Let's walk through the query step-by-step: Now let's fill our table with some data. 
The simplest way is to just use literals: ``` INSERT INTO example (key, value) VALUES (123, \"hello\"), (321, \"world\"); ``` Step-by-step walkthrough: To double-check that the rows were indeed added to the table, there's a common query that should return 2 in this case: ``` SELECT COUNT(*) FROM example; ``` A few notable details in this one: Another common way to fill a table with data is by combining INSERT INTO (or UPSERT INTO) and SELECT. In this case, values to be stored are calculated inside the database instead of being provided by the client as literals. We'll use a slightly more realistic query to demonstrate this: ``` $subquery = SELECT ListFromRange(1000, 10000) AS keys; UPSERT INTO example SELECT key, CAST(RandomUuid(key) AS String) AS value FROM $subquery FLATTEN LIST BY keys AS key ``` There's quite a lot going on in this query; let's dig into it: Quick question! What will the SELECT COUNT(*) FROM example; query return now? Stop the local YDB cluster after you have finished experimenting: To stop the local cluster, run the following command: ``` ~/ydbd/stop.sh ``` Optionally, you can then clean up your filesystem by removing your working directory with the rm -rf ~/ydbd command. All data inside the local YDB cluster will be lost. To stop the Docker container with the local cluster, run the following command: ``` docker kill ydb-local ``` Optionally, you can then clean up your filesystem by removing your working directory with the rm -rf ~/ydbd command. All data inside the local YDB cluster will be lost. To delete the YDB database, it is enough to delete the Database resource associated with it: ``` kubectl delete database.ydb.tech database-minikube-sample ``` To delete the YDB cluster, execute the following commands (all data will be lost): ``` kubectl delete storage.ydb.tech storage-minikube-sample ``` To remove the YDB controller from the Kubernetes cluster, delete the release created by Helm: ``` helm delete ydb-operator ``` After getting a hold of some basics demonstrated in this guide, you should be ready to jump into more advanced topics. Choose what looks the most relevant depending on your use case and role:" } ]
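As for the quick question above: assuming ListFromRange(1000, 10000) produces the keys 1000 through 9999 (the right bound being exclusive), the UPSERT adds 9000 rows on top of the two inserted literals, so the count should now be 9002. A few follow-up queries in the same spirit as the guide can confirm this and show the cheapest read pattern, a point lookup by primary key:

```
-- Expected: 9002 (2 literal rows + 9000 generated keys)
SELECT COUNT(*) FROM example;

-- Boundaries of the generated key range
SELECT MIN(key) AS min_key, MAX(key) AS max_key FROM example WHERE key >= 1000;

-- Point lookup by primary key
SELECT value FROM example WHERE key = 123;
```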
{ "category": "App Definition and Development", "file_name": "get-started.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "There is a list of documents about Vald.Let's try to check the documents that you want to know. Overview shows the concept of Vald and mentions the top level design of Vald. Tutorial takes you Vald World!!!You can deploy Vald in your Kubernetes cluster and running Vald with sample code. Usecase supports you to imagine how to use Vald for your services.Also, you can get the introduction examples. If you'd like to configure for your Vald Cluster or wonder how to operate, you can find out the answer from these documents. Vald provides Insert, Update, Upsert, Search, and Delete APIs.Each API to use with gRPC for request to the Vald Cluster.Each document describes the definition of API, which helps you to use Vald Official Client. When you encounter any problem, please refer to these documents and try to resolve it. We are welcome to contribute to Vald even not as a developer.Please make sure, how to contribute to Vald. When wondering anything about Vald, please contact to us via Slack or Github. 2019-2024 vald.vdaas.org Vald team" } ]
{ "category": "App Definition and Development", "file_name": "topic.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "A topic in YDB is an entity for storing unstructured messages and delivering them to multiple subscribers. Basically, a topic is a named set of messages. A producer app writes messages to a topic. Consumer apps are independent of each other, they receive and read messages from the topic in the order they were written there. Topics implement the publish-subscribe architectural pattern. YDB topics have the following properties: Data is transferred as message streams. A message is the minimum atomic unit of user information. A message consists of a body and attributes and additional system properties. The content of a message is an array of bytes which is not interpreted by YDB in any way. Messages may contain user-defined attributes in \"key-value\" format. They are returned along with the message body when reading the message. User-defined attributes let the consumer decide whether it should process the message without unpacking the message body. Message attributes are set when initializing a write session. This means that all messages written within a single write session will have the same attributes when reading them. To enable horizontal scaling, a topic is divided into partitions that are units of parallelism. Each partition has a limited bandwidth. The recommended write speed is 1 MBps. Note As for now, you can only reduce the number of partitions in a topic by deleting and recreating a topic with a smaller number of partitions. All messages within a partition have a unique sequence number called an offset An offset monotonically increases as new messages are written. Messages are ordered using the producerid and messagegroup_id. The order of written messages is maintained within pairs: <producer ID, message group ID>. When used for the first time, a pair of <producer ID, message group ID> is linked to a topic's partition using the round-robin algorithm and all messages with this pair of IDs get into the same partition. The link is removed if there are no new messages using this producer ID for 14 days. Warning The recommended maximum number of <producer ID, message group ID> pairs is up to 100 thousand per partition in the last 14 days. Let's consider a finance application that calculates the balance on a user's account and permits or prohibits debiting the funds. For such tasks, you can use a message queue. When you top up your account, debit funds, or make a purchase, a message with the account ID, amount, and transaction type is registered in the queue. The application processes incoming messages and calculates the balance. To accurately calculate the balance, the message processing order is crucial. If a user first tops up their account and then makes a purchase, messages with details about these transactions must be processed by the app in the same order. Otherwise there may be an error in the business logic and the app will reject the purchase as a result of insufficient funds. There are guaranteed delivery order mechanisms, but they cannot ensure a message order within a single queue on an arbitrary data" }, { "data": "When several application instances read messages from a stream, a message about account top-ups can be received by one instance and a message about debiting by another. In this case, there's no guaranteed instance with accurate balance information. To avoid this issue, you can, for example, save data in the DBMS, share information between application instances, and implement a distributed cache. 
YDB can write data so that messages from one source (for example, about transactions from one account) arrive at the same application instance. The source of a message is identified by the source_id, while the sequence number of a message from the source is used to ensure there are no duplicate messages. YDB arranges data streams so that messages from the same source arrive at the same partition. As a result, transaction messages for a given account will always arrive at the same partition and be processed by the application instance linked to this partition. Each of the instances processes its own subset of partitions and there's no need to synchronize the instances. Below is an example when all transactions on accounts with even ids are transferred to the first instance of the application, and with odd ones to the second. For some tasks, the message processing order is not critical. For example, it's sometimes important to simply deliver data that will then be ordered by the storage system. For such tasks, the 'no-deduplication' mode can be used. In this scenario neither producerid or sourceid are specified in write session setup and sequence numbers are also not used for messages. The no-deduplication mode offers better perfomance and requires less server resources, but there is no message ordering or deduplication on the server side, which means that some message sent to the server multiple times (for example due to network instablity or writer process crash) also may be written to the topic multiple times. Warning We strongly recommend that you don't use random or pseudo-random source IDs. We recommend using a maximum of 100 thousand different source IDs per partition. A source ID is an arbitrary string up to 2048 characters long. This is usually the ID of a file server or some other ID. | Type | ID | Description | |:-|:-|:--| | File | Server ID | Files are used to store application logs. In this case, it's convenient to use the server ID as a source ID. | | User actions | ID of the class of user actions, such as \"viewing a page\", \"making a purchase\", and so on. | It's important to handle user actions in the order they were performed by the user. At the same time, there is no need to handle every single user action in one application. In this case, it's convenient to group user actions by class. | A message group ID is an arbitrary string up to 2048 characters long. This is usually a file name or user" }, { "data": "| Type | ID | Description | |:-|:|:--| | File | Full file path | All data from the server and the file it hosts will be sent to the same partition. | | User actions | User ID | It's important to handle user actions in the order they were performed. In this case, it's convenient to use the user ID as a source ID. | All messages from the same source have a sequence number used for their deduplication. A message sequence number should monotonically increase within a topic, source pair. If the server receives a message whose sequence number is less than or equal to the maximum number written for the topic, source pair, the message will be skipped as a duplicate. Some sequence numbers in the sequence may be skipped. Message sequence numbers must be unique within the topic, source pair. Sequence numbers are not used if no-deduplication mode is enabled. 
| Type | Example | Description | |:|:--|:--| | File | Offset of transferred data from the beginning of a file | You can't delete lines from the beginning of a file, since this will lead to skipping some data as duplicates or losing some data. | | DB table | Auto-increment record ID | nan | The message retention period is set for each topic. After it expires, messages are automatically deleted. An exception is data that hasn't been read by an important consumer: this data will be stored until it's read. When transferring data, the producer app indicates that a message can be compressed using one of the supported codecs. The codec name is passed while writing a message, saved along with it, and returned when reading the message. Compression applies to each individual message, no batch message compression is supported. Data is compressed and decompressed on the producer and consumer apps end. Supported codecs are explicitly listed in each topic. When making an attempt to write data to a topic with a codec that is not supported, a write error occurs. | Codec | Description | |:--|:| | raw | No compression. | | gzip | Gzip compression. | | lzop | lzop compression. | | zstd | zstd compression. | A consumer is a named entity that reads data from a topic. A consumer contains committed consumer offsets for each topic read on their behalf. A consumer offset is a saved offset of a consumer by each topic partition. It's saved by a consumer after sending commits of the data read. When a new read session is established, messages are delivered to the consumer starting with the saved consumer offset. This lets users avoid saving the consumer offset on their end. A consumer may be flagged as \"important\". This flag indicates that messages in a topic won't be removed until the consumer reads and confirms them. You can set this flag for most critical consumers that need to handle all data even if there's a long idle time. Warning As a long timeout of an important consumer may result in full use of all available free space by unread messages, be sure to monitor important consumers' data read lags." } ]
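The concepts above — partitions, the retention period, supported codecs, and consumers — all surface as settings when a topic is created. The sketch below is only illustrative: the topic and consumer names are made up, and the exact WITH option names are written from memory, so verify them against the YQL CREATE TOPIC reference before relying on them.

```
-- Hypothetical example; option names are assumptions to verify.
CREATE TOPIC `account_events` (
    CONSUMER balance_consumer            -- committed offsets are stored per partition for this consumer
) WITH (
    min_active_partitions = 4,           -- units of parallelism (~1 MBps recommended write speed per partition)
    retention_period = Interval('P1D'),  -- messages older than one day may be deleted
    supported_codecs = 'raw, gzip, zstd' -- codecs producers are allowed to write with
);
```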
{ "category": "App Definition and Development", "file_name": "ydb_docker.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "In this guide, you will install a single-node local YDB cluster and execute simple queries against your database. Normally, YDB stores data on multiple SSD/NVMe or HDD raw disk devices without any filesystem. However, for simplicity, this guide emulates disks in RAM or using a file in a regular filesystem. Thus, this setup is unsuitable for any production usage or even benchmarks. See the documentation for DevOps Engineers to learn how to run YDB in a production environment. Note The recommended environment to run YDB is x86_64 Linux. If you don't have access to one, feel free to switch to the instructions on the \"Docker\" tab. Create a directory for YDB and use it as the current working directory: ``` mkdir ~/ydbd && cd ~/ydbd ``` Download and run the installation script: ``` curl https://install.ydb.tech | bash ``` This will download and unpack the archive containing the ydbd executable, libraries, configuration files, and scripts needed to start and stop the local cluster. The script is executed entirely with the current user privileges (notice the lack of sudo). Therefore, it can't do much on the system. You can check which exactly commands it runs by opening the same URL in your browser. Start the cluster in one of the following storage modes: In-memory data: ``` ./start.sh ram ``` In this case, all data is stored only in RAM, it will be lost when the cluster is stopped. Data on disk: ``` ./start.sh disk ``` When you run this command an 80GB ydb.data file will be created in the working directory if it weren't there before. Make sure there's enough disk space available to create it. This file will be used to emulate a raw disk device, which would have been used in production environments. Data on a real disk drive: ``` ./start.sh drive \"/dev/$DRIVE_NAME\" ``` Replace /dev/$DRIVE_NAME with an actual device name that is not used for anything else, for example /dev/sdb. The first time you run this command, the specified disk drive will be fully wiped and then used for YDB data storage. It is recommended to use a NVMe or SSD drive with at least 800Gb data volume. Such setup can be used for single-node performance testing or other environments that do not have any fault-tolerance requirements. Result: ``` Starting storage process... Initializing storage ... Registering database ... Starting database process... Database started. Connection options for YDB CLI: -e grpc://localhost:2136 -d /Root/test ``` Create a directory for YDB and use it as the current working directory: ``` mkdir ~/ydbd && cd ~/ydbd mkdir ydb_data mkdir ydb_certs ``` Run the Docker container: ``` docker run -d --rm --name ydb-local -h localhost \\ --platform linux/amd64 \\ -p 2135:2135 -p 2136:2136 -p 8765:8765 \\ -v $(pwd)/ydbcerts:/ydbcerts -v $(pwd)/ydbdata:/ydbdata \\ -e GRPCTLSPORT=2135 -e GRPCPORT=2136 -e MONPORT=8765 \\ -e YDBUSEINMEMORYPDISKS=true \\" }, { "data": "``` If the container starts successfully, you'll see the container's ID. The container might take a few minutes to initialize. The database will not be available until container initialization is complete. The YDBUSEINMEMORYPDISKS setting makes all data volatile, stored only in RAM. Currently, data persistence by turning it off is supported only on x86_64 processors. Install the Kubernetes CLI kubectl and Helm 3 package manager. Install and run Minikube. 
Clone the repository with YDB Kubernetes Operator: ``` git clone https://github.com/ydb-platform/ydb-kubernetes-operator && cd ydb-kubernetes-operator ``` Install the YDB controller in the cluster: ``` helm upgrade --install ydb-operator deploy/ydb-operator --set metrics.enabled=false ``` Apply the manifest for creating a YDB cluster: ``` kubectl apply -f samples/minikube/storage.yaml ``` Wait for kubectl get storages.ydb.tech to become Ready. Apply the manifest for creating a database: ``` kubectl apply -f samples/minikube/database.yaml ``` Wait for kubectl get databases.ydb.tech to become Ready. After processing the manifest, a StatefulSet object that describes a set of dynamic nodes is created. The created database will be accessible from inside the Kubernetes cluster by the database-minikube-sample DNS name on port 2135. To continue, get access to port 8765 from outside Kubernetes using kubectl port-forward database-minikube-sample-0 8765. The simplest way to launch your first YDB query is via the built-in web interface. It is launched by default on port 8765 of the YDB server. If you have launched it locally, open localhost:8765 in your web browser. If not, replace localhost with your server's hostname in this URL or use ssh -L 8765:localhost:8765 my-server-hostname-or-ip.example.com to set up port forwarding and still open localhost:8765. You'll see a page like this: YDB is designed to be a multi-tenant system, with potentially thousands of users working with the same cluster simultaneously. Hence, most logical entities inside a YDB cluster reside in a flexible hierarchical structure more akin to Unix's virtual filesystem rather than a fixed-depth schema you might be familiar with from other database management systems. As you can see, the first level of hierarchy consists of databases running inside a single YDB process that might belong to different tenants. /Root is for system purposes, while /Root/test or /local (depending on the chosen installation method) is a playground created during installation in the previous step. Click on either /Root/test or /local, enter your first query, and hit the \"Run\" button: ``` SELECT \"Hello, world!\"u; ``` The query returns the greeting, as it is supposed to: Note Did you notice the odd u suffix? YDB and its query language, YQL, are strongly typed. Regular strings in YDB can contain any binary data, while this suffix indicates that this string literal is of the Utf8 data type, which can only contain valid UTF-8 sequences. Learn more about YDB's type" }, { "data": "The second simplest way to run a SQL query with YDB is the command line interface (CLI), while most real-world applications will likely communicate with YDB via one of the available software development kits (SDK). Feel free to follow the rest of the guide using either the CLI or one of the SDKs instead of the web UI if you feel comfortable doing so. The main purpose of database management systems is to store data for later retrieval. As an SQL-based system, YDB's primary abstraction for data storage is a table. To create our first one, run the following query: ``` CREATE TABLE example ( key UInt64, value String, PRIMARY KEY (key) ); ``` As you can see, it is a simple key-value table. Let's walk through the query step-by-step: Now let's fill our table with some data. 
The simplest way is to just use literals: ``` INSERT INTO example (key, value) VALUES (123, \"hello\"), (321, \"world\"); ``` Step-by-step walkthrough: To double-check that the rows were indeed added to the table, there's a common query that should return 2 in this case: ``` SELECT COUNT(*) FROM example; ``` A few notable details in this one: Another common way to fill a table with data is by combining INSERT INTO (or UPSERT INTO) and SELECT. In this case, values to be stored are calculated inside the database instead of being provided by the client as literals. We'll use a slightly more realistic query to demonstrate this: ``` $subquery = SELECT ListFromRange(1000, 10000) AS keys; UPSERT INTO example SELECT key, CAST(RandomUuid(key) AS String) AS value FROM $subquery FLATTEN LIST BY keys AS key ``` There's quite a lot going on in this query; let's dig into it: Quick question! What will the SELECT COUNT(*) FROM example; query return now? Stop the local YDB cluster after you have finished experimenting: To stop the local cluster, run the following command: ``` ~/ydbd/stop.sh ``` Optionally, you can then clean up your filesystem by removing your working directory with the rm -rf ~/ydbd command. All data inside the local YDB cluster will be lost. To stop the Docker container with the local cluster, run the following command: ``` docker kill ydb-local ``` Optionally, you can then clean up your filesystem by removing your working directory with the rm -rf ~/ydbd command. All data inside the local YDB cluster will be lost. To delete the YDB database, it is enough to delete the Database resource associated with it: ``` kubectl delete database.ydb.tech database-minikube-sample ``` To delete the YDB cluster, execute the following commands (all data will be lost): ``` kubectl delete storage.ydb.tech storage-minikube-sample ``` To remove the YDB controller from the Kubernetes cluster, delete the release created by Helm: ``` helm delete ydb-operator ``` After getting a hold of some basics demonstrated in this guide, you should be ready to jump into more advanced topics. Choose what looks the most relevant depending on your use case and role:" } ]
{ "category": "App Definition and Development", "file_name": "table#olap-data-types.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "A table is a relational table containing a set of related data, composed of rows and columns. Tables represent entities. For instance, a blog article can be represented by a table named article with columns: id, date_create, title, author, body and so on. Rows in the table hold the data, while columns define the data types. For example, the id column cannot be empty (NOT NULL) and should contain only unique integer values. A record in YQL might look like this: ``` CREATE TABLE article ( id Int64 NOT NULL, date_create Date, author String, title String, body String, PRIMARY KEY (id) ) ``` Please note that currently, the NOT NULL constraint can only be applied to columns that are part of the primary key. YDB supports the creation of both row-oriented and column-oriented tables. The primary difference between them lies in their use-cases and how data is stored on the disk drive. In row-oriented tables, data is stored sequentially in the form of rows, while in column-oriented tables, data is stored in the form of columns. Each table type has its own specific purpose. Row-oriented tables are well-suited for transactional queries generated by Online Transaction Processing (OLTP) systems, such as weather service backends or online stores. Row-oriented tables offer efficient access to a large number of columns simultaneously. Lookups in row-oriented tables are optimized due to the utilization of indexes. An index is a data structure that improves the speed of data retrieval operations based on one or several columns. It's analogous to an index in a book: instead of scanning every page of the book to find a specific chapter, you can refer to the index at the back of the book and quickly navigate to the desired page. Searching using an index allows you to swiftly locate the required rows without scanning through all the data. For instance, if you have an index on the author column and you're looking for articles written by Gray, the DBMS leverages this index to quickly identify all rows associated with that surname. You can create a row-oriented table through the YDB web interface, CLI, or SDK. Regardless of the method you choose to interact with YDB, it's important to keep in mind the following rule: the table must have at least one primary key column, and it's permissible to create a table consisting solely of primary key columns. By default, when creating a row-oriented table, all columns are optional and can have NULL values. This behavior can be modified by setting the NOT NULL conditions for key columns that are part of the primary key. Primary keys are unique, and row-oriented tables are always sorted by this key. This means that point reads by the key, as well as range queries by key or key prefix, are efficiently executed (essentially using an index). It's permissible to create a table consisting solely of key columns. When choosing a key, it's crucial to be careful, so we recommend reviewing the article: \"Choosing a Primary Key for Maximum" }, { "data": "A row-oriented database table can be partitioned by primary key value ranges. Each partition of the table is responsible for the specific section of primary keys. Key ranges served by different partitions do not overlap. Different table partitions can be served by different cluster nodes (including ones in different locations). Partitions can also move independently between servers to enable rebalancing or ensure partition operability if servers or network equipment goes offline. 
If there is not a lot of data or load, the table may consist of a single shard. As the amount of data served by the shard or the load on the shard grows, YDB automatically splits this shard into two shards. The data is split by the median value of the primary key if the shard size exceeds the threshold. If partitioning by load is used, the shard first collects a sample of the requested keys (that can be read, written, and deleted) and, based on this sample, selects a key for partitioning to evenly distribute the load across new shards. So in the case of load-based partitioning, the size of new shards may significantly vary. The size-based shard split threshold and automatic splitting can be configured (enabled/disabled) individually for each database table. In addition to automatically splitting shards, you can create an empty table with a predefined number of shards. You can manually set the exact shard key split range or evenly split it into a predefined number of shards. In this case, ranges are created based on the first component of the primary key. You can set even splitting for tables that have a Uint64 or Uint32 integer as the first component of the primary key. Partitioning parameters refer to the table itself rather than to secondary indexes built on its data. Each index is served by its own set of shards and decisions to split or merge its partitions are made independently based on the default settings. These settings may become available to users in the future like the settings of the main table. A split or a merge usually takes about 500 milliseconds. During this time, the data involved in the operation becomes temporarily unavailable for reads and writes. Without raising it to the application level, special wrapper methods in the YDB SDK make automatic retries when they discover that a shard is being split or merged. Please note that if the system is overloaded for some reason (for example, due to a general shortage of CPU or insufficient DB disk throughput), split and merge operations may take longer. The following table partitioning parameters are defined in the data schema: Automatic partitioning by partition size. If a partition size exceeds the value specified by the AUTOPARTITIONINGPARTITIONSIZEMB parameter, it is enqueued for splitting. If the total size of two or more adjacent partitions is less than 50% of the AUTOPARTITIONINGPARTITIONSIZEMB value, they are enqueued for merging. Automatic partitioning by" }, { "data": "If a shard consumes more than 50% of the CPU for a few dozens of seconds, it is enqueued for splitting. If the total load on two or more adjacent shards uses less than 35% of a single CPU core within an hour, they are enqueued for merging. Performing split or merge operations uses the CPU and takes time. Therefore, when dealing with a variable load, we recommend both enabling this mode and setting AUTOPARTITIONINGMINPARTITIONSCOUNT to a value other than 1. This ensures that a decreased load does not cause the number of partitions to drop below the required value, resulting in a need to split them again when the load increases. When choosing the minimum number of partitions, it makes sense to consider that one table partition can only be hosted on one server and use no more than 1 CPU core for data update operations. Hence, you can set the minimum number of partitions for a table on which a high load is expected to at least the number of nodes (servers) or, preferably, to the number of CPU cores allocated to the database. Partition size threshold in MB. 
If exceeded, a shard splits. Takes effect when the AUTOPARTITIONINGBY_SIZE mode is enabled. Partitions are only merged if their actual number exceeds the value specified by this parameter. When using automatic partitioning by load, we recommend that you set this parameter to a value other than 1, so that periodic load drops don't lead to a decrease in the number of partitions below the required one. Partitions are only split if their number doesn't exceed the value specified by this parameter. With any automatic partitioning mode enabled, we recommend that you set a meaningful value for this parameter and monitor when the actual number of partitions approaches this value, otherwise splitting of partitions will stop sooner or later under an increase in data or load, which will lead to a failure. The number of partitions for uniform initial table partitioning. The primary key's first column must have type Uint64 or Uint32. A created table is immediately divided into the specified number of partitions. When automatic partitioning is enabled, make sure to set the correct value for AUTOPARTITIONINGMINPARTITIONSCOUNT to avoid merging all partitions into one immediately after creating the table. Boundary values of keys for initial table partitioning. It's a list of boundary values separated by commas and surrounded with brackets. Each boundary value can be either a set of values of key columns (also separated by commas and surrounded with brackets) or a single value if only the values of the first key column are specified. Examples: (100, 1000), ((100, \"abc\"), (1000, \"cde\")). When automatic partitioning is enabled, make sure to set the correct value for AUTOPARTITIONINGMINPARTITIONSCOUNT to avoid merging all partitions into one immediately after creating the table. When making queries in YDB, the actual execution of a query to each shard is performed at a single point serving the distributed transaction" }, { "data": "By storing data in shared storage, you can run one or more shard followers without allocating additional storage space: the data is already stored in replicated format, and you can serve more than one reader (but there is still only one writer at any given moment). Reading data from followers allows you: You can enable running read replicas for each shard of the table in the table data schema. The read replicas (followers) are typically accessed without leaving the data center network, which ensures response delays in milliseconds. | Parameter name | Description | Type | Acceptable values | Update capability | Reset capability | |:--|:--|:-|:-|:--|:-| | READREPLICASSETTINGS | PERAZ means using the specified number of replicas in each AZ and ANYAZ in all AZs in total. | String | \"PERAZ:<count>\", \"ANYAZ:<count>\", where <count> is the number of replicas | Yes | No | The internal state of each of the followers is restored exactly and fully consistently from the leader state. Besides the data state in storage, followers also receive a stream of updates from the leader. Updates are sent in real time, immediately after the commit to the log. However, they are sent asynchronously, resulting in some delay (usually no more than dozens of milliseconds, but sometimes longer in the event of cluster connectivity issues) in applying updates to followers relative to their commit on the leader. Therefore, reading data from followers is only supported in the transaction mode StaleReadOnly(). 
If there are multiple followers, their delay from the leader may vary: although each follower of each of the shards retains internal consistency, artifacts may be observed from shard to shard. Please provide for this in your application code. For that same reason, it's currently impossible to perform cross-shard transactions from followers. YDB supports automatic background deletion of expired data. A table data schema may define a column containing a Datetime or a Timestamp value. A comparison of this value with the current time for all rows will be performed in the background. Rows for which the current time becomes greater than the column value plus specified delay, will be deleted. | Parameter name | Type | Acceptable values | Update capability | Reset capability | |:--|:--|:-|:--|:-| | TTL | Expression | Interval(\"<literal>\") ON <column> [AS <unit>] | Yes | Yes | Where <unit> is a unit of measurement, specified only for column with a numeric type: For more information about deleting expired data, see Time to Live (TTL). YDB lets you rename an existing table, move it to another directory of the same database, or replace one table with another, deleting the data in the replaced table. Only the metadata of the table is changed by operations (for example, its path and name). The table data is neither moved nor overwritten. Operations are performed in isolation, the external process sees only two states of the table: before and after the operation. This is critical, for example, for table replacement: the data of the replaced table is deleted by the same transaction that renames the replacing table. During the replacement, there might be errors in queries to the replaced table that have retryable" }, { "data": "The speed of renaming is determined by the type of data transactions currently running against the table and doesn't depend on the table size. Using a Bloom filter lets you more efficiently determine if some keys are missing in a table when making multiple point queries by primary key. This reduces the number of required disk I/O operations but increases the amount of memory consumed. | Parameter name | Type | Acceptable values | Update capability | Reset capability | |:--|:-|:--|:--|:-| | KEYBLOOMFILTER | Enum | ENABLED, DISABLED | Yes | No | Warning Column-oriented YDB tables are in the Preview mode. YDB's column-oriented tables store data of each column separately (independently) from each other. This data storage principle is optimized for handling Online Analytical Processing (OLAP) workloads, as only the columns directly involved in the query are read during its execution. One of the key advantages of this approach is the high data compression ratios since columns often contain repetitive or similar data. A downside, however, is that operations on whole rows become more resource-intensive. At the moment, the main use case for YDB column-oriented tables is writing data with an increasing primary key (for example, event time), analyzing this data, and deleting outdated data based on TTL. The optimal way to add data to YDB column-oriented tables is batch upload, performed in MB-sized blocks. Data packet insertion is atomic: data will be written either to all partitions or none. 
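Before moving on to column-oriented tables, the row-oriented table options covered above can be gathered into a single YQL sketch. The table, its columns, and the concrete values below are hypothetical, and the parameters are written in their underscore-separated YQL form; treat this as an illustration of the syntax rather than a recommended configuration.

```
CREATE TABLE event_log (
    id Uint64 NOT NULL,
    created_at Timestamp,
    payload String,
    PRIMARY KEY (id)
) WITH (
    -- size- and load-based automatic partitioning
    AUTO_PARTITIONING_BY_SIZE = ENABLED,
    AUTO_PARTITIONING_PARTITION_SIZE_MB = 2048,
    AUTO_PARTITIONING_BY_LOAD = ENABLED,
    AUTO_PARTITIONING_MIN_PARTITIONS_COUNT = 4,
    AUTO_PARTITIONING_MAX_PARTITIONS_COUNT = 64,
    -- one follower (read replica) per availability zone
    READ_REPLICAS_SETTINGS = "PER_AZ:1",
    -- Bloom filter to speed up point lookups by primary key
    KEY_BLOOM_FILTER = ENABLED,
    -- rows expire roughly 30 days after created_at
    TTL = Interval("P30D") ON created_at
);
```

The same settings can later be adjusted on an existing table with ALTER TABLE ... SET (...), so none of these choices lock the schema in.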
In most cases, working with YDB column-oriented tables is similar to working with row tables, but there are differences: Let's recreate the \"article\" table, this time in column-oriented format, using the following YQL command: ``` CREATE TABLE article_column_table ( id Int64 NOT NULL, author String, title String, body String, PRIMARY KEY (id) ) WITH (STORE = COLUMN); ``` At the moment, not all functionality of column-oriented tables is implemented. The following features are not currently supported: Unlike row-oriented YDB tables, you cannot partition column-oriented tables by primary keys but only by specially designated partitioning keys. Partitioning keys constitute a subset of the table's primary keys. Example of column-oriented partitioning: ``` CREATE TABLE article_column_table ( id Int64 NOT NULL, author String, title String, body String, PRIMARY KEY (id) ) PARTITION BY HASH(id) WITH (STORE = COLUMN); ``` Unlike data partitioning in row-oriented YDB tables, key values are not used to partition data in column-oriented tables. This way, you can uniformly distribute data across all your existing partitions. This kind of partitioning enables you to avoid hotspots at data insertion and to speed up analytical queries that process (that is, read) large amounts of data. How you select partitioning keys substantially affects the performance of queries to your column-oriented tables. Learn more in Selecting a primary key for maximum column-oriented table performance. To manage data partitioning, use the AUTO_PARTITIONING_MIN_PARTITIONS_COUNT additional parameter. The system ignores other partitioning parameters for column-oriented tables. AUTO_PARTITIONING_MIN_PARTITIONS_COUNT sets the minimum physical number of partitions used to store data. Because it ignores all the other partitioning parameters, the system uses the same value as the upper partition limit." } ]
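As a hedged follow-up to the last paragraph, the minimum partition count of a column-oriented table is set in the same WITH clause that selects the column store; the value 16 below is arbitrary and chosen only for illustration.

```
CREATE TABLE article_column_table (
    id Int64 NOT NULL,
    author String,
    title String,
    body String,
    PRIMARY KEY (id)
)
PARTITION BY HASH(id)
WITH (
    STORE = COLUMN,
    -- minimum (and, as noted above, effectively also the maximum) number of partitions
    AUTO_PARTITIONING_MIN_PARTITIONS_COUNT = 16
);
```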
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Apache RocketMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "During Ali's nascent days of RocketMQ, we used it for asynchronous communications, search, social networking activity flows, data pipelines, and trade processes. As our trade business throughput rose, the pressure originating from our messaging cluster became urgent. According to our research, the ActiveMQ IO module reached a bottleneck as queue and virtual topic usage increased. We tried our best to solve this problem by throttling, circuit breaker or downgrading, but the results were not satisfactory. So we tried the popular messaging solution Kafka. unfortunately, Kafka could not meet our requirements, especially in terms of low latency and high reliability, as detailed here. In this case, we decided to invent a new messaging engine to handle a wider range of messaging use cases, covering from traditional pub/sub scenarios to high-volume, real-time, zero-error transaction systems. Since its inception, Apache RocketMQ has been widely adopted by many enterprise developers and cloud vendors for its simple architecture, rich business functionality, and extreme scalability. After more than ten years of large-scale scenario polishing, RocketMQ has become the industry consensus as the preferred solution for financial-grade reliable business messages, and is widely used in business scenarios in Internet, big data, mobile Internet, IoT and other fields. The following table shows the comparison between RocketMQ, ActiveMQ and Kafka | Messaging Product | Client SDK | Protocol and Specification | Ordered Message | Scheduled Message | Batched Message | BroadCast Message | Message Filter | Server Triggered Redelivery | Message Storage | Message Retroactive | Message Priority | High Availability and Failover | Message Track | Configuration | Management and Operation Tools | |:--|:|:--|:-|:--|:|:--|:--|:|:|:|:-|:--|:-|:--|:-| | ActiveMQ | Java, .NET, C++ etc. | Push model, support OpenWire, STOMP, AMQP, MQTT, JMS | Exclusive Consumer or Exclusive Queues can ensure ordering | Supported | Not Supported | Supported | Supported | Not Supported | Supports very fast persistence using JDBC along with a high performance journalsuch as levelDB, kahaDB | Supported | Supported | Supported, depending on storage,if using levelDB it requires a ZooKeeper server | Not Supported | The default configuration is low level, user need to optimize the configuration parameters | Supported | | Kafka | Java, Scala etc. | Pull model, support TCP | Ensure ordering of messages within a partition | Not Supported | Supported, with async producer | Not Supported | Supported, you can use Kafka Streams to filter messages | Not Supported | High performance file storage | Supported offset indicate | Not Supported | Supported, requires a ZooKeeper server | Not Supported | Kafka uses key-value pairs format for configuration. These values can be supplied either from a file or programmatically. 
| Supported, use terminal command to expose core metrics | | RocketMQ | Java, C++, Go | Pull model, support TCP, JMS, OpenMessaging | Ensure strict ordering of messages, and can scale out gracefully | Supported | Supported, with sync mode to avoid message loss | Supported | Supported, property filter expressions based on SQL92 | Supported | High performance and low latency file storage | Supported, by both timestamp and offset | Not Supported | Supported, Master-Slave model, without another kit | Supported | Works out of the box; users only need to pay attention to a few configurations | Supported, rich web and terminal command to expose core metrics |" } ]
{ "category": "App Definition and Development", "file_name": "operator.md", "project_name": "Vitess", "subcategory": "Database" }
[ { "data": "Vitess Operator for Kubernetes PlanetScale provides a Vitess Operator for Kubernetes, released under the Apache 2.0 license. The following steps show how to get started using Minikube: Before we get started, lets get a few pre-requisites out of the way: Install Docker Engine locally. Install Minikube and start a Minikube engine: ``` minikube start --kubernetes-version=v1.28.5 --cpus=4 --memory=11000 --disk-size=32g ``` Install kubectl and ensure it is in your PATH. Install the MySQL client locally. Install vtctldclient locally. Change to the operator example directory: ``` git clone https://github.com/vitessio/vitess cd vitess/examples/operator git checkout release-19.0 ``` Install the operator: ``` kubectl apply -f operator.yaml ``` In this directory, you will see a group of yaml files. The first digit of each file name indicates the phase of example. The next two digits indicate the order in which to execute them. For example, 101initialcluster.yaml is the first file of the first phase. We shall execute that now: ``` kubectl apply -f 101initialcluster.yaml ``` You can check the state of your cluster with kubectl get pods. After a few minutes, it should show that all pods are in the status of running: ``` $ kubectl get pods NAME READY STATUS RESTARTS AGE example-commerce-x-x-zone1-vtorc-c13ef6ff-5db4c77865-l96xq 1/1 Running 2 (2m49s ago) 5m16s example-etcd-faf13de3-1 1/1 Running 0 5m17s example-etcd-faf13de3-2 1/1 Running 0 5m17s example-etcd-faf13de3-3 1/1 Running 0 5m17s example-vttablet-zone1-2469782763-bfadd780 3/3 Running 1 (2m43s ago) 5m16s example-vttablet-zone1-2548885007-46a852d0 3/3 Running 1 (2m47s ago) 5m16s example-zone1-vtadmin-c03d7eae-7c6f6c98f8-f4f5z 2/2 Running 0 5m17s example-zone1-vtctld-1d4dcad0-57b9d7bc4b-2tnqd 1/1 Running 2 (2m53s ago) 5m17s example-zone1-vtgate-bc6cde92-7d445d676-x6npk 1/1 Running 2 (3m ago) 5m17s vitess-operator-5f47c6c45d-bgqp2 1/1 Running 0 6m52s ``` For ease-of-use, Vitess provides a script to port-forward from Kubernetes to your local machine. This script also recommends setting up aliases for mysql and vtctldclient: ``` ./pf.sh & alias vtctldclient=\"vtctldclient --server=localhost:15999\" alias mysql=\"mysql -h 127.0.0.1 -P 15306 -u user\" ``` Setting up aliases changes mysql to always connect to Vitess for your current session. To revert this, type unalias mysql && unalias vtctldclient or close your session. Once the port-forward starts running, the VTAdmin UI will be available at http://localhost:14000/ Load our initial schema: ``` vtctldclient ApplySchema --sql-file=\"createcommerceschema.sql\" commerce vtctldclient ApplyVSchema --vschema-file=\"vschemacommerceinitial.json\" commerce ``` You should now be able to connect to the VTGate Server in your cluster with the MySQL client: ``` ~/vitess/examples/operator$ mysql Welcome to the MySQL monitor. Commands end with ; or \\g. Your MySQL connection id is 3 Server version: 8.0.30-Vitess MySQL Community Server (GPL) Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\\h' for help. Type '\\c' to clear the current input statement. mysql> show databases; +--+ | Database | +--+ | commerce | | information_schema | | mysql | | sys | | performance_schema | +--+ 5 rows in set (0.01 sec) mysql> ``` In this example, we deployed a single unsharded keyspace named commerce. 
Unsharded keyspaces have a single shard named 0. The following schema reflects a common ecommerce scenario that was created by the script: ``` create table product( sku varbinary(128), description varbinary(128), price bigint, primary key(sku) ); create table customer( customer_id bigint not null auto_increment, email varbinary(128), primary key(customer_id) ); create table corder( order_id bigint not null auto_increment, customer_id bigint, sku varbinary(128), price bigint, primary key(order_id) ); ``` The schema has been simplified to include only those fields that are significant to the example: You can now proceed with MoveTables. Or alternatively, if you would like to tear down your example: ``` kubectl delete -f 101initialcluster.yaml ``` Congratulations on completing this exercise!" } ]
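With the cluster running and the commerce schema applied, ordinary SQL issued through the mysql alias (that is, through VTGate) behaves as it would against a standalone MySQL server. The rows below are invented sample data, included only to illustrate the round trip:

```
USE commerce;

INSERT INTO product (sku, description, price)
VALUES ('SKU-1001', 'keyboard', 30), ('SKU-1002', 'monitor', 100);

INSERT INTO customer (email) VALUES ('alice@example.com');

-- assumes the auto-generated customer_id above was 1
INSERT INTO corder (customer_id, sku, price) VALUES (1, 'SKU-1001', 30);

SELECT c.customer_id, c.email, o.order_id, o.sku, o.price
FROM customer c
JOIN corder o ON o.customer_id = c.customer_id;
```

Because the keyspace is unsharded, no routing or vindex configuration is needed for these statements; that only becomes relevant once the keyspace is sharded later in the tutorial series.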
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "Try our Experimental-AI powered search above! YugabyteDB is a high-performance distributed SQL database for powering global, internet-scale applications. Built using a unique combination of high-performance document store, per-shard distributed consensus replication and multi-shard ACID transactions (inspired by Google Spanner), YugabyteDB serves both scale-out RDBMS and internet-scale OLTP workloads with low query latency, extreme resilience against failures and global data distribution. As a cloud native database, it can be deployed across public and private clouds as well as in Kubernetes environments with ease. YugabyteDB is developed and distributed as an Apache 2.0 open source project. YugabyteDB is a transactional database that brings together four must-have needs of cloud native apps - namely SQL as a flexible query language, low-latency performance, continuous availability, and globally-distributed scalability. Other databases do not serve all 4 of these needs simultaneously. Monolithic SQL databases offer SQL and low-latency reads, but neither have the ability to tolerate failures, nor can they scale writes across multiple nodes, zones, regions, and clouds. Distributed NoSQL databases offer read performance, high availability, and write scalability, but give up on SQL features such as relational data modeling and ACID transactions. YugabyteDB feature highlights are listed below. SQL JOINs and distributed transactions that allow multi-row access across any number of shards at any scale. Transactional document store backed by self-healing, strongly-consistent, synchronous replication. Low latency for geo-distributed applications with multiple read consistency levels and read replicas. Linearly scalable throughput for ingesting and serving ever-growing datasets. Global data distribution that brings consistent data close to users through multi-region and multi-cloud deployments. Optional two-region multi-master and master-follower configurations powered by CDC-driven asynchronous replication. Auto-sharding and auto-rebalancing to ensure uniform load across all nodes even for very large clusters. Built for the container era with highly elastic scaling and infrastructure portability, including Kubernetes-driven orchestration. Self-healing database that automatically tolerates any failures common in the inherently unreliable modern cloud infrastructure. YugabyteDB has had the following major (stable) releases: Releases, including upcoming releases, are outlined on the Releases Overview page. The roadmap for this release can be found on GitHub. Starting with v1.3, YugabyteDB is 100% open source. It is licensed under Apache 2.0 and the source is available on GitHub. Yes, both YugabyteDB APIs are production ready. YCQL achieved this status starting with v1.0 in May 2018 while YSQL became production ready starting v2.0 in September 2019. Reference deployments are listed in Success Stories. Some features are marked Beta in every release. Following are the points to consider: Code is well tested. Enabling the feature is considered safe. Some of these features enabled by default. Support for the overall feature will not be dropped, though details may change in incompatible ways in a subsequent beta or GA release. Recommended only for non-production" }, { "data": "Please do try our beta features and give feedback on them on our Slack community or by filing a GitHub issue. YugabyteDB is the 100% open source core database. 
It is the best choice for the startup organizations with strong technical operations expertise looking to deploy to production with traditional DevOps tools. YugabyteDB Anywhere is commercial software for running a self-managed YugabyteDB-as-a-Service. It has built-in cloud native operations, enterprise-grade deployment options, and world-class support. It is the simplest way to run YugabyteDB in mission-critical production environments with one or more regions (across both public cloud and on-premises data centers). YugabyteDB Managed is Yugabyte's fully-managed cloud service on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Sign up to get started. For a more detailed comparison between the above, see Compare Products. Please follow the steps in the vulnerability disclosure policy to report a vulnerability to our security team. The policy outlines our commitments to you when you disclose a potential vulnerability, the reporting process, and how we will respond. Trade-offs depend on the type of database used as baseline for comparison. Examples: Amazon Aurora, Google Cloud Spanner, CockroachDB, TiDB Benefits of YugabyteDB Trade-offs Learn more: What is Distributed SQL? Examples: PostgreSQL, MySQL, Oracle, Amazon Aurora. Benefits of YugabyteDB Trade-offs Learn more: Distributed PostgreSQL on a Google Spanner Architecture Query Layer Examples: Vitess, Citus Benefits of YugabyteDB Trade-offs Learn more: Rise of Globally Distributed SQL Databases Redefining Transactional Stores for Cloud Native Era Examples: MongoDB, Amazon DynamoDB, FoundationDB, Azure Cosmos DB. Benefits of YugabyteDB Trade-offs Learn more: Why are NoSQL Databases Becoming Transactional? Examples: Apache Cassandra, Couchbase. Benefits of YugabyteDB Trade-offs Learn more: Apache Cassandra: The Truth Behind Tunable Consistency, Lightweight Transactions & Secondary Indexes YugabyteDB is a good fit for fast-growing, cloud native applications that need to serve business-critical data reliably, with zero data loss, high availability, and low latency. Common use cases include: Distributed Online Transaction Processing (OLTP) applications needing multi-region scalability without compromising strong consistency and low latency. For example, user identity, Retail product catalog, Financial data service. Hybrid Transactional/Analytical Processing (HTAP), also known as Translytical, applications needing real-time analytics on transactional data. For example, user personalization, fraud detection, machine learning. Streaming applications needing to efficiently ingest, analyze, and store ever-growing data. For example, IoT sensor analytics, time series metrics, real-time monitoring. See some success stories at yugabyte.com. YugabyteDB is not a good fit for traditional Online Analytical Processing (OLAP) use cases that need complete ad-hoc analytics. Use an OLAP store such as Druid or a data warehouse such as Snowflake. Yahoo Cloud Serving Benchmark (YCSB) is a popular benchmarking framework for NoSQL databases. We benchmarked the Yugabyte Cloud QL (YCQL) API against standard Apache Cassandra using YCSB. YugabyteDB outperformed Apache Cassandra by increasing margins as the number of keys (data density) increased across all the 6 YCSB workload configurations. 
Netflix Data Benchmark (NDBench) is another publicly available, cloud-enabled benchmark tool for data store" }, { "data": "We ran NDBench against YugabyteDB for 7 days and observed P99 and P995 latencies that were orders of magnitude less than that of Apache Cassandra. Details for both the above benchmarks are published in Building a Strongly Consistent Cassandra with Better Performance. Jepsen is a widely used framework to evaluate the behavior of databases under different failure scenarios. It allows for a database to be run across multiple nodes, and create artificial failure scenarios, as well as verify the correctness of the system under these scenarios. YugabyteDB 1.2 passes formal Jepsen testing. See Compare YugabyteDB to other databases DocDB, YugabyteDB's distributed document store is common across all APIs, and built using a custom integration of Raft replication, distributed ACID transactions, and the RocksDB storage engine. Specifically, DocDB enhances RocksDB by transforming it from a key-value store (with only primitive data types) to a document store (with complex data types). Every key is stored as a separate document in DocDB, irrespective of the API responsible for managing the key. DocDB's sharding, replication/fault-tolerance, and distributed ACID transactions architecture are all based on the Google Spanner design first published in 2012. How We Built a High Performance Document Store on RocksDB? provides an in-depth look into DocDB. In terms of the CAP theorem, YugabyteDB is a consistent and partition-tolerant (CP) database. It ensures high availability (HA) for most practical situations even while remaining strongly consistent. While this may seem to be a violation of the CAP theorem, that is not the case. CAP treats availability as a binary option whereas YugabyteDB treats availability as a percentage that can be tuned to achieve high write availability (reads are always available as long as a single node is available). During network partitions or node failures, the replicas of the impacted tablets (whose leaders got partitioned out or lost) form two groups: a majority partition that can still establish a Raft consensus and a minority partition that cannot establish such a consensus (given the lack of quorum). The replicas in the majority partition elect a new leader among themselves in a matter of seconds and are ready to accept new writes after the leader election completes. For these few seconds till the new leader is elected, the DB is unable to accept new writes given the design choice of prioritizing consistency over availability. All the leader replicas in the minority partition lose their leadership during these few seconds and hence become followers. Majority partitions are available for both reads and writes. Minority partitions are not available for writes, but may serve stale reads (up to a staleness as configured by the --maxstalereadboundtime_ms flag). Multi-active availability refers to YugabyteDB's ability to dynamically adjust to the state of the cluster and serve consistent writes at any replica in the majority" }, { "data": "The approach above obviates the need for any unpredictable background anti-entropy operations as well as need to establish quorum at read time. As shown in the YCSB benchmarks against Apache Cassandra, YugabyteDB delivers predictable p99 latencies as well as 3x read throughput that is also timeline-consistent (given no quorum is needed at read time). 
On one hand, the YugabyteDB storage and replication architecture is similar to that of Google Cloud Spanner, which is also a CP database with high write availability. While Google Cloud Spanner leverages Google's proprietary network infrastructure, YugabyteDB is designed work on commodity infrastructure used by most enterprise users. On the other hand, YugabyteDB's multi-model, multi-API, and tunable read latency approach is similar to that of Azure Cosmos DB. A post on our blog titled Practical Tradeoffs in Google Cloud Spanner, Azure Cosmos DB and YugabyteDB goes through the above tradeoffs in more detail. A YugabyteDB universe packs a lot more functionality than what people think of when referring to a cluster. In fact, in certain deployment choices, the universe subsumes the equivalent of multiple clusters and some of the operational work needed to run them. Here are just a few concrete differences, which made us feel like giving it a different name would help earmark the differences and avoid confusion: A YugabyteDB universe can move into new machines, availability zones (AZs), regions, and data centers in an online fashion, while these primitives are not associated with a traditional cluster. You can set up multiple asynchronous replicas with just a few clicks (using YugabyteDB Anywhere). This is built into the universe as a first-class operation with bootstrapping of the remote replica and all the operational aspects of running asynchronous replicas being supported natively. In the case of traditional clusters, the source and the asynchronous replicas are independent clusters. The user is responsible for maintaining these separate clusters as well as operating the replication logic. Failover to asynchronous replicas as the primary data and fallback once the original is up and running are both natively supported in a universe. Users primarily turn to YugabyteDB for scalability reasons. Consistent hash sharding is ideal for massively scalable workloads because it distributes data evenly across all the nodes in the cluster, while retaining ease of adding nodes into the cluster. Most use cases that require scalability do not need to perform range lookups on the primary key, so consistent hash sharding is the default sharding strategy for YugabyteDB. Common applications that do not need hash sharding include user identity (user IDs do not need ordering), product catalog (product IDs are not related to one another), and stock ticker data (one stock symbol is independent of all other stock symbols). For applications that benefit from range sharding, YugabyteDB lets you select that option. To learn more about sharding strategies and lessons learned, see Four Data Sharding Strategies We Analyzed in Building a Distributed SQL Database." } ]
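As a rough YSQL sketch of the two sharding strategies described above (table and column names are invented for illustration), the HASH and ASC annotations on the primary key columns choose hash or range ordering:

```
-- Consistent hash sharding (the default): user IDs have no meaningful order,
-- so rows spread evenly across tablets and hotspots are avoided.
CREATE TABLE user_profile (
    user_id bigint,
    email   text,
    PRIMARY KEY (user_id HASH)
);

-- Hash on the partition column, range ordering on the clustering column:
-- one symbol's ticks are stored in timestamp order, keeping time-range scans cheap.
CREATE TABLE stock_tick (
    symbol text,
    ts     timestamptz,
    price  numeric,
    PRIMARY KEY (symbol HASH, ts ASC)
);
```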
{ "category": "App Definition and Development", "file_name": "01main.md", "project_name": "Apache RocketMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "This section will describe steps to quickly deploy a RocketMQ cluster with a single node; Commands to send and receive messages to/from it are also included as proof of work. Apache RocketMQ is distributed both in binary and source packages. Click here to download Apache RocketMQ 5.2.0 source package. You may prefer prebuilt binary package, which can be run directly since it has been compiled. The following instruction takes the application of RocketMQ 5.2.0 source package in Linux environment as an example in order to introduce the installation process of RocketMQ. Extract the source package of RocketMQ 5.2.0, then compile and build the binary executables: ``` $ unzip rocketmq-all-5.2.0-source-release.zip$ cd rocketmq-all-5.2.0-source-release/$ mvn -Prelease-all -DskipTests -Dspotbugs.skip=true clean install -U$ cd distribution/target/rocketmq-5.2.0/rocketmq-5.2.0``` After the installation of RocketMQ, start the NameServer: ``` Once we see 'The Name Server boot success..' from namesrv.log, it means the NameServer has been started successfully. After nameserver startup, we need start the broker and proxy. We recommend Local deployment mode, where Broker and Proxy are deployed in the same process. We also support cluster deployment mode. Learn more Deployment introduction. ``` Once we see The broker[brokerName,ip:port] boot success.. from proxy.log, it means the Broker has been started successfully. Thus far, a single-Master RocketMQ cluster has been deployed, and we are able to send and receive simple messages by scripts. Before test with tools, we need set the nameserver address to system. like system environment variables NAMESRV_ADDR. ``` $ export NAMESRVADDR=localhost:9876$ sh bin/tools.sh org.apache.rocketmq.example.quickstart.Producer SendResult [sendStatus=SENDOK, msgId= ...$ sh bin/tools.sh org.apache.rocketmq.example.quickstart.Consumer ConsumeMessageThread_%d Receive New Messages: [MessageExt...``` We can also try to use the client sdk to send and receive messages, you can see more details from rocketmq-clients. Create a java project. Add sdk dependency to pom.xml, remember to replace the rocketmq-client-java-version with the latest release. ``` <dependency> <groupId>org.apache.rocketmq</groupId> <artifactId>rocketmq-client-java</artifactId> <version>${rocketmq-client-java-version}</version></dependency> ``` Create topic by mqadmin cli tools. 
``` $ sh bin/mqadmin updatetopic -n localhost:9876 -t TestTopic -c DefaultCluster``` In the Java project you have created, create a program that sends messages and run it with the following code: ``` import java.io.IOException;import org.apache.rocketmq.client.apis.ClientConfiguration;import org.apache.rocketmq.client.apis.ClientConfigurationBuilder;import org.apache.rocketmq.client.apis.ClientException;import org.apache.rocketmq.client.apis.ClientServiceProvider;import org.apache.rocketmq.client.apis.message.Message;import org.apache.rocketmq.client.apis.producer.Producer;import org.apache.rocketmq.client.apis.producer.SendReceipt;import org.slf4j.Logger;import org.slf4j.LoggerFactory;public class ProducerExample { private static final Logger logger = LoggerFactory.getLogger(ProducerExample.class); public static void main(String[] args) throws ClientException, IOException { String endpoint = \"localhost:8081\"; String topic = \"TestTopic\"; ClientServiceProvider provider = ClientServiceProvider.loadService(); ClientConfigurationBuilder builder = ClientConfiguration.newBuilder().setEndpoints(endpoint); ClientConfiguration configuration = builder.build(); Producer producer = provider.newProducerBuilder() .setTopics(topic) .setClientConfiguration(configuration) .build(); Message message = provider.newMessageBuilder() .setTopic(topic) .setKeys(\"messageKey\") .setTag(\"messageTag\") .setBody(\"messageBody\".getBytes()) .build(); try { SendReceipt sendReceipt = producer.send(message); logger.info(\"Send message successfully, messageId={}\", sendReceipt.getMessageId()); } catch (ClientException e) { logger.error(\"Failed to send message\", e); } // producer.close(); }}``` In the Java project you have created, create a consumer demo program and run it. Apache RocketMQ support SimpleConsumer and PushConsumer. 
``` import java.io.IOException;import java.util.Collections;import org.apache.rocketmq.client.apis.ClientConfiguration;import org.apache.rocketmq.client.apis.ClientException;import org.apache.rocketmq.client.apis.ClientServiceProvider;import org.apache.rocketmq.client.apis.consumer.ConsumeResult;import org.apache.rocketmq.client.apis.consumer.FilterExpression;import org.apache.rocketmq.client.apis.consumer.FilterExpressionType;import org.apache.rocketmq.client.apis.consumer.PushConsumer;import org.slf4j.Logger;import org.slf4j.LoggerFactory;public class PushConsumerExample { private static final Logger logger = LoggerFactory.getLogger(PushConsumerExample.class); private PushConsumerExample() { } public static void main(String[] args) throws ClientException, IOException, InterruptedException { final ClientServiceProvider provider = ClientServiceProvider.loadService(); String endpoints = \"localhost:8081\"; ClientConfiguration clientConfiguration = ClientConfiguration.newBuilder() .setEndpoints(endpoints) .build(); String tag = \"*\"; FilterExpression filterExpression = new FilterExpression(tag, FilterExpressionType.TAG); String consumerGroup = \"YourConsumerGroup\"; String topic = \"TestTopic\"; PushConsumer pushConsumer = provider.newPushConsumerBuilder() .setClientConfiguration(clientConfiguration) .setConsumerGroup(consumerGroup) .setSubscriptionExpressions(Collections.singletonMap(topic, filterExpression)) .setMessageListener(messageView -> { logger.info(\"Consume message successfully, messageId={}\", messageView.getMessageId()); return ConsumeResult.SUCCESS; }) .build(); Thread.sleep(Long.MAX_VALUE); // pushConsumer.close(); }}``` After finishing the practice, we could shut down the service by the following commands. ``` $ sh bin/mqshutdown brokerThe mqbroker(36695) is running...Send shutdown request to mqbroker(36695) OK$ sh bin/mqshutdown namesrvThe mqnamesrv(36664) is running...Send shutdown request to mqnamesrv(36664) OK```" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Apache StreamPipes", "subcategory": "Streaming & Messaging" }
[ { "data": "StreamPipes Connect is the module to connect external data sources with Apache StreamPipes directly from the user interface. StreamPipes Connect offers various adapters for common communication protocols and some specific sensors. Besides connecting data, StreamPipes Connect offers ways to pre-process data without the need to build pipelines and integrates a schema guesser that listens for incoming data and recommends the recognized event schema. The screenshot below illustrates the data marketplace, which shown after navigating to \"StreamPipes Connect\" and then clicking the \"New adapter\" button at the top. The data marketplace shows a list of all adapters that are currently installed in Apache StreamPipes. Each adapter offers various configuration options which depend on the specifics of the adapter. Adapters are distinguished a) by the data source concept they provide (data set or data stream) and b) the adapter type, where we distinguish between generic adapters, which usually implement a generic communication protocol such as MQTT or Apache Kafka or a specific sensor interface (e.g., for Netio power sockets). Several filter options are available to find a suitable adapter. The configuration of a new adapter starts with selecting one of the available adapters, which starts an assistant that supports the adapter generation. In the first step, basic configurations need to be provided. For instance, for an Apache PLC4X adapter, the IP address of the PLC needs to be provided. In this example, we provide basic settings for connecting to an Apache Kafka broker. After all values are provided, the \"Next\" button opens the next step. The next step, format generation, is only available for generic adapters which support different message formats to be sent over the corresponding protocol. Think of a message broker that is able to consume messages in both JSON format or binary format. Currently supported formats include XML, various JSON representations, images and CSV. After a format has been selected, further format configurations can be provided (depending on the selected format) to further customize the incoming message format. In the next step, based on the previously provided protocol and format settings, the system will either provide the fixed/pre-defined schema of the adapter or, in case of specific adapters, will connect to the underlying system and try to listen for incoming data. After a few seconds, the schema editor will appear that provides a list of detected fields from the incoming events (the schema). In the toolbar, several configuration options are available which transform the original schema: For each field (also called event property) of the schema, additional configuration options are available by clicking the Edit button: Assigning a timestamp is mandatory and can be either done by adding a timestamp from the menu, or by choosing an existing field and marking it as timestamp. Finally, the adapter is ready to be started. In the Adapter Generation page, a name and description for the resulting data stream must be provided. Once started, StreamPipes creates your new adapter and displays a preview of the connected data, which refreshes about once per second. Afterwards, the newly created data stream is available in the pipeline editor for further usage. Currently running adapters are available in the \"Running adapters\" section of StreamPipes Connect. Existing adapters can be stopped and deleted. 
Currently, there is no mechanism to edit an existing adapter or to stop the adapter without deleting it. For frequently used configurations, adapter templates can be created. An adapter template is a pre-configured adapter which can be further customized by users. Created adapter templates are available in the marketplace similar to standard adapters." } ]
{ "category": "App Definition and Development", "file_name": "sql-ref-ansi-compliance.html.md", "project_name": "Apache Spark", "subcategory": "Streaming & Messaging" }
[ { "data": "In Spark SQL, there are two options to comply with the SQL standard: spark.sql.ansi.enabled and spark.sql.storeAssignmentPolicy (See a table below for details). When spark.sql.ansi.enabled is set to true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results if the inputs to a SQL operator/function are invalid. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQLs style. Moreover, Spark SQL has an independent option to control implicit casting behaviours when inserting rows in a table. The casting behaviours are defined as store assignment rules in the standard. When spark.sql.storeAssignmentPolicy is set to ANSI, Spark SQL complies with the ANSI store assignment rules. This is a separate configuration because its default value is ANSI, while the configuration spark.sql.ansi.enabled is disabled by default. | Property Name | Default | Meaning | Since Version | |:--|:-|:-|:-| | spark.sql.ansi.enabled | false | When true, Spark tries to conform to the ANSI SQL specification: 1. Spark SQL will throw runtime exceptions on invalid operations, including integer overflow errors, string parsing errors, etc. 2. Spark will use different type coercion rules for resolving conflicts among data types. The rules are consistently based on data type precedence. | 3.0.0 | | spark.sql.storeAssignmentPolicy | ANSI | When inserting a value into a column with different data type, Spark will perform type conversion. Currently, we support 3 policies for the type coercion rules: ANSI, legacy and strict. 1. With ANSI policy, Spark performs the type coercion as per ANSI SQL. In practice, the behavior is mostly the same as PostgreSQL. It disallows certain unreasonable type conversions such as converting string to int or double to boolean. On inserting a numeric type column, an overflow error will be thrown if the value is out of the target data types range.2. With legacy policy, Spark allows the type coercion as long as it is a valid Cast, which is very loose. e.g. converting string to int or double to boolean is allowed. It is also the only behavior in Spark 2.x and it is compatible with Hive.3. With strict policy, Spark doesnt allow any possible precision loss or data truncation in type coercion, e.g. converting double to int or decimal to double is not allowed. | 3.0.0 | The following subsections present behaviour changes in arithmetic operations, type conversions, and SQL parsing when the ANSI mode enabled. For type conversions in Spark SQL, there are three kinds of them and this article will introduce them one by one: cast, store assignment and type coercion. In Spark SQL, arithmetic operations performed on numeric types (with the exception of decimal) are not checked for overflows by default. This means that in case an operation causes overflows, the result is the same with the corresponding operation in a Java/Scala program (e.g., if the sum of 2 integers is higher than the maximum value representable, the result is a negative number). On the other hand, Spark SQL returns null for decimal overflows. When spark.sql.ansi.enabled is set to true and an overflow occurs in numeric and interval arithmetic operations, it throws an arithmetic exception at runtime. ``` -- `spark.sql.ansi.enabled=true` SELECT 2147483647 + 1; org.apache.spark.SparkArithmeticException: [ARITHMETICOVERFLOW] integer overflow. 
Use 'tryadd' to tolerate overflow and return NULL instead. If necessary set spark.sql.ansi.enabled to \"false\" to bypass this error. == SQL(line 1, position 8) == SELECT 2147483647 + 1 ^^^^^^^^^^^^^^ SELECT abs(-2147483648); org.apache.spark.SparkArithmeticException: [ARITHMETIC_OVERFLOW] integer overflow. If necessary set spark.sql.ansi.enabled to \"false\" to bypass this" }, { "data": "-- `spark.sql.ansi.enabled=false` SELECT 2147483647 + 1; +-+ |(2147483647 + 1)| +-+ | -2147483648| +-+ SELECT abs(-2147483648); +-+ |abs(-2147483648)| +-+ | -2147483648| +-+ ``` When spark.sql.ansi.enabled is set to true, explicit casting by CAST syntax throws a runtime exception for illegal cast patterns defined in the standard, e.g. casts from a string to an integer. Besides, the ANSI SQL mode disallows the following type conversions which are allowed when ANSI mode is off: The valid combinations of source and target data type in a CAST expression are given by the following table. Y indicates that the combination is syntactically valid without restriction and N indicates that the combination is not valid. | Source\\Target | Numeric | String | Date | Timestamp | Timestamp_NTZ | Interval | Boolean | Binary | Array | Map | Struct | |:-|:-|:|:-|:|:-|:--|:-|:|:--|:|:| | Numeric | Y | Y | N | Y | N | Y | Y | N | N | N | N | | String | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | | Date | N | Y | Y | Y | Y | N | N | N | N | N | N | | Timestamp | Y | Y | Y | Y | Y | N | N | N | N | N | N | | Timestamp_NTZ | N | Y | Y | Y | Y | N | N | N | N | N | N | | Interval | Y | Y | N | N | N | Y | N | N | N | N | N | | Boolean | Y | Y | N | N | N | N | Y | N | N | N | N | | Binary | N | Y | N | N | N | N | N | Y | N | N | N | | Array | N | Y | N | N | N | N | N | N | Y | N | N | | Map | N | Y | N | N | N | N | N | N | N | Y | N | | Struct | N | Y | N | N | N | N | N | N | N | N | Y | In the table above, all the CASTs with new syntax are marked as red Y: ``` -- Examples of explicit casting -- `spark.sql.ansi.enabled=true` SELECT CAST('a' AS INT); org.apache.spark.SparkNumberFormatException: [CASTINVALIDINPUT] The value 'a' of the type \"STRING\" cannot be cast to \"INT\" because it is malformed. Correct the value as per the syntax, or change its target type. Use `try_cast` to tolerate malformed input and return NULL instead. If necessary set \"spark.sql.ansi.enabled\" to \"false\" to bypass this error. == SQL(line 1, position 8) == SELECT CAST('a' AS INT) ^^^^^^^^^^^^^^^^ SELECT CAST(2147483648L AS INT); org.apache.spark.SparkArithmeticException: [CASTOVERFLOW] The value 2147483648L of the type \"BIGINT\" cannot be cast to \"INT\" due to an overflow. Use `trycast` to tolerate overflow and return NULL instead. If necessary set \"spark.sql.ansi.enabled\" to \"false\" to bypass this error. SELECT CAST(DATE'2020-01-01' AS INT); org.apache.spark.sql.AnalysisException: cannot resolve 'CAST(DATE '2020-01-01' AS INT)' due to data type mismatch: cannot cast date to int. To convert values from date to int, you can use function UNIX_DATE instead. 
--" }, { "data": "(This is a default behaviour) SELECT CAST('a' AS INT); +--+ |CAST(a AS INT)| +--+ | null| +--+ SELECT CAST(2147483648L AS INT); +--+ |CAST(2147483648 AS INT)| +--+ | -2147483648| +--+ SELECT CAST(DATE'2020-01-01' AS INT) ++ |CAST(DATE '2020-01-01' AS INT)| ++ | null| ++ -- Examples of store assignment rules CREATE TABLE t (v INT); -- `spark.sql.storeAssignmentPolicy=ANSI` INSERT INTO t VALUES ('1'); org.apache.spark.sql.AnalysisException: [INCOMPATIBLEDATAFORTABLE.CANNOTSAFELYCAST] Cannot write incompatible data for table `sparkcatalog`.`default`.`t`: Cannot safely cast `v`: \"STRING\" to \"INT\". -- `spark.sql.storeAssignmentPolicy=LEGACY` (This is a legacy behaviour until Spark 2.x) INSERT INTO t VALUES ('1'); SELECT * FROM t; ++ | v| ++ | 1| ++ ``` While casting of a decimal with a fraction to an interval type with SECOND as the end-unit like INTERVAL HOUR TO SECOND, Spark rounds the fractional part towards nearest neighbor unless both neighbors are equidistant, in which case round up. As mentioned at the beginning, when spark.sql.storeAssignmentPolicy is set to ANSI(which is the default value), Spark SQL complies with the ANSI store assignment rules on table insertions. The valid combinations of source and target data type in table insertions are given by the following table. | Source\\Target | Numeric | String | Date | Timestamp | Timestamp_NTZ | Interval | Boolean | Binary | Array | Map | Struct | |:-|:-|:|:-|:|:-|:--|:-|:|:--|:|:| | Numeric | Y | Y | N | N | N | N | N | N | N | N | N | | String | N | Y | N | N | N | N | N | N | N | N | N | | Date | N | Y | Y | Y | Y | N | N | N | N | N | N | | Timestamp | N | Y | Y | Y | Y | N | N | N | N | N | N | | Timestamp_NTZ | N | Y | Y | Y | Y | N | N | N | N | N | N | | Interval | N | Y | N | N | N | N* | N | N | N | N | N | | Boolean | N | Y | N | N | N | N | Y | N | N | N | N | | Binary | N | Y | N | N | N | N | N | Y | N | N | N | | Array | N | N | N | N | N | N | N | N | Y | N | N | | Map | N | N | N | N | N | N | N | N | N | Y | N | | Struct | N | N | N | N | N | N | N | N | N | N | Y | Spark doesnt support interval type table column. For Array/Map/Struct types, the data type check rule applies recursively to its component elements. During table insertion, Spark will throw exception on numeric value overflow. ``` CREATE TABLE test(i INT); INSERT INTO test VALUES (2147483648L); org.apache.spark.SparkArithmeticException: [CASTOVERFLOWINTABLEINSERT] Fail to insert a value of \"BIGINT\" type into the \"INT\" type column `i` due to an overflow. Use `try_cast` on the input value to tolerate overflow and return NULL instead. ``` When spark.sql.ansi.enabled is set to true, Spark SQL uses several rules that govern how conflicts between data types are" }, { "data": "At the heart of this conflict resolution is the Type Precedence List which defines whether values of a given data type can be promoted to another data type implicitly. 
| Data type | precedence list(from narrowest to widest) | |:|:--| | Byte | Byte -> Short -> Int -> Long -> Decimal -> Float* -> Double | | Short | Short -> Int -> Long -> Decimal-> Float* -> Double | | Int | Int -> Long -> Decimal -> Float* -> Double | | Long | Long -> Decimal -> Float* -> Double | | Decimal | Decimal -> Float* -> Double | | Float | Float -> Double | | Double | Double | | Date | Date -> Timestamp_NTZ -> Timestamp | | Timestamp | Timestamp | | String | String, Long -> Double, Date -> Timestamp_NTZ -> Timestamp , Boolean, Binary | | Binary | Binary | | Boolean | Boolean | | Interval | Interval | | Map | Map* | | Array | Array* | | Struct | Struct* | For least common type resolution float is skipped to avoid loss of precision. String can be promoted to multiple kinds of data types. Note that Byte/Short/Int/Decimal/Float is not on this precedent list. The least common type between Byte/Short/Int and String is Long, while the least common type between Decimal/Float is Double. For a complex type, the precedence rule applies recursively to its component elements. Special rules apply for untyped NULL. A NULL can be promoted to any other type. This is a graphical depiction of the precedence list as a directed tree: The least common type from a set of types is the narrowest type reachable from the precedence list by all elements of the set of types. The least common type resolution is used to: ``` -- The coalesce function accepts any set of argument types as long as they share a least common type. -- The result type is the least common type of the arguments. SET spark.sql.ansi.enabled=true; SELECT typeof(coalesce(1Y, 1L, NULL)); BIGINT SELECT typeof(coalesce(1, DATE'2020-01-01')); Error: Incompatible types [INT, DATE] SELECT typeof(coalesce(ARRAY(1Y), ARRAY(1L))); ARRAY<BIGINT> SELECT typeof(coalesce(1, 1F)); DOUBLE SELECT typeof(coalesce(1L, 1F)); DOUBLE SELECT (typeof(coalesce(1BD, 1F))); DOUBLE SELECT typeof(coalesce(1, '2147483648')) BIGINT SELECT typeof(coalesce(1.0, '2147483648')) DOUBLE SELECT typeof(coalesce(DATE'2021-01-01', '2022-01-01')) DATE ``` Under ANSI mode(spark.sql.ansi.enabled=true), the function invocation of Spark SQL: ``` SET spark.sql.ansi.enabled=true; -- implicitly cast Int to String type SELECT concat('total number: ', 1); total number: 1 -- implicitly cast Timestamp to Date type select datediff(now(), current_date); 0 -- implicitly cast String to Double type SELECT ceil('0.1'); 1 -- special rule: implicitly cast NULL to Date type SELECT year(null); NULL CREATE TABLE t(s string); -- Can't store String column as Numeric types. SELECT ceil(s) from t; Error in query: cannot resolve 'CEIL(spark_catalog.default.t.s)' due to data type mismatch -- Can't store String column as Date type. select year(s) from t; Error in query: cannot resolve 'year(spark_catalog.default.t.s)' due to data type mismatch ``` The behavior of some SQL functions can be different under ANSI mode (spark.sql.ansi.enabled=true). The behavior of some SQL operators can be different under ANSI mode (spark.sql.ansi.enabled=true). When ANSI mode is on, it throws exceptions for invalid operations. You can use the following SQL functions to suppress such exceptions. When both spark.sql.ansi.enabled and spark.sql.ansi.enforceReservedKeywords are true, Spark SQL will use the ANSI mode parser. 
With the ANSI mode parser, Spark SQL has two kinds of keywords: With the default parser, Spark SQL has two kinds of keywords: By default, both spark.sql.ansi.enabled and spark.sql.ansi.enforceReservedKeywords are false. Below is a list of all the keywords in Spark" }, { "data": "" }, { "data": "" }, { "data": "" }, { "data": "" }, { "data": "| Keyword | Spark SQLANSI Mode | Spark SQLDefault Mode | SQL-2016 | |:|:|:|:--| | ADD | non-reserved | non-reserved | non-reserved | | AFTER | non-reserved | non-reserved | non-reserved | | ALL | reserved | non-reserved | reserved | | ALTER | non-reserved | non-reserved | reserved | | ALWAYS | non-reserved | non-reserved | non-reserved | | ANALYZE | non-reserved | non-reserved | non-reserved | | AND | reserved | non-reserved | reserved | | ANTI | non-reserved | strict-non-reserved | non-reserved | | ANY | reserved | non-reserved | reserved | | ANY_VALUE | non-reserved | non-reserved | non-reserved | | ARCHIVE | non-reserved | non-reserved | non-reserved | | ARRAY | non-reserved | non-reserved | reserved | | AS | reserved | non-reserved | reserved | | ASC | non-reserved | non-reserved | non-reserved | | AT | non-reserved | non-reserved | reserved | | AUTHORIZATION | reserved | non-reserved | reserved | | BETWEEN | non-reserved | non-reserved | reserved | | BIGINT | non-reserved | non-reserved | reserved | | BINARY | non-reserved | non-reserved | reserved | | BOOLEAN | non-reserved | non-reserved | reserved | | BOTH | reserved | non-reserved | reserved | | BUCKET | non-reserved | non-reserved | non-reserved | | BUCKETS | non-reserved | non-reserved | non-reserved | | BY | non-reserved | non-reserved | reserved | | BYTE | non-reserved | non-reserved | non-reserved | | CACHE | non-reserved | non-reserved | non-reserved | | CASCADE | non-reserved | non-reserved | non-reserved | | CASE | reserved | non-reserved | reserved | | CAST | reserved | non-reserved | reserved | | CATALOG | non-reserved | non-reserved | non-reserved | | CATALOGS | non-reserved | non-reserved | non-reserved | | CHANGE | non-reserved | non-reserved | non-reserved | | CHAR | non-reserved | non-reserved | reserved | | CHARACTER | non-reserved | non-reserved | reserved | | CHECK | reserved | non-reserved | reserved | | CLEAR | non-reserved | non-reserved | non-reserved | | CLUSTER | non-reserved | non-reserved | non-reserved | | CLUSTERED | non-reserved | non-reserved | non-reserved | | CODEGEN | non-reserved | non-reserved | non-reserved | | COLLATE | reserved | non-reserved | reserved | | COLLECTION | non-reserved | non-reserved | non-reserved | | COLUMN | reserved | non-reserved | reserved | | COLUMNS | non-reserved | non-reserved | non-reserved | | COMMENT | non-reserved | non-reserved | non-reserved | | COMMIT | non-reserved | non-reserved | reserved | | COMPACT | non-reserved | non-reserved | non-reserved | | COMPACTIONS | non-reserved | non-reserved | non-reserved | | COMPUTE | non-reserved | non-reserved | non-reserved | | CONCATENATE | non-reserved | non-reserved | non-reserved | | CONSTRAINT | reserved | non-reserved | reserved | | COST | non-reserved | non-reserved | non-reserved | | CREATE | reserved | non-reserved | reserved | | CROSS | reserved | strict-non-reserved | reserved | | CUBE | non-reserved | non-reserved | reserved | | CURRENT | non-reserved | non-reserved | reserved | | CURRENT_DATE | reserved | non-reserved | reserved | | CURRENT_TIME | reserved | non-reserved | reserved | | CURRENT_TIMESTAMP | reserved | non-reserved | reserved | | CURRENT_USER | reserved | 
non-reserved | reserved | | DATA | non-reserved | non-reserved | non-reserved | | DATE | non-reserved | non-reserved | reserved | | DATABASE | non-reserved | non-reserved | non-reserved | | DATABASES | non-reserved | non-reserved | non-reserved | | DATEADD | non-reserved | non-reserved | non-reserved | | DATE_ADD | non-reserved | non-reserved | non-reserved | |" } ]
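The ANSI-mode material in the Spark SQL record above is easiest to verify interactively. Below is a minimal PySpark sketch, assuming a local Spark 3.x installation (3.2 or later for try_cast); the application name is arbitrary, and the configuration keys are the ones named in the text (spark.sql.ansi.enabled, spark.sql.storeAssignmentPolicy).

```
# Minimal PySpark sketch of the ANSI-mode behaviour described above.
# Assumes a local Spark 3.x install (3.2+ for try_cast); config keys come from the text.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("ansi-mode-demo")                         # arbitrary app name
    .config("spark.sql.ansi.enabled", "true")          # strict ANSI casts / overflow errors
    .config("spark.sql.storeAssignmentPolicy", "ANSI")
    .getOrCreate()
)

# Under ANSI mode an invalid string-to-int cast fails at runtime ...
try:
    spark.sql("SELECT CAST('a' AS INT) AS v").show()
except Exception as err:
    print("ANSI cast failed as expected:", type(err).__name__)

# ... while try_cast tolerates the bad input and returns NULL instead.
spark.sql("SELECT try_cast('a' AS INT) AS v").show()

spark.stop()
```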
{ "category": "App Definition and Development", "file_name": "01how-to-contribute.md", "project_name": "Apache RocketMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "Apache RocketMQ Open and sharing open source community, sincerely invite you to join. Ways of community communication and contribution: Apache RocketMQ community provides a complete process to help you answer your questions. You can ask questions through User mailing list and Stack Overflow #rocketmq . If you have problems using RocketMQ,You can file an error report on GitHub Issue. The community is constantly looking for feedback to improve Apache RocketMQ,Your need for improvements or new features will benefit all RocketMQ users, Please create an issue on GitHub Issue Proposals need to include appropriate details and scope of impact. Please elaborate as much as possible on the requirements.We hope to get more complete information for the following reasons: If you plan to implement your proposal to contribute to the community, you will also need to provide detailed description information,And follow code-guidelines Code specification We recommend building community consensus before implementing features. By discussing the need for new features and how to implement them, proposals that are outside the scope of the project can be spotted early. Members of the Apache RocketMQ community communicate through the following two types of email: User mailing list Apache RocketMQ users use the mailing list to ask for help or advice. You can contribute to the community by subscribing to the email system to help others solve problems; You can also retrieve on Stackoverflow rocketmq tag answer user questions and get more insights. Development mailing list : Apache RocketMQ developers use this mailing list to communicate new features, pre-releases, general development processes, etc. If you are interested in contributing code to the RocketMQ community, you can join the mailing list. You can also by subscribing to mailing lists get more info about the community. Apache RocketMQ continues to grow with the help of its active community. Every few weeks we release a new version of RocketMQ to fix bugs, improve performance, add features, etc. The process for releasing a new version is as follows: We have compiled the release-manual release guide on the website. Testing a pre-release is a big job, and we need to get more people involved. The RocketMQ community encourages everyone to participate in testing the new version. By testing the pre-release version, you will be confident that the new RocketMQ version will still service your program properly and is indeed supporting version" }, { "data": "Apache RocketMQ has been and will continue to be maintained, optimized, and extended. So Apache RocketMQ encourages everyone to contribute source code.To give code contributors and reviewers a great code contribution experience and provide a high quality code repository, the community follows the contribution process in code-guidelines.The coding manual contains guidelines for building a development environment, community coding guidelines and coding styles, and describes how to submit contributed code. Be sure to read it carefully before coding code-guidelines And please read Apache Software Foundation contributor license to submit electronic signature. How to find the right issue? GitHub Issue lists the improvements and recommended features that have been proposed so far. Good documentation is essential to any kind of software. The Apache RocketMQ community is committed to providing concise, accurate, and complete technical documentation. 
The community invites all contributions to help refine and improve the RocketMQ documentation. Read the Q&A to learn how to contribute by updating and refining documents. The Apache RocketMQ website represents Apache RocketMQ and the Apache RocketMQ community. Its main functions are as follows: The community accepts any contribution that will help improve the site. Please provide your suggestions and ideas about the site by creating a GitHub Issue. If you would like to update or optimize the website, please visit apache/rocketmq-site new-official-website. There are many more ways to contribute to the RocketMQ community that you can choose from: Committers are members of a community's project repository who can modify code, documents, and websites or accept contributions from other members. There is no strict protocol for becoming a committer, and candidates are usually active contributors in the community. Being an active contributor means participating in discussions on the mailing lists, helping others solve problems, verifying pre-release versions, recognizing good contributors, and continuously helping to improve community management; all of this is part of being an Apache community. Undoubtedly, contributing code and documentation to the project is equally important. A good place to start is by optimizing performance, developing new features, and fixing bugs. Either way, you are responsible for contributing code, providing test cases and documentation, and maintaining it continuously. Candidates can be recommended by committers or PMC members in the community, and are ultimately voted on by the PMC. If you are interested in becoming a committer in the RocketMQ community, please actively engage with the community and contribute to Apache RocketMQ in any of the above ways. Committer members in the community will be eager to share with you and give you advice and guidance as appropriate." } ]
{ "category": "App Definition and Development", "file_name": "4-using-online-machine-learning-on-a-streampipes-data-stream.md", "project_name": "Apache StreamPipes", "subcategory": "Streaming & Messaging" }
[ { "data": "The last tutorial (Getting live data from the StreamPipes data stream) showed how we can connect to a data stream, and it would be possible to use Online Machine Learning with this approach and train a model with the incoming events at the onEvent method. However, the StreamPipes client also provides an easier way to do this with the use of the River library for Online Machine Learning. We will have a look at this now. ``` from streampipes.client import StreamPipesClient from streampipes.client.config import StreamPipesClientConfig from streampipes.client.credential_provider import StreamPipesApiKeyCredentials ``` ``` %pip install river streampipes ``` ``` import os os.environ[\"USER\"] = \"admin@streampipes.apache.org\" os.environ[\"API-KEY\"] = \"XXX\" os.environ[\"BROKER-HOST\"] = \"localhost\" ``` ``` client_config = StreamPipesClientConfig( credentialprovider=StreamPipesApiKeyCredentials.fromenv(usernameenv=\"USER\", apikey_env=\"API-KEY\"), host_address=\"localhost\", port=80, https_disabled=True, ) client = StreamPipesClient(clientconfig=clientconfig) ``` ``` 2023-01-27 16:04:24,784 - streampipes.client.client - [INFO] - [client.py:128] [setup_logging] - Logging successfully initialized with logging level INFO. ``` ``` client.dataStreamApi.all().to_pandas() ``` ``` 2023-01-27 16:04:28,212 - streampipes.endpoint.endpoint - [INFO] - [endpoint.py:163] [makerequest] - Successfully retrieved all resources. ``` | Unnamed: 0 | elementid | name | description | iconurl | appid | includesassets | includeslocales | internallymanaged | measurementobject | index | ... | dom | rev | numtransportprotocols | nummeasurementcapability | numapplicationlinks | numincludedassets | numconnectedto | numcategory | numeventproperties | numincludedlocales | |-:|:-|:-|--:|:--|:|:|:-|:|:|--:|:|:|:--|--:|--:|:|-:|-:|:|--:|--:| | 0 | sp:spdatastream:xboBFK | Test | nan | None | None | False | False | True | None | 0 | ... | None | 5-558c861debc745e1ebae29a266a8bdb9 | 1 | 0 | 0 | 0 | 0 | 0 | 7 | 0 | | 1 | urn:streampipes.apache.org:eventstream:Wgyrse | Test File | nan | None | None | False | False | True | None | 0 | ... | None | 4-66548b6b84287011b7cec0876ef82baf | 1 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 2 rows 22 columns After we configured the client as usual, we can start with the new part. The approach is straight forward and you can start with the ML part in just 3 steps: A StreamPipesFunction is then started, which trains the model for each new event. It also creates an output data stream which will send the prediction of the model back to StreamPipes. This output stream can be seen when creating a new pipeline and can be used like every other data source. So you can use it in a pipeline and save the predictions in a Data Lake. You can also stop and start the training with the method" }, { "data": "To stop the whole function use the stop methode and if you want to delete the output stream entirely, you can go to the Pipeline Element Installer in StreamPipes and uninstall it. Now let's take a look at some examples. If you want to execute the examples below you have to create an adapter for the Machine Data Simulator, select the flowrate sensor and insert the stream id of this stream. 
``` from river import cluster, compose, preprocessing from streampipes.functionzoo.riverfunction import OnlineML from streampipes.functions.utils.datastreamgenerator import RuntimeType k_means = compose.Pipeline( (\"drop_features\", compose.Discard(\"sensorId\", \"timestamp\")), (\"scale\", preprocessing.StandardScaler()), (\"kmeans\", cluster.KMeans(nclusters=2)), ) clustering = OnlineML( client=client, streamids=[\"sp:spdatastream:xboBFK\"], model=kmeans, prediction_type=RuntimeType.INTEGER.value ) clustering.start() ``` ``` 2023-01-27 16:04:35,599 - streampipes.endpoint.endpoint - [INFO] - [endpoint.py:163] [makerequest] - Successfully retrieved all resources. 2023-01-27 16:04:35,599 - streampipes.functions.functionhandler - [INFO] - [functionhandler.py:64] [initializeFunctions] - Create output data stream \"sp:spdatastream:cwKPoo\" for the function \"65cf8b86-bcdf-433e-a1c7-3e920eab55d0\" 2023-01-27 16:04:37,766 - streampipes.endpoint.endpoint - [INFO] - [endpoint.py:163] [makerequest] - Successfully retrieved all resources. 2023-01-27 16:04:37,767 - streampipes.functions.functionhandler - [INFO] - [functionhandler.py:78] [initializeFunctions] - Using NatsBroker for RiverFunction 2023-01-27 16:04:37,791 - streampipes.functions.broker.natsbroker - [INFO] - [natsbroker.py:48] [_makeConnection] - Connected to NATS at localhost:4222 2023-01-27 16:04:37,791 - streampipes.functions.broker.natsbroker - [INFO] - [natsbroker.py:48] [_makeConnection] - Connected to NATS at localhost:4222 2023-01-27 16:04:37,792 - streampipes.functions.broker.natsbroker - [INFO] - [natsbroker.py:58] [createSubscription] - Subscribed to stream: sp:spdatastream:xboBFK ``` ``` clustering.set_learning(False) ``` ``` clustering.stop() ``` ``` 2023-01-27 16:04:57,303 - streampipes.functions.broker.natsbroker - [INFO] - [natsbroker.py:82] [disconnect] - Stopped connection to stream: sp:spdatastream:xboBFK 2023-01-27 16:04:57,304 - streampipes.functions.broker.natsbroker - [INFO] - [natsbroker.py:82] [disconnect] - Stopped connection to stream: sp:spdatastream:cwKPoo ``` ``` from river import cluster, compose, preprocessing, tree from streampipes.functionzoo.riverfunction import OnlineML from streampipes.functions.utils.datastreamgenerator import RuntimeType hoeffding_tree = compose.Pipeline( (\"drop_features\", compose.Discard(\"sensorId\", \"timestamp\")), (\"hoeffdingtree\", tree.HoeffdingTreeRegressor(graceperiod=5)), ) def draw_tree(self, event, streamId): \"\"\"Draw the tree and save the image.\"\"\" if self.learning: if self.model[1].n_nodes != None: self.model[1].draw().render(\"hoeffding_tree\", format=\"png\", cleanup=True) def save_model(self): \"\"\"Save the trained model.\"\"\" with open(\"hoeffding_tree.pkl\", \"wb\") as f: pickle.dump(self.model, f) regressor = OnlineML( client=client, stream_ids=[\"sp:spdatastream:xboBFK\"], model=hoeffding_tree, prediction_type=RuntimeType.FLOAT.value, supervised=True, target_label=\"temperature\", onevent=drawtree, onstop=savemodel, ) regressor.start() ``` ``` regressor.set_learning(False) ``` ``` regressor.stop() ``` ``` import pickle from river import cluster, compose, preprocessing, tree from streampipes.functionzoo.riverfunction import OnlineML from streampipes.functions.utils.datastreamgenerator import RuntimeType decision_tree = compose.Pipeline( (\"drop_features\", compose.Discard(\"sensorId\", \"timestamp\")), (\"decisiontree\", tree.ExtremelyFastDecisionTreeClassifier(graceperiod=5)), ) def draw_tree(self, event, streamId): \"\"\"Draw the tree and save the 
image.\"\"\" if self.learning: if self.model[1].n_nodes != None: self.model[1].draw().render(\"decicion_tree\", format=\"png\", cleanup=True) def save_model(self): \"\"\"Save the trained model.\"\"\" with open(\"decision_tree.pkl\", \"wb\") as f: pickle.dump(self.model, f) classifier = OnlineML( client=client, stream_ids=[\"sp:spdatastream:xboBFK\"], model=decision_tree, prediction_type=RuntimeType.BOOLEAN.value, supervised=True, targetlabel=\"sensorfault_flags\", onevent=drawtree, onstop=savemodel, ) classifier.start() ``` ``` classifier.set_learning(False) ``` ``` classifier.stop() ``` How do you like this tutorial? We hope you like it and would love to receive some feedback from you. Just go to our GitHub discussion page and let us know your impression. We'll read and react to them all, we promise!" } ]
{ "category": "App Definition and Development", "file_name": "RqK0w4DWHiqgdIkhGULcPW9rnFc.md", "project_name": "AutoMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "AutoMQ is a next-generation distribution of Apache Kafka, re-engineered with cloud-native principles and fully compatible with Apache Kafka protocols and functionalities. In its technical design, AutoMQ leverages the compute layer code from Apache Kafka, implementing only minimal modifications at the storage layer. This approach guarantees complete compatibility with the respective versions of Apache Kafka. Applications developed for Apache Kafka can be effortlessly transitioned to AutoMQ. During the compatibility testing phase, AutoMQ utilized test suites from Apache Kafka and successfully passed the tests for the pertinent versions. Below are the specific results: | Apache Kafka Test Module | Passed Cases | Total Cases | Failed Cases | Reasons for Failure | |:|:|--:|:|:--| | sanity_check | 41 | 48 | 7 | Failed cases are only applicable to Zookeeper mode, while AutoMQ operates in KRaft mode, hence these cases are not relevant. | | client | 37 | 86 | 49 | Failed cases are only applicable to Zookeeper mode, while AutoMQ operates in KRaft mode, hence these cases are not relevant. | | tools | 8 | 9 | 1 | Failed case is only applicable to Zookeeper mode, while AutoMQ operates in KRaft mode, hence this case is not relevant. | | benchmark | 58 | 120 | 62 | Failed cases only applicable in Zookeeper mode, AutoMQ operates in KRaft mode, thus these cases need not be considered. | | core | 95 | 348 | 253 | Failed cases only applicable in Zookeeper mode, AutoMQ operates in KRaft mode, thus these cases need not be considered. | | connect & streams | 100 | 291 | 191 | Failed cases only applicable in Zookeeper mode, AutoMQ operates in KRaft mode, thus these cases need not be considered. | | Total | 339 | 902 | 563 | Failed cases only applicable in Zookeeper mode, AutoMQ operates in KRaft mode, thus these cases need not be considered. | The version compatibility relationship between AutoMQ and Apache Kafka is outlined below: | AutoMQ | Apache Kafka Server | Kafka Client | Kafka Connector | HTTP Proxy | |:|:|:|:|:-| | v1.1.x (upcoming) | Adapted: v3.7.xBackward compatible: v0.9.xv3.7.x | Compatible: v0.9.xv3.7.x | Compatible | Compatible | | v1.0.x | Adapted: v3.4.xBackward compatible: v0.9.xv3.4.x | Compatible: v0.9.xv3.4.x | Compatible | Compatible | AutoMQ versions are meticulously mapped to Apache Kafka versions to ensure comprehensive compatibility with the Kafka Client, Connector, Proxy, and other components in the Apache Kafka ecosystem. AutoMQ implements minimal modifications at the storage layer to align with Apache Kafka, facilitating rapid adaptation to new community releases and supporting the latest Apache Kafka version updates in as little as T+1 months. Apache, Apache Kafka, Kafka, Apache RocketMQ, RocketMQ, and associated open source project names are trademarks of the Apache Software Foundation" } ]
{ "category": "App Definition and Development", "file_name": "EsUBwQei4ilCDjkWb8WcbOZInwc.md", "project_name": "AutoMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "AutoMQ is a cutting-edge distribution of Apache Kafka, reengineered with cloud-native principles, offering up to a 10-fold improvement in cost efficiency and elasticity while maintaining full compatibility with the Apache Kafka protocol. This article primarily explores the key differences and relationships between AutoMQ and Apache Kafka. Apache Kafka utilizes local disk storage, constructing highly reliable storage via software-level high-availability replication logic (ISR mechanism). This setup provides an \"infinite\" stream storage abstraction to the business logic layer. Typically, all Kafka data is stored on local disks in accordance with specific logic, an approach commonly referred to as a Shared Nothing architecture. In contrast, AutoMQ deviates from Apache Kafka by adopting a compute-storage separation strategy, eschewing local disks in favor of shared object storage services for data storage. AutoMQ introduces an S3Stream storage repository (software library) to replace the local log storage of Apache Kafka, seamlessly leveraging object storage to preserve Kafka data while ensuring that core Apache Kafka functionalities remain intact. This model is known as a Shared Storage architecture. The following table contrasts the two architectures of Apache Kafka and AutoMQ: | Apache Kafka Adopts Shared Nothing architecture | AutoMQ Adopts Shared Storage architecture | |:--|:| | Data on local disks, requires implementing cross-node multiple replica replication | Data in S3 shared storage (triple-replica high reliability), no need for multiple replica replication | | Data is isolated between nodes, with data access bound to nodes | Data is shared across nodes, allowing cross-node sharing and access | | Adding nodes or replacing failed nodes requires reassigning shard data | Adding nodes or replacing failed nodes allows for quick switching without the need for data reassignment | Note: Starting with version 3.6, Apache Kafka aims to introduce tiered storage capabilities (not yet ready for production), which will allow for offloading historical data to object storage services. This approach has both similarities and differences with AutoMQ, which exclusively utilizes object storage for its storage layer. For an in-depth comparison, please see Difference with Tiered Storage. Partition reassignment is a common and unavoidable challenge in Kafka production environments, occurring under circumstances such as partial node failures, cluster scaling, and handling local hotspots. Apache Kafka employs a Shared Nothing architecture, wherein each partition's data is fully contained on a designated storage node. If a partition reassignment is necessary, it requires transferring the entire data of the partition to a new target node to restore service. This process is marked by lengthy and unpredictable durations. Example: For a Kafka partition with a write throughput of 100MiB/s, the daily data production is around 8.2TiB. If reassigning this partition becomes necessary, it entails moving all the data to a different node. Even with network bandwidth at 1Gbps, this reassignment process can take several hours to complete. AutoMQ employs a compute-storage separation architecture, wherein the complete data of each partition is stored in S3 object" }, { "data": "Reassignment merely involves syncing minimal metadata, allowing the transition to be executed in mere seconds, independent of the write throughput scale of the partition. 
AutoMQ supports reassignment in seconds, offering AutoMQ faster and more predictable flexibility advantages in scenarios such as cluster scaling and fault recovery compared to Apache Kafka. Due to the technical architectural differences highlighted, AutoMQ and Apache Kafka exhibit significant disparities in their cost structures for computing and storage. AutoMQ minimizes inter-node replication traffic and stress by avoiding the necessity for multiple replica copies across nodes during message writing. Furthermore, AutoMQ utilizes S3 object storage as its storage medium, which is substantially cheaper than EBS block storage typically used on each node in public cloud settings. Specific comparison items are as follows: | Cost Comparison | Apache Kafka | AutoMQ | |:-|:-|:-| | Storage Unit Price | Scenario: 1GB of data requires 3GB of EBS (three replicas)Cost: $0.288 USD/month | Scenario: 1GB of business data requires 1GB of S3Cost: $0.023 USD/month | | Cross-node Replication Traffic | Scenario: Writing 1GB of data, requiring 2GB of traffic across nodes (three replicas)Cost: 0.04 USD | Scenario: Writing 1GB of data, direct upload to S3, no cross-node traffic needed (three replicas)Cost: 0 USD | Note: The pricing for storage units referenced here is based on a comparison between AWS S3 US East EBS GP3 instances and S3 Standard storage. For more details, please visit the link. The costs associated with cross-node traffic replication are derived from AWS AZ inter-traffic transmission rates. For a comprehensive cost comparison between AutoMQ and Apache Kafka, please see Cost-Effective: AutoMQ vs. Apache Kafka. Capacity planning presents a significant challenge during the large-scale deployment of Kafka in production settings. Due to architectural differences between AutoMQ and Apache Kafka, along with variations in storage media, there are noticeable differences in capacity planning: | Apache Kafka Uses local disks, integrated storage and computation | AutoMQ Uses S3 object storage, separate storage and computation | |:|:| | Disk space needs to be reserved in advance | Storage does not need to be reserved, allowing for on-demand use and pay-per-use | | Single-node storage is limited, with poor scalability | S3 object storage offers nearly unlimited space and good scalability | AutoMQ, reimagined as a next-generation Kafka distribution, delivers cost-efficiency and enhanced flexibility without compromising compatibility. Applications developed for Apache Kafka can switch to AutoMQ seamlessly, requiring no changes or adjustments. Building on the architectural comparison above, AutoMQ has introduced an S3Stream storage repository at its storage layer, effectively replacing the local log storage used by Apache Kafka. This layer maintains the same Partition abstraction, allowing upper layers such as KRaft metadata management, Coordinator, ReplicaManager, KafkaApis to integrate existing code seamlessly. AutoMQ upholds full compatibility with Apache Kafka protocols and semantics, continuously incorporating the latest updates and fixes from Apache Kafka. For additional details on how AutoMQ aligns with Apache Kafka, see Compatibility with Apache Kafka. Apache, Apache Kafka, Kafka, Apache RocketMQ, RocketMQ, and associated open source project names are trademarks of the Apache Software Foundation" } ]
{ "category": "App Definition and Development", "file_name": "python.md", "project_name": "Apache StreamPipes", "subcategory": "Streaming & Messaging" }
[ { "data": "Dependency issue with StreamPipes Python 0.92.0 In StreamPipes Python 0.92.0 there is a problem with the required dependencies. Pydantic has recently released the new version 2.0 with many exciting improvements, but also some breaking changes. Unfortunately, we didn't limit our requirements strictly enough, so yydantic 2.0 is installed together with streampipes, which is not (yet) compatible. To fix this bug, simply run the following command after installing streampipes, or adjust your dependencies accordingly: ``` pip install \"pydantic<2.0\" \"pydantic_core<2.0\" ``` Apache StreamPipes meets Python! We are working highly motivated on a Python library to interact with StreamPipes. In this way, we would like to unite the power of StreamPipes to easily connect to and read from different data sources, especially in the IoT domain, and the amazing universe of data analytics libraries in Python. StreamPipes Python is in beta The current version of this Python library is still a beta version. This means that it is still heavily under development, which may result in frequent and extensive API changes, unstable behavior, etc. As a quick example, we demonstrate how to set up and configure a StreamPipes client. In addition, we will get the available data lake measures out of StreamPipes. ``` from streampipes.client import StreamPipesClient from streampipes.client.config import StreamPipesClientConfig from streampipes.client.credential_provider import StreamPipesApiKeyCredentials config = StreamPipesClientConfig( credential_provider = StreamPipesApiKeyCredentials( username = \"test@streampipes.apache.org\", api_key = \"DEMO-KEY\", ), host_address = \"localhost\", https_disabled = True, port = 80 ) client = StreamPipesClient(client_config=config) measures = client.dataLakeMeasureApi.all() len(measures) ``` ``` 1 ``` ``` measures.to_pandas() ``` Output: ``` measurename timestampfield ... pipelineisrunning numeventproperties 0 test s0::timestamp ... False 2 [1 rows x 6 columns] ``` ``` from streampipes.client.credential_provider import StreamPipesApiKeyCredentials StreamPipesApiKeyCredentials() ``` username is always the username that is used to log in into StreamPipes. The api_key can be generated within the UI as demonstrated below:" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "CDEvents", "subcategory": "Streaming & Messaging" }
[ { "data": "Welcome to the CDEvents specification documentation for release v0.4.1. Here, you can find all the information necessary to understand and implement CDEvents within your application. For those new to CDEvents, we recommend starting with the White Paper and the Primer, to help you rapidly understand the concepts. Note that CDEvents builds upon CloudEvents, so it may be helpful to have some understanding of that specification first. To help you get up to speed quickly, we have broken the specification down into bite-sized chunks. The sections below will help you navigate to the information that you need. White Paper The Continuous Delivery Foundation White Paper on CDEvents Primer An introduction to CDEvents and associated concepts Common Metadata An overview of Metadata common across the CDEvents Specification Core Events Definition of specific events that are fundamental to pipeline execution and orchestration Source Code Control Events Handling Events relating to changes in version management of Source Code and related assets Continuous Integration Events Handling Events associated with Continuous Integration activities, typically involving build and test Testing Events Handling Events associated with Testing activities Continuous Deployment Events Handling Events associated with Continuous Deployment activities Continuous Operations Events Handling Events associated with Continuous Operations activities Ticket Events Handling Events associated with Tickets CloudEvents Binding and Transport Defining how CDEvents are mapped to CloudEvents for transportation and delivery The Continuous Delivery Foundation White Paper on CDEvents An introduction to CDEvents and associated concepts An overview of Metadata common across the CDEvents Specification Definition of specific events that are fundamental to pipeline execution and orchestration Handling Events relating to changes in version management of Source Code and related assets Handling Events associated with Continuous Integration activities, typically involving build and test Handling Events associated with Testing activities Handling Events associated with Continuous Deployment activities Handling Events associated with Continuous Operations activities Handling Events associated with Tickets Defining how CDEvents are mapped to CloudEvents for transportation and delivery" } ]
{ "category": "App Definition and Development", "file_name": "SfCawQ8ISiFXYpkMb7ycr3g0nzg.md", "project_name": "AutoMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "Deploying AutoMQ on a single-machine environment provides an opportunity to explore message sending, receiving, and partition reassignment features. Linux/Mac/Windows Subsystem for Linux Docker Docker Compose version > 2.22.0 At least 8GB of free memory If you encounter slow download speeds for container images, consult the Docker Hub Mirror Configuration Execute the following command to establish a test cluster with 3 AutoMQ nodes. This cluster initiates AutoMQ and AWS LocalStack using Docker Compose, automatically creates a Bucket, and assigns local files as a stand-in for EBS. ``` curl https://download.automq.com/communityedition/standalonedeployment/install_run.sh | bash``` Once launched successfully, the address of the AutoMQ bootstrap server will be displayed in the standard output, as illustrated below: ``` AutoMQ has been successfully installed. You can now access AutoMQ from the bootstrap server address.localhost:9094,localhost:9095``` After initiating the AutoMQ cluster, you can execute the demo program below to test its capabilities. Example: Produce & Consume Message Example: Simple Benchmark Example: Partition Reassignment in Seconds Example: Self-Balancing When Cluster Nodes Change Example: Continuous Data Self-Balancing Once you have completed the functionality test, use the following commands to halt and remove the AutoGO cluster. ``` curl https://download.automq.com/communityedition/standalonedeployment/stop_uninstall.sh | bash``` To download software artifacts, please visit Software Artifact. For instructions on deploying the AutoMQ cluster in a production setting, see: Cluster Deployment on Linux Cluster Deployment on Kubernetes Try AutoMQ Enterprise Edition on the Cloud Marketplace Apache, Apache Kafka, Kafka, Apache RocketMQ, RocketMQ, and associated open source project names are trademarks of the Apache Software Foundation" } ]
{ "category": "App Definition and Development", "file_name": "examples-beam.html.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink. In this exercise, you create a Managed Service for Apache Flink application that transforms data using Apache Beam. Apache Beam is a programming model for processing streaming data. For information about using Apache Beam with Managed Service for Apache Flink, see Using Apache Beam. To set up required prerequisites for this exercise, first complete the Getting started (DataStream API) exercise. Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources: Two Kinesis data streams (ExampleInputStream and ExampleOutputStream) An Amazon S3 bucket to store the application's code (ka-app-code-<username>) You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics: Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream. How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>. In this section, you use a Python script to write random strings to the stream for the application to process. This section requires the AWS SDK for Python (Boto). Create a file named ping.py with the following contents: ``` import json import boto3 import random kinesis = boto3.client('kinesis') while True: data = random.choice(['ping', 'telnet', 'ftp', 'tracert', 'netstat']) print(data) kinesis.put_record( StreamName=\"ExampleInputStream\", Data=data, PartitionKey=\"partitionkey\") ``` Run the ping.py script: ``` $ python ping.py``` Keep the script running while completing the rest of the tutorial. The Java application code for this example is available from GitHub. To download the application code, do the following: Install the Git client if you haven't already. For more information, see Installing Git. Clone the remote repository with the following command: ``` git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git``` Navigate to the amazon-kinesis-data-analytics-java-examples/Beam directory. The application code is located in the BasicBeamStreamingJob.java file. Note the following about the application code: The application uses the Apache Beam ParDo to process incoming records by invoking a custom transform function called PingPongFn. The code to invoke the PingPongFn function is as follows: ``` .apply(\"Pong transform\", ParDo.of(new PingPongFn()) ``` Managed Service for Apache Flink applications that use Apache Beam require the following components. If you don't include these components and versions in your pom.xml, your application loads the incorrect versions from the environment dependencies, and since the versions do not match, your application crashes at runtime. ``` <jackson.version>2.10.2</jackson.version> ... <dependency> <groupId>com.fasterxml.jackson.module</groupId> <artifactId>jackson-module-jaxb-annotations</artifactId> <version>2.10.2</version> </dependency>``` The PingPongFn transform function passes the input data into the output stream, unless the input data is ping, in which case it emits the string pong\\n to the output stream. 
The code of the transform function is as follows: ``` private static class PingPongFn extends DoFn<KinesisRecord, byte[]> { private static final Logger LOG = LoggerFactory.getLogger(PingPongFn.class); @ProcessElement public void processElement(ProcessContext c) { String content = new String(c.element().getDataAsBytes(), StandardCharsets.UTF_8); if (content.trim().equalsIgnoreCase(\"ping\")) { LOG.info(\"Ponged!\");" }, { "data": "} else { LOG.info(\"No action for: \" + content); c.output(c.element().getDataAsBytes()); } } }``` To compile the application, do the following: Install Java and Maven if you haven't already. For more information, see Prerequisites in the Getting started (DataStream API) tutorial. Compile the application with the following command: ``` mvn package -Dflink.version=1.15.2 -Dflink.version.minor=1.8``` The provided source code relies on libraries from Java 11. Compiling the application creates the application JAR file (target/basic-beam-app-1.0.jar). In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section. In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload. In the Select files step, choose Add files. Navigate to the basic-beam-app-1.0.jar file that you created in the previous step. You don't need to change any of the settings for the object, so choose Upload. Your application code is now stored in an Amazon S3 bucket where your application can access it. Follow these steps to create, configure, update, and run the application using the console. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink On the Managed Service for Apache Flink dashboard, choose Create analytics application. On the Managed Service for Apache Flink - Create application page, provide the application details as follows: For Application name, enter MyApplication. For Runtime, choose Apache Flink. Apache Beam is not compatible with Apache Flink version 1.18 or later. Select Apache Flink version 1.15 from the version pulldown. For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. Choose Create application. When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows: Policy: kinesis-analytics-service-MyApplication-us-west-2 Role: kinesis-analytics-MyApplication-us-west-2 Edit the IAM policy to add permissions to access the Kinesis data streams. Open the IAM console at https://console.aws.amazon.com/iam/. Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section. On the Summary page, choose Edit policy. Choose the JSON tab. Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID. 
``` { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"ReadCode\", \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"logs:DescribeLogGroups\", \"s3:GetObjectVersion\" ], \"Resource\": [ \"arn:aws:logs:us-west-2:012345678901:log-group:*\", \"arn:aws:s3:::ka-app-code-<username>/basic-beam-app-1.0.jar\" ] }, { \"Sid\": \"DescribeLogStreams\", \"Effect\": \"Allow\", \"Action\": \"logs:DescribeLogStreams\", \"Resource\": \"arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*\" }, { \"Sid\": \"PutLogEvents\", \"Effect\": \"Allow\", \"Action\": \"logs:PutLogEvents\", \"Resource\": \"arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream\" }, { \"Sid\": \"ListCloudwatchLogGroups\", \"Effect\": \"Allow\", \"Action\": [ \"logs:DescribeLogGroups\" ], \"Resource\": [ \"arn:aws:logs:us-west-2:012345678901:log-group:*\" ] }, { \"Sid\": \"ReadInputStream\", \"Effect\": \"Allow\", \"Action\": \"kinesis:*\", \"Resource\": \"arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream\" }, { \"Sid\": \"WriteOutputStream\", \"Effect\": \"Allow\", \"Action\": \"kinesis:*\", \"Resource\": \"arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream\" } ] }``` On the MyApplication page, choose Configure. On the Configure application page, provide the Code location: For Amazon S3 bucket, enter ka-app-code-<username>. For Path to Amazon S3 object, enter" }, { "data": "Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2. Enter the following: | Group ID | Key | Value | |:--|:--|:--| | BeamApplicationProperties | InputStreamName | ExampleInputStream | | BeamApplicationProperties | OutputStreamName | ExampleOutputStream | | BeamApplicationProperties | AwsRegion | us-west-2 | Under Monitoring, ensure that the Monitoring metrics level is set to Application. For CloudWatch logging, select the Enable check box. Choose Update. When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows: Log group: /aws/kinesis-analytics/MyApplication Log stream: kinesis-analytics-log-stream This log stream is used to monitor the application. This is not the same log stream that the application uses to send results. The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job. You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working. This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial. Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink in the Managed Service for Apache Flink panel, choose MyApplication. In the application's page, choose Delete and then confirm the deletion. Open the Kinesis console at https://console.aws.amazon.com/kinesis. In the Kinesis Data Streams panel, choose ExampleInputStream. In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion. In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. Choose the ka-app-code-<username> bucket. Choose Delete and then enter the bucket name to confirm deletion. 
Open the IAM console at https://console.aws.amazon.com/iam/. In the navigation bar, choose Policies. In the filter control, enter kinesis. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy. Choose Policy Actions and then choose Delete. In the navigation bar, choose Roles. Choose the kinesis-analytics-MyApplication-us-west-2 role. Choose Delete role and then confirm the deletion. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. In the navigation bar, choose Logs. Choose the /aws/kinesis-analytics/MyApplication log group. Choose Delete Log Group and then confirm the deletion. Now that you've created and run a basic Managed Service for Apache Flink application that transforms data using Apache Beam, see the following application for an example of a more advanced Managed Service for Apache Flink solution. Beam on Managed Service for Apache Flink Streaming Workshop: In this workshop, we explore an end to end example that combines batch and streaming aspects in one uniform Apache Beam pipeline. Javascript is disabled or is unavailable in your browser. To use the Amazon Web Services Documentation, Javascript must be enabled. Please refer to your browser's Help pages for instructions. Thanks for letting us know we're doing a good job! If you've got a moment, please tell us what we did right so we can do more of it. Thanks for letting us know this page needs work. We're sorry we let you down. If you've got a moment, please tell us how we can make the documentation better." } ]
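The PingPongFn shown earlier in this record is Java. For readers who prefer Python, the same transform logic looks roughly like this with the Apache Beam Python SDK; the sketch runs on the local DirectRunner with an in-memory source instead of Kinesis, so all of the AWS-specific wiring above is deliberately left out.

```
# The ping/pong transform from the Java example, sketched with the Beam Python SDK.
# Runs locally on the DirectRunner with an in-memory source instead of Kinesis.
import apache_beam as beam


class PingPongFn(beam.DoFn):
    def process(self, element: bytes):
        content = element.decode("utf-8").strip()
        if content.lower() == "ping":
            yield b"pong\n"      # answer pings, as described in the text
        else:
            yield element        # pass everything else through unchanged


with beam.Pipeline() as pipeline:
    (
        pipeline
        | beam.Create([b"ping", b"telnet", b"ping", b"ftp"])
        | beam.ParDo(PingPongFn())
        | beam.Map(print)
    )
```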
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "CDEvents", "subcategory": "Streaming & Messaging" }
[ { "data": "Welcome to the CDEvents specification documentation for release v0.4.1. Here, you can find all the information necessary to understand and implement CDEvents within your application. For those new to CDEvents, we recommend starting with the White Paper and the Primer, to help you rapidly understand the concepts. Note that CDEvents builds upon CloudEvents, so it may be helpful to have some understanding of that specification first. To help you get up to speed quickly, we have broken the specification down into bite-sized chunks. The sections below will help you navigate to the information that you need. White Paper The Continuous Delivery Foundation White Paper on CDEvents Primer An introduction to CDEvents and associated concepts Common Metadata An overview of Metadata common across the CDEvents Specification Core Events Definition of specific events that are fundamental to pipeline execution and orchestration Source Code Control Events Handling Events relating to changes in version management of Source Code and related assets Continuous Integration Events Handling Events associated with Continuous Integration activities, typically involving build and test Testing Events Handling Events associated with Testing activities Continuous Deployment Events Handling Events associated with Continuous Deployment activities Continuous Operations Events Handling Events associated with Continuous Operations activities Ticket Events Handling Events associated with Tickets CloudEvents Binding and Transport Defining how CDEvents are mapped to CloudEvents for transportation and delivery The Continuous Delivery Foundation White Paper on CDEvents An introduction to CDEvents and associated concepts An overview of Metadata common across the CDEvents Specification Definition of specific events that are fundamental to pipeline execution and orchestration Handling Events relating to changes in version management of Source Code and related assets Handling Events associated with Continuous Integration activities, typically involving build and test Handling Events associated with Testing activities Handling Events associated with Continuous Deployment activities Handling Events associated with Continuous Operations activities Handling Events associated with Tickets Defining how CDEvents are mapped to CloudEvents for transportation and delivery" } ]
{ "category": "App Definition and Development", "file_name": "#cloudevents.md", "project_name": "CloudEvents", "subcategory": "Streaming & Messaging" }
[ { "data": "Eventarc lets you asynchronously deliver events from Google services, SaaS, and your own apps using loosely coupled services that react to state changes. Eventarc requires no infrastructure management you can optimize productivity and costs while building a modern, event-driven solution. Learn more Quickstart: Receive Cloud Storage events Create a trigger Develop event receivers Receive a Pub/Sub event Trigger Workflows using Cloud Audit Logs Supported event types gcloud commands REST API reference RPC API reference Eventarc roles and permissions Pricing Quotas and limits Release notes Get support Trigger Cloud Run services with Eventarc Use Eventarc to listen to events from Cloud Pub/Sub, Cloud Storage, and Audit Logs and pass the events to Cloud Run. Eventarc for Cloud Run Deploy a Cloud Run receiver to listen to events from Cloud Pub/Sub and Audit Logs. Receive events using Pub/Sub Learn how to send events using Cloud Pub/Sub to an event receiver running on Cloud Run. Trigger Workflows using Eventarc Learn how to execute a workflow that receives events from BigQuery using Cloud Audit Logs. Build a BigQuery processing pipeline with Eventarc Learn how to use Eventarc to develop a processing pipeline using Cloud Storage, BigQuery, and Cloud Scheduler. Node.js samples Includes samples to read events from Pub/Sub, Cloud Storage, and more. Python samples Includes samples to read events from Pub/Sub, Cloud Storage, and more. Go samples Includes samples to read events from Pub/Sub, Cloud Storage, and more. Java samples Includes samples to read events from Pub/Sub, Cloud Storage, and more. C# samples Includes samples to read events from Pub/Sub, Cloud Storage, and more. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-10 UTC." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "CloudEvents", "subcategory": "Streaming & Messaging" }
[ { "data": "Knative Eventing is a collection of APIs that enable you to use an event-driven architecture with your applications. You can use these APIs to create components that route events from event producers (known as sources) to event consumers (known as sinks) that receive events. Sinks can also be configured to respond to HTTP requests by sending a response event. Knative Eventing is a standalone platform that provides support for various types of workloads, including standard Kubernetes Services and Knative Serving Services. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specifications, which enables creating, parsing, sending, and receiving events in any programming language. Knative Eventing components are loosely coupled, and can be developed and deployed independently of each other. Any producer can generate events before there are active event consumers that are listening for those events. Any event consumer can express interest in a class of events before there are producers that are creating those events. Examples of supported Knative Eventing use cases: Publish an event without creating a consumer. You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events. Consume an event without creating a publisher. You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST. Tip Multiple event producers and sinks can be used together to create more advanced Knative Eventing flows to solve complex use cases. Creating and responding to Kubernetes API events image/svg+xml Creating an image processing pipeline image/svg+xml Facilitating AI workloads at the edge in large-scale, drone-powered sustainable agriculture projects We use cookies. Google Analytics is used to improve your experience and help us understand site traffic and page usage. Learn about analytics cookies and how you can take steps to opt-out from sharing your usage data. We use analytics and cookies to understand site traffic. Information about your use of our site is shared with Google for that purpose. Learn more." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "English English Appearance What's New Feature Comparison Get Started Operating Limitations FAQ Technical Support Product Roadmap Deploy Debian Ubuntu CentOS/RHEL macOS Kubernetes Install from Source Code Rolling Upgrade Upgrade EMQX Cluster from 4.4 to 5.1 Upgrade EMQX on Kubernetes MQTT Core Concepts Test with MQTT Clients MQTT Shared Subscription MQTT Retained Message MQTT Will Message Exclusive Subscription Delayed Publish Auto Subscribe Topic Rewrite Wildcard Subscription Connect via C SDK Connect via Java SDK Connect via Go SDK Connect via Python SDK Connect via JavaScript SDK API Docs Architecture Create and Manage Cluster Cluster Security Load Balance EMQX Cluster with NGINX Load Balance EMQX Cluster with HAProxy Performance Tuning (Linux) Performance Test with eMQTT-Bench Performance Test with XMeter Cloud Test Scenarios and Results for Reference Enable SSL/TLS Connection Client TLS Connection Code Examples Obtain SSL/TLS Certificates CRL Check OCSP Stapling X.509 Certificate Authentication JWT Authentication Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service MQTT 5.0 Enhanced Authentication PSK Authentication Use HTTP API to Manage User Data Use ACL File Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service Banned Clients Flapping Detect Create Rules Rule SQL Reference Data Sources and Fields Built-in SQL Functions jq Functions Flow Designer Connector Webhook Apache Kafka Apache IoTDB Apache Pulsar AWS Kinesis AWS S3 Azure Event Hubs Cassandra ClickHouse Confluent DynamoDB Elasticsearch GCP PubSub GreptimeDB HStreamDB HTTP Server InfluxDB Microsoft SQL Server MongoDB MQTT MySQL OpenTSDB Oracle Database PostgreSQL RabbitMQ Redis RocketMQ SysKeeper TDengine TimescaleDB Cluster Listener MQTT Flapping Detection Limiter Logs Prometheus Dashboard Configuration Manual Command Line Interface Dashboard Home Page Connections Topics Subscriptions Retained Messages Authentication Authorization Banned Clients Flows Rules Data Bridge Management Extensions Diagnose System Configure OpenLDAP and Microsoft Entra ID SSO Configure SAML-Based SSO Audit Log Rate Limit Statistics and Metrics Alarm Logs Topic Metrics Slow Subscriptions Log Trace System Topic Integrate with Prometheus Metrics Logs Traces Backup and Restore Hooks Plugins gRPC Hook Extension Telemetry Features and Benefits Use MQTT over QUIC STOMP Gateway MQTT-SN Gateway CoAP Gateway LwM2M Gateway ExProto Gateway MQTT Client Attributes MQTT Programming MQTT Guide Clustering In-flight and Queue Message Retransmission MQTT 5.0 Specification MQTT 3.1.1 Specification MQTT Glossary MQTT 5.0 Features MQTT Reason Code Version 5 Version 4 Version 0.1 to 3.x Incompatible Changes in EMQX 5.7 Incompatible Changes in EMQX 5.6 Incompatible Changes in EMQX 5.5 Incompatible Changes in EMQX 5.4 Incompatible Changes between EMQX 4.4 and EMQX 5.1 Authentication / Authorization Incompatibility Between EMQX" }, { "data": "and EMQX 5.1 Data Integration Incompatibility Between EMQX 5.1 and EMQX 4.4 Gateway Incompatibility Between EMQX 4.4 and EMQX 5.1 EMQX is an open-source, highly scalable, and feature-rich MQTT broker designed for IoT and real-time messaging applications. It supports up to 100 million concurrent IoT device connections per cluster while maintaining a throughput of 1 million messages per second and a millisecond latency. 
EMQX supports various protocols, including MQTT (3.1, 3.1.1, and 5.0), HTTP, QUIC, and WebSocket. It also provides secure bi-directional communication with MQTT over TLS/SSL and various authentication mechanisms, ensuring reliable and efficient communication infrastructure for IoT devices and applications. With a built-in powerful SQL-based rules engine, EMQX can extract, filter, enrich, and transform IoT data in real-time. EMQX also ensures high availability and horizontal scalability with a masterless distributed architecture and provides an operations-friendly user experience with excellent observability. EMQX has been adopted by over 20,000 enterprise users, connecting more than 100 million IoT devices. Over 400 customers, including renowned brands like HPE, VMware, Verifone, SAIC Volkswagen, and Ericsson, trust EMQX for their mission-critical IoT scenarios. Massive Scale EMQX enables scaling up to 100 million concurrent MQTT connections in a single cluster, making it one of the most scalable MQTT brokers available. High Performance EMQX is capable of processing and handling millions of MQTT messages per second within a single broker. Low Latency EMQX offers almost real-time message delivery, with a sub-millisecond latency guarantee, ensuring that messages are received almost instantly. Fully MQTT 5.0 EMQX is fully compliant with both MQTT 5.0 and 3.x standards, providing better scalability, security, and reliability. High Availability EMQX enables high availability and horizontal scalability through a masterless distributed architecture, ensuring reliable and scalable performance. Cloud-Native & K8s EMQX can be easily deployed on-premises or in public clouds using Kubernetes Operator and Terraform. EMQ provides four deployment options for EMQX: two managed services (EMQX Cloud Serverless and EMQX Dedicated Cloud) and two self-hosted options (EMQX Open Source and EMQX Enterprise). To help you choose the best deployment option for your requirements, the following table lists a comparison of feature support across different deployment types. For a comparison of supported features in detail, refer to Feature Comparison. | Self Hosted | Self Hosted.1 | MQTT as a Service | MQTT as a Service.1 | |:-|:-|:-|:-| | EMQX Open Source | EMQX Enterprise | EMQX Cloud Serverless | EMQX Dedicated Cloud | | Open Source Download | Get a Free Trial License | Get Started Free | Start a Free 14-Day Trial | | Apache Version" }, { "data": "MQTT over QUIC Session storage in memory Supports Webhook and MQTT data bridge. Audit log and single sign-on (SSO) Multi-protocol gateways, including MQTT-SN, STOMP and CoAP Open source community | Commercial license (Business source license) MQTT over QUIC Session persistence in RocksDB Data integration with 40+ enterprise systems, including Kafka/Confluent, Timescale, InfluxDB, PostgreSQL, Redis etc. 
Audit log and single sign-on (SSO) Role-Based Access Control (RBAC) File transfer Message codec Multi-protocol gateways, with extra support on OCPP, JT/808 and GBT32960 24/7 global technical support | Pay as you go Free quota every month 1000 maximum connections Start deployment in seconds Auto scaling 8/5 global technical support | 14-day free trial Hourly billing Multi-cloud regions worldwide Flexible specifications VPC peering, NAT gateway, load balance and more Out-of-the-box integration with 40+ cloud services 24/7 global technical support | As an MQTT broker designed for IoT and real-time messaging applications, EMQX is often used to fulfill various business requirements in the following scenarios. EMQX supports multiple protocols, including MQTT (3.1, 3.1.1, and 5.0), HTTP, QUIC, and WebSocket. It also provides secure bi-directional communication with MQTT over TLS/SSL and various authentication mechanisms, ensuring a reliable and efficient communication infrastructure for IoT devices and applications. Using EMQX in mission-critical applications brings you the key benefits illustrated by the scenarios listed below. You can build peer-to-peer communication with EMQX. In the asynchronous Pub/Sub model, the message publisher and subscriber are decoupled from each other and can be dynamically added or removed as needed. This decoupling gives your applications and message communication flexibility. EMQX excels in scenarios where one-to-many messaging is vital, such as financial market updates. It effectively broadcasts messages to a large number of clients, ensuring timely information dissemination. The many-to-one message pattern in EMQX is ideal for consolidating data in large-scale networks, such as factory plants, modern buildings, retail chains, or electricity grids. EMQX can help you transfer the data from the endpoints in the network to your centralized backend servers on the cloud or on-premises. EMQX supports the MQTT 5.0 Request-Response feature. With this feature, you can increase communication awareness and traceability in your asynchronous communication architecture. In a partitioned or bandwidth-limited network environment, EMQX can bridge data between sites, providing you with a seamless messaging environment. With a powerful built-in SQL-based rule engine, EMQX can extract, filter, enrich, and transform the flowing data in real time. Processed data can be easily ingested into external HTTP servers and MQTT services. If you are using EMQX Enterprise, you can also ingest data into mainstream databases, data storage, and message queues." } ]
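The rule engine mentioned above is driven by SQL-like statements. Below is a minimal sketch of a rule, assuming JSON payloads that carry a temperature field; the topic filter and field names are illustrative rather than taken from the original text:

```
-- Illustrative rule: select over-temperature readings from JSON payloads
-- published under t/#, keeping only the fields a downstream action needs.
SELECT
  clientid,
  payload.temperature AS temp
FROM
  "t/#"
WHERE
  payload.temperature > 40
```

A rule like this is normally paired with an action or Sink (for example the HTTP server or Kafka integrations listed above) that forwards the selected fields to the external system.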
{ "category": "App Definition and Development", "file_name": "data-bridges.html.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "English English Appearance What's New Feature Comparison Get Started Operating Limitations FAQ Technical Support Product Roadmap Deploy Debian Ubuntu CentOS/RHEL macOS Kubernetes Install from Source Code Rolling Upgrade Upgrade EMQX Cluster from 4.4 to 5.1 Upgrade EMQX on Kubernetes MQTT Core Concepts Test with MQTT Clients MQTT Shared Subscription MQTT Retained Message MQTT Will Message Exclusive Subscription Delayed Publish Auto Subscribe Topic Rewrite Wildcard Subscription Connect via C SDK Connect via Java SDK Connect via Go SDK Connect via Python SDK Connect via JavaScript SDK API Docs Architecture Create and Manage Cluster Cluster Security Load Balance EMQX Cluster with NGINX Load Balance EMQX Cluster with HAProxy Performance Tuning (Linux) Performance Test with eMQTT-Bench Performance Test with XMeter Cloud Test Scenarios and Results for Reference Enable SSL/TLS Connection Client TLS Connection Code Examples Obtain SSL/TLS Certificates CRL Check OCSP Stapling X.509 Certificate Authentication JWT Authentication Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service MQTT 5.0 Enhanced Authentication PSK Authentication Use HTTP API to Manage User Data Use ACL File Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service Banned Clients Flapping Detect Create Rules Rule SQL Reference Data Sources and Fields Built-in SQL Functions jq Functions Flow Designer Connector Webhook Apache Kafka Apache IoTDB Apache Pulsar AWS Kinesis AWS S3 Azure Event Hubs Cassandra ClickHouse Confluent DynamoDB Elasticsearch GCP PubSub GreptimeDB HStreamDB HTTP Server InfluxDB Microsoft SQL Server MongoDB MQTT MySQL OpenTSDB Oracle Database PostgreSQL RabbitMQ Redis RocketMQ SysKeeper TDengine TimescaleDB Cluster Listener MQTT Flapping Detection Limiter Logs Prometheus Dashboard Configuration Manual Command Line Interface Dashboard Home Page Connections Topics Subscriptions Retained Messages Authentication Authorization Banned Clients Flows Rules Data Bridge Management Extensions Diagnose System Configure OpenLDAP and Microsoft Entra ID SSO Configure SAML-Based SSO Audit Log Rate Limit Statistics and Metrics Alarm Logs Topic Metrics Slow Subscriptions Log Trace System Topic Integrate with Prometheus Metrics Logs Traces Backup and Restore Hooks Plugins gRPC Hook Extension Telemetry Features and Benefits Use MQTT over QUIC STOMP Gateway MQTT-SN Gateway CoAP Gateway LwM2M Gateway ExProto Gateway MQTT Client Attributes MQTT Programming MQTT Guide Clustering In-flight and Queue Message Retransmission MQTT 5.0 Specification MQTT 3.1.1 Specification MQTT Glossary MQTT 5.0 Features MQTT Reason Code Version 5 Version 4 Version 0.1 to 3.x Incompatible Changes in EMQX 5.7 Incompatible Changes in EMQX 5.6 Incompatible Changes in EMQX 5.5 Incompatible Changes in EMQX 5.4 Incompatible Changes between EMQX 4.4 and EMQX 5.1 Authentication / Authorization Incompatibility Between EMQX 4.4 and EMQX 5.1 Data Integration Incompatibility Between EMQX 5.1 and EMQX 4.4 Gateway Incompatibility Between EMQX 4.4 and EMQX 5.1 MQTT C SDK Example MQTT Java SDK Example MQTT Go SDK Example MQTT Python SDK Example MQTT JavaScript SDK Example Eclipse Paho Python SDK Eclipse Paho Golang SDK Eclipse Paho Java SDK MQTT.js MQTT.js Eclipse Paho C SDK Eclipse Paho C# SDK Mosquitto-PHP MQTT-Client-Framework Eclipse Paho Android SDK" } ]
{ "category": "App Definition and Development", "file_name": "getting-started.html.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "English English Appearance What's New Feature Comparison Get Started Operating Limitations FAQ Technical Support Product Roadmap Deploy Debian Ubuntu CentOS/RHEL macOS Kubernetes Install from Source Code Rolling Upgrade Upgrade EMQX Cluster from 4.4 to 5.1 Upgrade EMQX on Kubernetes MQTT Core Concepts Test with MQTT Clients MQTT Shared Subscription MQTT Retained Message MQTT Will Message Exclusive Subscription Delayed Publish Auto Subscribe Topic Rewrite Wildcard Subscription Connect via C SDK Connect via Java SDK Connect via Go SDK Connect via Python SDK Connect via JavaScript SDK API Docs Architecture Create and Manage Cluster Cluster Security Load Balance EMQX Cluster with NGINX Load Balance EMQX Cluster with HAProxy Performance Tuning (Linux) Performance Test with eMQTT-Bench Performance Test with XMeter Cloud Test Scenarios and Results for Reference Enable SSL/TLS Connection Client TLS Connection Code Examples Obtain SSL/TLS Certificates CRL Check OCSP Stapling X.509 Certificate Authentication JWT Authentication Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service MQTT 5.0 Enhanced Authentication PSK Authentication Use HTTP API to Manage User Data Use ACL File Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service Banned Clients Flapping Detect Create Rules Rule SQL Reference Data Sources and Fields Built-in SQL Functions jq Functions Flow Designer Connector Webhook Apache Kafka Apache IoTDB Apache Pulsar AWS Kinesis AWS S3 Azure Event Hubs Cassandra ClickHouse Confluent DynamoDB Elasticsearch GCP PubSub GreptimeDB HStreamDB HTTP Server InfluxDB Microsoft SQL Server MongoDB MQTT MySQL OpenTSDB Oracle Database PostgreSQL RabbitMQ Redis RocketMQ SysKeeper TDengine TimescaleDB Cluster Listener MQTT Flapping Detection Limiter Logs Prometheus Dashboard Configuration Manual Command Line Interface Dashboard Home Page Connections Topics Subscriptions Retained Messages Authentication Authorization Banned Clients Flows Rules Data Bridge Management Extensions Diagnose System Configure OpenLDAP and Microsoft Entra ID SSO Configure SAML-Based SSO Audit Log Rate Limit Statistics and Metrics Alarm Logs Topic Metrics Slow Subscriptions Log Trace System Topic Integrate with Prometheus Metrics Logs Traces Backup and Restore Hooks Plugins gRPC Hook Extension Telemetry Features and Benefits Use MQTT over QUIC STOMP Gateway MQTT-SN Gateway CoAP Gateway LwM2M Gateway ExProto Gateway MQTT Client Attributes MQTT Programming MQTT Guide Clustering In-flight and Queue Message Retransmission MQTT 5.0 Specification MQTT 3.1.1 Specification MQTT Glossary MQTT 5.0 Features MQTT Reason Code Version 5 Version 4 Version 0.1 to 3.x Incompatible Changes in EMQX 5.7 Incompatible Changes in EMQX 5.6 Incompatible Changes in EMQX 5.5 Incompatible Changes in EMQX 5.4 Incompatible Changes between EMQX 4.4 and EMQX 5.1 Authentication / Authorization Incompatibility Between EMQX 4.4 and EMQX 5.1 Data Integration Incompatibility Between EMQX 5.1 and EMQX 4.4 Gateway Incompatibility Between EMQX 4.4 and EMQX 5.1 Besides working with a single EMQX node, EMQX natively supports a distributed cluster architecture, which can handle a large number of clients and messages while ensuring high availability, fault tolerance, and scalability. 
With the EMQX cluster, you can enjoy the benefits of fault tolerance and high availability by allowing the cluster to continue operating even if one or more nodes fail. This chapter introduces the benefits of clustering, the new Mria and RLOG architecture, how to create a cluster manually or automatically, how to implement load balancing, and how to ensure communication security within a" }, { "data": "EMQX cluster is recommended for larger or mission-critical applications and can bring the users the following benefits. The basic function of a distributed EMQX cluster is to forward and publish messages to different subscribers. In previous versions, EMQX utilizes Erlang/OTP's built-in database, Mnesia, to store MQTT session states. The database replication channel is powered by the \"Erlang distribution\" protocol, enabling each node to function as both a client and server. The default listening port number for this protocol is 4370. However, the full mesh topology imposes a practical limit on the cluster size. For EMQX versions prior to 5, it is recommended to keep the cluster size under 5 nodes. Beyond this, vertical scaling, which involves using more powerful machines, is a preferable option to maintain the cluster's performance and stability. In our benchmark environment, we managed to reach ten million concurrent connections with EMQX Enterprise 4.3. To provide our customers with a better cluster salability performance, EMQX 5.0 adopts a new Mria cluster architecture. With this Mria architecture, one EMQX cluster can support up to 100 million concurrent MQTT connections. To better understand how clustering in EMQX works, you can continue to read the EMQX clustering. EMQX adds an abstraction layer with the Ekka library on top of distributed Erlang, enabling features like auto discovery of EMQX nodes, auto cluster, network partition, autoheal, and autoclean. EMQX supports several node discovery strategies: | Strategy | Description | |:--|:| | manual | Manually create a cluster with commands | | static | Autocluster through static node list | | DNS | Autocluster through DNS A and SRV records | | etcd | Autocluster through etcd | | k8s | Autocluster provided by Kubernetes | Network partition autoheal is a feature of EMQX that allows the broker to recover automatically from network partitions without requiring any manual intervention, suitable for mission-critical applications where downtime is not acceptable. The network partition autoheal (cluster.autoheal) feature is enabled by default. With this feature, EMQX will continuously monitor the connectivity between nodes in the cluster. ``` cluster.autoheal = true``` If a network partition is detected, EMQX will isolate the affected nodes and continue to operate with the remaining nodes. Once the network partition is resolved, the broker will automatically re-integrate the isolated nodes into the cluster. Cluster node autoclean feature will automatically remove the disconnected nodes from the cluster after the configured time interval. This feature helps to ensure that the cluster is running efficiently and prevent performance degradation over time. This feature is enabled by default, you can customize the waiting period before removing the disconnected nodes. Default: 24h ``` cluster.autoclean = 24h``` The session across nodes feature ensures that the client sessions will not be lost even during the client's disconnection. 
To use this feature, the client should connect with a fixed Client ID and request a persistent session (clean session set to false in MQTT 3.1.1, or a session expiry interval greater than 0 in MQTT 5.0). EMQX will then keep the previous session data associated with the Client ID when the client disconnects. If this client reconnects, EMQX will resume the previous session, deliver any messages that were queued during the client's disconnection, and maintain the client's subscriptions. To ensure optimal performance, the network latency for operating EMQX clusters should be less than 10 milliseconds. The cluster will not be available if the latency is higher than 100 ms. The core nodes should be under the same private network. In Mria+RLOG mode, it is also recommended to deploy the replicant nodes in the same private network." } ]
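To make the node discovery strategies above more concrete, here is a minimal sketch of a static-discovery cluster section for emqx.conf; the node names are placeholders, and the exact keys should be checked against the configuration manual for your version:

```
cluster {
  ## all nodes are listed up front; they join each other at boot
  discovery_strategy = static
  static {
    seeds = ["emqx@node1.internal", "emqx@node2.internal"]
  }
}
```

With the manual strategy, a comparable result is achieved by running a join command such as ./bin/emqx ctl cluster join emqx@node1.internal on the other nodes.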
{ "category": "App Definition and Development", "file_name": "introduction.html.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "English English Appearance What's New Feature Comparison Get Started Operating Limitations FAQ Technical Support Product Roadmap Deploy Debian Ubuntu CentOS/RHEL macOS Kubernetes Install from Source Code Rolling Upgrade Upgrade EMQX Cluster from 4.4 to 5.1 Upgrade EMQX on Kubernetes MQTT Core Concepts Test with MQTT Clients MQTT Shared Subscription MQTT Retained Message MQTT Will Message Exclusive Subscription Delayed Publish Auto Subscribe Topic Rewrite Wildcard Subscription Connect via C SDK Connect via Java SDK Connect via Go SDK Connect via Python SDK Connect via JavaScript SDK API Docs Architecture Create and Manage Cluster Cluster Security Load Balance EMQX Cluster with NGINX Load Balance EMQX Cluster with HAProxy Performance Tuning (Linux) Performance Test with eMQTT-Bench Performance Test with XMeter Cloud Test Scenarios and Results for Reference Enable SSL/TLS Connection Client TLS Connection Code Examples Obtain SSL/TLS Certificates CRL Check OCSP Stapling X.509 Certificate Authentication JWT Authentication Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service MQTT 5.0 Enhanced Authentication PSK Authentication Use HTTP API to Manage User Data Use ACL File Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service Banned Clients Flapping Detect Create Rules Rule SQL Reference Data Sources and Fields Built-in SQL Functions jq Functions Flow Designer Connector Webhook Apache Kafka Apache IoTDB Apache Pulsar AWS Kinesis AWS S3 Azure Event Hubs Cassandra ClickHouse Confluent DynamoDB Elasticsearch GCP PubSub GreptimeDB HStreamDB HTTP Server InfluxDB Microsoft SQL Server MongoDB MQTT MySQL OpenTSDB Oracle Database PostgreSQL RabbitMQ Redis RocketMQ SysKeeper TDengine TimescaleDB Cluster Listener MQTT Flapping Detection Limiter Logs Prometheus Dashboard Configuration Manual Command Line Interface Dashboard Home Page Connections Topics Subscriptions Retained Messages Authentication Authorization Banned Clients Flows Rules Data Bridge Management Extensions Diagnose System Configure OpenLDAP and Microsoft Entra ID SSO Configure SAML-Based SSO Audit Log Rate Limit Statistics and Metrics Alarm Logs Topic Metrics Slow Subscriptions Log Trace System Topic Integrate with Prometheus Metrics Logs Traces Backup and Restore Hooks Plugins gRPC Hook Extension Telemetry Features and Benefits Use MQTT over QUIC STOMP Gateway MQTT-SN Gateway CoAP Gateway LwM2M Gateway ExProto Gateway MQTT Client Attributes MQTT Programming MQTT Guide Clustering In-flight and Queue Message Retransmission MQTT 5.0 Specification MQTT 3.1.1 Specification MQTT Glossary MQTT 5.0 Features MQTT Reason Code Version 5 Version 4 Version 0.1 to 3.x Incompatible Changes in EMQX 5.7 Incompatible Changes in EMQX 5.6 Incompatible Changes in EMQX 5.5 Incompatible Changes in EMQX 5.4 Incompatible Changes between EMQX 4.4 and EMQX 5.1 Authentication / Authorization Incompatibility Between EMQX 4.4 and EMQX 5.1 Data Integration Incompatibility Between EMQX 5.1 and EMQX 4.4 Gateway Incompatibility Between EMQX 4.4 and EMQX 5.1 EMQX is the worlds most scalable and reliable MQTT messaging platform that can help you to connect, move and process your business data reliably in real-time. 
With this all-in-one MQTT platform, you can easily build your Internet of Things (IoT) applications with significant business" }, { "data": "This chapter gives you a tour of how to download and install EMQX and how to test the connecting and messaging services with our built-in WebSocket tool. TIP Besides the deployment methods introduced in this quickstart guide, you are also welcome to try our EMQX Cloud, a fully managed MQTT service for IoT. You only need to register for an account before you can start your MQTT services and connect your IoT devices to any cloud with zero need for infrastructure maintenance. EMQX is available in Open Source and Enterprise editions, you may click the link below to download the edition as your business needs. The world's most scalable distributed MQTT broker with a high-performance real-time message processing engine, powering event streaming for IoT devices at a massive scale. Download The worlds leading Cloud-Native IoT Messaging Platform with an all-in-one distributed MQTT broker and SQL-based IoT rule engine. It combines high performance with reliable data transport, processing, and integration for business-critical IoT solutions. Try Free EMQX can be run with Docker, installed with EMQX Kubernetes Operator, or installed on a computer or virtual machine (VM) via a download package. If you choose to install EMQX with a download package, the following operating systems are currently supported: For other platforms not listed above, you can try to build and install with source code or simply contact EMQ for support. In addition, you can also deploy EMQX with one click through EMQX Terraform on the cloud, for example, Alibaba Cloud and AWS. This quick start guide shows you the easiest ways to install and run EMQX, either through Docker or using the installation package. Container deployment is the quickest way to start exploring EMQX. In this section, we will show you how to run EMQX with Docker. To download and start the latest version of EMQX, enter the command below. Ensure Docker is installed and running before you execute this command. ``` docker run -d --name emqx -p 1883:1883 -p 8083:8083 -p 8084:8084 -p 8883:8883 -p 18083:18083 emqx/emqx:latest``` Start your web browser and enter http://localhost:18083/ ( localhost can be substituted with your IP address) in the address bar to access the EMQX Dashboard, from where you can connect to your clients or check the running status. Default user name and password: admin public You can also install EMQX with zip/tar.gz files on a computer or VM, so you can easily adjust the configurations or run performance tuning. The instructions below use macOS (macOS12 amd64) as an example to illustrate the installation steps. Note: Considering all the runtime dependencies, it is recommended to use zip/tar.gz files for testing and hot upgrades, and NOT recommended in a production environment. To download the zip file, enter: ``` wget https://www.emqx.com/en/downloads/broker/5.7.0/emqx-5.7.0-macos12-amd64.zip``` To install EMQX, enter: ``` mkdir -p emqx && unzip emqx-5.7.0-macos12-amd64.zip -d emqx``` To run EMQX, enter: ```" }, { "data": "start``` Start your web browser and enter http://localhost:18083/ ( localhost can be substituted with your IP address) in the address bar to access the EMQX Dashboard, from where you can connect to your clients or check the running status. The default user name and password are admin & public. You will be prompted to change the default password once logged in. 
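Before going further, you can optionally confirm from a terminal that the broker is running. A quick sketch, assuming the default ports and the zip-style install path used above:

```
./emqx/bin/emqx ctl status              # prints the node name and running status
curl -s http://localhost:18083/status   # plain-text health endpoint on the Dashboard port
```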
To stop EMQX, enter: ``` ./emqx/bin/emqx stop``` To uninstall EMQX after your testing, simply delete the EMQX folder. Now that you have successfully started EMQX, you can continue to test the connection and message services with MQTTX. MQTTX is an elegant cross-platform MQTT 5.0 desktop client, running on macOS, Linux, and Windows. By utilizing a chat style of user interface, MQTTX allows users to quickly create connections and save multiple clients, which facilitates users to quickly test the MQTT/MQTTS connection, as well as the subscription and publication of MQTT messages. This section introduces how to verify the connection with MQTTX Web, the browser-based MQTT 5.0 WebSocket client tool, with zero need to download or install any application. Prerequisites The broker address and the port information should be prepared before testing the connection: Click MQTTX Web to visit the browser-based MQTTX. Configure and establish the MQTT connection. Click the + New Connection button to enter the configure page: Name: Input a connection name, for example, MQTTX_Test. Host Port: for example, 8083 is for the WebSockets protocol; Keep the default setting for the other fields or set it as your business needs. For a detailed explanation of different fields, see MQTT User Manual - Connect. Click the Connect button at the top right corner of the page. Test the publish/receive of messages: Click the send icon in the bottom right corner of the chat area, then the messages successfully sent will appear in the chat window above. After the connection is successfully established, you can continue to subscribe to different topics and publish messages. Click + New Subscription. MQTTX Web has already filled in some fields, according to the setting, you will subscribe to topic testtopic/# with QoS level of 0. You can repeat this step to subscribe to different topics, and MQTTX Web will differentiate topics with colors. In the right corner of the chat area at the bottom, click the send icon to test the message publishing/receiving. The messages successfully sent will appear in the chat window. If you want to continue the testing, such as one-way/two-way SSL authentication, and simulate test data with customized scripts, you can continue to explore with MQTTX. On the Cluster Overview page in the EMQX Dashboard, you can check metrics such as Connections, Topics, Subscriptions, Incoming Messages, Outgoing messages, and Dropped Messages. {% emqxee} {% endemqxee %} So far, you have completed the installation, startup, and access test of EMQX, you can continue to try out more advanced capabilities of EMQX, such as authentication and authorization and integration with Rule Engine. If you have any questions on the use of EMQX or EMQ products, you are warmly welcome to contact us for professional support." } ]
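If you prefer to run the same publish/subscribe test from code instead of MQTTX Web, the sketch below uses the Python paho-mqtt package (an assumption; any of the SDKs listed earlier would work) against a local EMQX node on the default TCP port 1883:

```
# Minimal publish/subscribe test, assuming paho-mqtt 1.x and EMQX on localhost:1883.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe("testtopic/#", qos=0)            # same filter as the MQTTX example
    client.publish("testtopic/1", "hello from paho")  # should be echoed straight back

def on_message(client, userdata, msg):
    print("Received:", msg.topic, msg.payload.decode())

client = mqtt.Client(client_id="quickstart-test")
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)
client.loop_forever()
```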
{ "category": "App Definition and Development", "file_name": "faq.html.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "English English Appearance What's New Feature Comparison Get Started Operating Limitations FAQ Technical Support Product Roadmap Deploy Debian Ubuntu CentOS/RHEL macOS Kubernetes Install from Source Code Rolling Upgrade Upgrade EMQX Cluster from 4.4 to 5.1 Upgrade EMQX on Kubernetes MQTT Core Concepts Test with MQTT Clients MQTT Shared Subscription MQTT Retained Message MQTT Will Message Exclusive Subscription Delayed Publish Auto Subscribe Topic Rewrite Wildcard Subscription Connect via C SDK Connect via Java SDK Connect via Go SDK Connect via Python SDK Connect via JavaScript SDK API Docs Architecture Create and Manage Cluster Cluster Security Load Balance EMQX Cluster with NGINX Load Balance EMQX Cluster with HAProxy Performance Tuning (Linux) Performance Test with eMQTT-Bench Performance Test with XMeter Cloud Test Scenarios and Results for Reference Enable SSL/TLS Connection Client TLS Connection Code Examples Obtain SSL/TLS Certificates CRL Check OCSP Stapling X.509 Certificate Authentication JWT Authentication Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service MQTT 5.0 Enhanced Authentication PSK Authentication Use HTTP API to Manage User Data Use ACL File Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service Banned Clients Flapping Detect Create Rules Rule SQL Reference Data Sources and Fields Built-in SQL Functions jq Functions Flow Designer Connector Webhook Apache Kafka Apache IoTDB Apache Pulsar AWS Kinesis AWS S3 Azure Event Hubs Cassandra ClickHouse Confluent DynamoDB Elasticsearch GCP PubSub GreptimeDB HStreamDB HTTP Server InfluxDB Microsoft SQL Server MongoDB MQTT MySQL OpenTSDB Oracle Database PostgreSQL RabbitMQ Redis RocketMQ SysKeeper TDengine TimescaleDB Cluster Listener MQTT Flapping Detection Limiter Logs Prometheus Dashboard Configuration Manual Command Line Interface Dashboard Home Page Connections Topics Subscriptions Retained Messages Authentication Authorization Banned Clients Flows Rules Data Bridge Management Extensions Diagnose System Configure OpenLDAP and Microsoft Entra ID SSO Configure SAML-Based SSO Audit Log Rate Limit Statistics and Metrics Alarm Logs Topic Metrics Slow Subscriptions Log Trace System Topic Integrate with Prometheus Metrics Logs Traces Backup and Restore Hooks Plugins gRPC Hook Extension Telemetry Features and Benefits Use MQTT over QUIC STOMP Gateway MQTT-SN Gateway CoAP Gateway LwM2M Gateway ExProto Gateway MQTT Client Attributes MQTT Programming MQTT Guide Clustering In-flight and Queue Message Retransmission MQTT 5.0 Specification MQTT 3.1.1 Specification MQTT Glossary MQTT 5.0 Features MQTT Reason Code Version 5 Version 4 Version 0.1 to 3.x Incompatible Changes in EMQX 5.7 Incompatible Changes in EMQX 5.6 Incompatible Changes in EMQX 5.5 Incompatible Changes in EMQX 5.4 Incompatible Changes between EMQX 4.4 and EMQX 5.1 Authentication / Authorization Incompatibility Between EMQX 4.4 and EMQX 5.1 Data Integration Incompatibility Between EMQX 5.1 and EMQX 4.4 Gateway Incompatibility Between EMQX 4.4 and EMQX 5.1 EMQX has 3 products in total. The different products support different number of connections, features, services, etc. When a client connects to an EMQX server, the EMQX server can authenticate it in different ways. 
EMQX supports the following 3 approaches: Username and password: A client connection can be established only when passing the correct user name and password (which can be configured at server). ClientID: Every MQTT client will have a unique ClientID. A list of acceptable ClientIDs can be configured for" }, { "data": "Only ClientIDs in this list can be authenticated successfully. Anonymous: Allows anonymous access. Besides using the configuration file (to configure authentication), EMQX can also use database and integration with external applications, such as MySQL, PostgreSQL, Redis, MongoDB, HTTP and LDAP. A mqueue is a message queue that store messages for a session. If the clean session flag is set to false in the MQTT connect packet, then EMQX would maintain the session for the client even when the client has been disconnected from EMQX. Then the session would receive messages from the subscribed topic and store these messages into the sessions mqueue. And when the client is online again, these messages would be delivered to the client instantly. Because of low priority of QOS 0 message in mqtt protocol, EMQX do not save QOS 0 message in the mqueue. However, this behavior can be overridden by setting mqtt.mqueuestoreqos0 = true in emqx.conf. With the mqtt.mqueuestoreqos0 = true, even a QOS 0 message would been saved in the mqueue. The maximum size of the mqueue can be configured with the setting mqtt.maxmqueuelen. Notice that the mqueue is stored in memory, so please do not set the mqueue length to 0 (0 means there is no limit for mqueue), otherwise the system would risk running out of memory. WebSocket is a full-duplex communication protocol with an API supported by modern web browsers. A user can use the WebSocket API to create a dual direction communication channel between a web browser and a server. Through a WebSocket, the server can push messages to the web browser. EMQX provides support for WebSocket. This means that users can publish to MQTT topics and subscribe to MQTT topics from browsers. Shared subscription is an MQTT feature that was introduced in MQTT 5.0 specification. Before the feature was introduced in MQTT 5.0 specification, EMQ 2.x already supported the feature as a non-standard MQTT protocol. In general, all of subscribers will receive ALL messages for the subscribed topics. However, clients that share a subscription to a topic will receive the messages in a round-robin way, so only one of the clients that share a subscription will receive each message. This feature can thus be used for load-balancing. Shared subscription is very useful in data collection and centralized data analysis applications. In such cases, the number of data producers is much larger than consumers, and one message ONLY need to be consumed once. Usually an MQTT client receives messages only when it is connected to an EMQX broker, and it will not receive messages if it is off-line. But if a client has a fixed ClientID, and it connects to the broker with clean_session = false, the broker will store particular messages for it when it is off-line. If the Pub/Sub is done at certain QoS level (broker configuration), these messages will be delivered when this client is reconnected. Off-line messages are useful when the connection is not stable, or the application has special requirements on QoS. 
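To illustrate the shared-subscription behavior described above, here is a small sketch using the Python paho-mqtt package (an assumption, not part of the original FAQ) against a local broker. Run it twice with different client IDs and publish to a sensors/... topic: each message is delivered to only one member of the group at a time.

```
# Shared-subscription worker, assuming paho-mqtt 1.x and EMQX on localhost:1883.
import sys
import paho.mqtt.client as mqtt

client_id = sys.argv[1] if len(sys.argv) > 1 else "shared-worker-1"

def on_message(client, userdata, msg):
    print(client_id, "received:", msg.topic, msg.payload.decode())

client = mqtt.Client(client_id=client_id)
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)
client.subscribe("$share/group1/sensors/#", qos=1)   # group1 is an arbitrary group name
client.loop_forever()
```

For the off-line message behavior, the same client would instead connect with a fixed Client ID and clean_session set to false, as described in the answer above.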
Usually an MQTT client has to subscribe to the topics explicitly by itself, if it wants to receive the messages under these" }, { "data": "Subscription by Broker means that the broker can subscribe to particular topics for a client without client's interaction. The relation of such clients and the topics they should be subscribed to is stored at broker side. Usage of Subscription by Broker can ease the management of massive clients, and save computational resources and bandwidth for devices. The system topics have a prefix of $SYS/. Periodically, EMQX publishes system messages to system topics, these messages include system status, statistics, client's online/offline status and so on. Here are some examples of system topics (for a complete list of system topic please refer to EMQX documentation): For better security, starting from version 4.3, EMQX runs on openssl-1.1. This may cause some troulbes for users running EMQX on some old linux distributions, If starting EMQX with command ./bin/emqx console result in below error messages: ``` FATAL: Unable to start Erlang. Please make sure openssl-1.1.1 (libcrypto) and libncurses are installed.``` Or for emqx version earlier to v4.3.10 and emqx-enterprise version earlier than e4.3.5 ``` \\{applicationstartfailure,kernel,\\{\\{shutdown,\\{failedtostartchild,kernelsafesup,\\{onloadfunctionfailed,crypto\\}\\}\\}, ..\\}``` It indicates that the \"crypto\" application in Erlang/OTP that EMQX depends on failed to start because the required openssl dynamic lib (.so) is not found. Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux. Go to the installation directory of EMQX (If you use the package management tool to install EMQX, you should enter the same level directory as the lib of EMQX) ``` $ cd emqx $ cd /lib/emqx``` Query the list of .so dynamic libraries that crypto depends on and its location in memory: ``` $ ldd lib/crypto-*/priv/lib/crypto.so lib/crypto-4.6/priv/lib/crypto.so: /lib64/libcrypto.so.10: version `OPENSSL_1.1.1' not found (required by lib/crypto-4.6/priv/lib/crypto.so) linux-vdso.so.1 => (0x00007fff67bfc000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fee749ca000) libc.so.6 => /lib64/libc.so.6 (0x00007fee74609000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fee74404000) libz.so.1 => /lib64/libz.so.1 (0x00007fee741ee000) /lib64/ld-linux-x86-64.so.2 (0x00007fee74fe5000)``` Among them, OPENSSL_1.1.1' not found indicates that the .so library of specified OPENSSL version is not installed correctly. Compile and install OPENSSL 1.1.1 from source code, and place its so file to a path recognized by the system: ``` $ wget https://www.openssl.org/source/openssl-1.1.1c.tar.gz $ scp openssl-1.1.1c.tar.gz ct-test-ha:~/ $ tar zxf openssl-1.1.1c.tar.gz $ cd openssl-1.1.1c $ ./config $ make test # Perform test; continue if PASS is output $ make install $ ln -s /usr/local/lib64/libssl.so.1.1 /usr/lib64/libssl.so.1.1 $ ln -s /usr/local/lib64/libcrypto.so.1.1 /usr/lib64/libcrypto.so.1.1``` After the completion, execute ldd lib/crypto-*/priv/lib/crypto.so in the lib-level directory of EMQX to check whether it can be correctly identified. If there is no .so library in not found, you can start EMQX normally. 
Go to the installation directory of EMQX: ``` $ cd emqx $ cd /usr/local/Cellar/emqx/<version>/``` Query the list of .so dynamic libraries that crypto depends on: ``` $ otool -L lib/crypto-*/priv/lib/crypto.so lib/crypto-4.4.2.1/priv/lib/crypto.so: /usr/local/opt/openssl@1.1/lib/libcrypto.1.1.dylib (compatibility version 1.1.0, current version 1.1.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.200.5)``` It shows that OPENSSL has been successfully installed to the specified directory by checking: ``` $ ls /usr/local/opt/openssl@1.1/lib/libcrypto.1.1.dylib ls: /usr/local/opt/openssl@1.1/lib/libcrypto.1.1.dylib: No such file or directory``` If the file does not exist, you need to install the version of OPENSSL corresponding with what printed by otool. For example, it shown here as openssl@1.1: ``` $ brew install openssl@1.1``` After the installation is complete, you can start EMQX" }, { "data": "The client cannot establish an SSL connection with EMQX. You can use the keywords in the EMQX log to perform simple troubleshooting. For the EMQX log related content, please refer to: Log and Trace. certificate_expired The certificate_expired keyword appears in the log, indicating that the certificate has expired, please renew it in time. nosuitablecipher The nosuitablecipher keyword appears in the log, indicating that a suitable cipher suite was not found during the handshake process. The possible reasons are that the certificate type does not match the cipher suite, the cipher suite supported by both the server and the client was not found, and so on. handshake_failure The handshake_failure keyword appears in the log. There are many reasons, which may be analyzed in conjunction with the error reported by the client. For example, the client may find that the connected server address does not match the domain name in the server certificate. unknown_ca The unknown_ca keyword appears in the log, which means that the certificate verification fails. Common reasons are that the intermediate CA certificate is omitted, the Root CA certificate is not specified, or the wrong Root CA certificate is specified. In the two-way authentication, we can judge whether the certificate configuration of the server or the client is wrong according to other information in the log. If there is a problem with the server certificate, the error log is usually: ``` {sslerror,{tlsalert,{unknown_ca,\"TLS server: In state certify received CLIENT ALERT: Fatal - Unknown CA\\n\"}}}``` When you see CLIENT ALERT, you can know that this is a warning message from the client, and the server certificate fails the client's check. If there is a problem with the client certificate, the error log is usually: ``` {sslerror,{tlsalert,{unknownca,\"TLS server: In state certify at sslhandshake.erl:1887 generated SERVER ALERT: Fatal - Unknown CA\\n\"}}}``` When you see SERVER ALERT, you can know that the server finds that the certificate cannot pass the authentication when checking the client certificate, and the client will receive a warning message from the server. protocol_version The protocol_version keyword appears in the log, indicating a mismatch between the TLS protocol versions supported by the client and server. EMQX Broker is free and it can be download at https://www.emqx.com/en/try?product=broker. EMQX Enterprise can be downloaded and evaluated for free. You can download it from https://www.emqx.com/en/try?product=enterprise, and then apply trial license at https://www.emqx.com/en/apply-licenses/emqx. 
Also you can use the EMQX enterprise version through public cloud service. You need to two steps: The license file will be sent by email, find the attached zip file and unzip it. Extract the license file (emqx.lic) from the zip file to a directory readable by the EMQX user. After the extraction is complete, the license needs to be reloaded from the command line to complete the update: ``` emqx ctl license reload [license file path]``` The update commands for different installation modes: ``` ./bin/emqx ctl license reload path/to/emqx.lic emqx ctl license reload path/to/emqx.lic docker exec -it emqx-ee emqx ctl license reload path/to/emqx.lic``` TIP On a multi-node cluster, the emqx ctl license reload command needs to be executed only on one of the nodes, as the license will be replicated and applied to all" }, { "data": "Each one will contain a copy of the new license under the configured data directory for EMQX, as well as a backup of the old license, if any. Note that this command only takes effect on the local node executing the command for EMQX versions prior to e4.3.10, so this command will require being executed on each node of the cluster for those older versions. When your license reaches its expiration date, a warning starts to appear each time the node is started to remind you of the expiration. Depending on your license type, additional restrictions may apply: If you are unsure which type of license you have, please confirm with your account manager. EMQX supports to capture device online and offline events through below 3 approaches, Web Hook Subscribe related $SYS topics Directly save events into database The final approach is only supported in enterprise version, and supported database includes Redis, MySQL, PostgreSQL, MongoDB and Cassandra. User can configure database, client.connected and client.disconnected events in the configuration file. When a device is online or offline, the information will be saved into database. EMQX can constrain clients used topics to realize device access controls. To use this feature, ACL (Access Control List) should be enabled, disable anonymous access and set acl_nomatch to 'deny' (For the convenience of debugging, the last 2 options are enabled by default, and please close them). ACL can be configured in configuration file, or backend databases. Below is one of sample line for ACL control file, the meaning is user 'dashboard' can subscribe '$SYS/#' topic. ACL configuration in backend databases is similar, refer to EMQX document for more detailed configurations. ``` {allow, {user, \"dashboard\"}, subscribe, [\"$SYS/#\"]}.``` Yes. Currently EMQX supports to control connection rate and message publish rate. Please refer to Rate limit. High concurrency and availability are design goals of EMQX. To achieve these goals, several technologies are applied: With the well design and implementation, a single EMQX node can handle 5 millions connections. EMQX supports clustering. The EMQX performance can be scale-out with the increased number of nodes in cluster, and the MQTT service will not be interrupted when a single node is down. The EMQX Enterprise supports data persistence. Please refer to Data Integration. Yes. You can do it by invoking REST API provided by EMQX, but the implementation is different in EMQX 2.x and 3.0+: The EMQX Enterprise integrates a Kafka bridge, it can bridge data to Kafka. Please refer to Sink - Apache Kafka. EMQX supports cluster auto discovery. EMQX clustering can be done manually or automatically. 
Please refer to Create Cluster. EMQX support forward messages to other MQTT broker. Using MQTT bridge, EMQX can forward messages of interested topics to other broker. Please refer to Data Integration. EMQX can forward messages to IoT Hub hosted on public cloud, this is a feature of EMQX bridge. EMQX can receive messages from other broker, but it depends also on the implementation of other brokers, Mosquitto can forward messages to EMQX, please refer to Sink - MQTT. EMQX support the tracing of messages from particular client or under particular topic. You can use the command line tool emqx ctl for" }, { "data": "The example below shows how to trace messages under 'topic' and save the result in 'trace_topic.log'. For more details, please refer to Log Trace. When executing a stress test, besides ensuring the necessary hardware resource, it is also necessary to tune the OS and the Erlang VM to make the maximum use of the resource. The most common tuning is to modify the global limitation of file handles, the user limitation of file handles, the TCP backlog and buffer, the limitation of process number of Erlang VM and so on. You will also need to tune the client machine to ensure it has the ability and resource to handle all the subs and pubs. Different use cases require different tuning. In the EMQX document there is a chapter about tuning the system for general purpose. Please refer to Tune. EMQX Support SSL/TLS. In production, we recommend to terminate the TLS connection by Load Balancer. By this way, the connection between device and server(load balancer) use secured connection, and connection between load balancer and EMQX nodes use general TCP connection. Execute $ emqx console to view the output. logger command is missing ``` $ emqx console Exec: /usr/lib/emqx/erts-10.3.5.1/bin/erlexec -boot /usr/lib/emqx/releases/v3.2.1/emqx -mode embedded -bootvar ERTSLIBDIR /usr/lib/emqx/erts-10.3.5.1/../lib -mnesia dir \"/var/lib/emqx/mnesia/emqx@127.0.0.1\" -config /var/lib/emqx/configs/app.2019.07.23.03.07.32.config -argsfile /var/lib/emqx/configs/vm.2019.07.23.03.07.32.args -vm_args /var/lib/emqx/configs/vm.2019.07.23.03.07.32.args -- console Root: /usr/lib/emqx /usr/lib/emqx /usr/bin/emqx: line 510: logger: command not found``` Solution: Centos/Redhat ``` $ yum install rsyslog``` Ubuntu/Debian ``` $ apt-get install bsdutils``` Missssl is missing ``` $ emqx console Exec: /emqx/erts-10.3/bin/erlexec -boot /emqx/releases/v3.2.1/emqx -mode embedded -bootvar ERTSLIBDIR /emqx/erts-10.3/../lib -mnesia dir \"/emqx/data/mnesia/emqx@127.0.0.1\" -config /emqx/data/configs/app.2019.07.23.03.34.43.config -argsfile /emqx/data/configs/vm.2019.07.23.03.34.43.args -vm_args /emqx/data/configs/vm.2019.07.23.03.34.43.args -- console Root: /emqx /emqx Erlang/OTP 21 [erts-10.3] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:32] [hipe] {\"Kernel pid terminated\",applicationcontroller,\"{applicationstartfailure,kernel,{{shutdown,{failedtostartchild,kernelsafesup,{onloadfunction_failed,crypto}}},{kernel,start,[normal,[]]}}}\"} Kernel pid terminated (applicationcontroller) ({applicationstartfailure,kernel,{{shutdown,{failedtostartchild,kernelsafesup,{onloadfunction_failed,crypto}}},{kernel,start,[normal,[]]}}}) Crash dump is being written to: log/crash.dump...done``` Solution: Install openssl above version 1.1.1 Modify the reuse_sessions = on in the emqx.conf configuration and take effect. 
If the client and the server are successfully connected through SSL, when the client connection is encountered for the second time, the SSL handshake phase is skipped, the connection is directly established to save the connection time and increase the client connection speed. Execute emqx ctl listeners to view the shutdown_count statistics under the corresponding port. Client disconnect link error code list: keepalive_timeoutMQTT keepalive time out closed TCP client disconnected (the FIN sent by the client did not receive the MQTT DISCONNECT) normal MQTT client is normally disconnected einvalEMQX wants to send a message to the client, but the Socket has been disconnected function_clauseMQTT packet format error etimedoutTCP Send timeout (no TCP ACK response received) protounexpectedcRepeatedly received an MQTT connection request when there is already an MQTT connection idle_timeout After the TCP connection is established for 15s, the connect packet has not been received yet. EMQX supports deployment on Linux, MacOS, ARM system, however it is recommended to deploy the product on one of the supported Linux distributions, such as CentOS, Ubuntu and Debian. The following factors will have an impact on EMQX resource consumption, mainly on CPU and memory usage. Number of connections: EMQX creates 2 Erlang process for each MQTT connection, and every Erlang process consumes some resource. The more connections, the more resources are" }, { "data": "Average throughput: Throughput means (pub message number + sub message number) processed by EMQX per second. With higher throughput value, more resource will be used for handling route and message delivery in EMQX. Payload size: With bigger size of payload, more memory and CPU are required for message cache and processing. Number of topics: With more topics, the route table in EMQX will increase, and more resource is required. QoS: With higher message QoS level, more resource will be used for message handling. If client devices connect to EMQX through TLS, more CPU resource is required for encryption and decryption. Our suggested solution is to add a load balancer in front of EMQX nodes, the TLS is offloaded at load balance node, connections between load balancer and backend EMQX nodes use plain TCP connections. Even when the connection number is low, or message rate is low, it still makes sense to deploy a cluster with multiple nodes in production. Clustering improves the availability of system: when a single node goes down, the rest of the nodes in the cluster ensure that the service is not interrupted. EMQX ensures that messages with the same topic from the same client are forwarded in the order they were received, regardless of the QoS level. The message forwarding order remains consistent regardless of message loss or duplication, as per MQTT requirements. However, EMQX does not guarantee the forwarding order of messages from different topics. These messages can be considered as entering separate channels. For example, if messages from topic A arrive at EMQX before messages from topic B, it is possible that messages from topic B will be forwarded earlier. EMQX's debug logs already capture all the behaviors and phenomena. By viewing the debug logs, we can determine when the client initiated the connection, the parameters specified during the connection, the success of rejection of the connection, and the reasons for rejection, among other details. 
However, the extensive information logged in debug mode can consume additional resources and make it challenging to analyze individual clients or topics. To address this, EMQX provides a Log Trace feature. We can specify the clients or topics we want to trace, and EMQX will output all the debug logs related to those clients or topics to the designated log file. This facilitates self-analysis and seeking assistance from the community. It's important to note that if the client cannot establish a connection with EMQX due to network issues, the log tracing feature will not be useful since EMQX does not receive any messages in such cases. This situation often arises from network configuration problems like firewalls or security groups, resulting in closed server ports. This is particularly common when deploying EMQX on cloud instances. Therefore, in addition to log tracing, troubleshooting network-related issues involves checking port occupation, listening status, and network configurations. CENSYS is an internet scanning and reconnaissance tool that performs regular scans of the IPv4 address space to identify default ports for various protocols such as HTTP, SSH, MQTT, and etc. Therefore, if you notice MQTT clients with a client ID of \"CENSYS\" or other unfamiliar clients accessing your MQTT broker, it indicates a relatively lower level of security" }, { "data": "To address this issue effectively, consider implementing the following measures: Yes, it is possible to use shared subscriptions to subscribe to certain system messages, such as client online/offline events, which are published frequently. Shared subscriptions are particularly useful for clients in such cases. For instance, you can subscribe to the following topic using a shared subscription: $share/group1/$SYS/brokers/+/clients/+/connected. According to the MQTT protocol, when a client uses a shared subscription, the server is not allowed to send retained messages to that client. When a shared subscriber's connection is disconnected but the session remains active, the server continues to deliver messages to the subscriber, which are temporarily stored in the session. As a result, other active shared subscribers may appear as if they have not consumed all the messages. In addition, if the shared subscriber chooses to create a new session when reconnecting, the messages cached in the old session will be permanently lost. If you have verified that the aforementioned situation does not occur, yet the issue of message loss persists, you can use the client tracking feature of EMQX to conduct further investigation. By default, EMQX will occupy 7 ports when it starts. They are: The complete WARNING log is as follows: ``` WARNING: Default (insecure) Erlang cookie is in use. WARNING: Configure node.cookie in /usr/lib/emqx/etc/emqx.conf or override from environment variable EMQXNODE_COOKIE WARNING: NOTE: Use the same cookie for all nodes in the cluster.``` Only EMQX nodes using the same cookie can form a cluster. While a cookie does not secure cluster communication, it prevents a node from connecting to a cluster it did not intend to communicate with. By default, EMQX nodes uniformly use the cookie value emqxsecretcookie. However, we recommend that users change the cookie value when building a cluster to enhance security. The second warning log indicates two ways to modify the cookie: by editting node.cookie in the emqx.conf configuration file or by setting the environment variable EMQXNODE_COOKIE. 
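As a concrete illustration of the cookie advice above, the node section of emqx.conf can be set like this on every node (the secret value is a placeholder); in EMQX 5 the same setting can also be supplied through the environment-variable override for node.cookie instead of editing the file:

```
node {
  ## must be identical on every node that should form one cluster
  cookie = "change-me-to-a-long-random-secret"
}
```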
The runtime data of EMQX is stored in the /opt/emqx/data directory, including configuration rules, resources, retained messages, etc. To ensure data persistence during container restarts, it's important to mount the /opt/emqx/data directory to a local host directory or a data volume. However, even if the /opt/emqx/data directory is properly mounted, data loss may still occur after container restarts. This is because the runtime data of EMQX is stored in the /opt/emqx/data/mnesia/${Node Name} directory, and when the container is restarted, the node name of EMQX changes, leading to the creation of a new storage directory. EMQX node name consists of Name and Host, with the Host derived from the container's IP address by default. Under the default network configurations, the container's IP may change upon restarting, so you need to maintain a fixed IP for the container. To address this issue, EMQX provides an environment variable, EMQXHOST, which allows you to set the Host part of the node name. However, it is crucial that this Host value is reachable by other nodes, so it should be used in conjunction with a network alias. Here is an example command for running the EMQX Docker container with the EMQXHOST environment variable and a network alias: ``` docker run -d --name emqx -p 18083:18083 -p 1883:1883 -e EMQX_HOST=alias-for-emqx --network example --network-alias alias-for-emqx --mount type=bind,source=/tmp/emqx,target=/opt/emqx/data emqx:5.0.24```" } ]
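Note that the docker run example above attaches the container to a user-defined network named example; if that network does not exist yet, create it first:

```
docker network create example
```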
{ "category": "App Definition and Development", "file_name": "install.html.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "English English Appearance What's New Feature Comparison Get Started Operating Limitations FAQ Technical Support Product Roadmap Deploy Debian Ubuntu CentOS/RHEL macOS Kubernetes Install from Source Code Rolling Upgrade Upgrade EMQX Cluster from 4.4 to 5.1 Upgrade EMQX on Kubernetes MQTT Core Concepts Test with MQTT Clients MQTT Shared Subscription MQTT Retained Message MQTT Will Message Exclusive Subscription Delayed Publish Auto Subscribe Topic Rewrite Wildcard Subscription Connect via C SDK Connect via Java SDK Connect via Go SDK Connect via Python SDK Connect via JavaScript SDK API Docs Architecture Create and Manage Cluster Cluster Security Load Balance EMQX Cluster with NGINX Load Balance EMQX Cluster with HAProxy Performance Tuning (Linux) Performance Test with eMQTT-Bench Performance Test with XMeter Cloud Test Scenarios and Results for Reference Enable SSL/TLS Connection Client TLS Connection Code Examples Obtain SSL/TLS Certificates CRL Check OCSP Stapling X.509 Certificate Authentication JWT Authentication Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service MQTT 5.0 Enhanced Authentication PSK Authentication Use HTTP API to Manage User Data Use ACL File Use Built-in Database Integrate with MySQL Integrate with MongoDB Integrate with PostgreSQL Integrate with Redis Integrate with LDAP Use HTTP Service Banned Clients Flapping Detect Create Rules Rule SQL Reference Data Sources and Fields Built-in SQL Functions jq Functions Flow Designer Connector Webhook Apache Kafka Apache IoTDB Apache Pulsar AWS Kinesis AWS S3 Azure Event Hubs Cassandra ClickHouse Confluent DynamoDB Elasticsearch GCP PubSub GreptimeDB HStreamDB HTTP Server InfluxDB Microsoft SQL Server MongoDB MQTT MySQL OpenTSDB Oracle Database PostgreSQL RabbitMQ Redis RocketMQ SysKeeper TDengine TimescaleDB Cluster Listener MQTT Flapping Detection Limiter Logs Prometheus Dashboard Configuration Manual Command Line Interface Dashboard Home Page Connections Topics Subscriptions Retained Messages Authentication Authorization Banned Clients Flows Rules Data Bridge Management Extensions Diagnose System Configure OpenLDAP and Microsoft Entra ID SSO Configure SAML-Based SSO Audit Log Rate Limit Statistics and Metrics Alarm Logs Topic Metrics Slow Subscriptions Log Trace System Topic Integrate with Prometheus Metrics Logs Traces Backup and Restore Hooks Plugins gRPC Hook Extension Telemetry Features and Benefits Use MQTT over QUIC STOMP Gateway MQTT-SN Gateway CoAP Gateway LwM2M Gateway ExProto Gateway MQTT Client Attributes MQTT Programming MQTT Guide Clustering In-flight and Queue Message Retransmission MQTT 5.0 Specification MQTT 3.1.1 Specification MQTT Glossary MQTT 5.0 Features MQTT Reason Code Version 5 Version 4 Version 0.1 to 3.x Incompatible Changes in EMQX 5.7 Incompatible Changes in EMQX 5.6 Incompatible Changes in EMQX 5.5 Incompatible Changes in EMQX 5.4 Incompatible Changes between EMQX 4.4 and EMQX 5.1 Authentication / Authorization Incompatibility Between EMQX" }, { "data": "and EMQX 5.1 Data Integration Incompatibility Between EMQX 5.1 and EMQX 4.4 Gateway Incompatibility Between EMQX 4.4 and EMQX 5.1 This chapter walks you through the basic installation steps for EMQX, the minimum hardware specification, and the file and directory locations to facilitate future configuration and maintenance jobs. This chapter also covers how to migrate from EMQX 4.4 to EMQX 5.1. 
The Erlang VM powering EMQX relies on system locale settings to enable Unicode support for various functionalities, including filenames and terminal IO in interactive Erlang shells. If you use the Linux operating system, it is recommended to make sure that UTF-8 locale is enabled in the system environment before starting EMQX. Click the tabs to see how to enable the UTF-8 locale on different platforms: Enable the UTF-8 locale with cloud-init configuration: ``` cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/99_locale.cfg locale: C.utf8 EOF``` It is usually enabled by localectl under systemd: ``` sudo localectl set-locale LANG=C.UTF-8``` Enable the UTF-8 locale in two ways: It is usually enabled by localectl under systemd: ``` sudo localectl set-locale LANG=C.UTF-8``` Otherwise, it can be enabled with update-locale. ``` sudo update-locale LANG=C.UTF-8``` Enable the UTF-8 locale with update-locale: ``` sudo update-locale LANG=C.UTF-8``` EMQX releases the installation packages for different operating systems or platforms in each release. You may click the links below to download. EMQX website: https://www.emqx.io/downloads You can also download the alpha, beta, or rc versions from our GitHub pages. {% endemqxce %} TIP Besides the above deployment methods, you are also welcome to try our EMQX Cloud, a fully managed MQTT service for IoT. You only need to register for an account before starting your MQTT services and connecting your IoT devices to any cloud with zero need for infrastructure maintenance. The table below lists the operating systems and versions that EMQX supports. | Operating system | Versions supported | x86_64/amd64 | arm64 | |:-|:-|:|:--| | Ubuntu | Ubuntu 18.04Ubuntu 20.04Ubuntu 22.04 | Yes | Yes | | Debian | Debian 10Debian 11Debian 12 | Yes | Yes | | CentOS/RHEL | CentOS 7Rocky Linux 8Rocky Linux 9 | Yes | Yes | | Amazon Linux | Amazon Linux 2Amazon Linux 2023 | Yes | Yes | | macOS | macOS 11macOS 12macOS 13 (Homebrew) | Yes | Yes | After installation, EMQX creates some directories to store running and configuration files, data, and logs. The table below lists the directories created and their file path under different installation methods: | Directory | Description | Installed with tar.gz | Installed with RPM/DEB | |:|:--|:|:-| | etc | Static config files |" }, { "data": "| /etc/emqx | | data | Database and config | ./data | /var/lib/emqx | | log | Log files | ./log | /var/log/emqx | | releases | Boot instructions | ./releases | /usr/lib/emqx/releases | | bin | Executables | ./bin | /usr/lib/emqx/bin | | lib | Erlang code | ./lib | /usr/lib/emqx/lib | | erts- | Erlang runtime | ./erts- | /usr/lib/emqx/erts-* | | plugins | Plugins | ./plugins | /usr/lib/emqx/plugins | TIP The table below introduces the files and subfolders of some directories. | Directory | Description | Permissions | Files | |:|:--|:--|:| | bin | Executables | Read | emqx and emqx.cmd: Executables of EMQX. For details, see Command Line Interface. | | etc | Configuration files | Read | emqx.conf: Main configuration file for EMQX, contains all the commonly-used configuration items.emqx-example-en.conf: Demo configuration files of EMQX, contains all the configurable items.acl.conf: Default ACl rules.vm.args: Operating parameters of the Erlang virtual machine.certs/: X.509 keys and certificate files for EMQX SSL listeners, may also be used in the SSL/TLS connection when integrating with external systems. | | data | Operating data | Write | authz: Stores file authorization rules uploaded by REST API or Dashboard. 
For details, see Authorization - File. certs: Stores certificate files uploaded by REST API or Dashboard.configs: Stores configuration files generated at boot, or configuration overrides by changes from API or CLI.mnesia: Built-in database to store EMQX operating data, including alarm records, authentication and authorization data of the clients, Dashboard user information, etc. If the directory is deleted, all these operating data will be lost. May contain subdirectories named after different node, e.g., emqx@127.0.0.1. Note: In case of node renaming, you should also delete or remove the corresponding subdirectory. Can use command emqx ctl mnesia to query the built-in database. For details, see Management Command CLI.patches: Stores the .beam files for EMQX to load as a hot patch. Can be used for a quick fix.trace: Online tracing log files.In production, it is recommended to periodically backup the data directory (excluding the trace folder ) for data safety. | | log | Operating logs | Read | emqx.log.*: Operation logs of EMQX, for more information, see logs. | TIP EMQX stores the configuration information in the data/configs and the etc directory. The etc directory stores read-only configuration files, while configuration updates from the Dashboard or REST API are saved in the data/configs directory to support hot configuration reloads at runtime. EMQX reads the configuration items from these files and converts them to the Erlang native configuration file format, to apply the configurations at runtime." } ]
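A quick way to confirm that a freshly installed EMQX broker is accepting connections is to publish and subscribe with any MQTT client. The sketch below assumes the broker is listening on localhost:1883 with its default settings and uses the paho-mqtt 1.x Python library, which is not part of the EMQX distribution.

```
# Minimal smoke test against a fresh EMQX install, assuming the broker
# listens on localhost:1883 with default settings and the paho-mqtt 1.x
# client library is installed (pip install "paho-mqtt<2").
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # rc == 0 means the broker accepted the connection.
    print("connected, result code:", rc)
    client.subscribe("t/hello")
    client.publish("t/hello", "hello from a fresh EMQX install")

def on_message(client, userdata, msg):
    print("received:", msg.topic, msg.payload.decode())
    client.disconnect()  # stop the loop once the round trip succeeds

client = mqtt.Client(client_id="install-check")
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)
client.loop_forever()
```

If the script prints the message back, the listener, authentication defaults, and topic routing of the new installation are working.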
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "v2.3.0 Flink ML is a library which provides machine learning (ML) APIs and infrastructures that simplify the building of ML pipelines. Users can implement ML algorithms with the standard ML APIs and further use these infrastructures to build ML pipelines for both training and inference jobs. If youre interested in playing around with Flink ML, check out our quick start. It provides a simple example to submit and execute a Flink ML job on a Flink cluster. If you get stuck, check out our community support resources. In particular, Apache Flinks user mailing list is consistently ranked as one of the most active of any Apache project, and is a great way to get help quickly. Flink ML is developed under the umbrella of Apache Flink." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Fluvio", "subcategory": "Streaming & Messaging" }
[ { "data": "Fluvio is an open-source data streaming platform that aggregates, correlates, and apply programmable intelligence to data in motion. Written in Rust, Fluvio provides low-latency, high-performance programmable streaming on cloud-native architecture. SmartModules, powered by WebAssembly, provide users the ability to perform inline computations on streaming data without leaving the cluster. Learn more about SmartModules Fluvio provides client libraries for several popular programming languages. Learn more about Fluvios client libraries This guide outlines using the Fluvio CLI for streaming data on your local machine or self-hosted environment. We will cover: Download and install the CLI by running: ``` $ curl -fsS https://hub.infinyon.cloud/install/install.sh | bash ``` fvm will be installed at ~/.fvm/bin, and will install fluvio and the rest of the development tools at ~/.fluvio/bin. You will need to add these directories to your shells PATH environment variable. ``` $ echo 'export PATH=\"${HOME}/.fvm/bin:${HOME}/.fluvio/bin:${PATH}\"' >> ~/.bashrc ``` ``` $ echo 'export PATH=\"${HOME}/.fvm/bin:${HOME}/.fluvio/bin:${PATH}\"' >> ~/.zshrc ``` ``` $ fishaddpath ~/.fluvio/bin ``` Example output ``` $ curl -fsS https://hub.infinyon.cloud/install/install.sh | bash Downloading fluvio version manager, fvm target arch aarch64-apple-darwin Installing fvm done: FVM installed successfully at /Users/telant/.fvm help: Add FVM to PATH using source $HOME/.fvm/env fluvio install dir /Users/$USER/.fluvio If version of fluvio is already installed, you can run 'fvm install' or 'fvm switch' to change versions Installing fluvio info: Downloading (1/5): fluvio@0.11.0 info: Downloading (2/5): fluvio-cloud@0.2.15 info: Downloading (3/5): fluvio-run@0.11.0 info: Downloading (4/5): cdk@0.11.0 info: Downloading (5/5): smdk@0.11.0 done: Installed fluvio version 0.11.0 done: Now using fluvio version 0.11.0 Install complete! fluvio: You'll need to add '~/.fvm/bin' and ~/.fluvio/bin/' to your PATH variable fluvio: You can run the following to set your PATH on shell startup: fluvio: echo 'export PATH=\"${HOME}/.fvm/bin:${HOME}/.fluvio/bin:${PATH}\"' >> ~/.zshrc ``` ``` $ fluvio cluster start ``` Topics store and send data streams. Create one with: ``` $ fluvio topic create quickstart-topic ``` Send data to your topic with: ``` $ fluvio produce quickstart-topic hello world! Ok! ``` Exit the prompt with Ctrl+C. You can also send files or stream output: ``` fluvio produce -f ./path/to/file.txt echo \"hello world!\" | fluvio produce quickstart-topic ``` Read data from your topic with: ``` $ fluvio consume quickstart-topic -B -d ``` This will display data sent to the topic. SmartModules are user-defined functions that process data streams. Use the SmartModules Development Kit (smdk) binary to create your SmartModule project: ``` smdk generate cd quickstart ``` A filter SmartModule might look like this: ``` use fluvio_smartmodule::{smartmodule, Result, Record}; pub fn filter(record: &Record) -> Result<bool> { let string = std::str::fromutf8(record.value.asref())?; Ok(string.contains('a')) } ``` Build and test it with: ``` smdk build smdk test --text \"cats\" ``` Load it onto your cluster with: ``` smdk load ``` Verify its loaded: ``` fluvio smartmodule list ``` Create a topic and use SmartModules with producers and consumers: ``` fluvio topic create fruits fluvio consume fruits --smartmodule=quickstart fluvio produce fruits ``` Only records matching the SmartModules filter will be shown." } ]
{ "category": "App Definition and Development", "file_name": "flink-ml-docs-master.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "v1.19.0 Welcome to Flink! :) Flink is designed to process continuous streams of data at a lightning fast pace. This short guide will show you how to download the latest stable version of Flink, install, and run it. You will also run an example Flink job and view it in the web UI. Flink runs on all UNIX-like environments, i.e. Linux, Mac OS X, and Cygwin (for Windows). You need to have Java 11 installed. To check the Java version installed, type in your terminal: ``` $ java -version ``` Next, download the latest binary release of Flink, then extract the archive: ``` $ tar -xzf flink-*.tgz ``` Navigate to the extracted directory and list the contents by issuing: ``` $ cd flink-* && ls -l ``` You should see something like: For now, you may want to note that: To start a local cluster, run the bash script that comes with Flink: ``` $ ./bin/start-cluster.sh ``` You should see an output like this: Flink is now running as a background process. You can check its status with the following command: ``` $ ps aux | grep flink ``` You should be able to navigate to the web UI at localhost:8081 to view the Flink dashboard and see that the cluster is up and running. To quickly stop the cluster and all running components, you can use the provided script: ``` $ ./bin/stop-cluster.sh ``` Flink provides a CLI tool, bin/flink, that can run programs packaged as Java ARchives (JAR) and control their execution. Submitting a job means uploading the jobs JAR le and related dependencies to the running Flink cluster and executing it. Flink releases come with example jobs, which you can nd in the examples/ folder. To deploy the example word count job to the running cluster, issue the following command: ``` $ ./bin/flink run examples/streaming/WordCount.jar ``` You can verify the output by viewing the logs: ``` $ tail log/flink--taskexecutor-.out ``` Sample output: ``` (nymph,1) (in,3) (thy,1) (orisons,1) (be,4) (all,2) (my,1) (sins,1) (remember,1) (d,4) ``` Additionally, you can check Flinks web UI to monitor the status of the cluster and running job. You can view the data flow plan for the execution: Here for the job execution, Flink has two operators. The rst is the source operator which reads data from the collection source. The second operator is the transformation operator which aggregates counts of words. Learn more about DataStream operators. You can also look at the timeline of the job execution: You have successfully ran a Flink application! Feel free to select any other JAR archive from the examples/ folder or deploy your own job! In this guide, you downloaded Flink, explored the project directory, started and stopped a local cluster, and submitted a sample Flink job! To learn more about Flink fundamentals, check out the concepts section. If you want to try something more hands-on, try one of the tutorials." } ]
{ "category": "App Definition and Development", "file_name": "flink-cdc-docs-master.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "v3.2.0 Stateful Functions is an API that simplifies the building of distributed stateful applications with a runtime built for serverless architectures. It brings together the benefits of stateful stream processing - the processing of large datasets with low latency and bounded resource constraints - along with a runtime for modeling stateful entities that supports location transparency, concurrency, scaling, and resiliency. It is designed to work with modern architectures, like cloud-native deployments and popular event-driven FaaS platforms like AWS Lambda and KNative, and to provide out-of-the-box consistent state and messaging while preserving the serverless experience and elasticity of these platforms. If youre interested in playing around with Stateful Functions, check out our code playground. It provides a step by step introduction to the APIs and guides you through real applications. If you get stuck, check out our community support resources. In particular, Apache Flinks user mailing list is consistently ranked as one of the most active of any Apache project, and is a great way to get help quickly. The reference documentation covers all the details. Some starting points: Before putting your Stateful Functions application into production, read the deployment overview for details on how to successfully run and manage your system. Stateful Functions is developed under the umbrella of Apache Flink" } ]
{ "category": "App Definition and Development", "file_name": "flink-cdc-docs-stable.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "v1.19.0 This training presents an introduction to Apache Flink that includes just enough to get you started writing scalable streaming ETL, analytics, and event-driven applications, while leaving out a lot of (ultimately important) details. The focus is on providing straightforward introductions to Flinks APIs for managing state and time, with the expectation that having mastered these fundamentals, youll be much better equipped to pick up the rest of what you need to know from the more detailed reference documentation. The links at the end of each section will lead you to where you can learn more. Specifically, you will learn: This training focuses on four critical concepts: continuous processing of streaming data, event time, stateful stream processing, and state snapshots. This page introduces these concepts. Streams are datas natural habitat. Whether it is events from web servers, trades from a stock exchange, or sensor readings from a machine on a factory floor, data is created as part of a stream. But when you analyze data, you can either organize your processing around bounded or unbounded streams, and which of these paradigms you choose has profound consequences. Batch processing is the paradigm at work when you process a bounded data stream. In this mode of operation you can choose to ingest the entire dataset before producing any results, which means that it is possible, for example, to sort the data, compute global statistics, or produce a final report that summarizes all of the input. Stream processing, on the other hand, involves unbounded data streams. Conceptually, at least, the input may never end, and so you are forced to continuously process the data as it arrives. In Flink, applications are composed of streaming dataflows that may be transformed by user-defined operators. These dataflows form directed graphs that start with one or more sources, and end in one or more sinks. Often there is a one-to-one correspondence between the transformations in the program and the operators in the dataflow. Sometimes, however, one transformation may consist of multiple operators. An application may consume real-time data from streaming sources such as message queues or distributed logs, like Apache Kafka or Kinesis. But flink can also consume bounded, historic data from a variety of data sources. Similarly, the streams of results being produced by a Flink application can be sent to a wide variety of systems that can be connected as sinks. Programs in Flink are inherently parallel and distributed. During execution, a stream has one or more stream partitions, and each operator has one or more operator subtasks. The operator subtasks are independent of one another, and execute in different threads and possibly on different machines or containers. The number of operator subtasks is the parallelism of that particular operator. Different operators of the same program may have different levels of parallelism. Streams can transport data between two operators in a one-to-one (or forwarding) pattern, or in a redistributing pattern: One-to-one streams (for example between the Source and the map() operators in the figure above) preserve the partitioning and ordering of the elements. That means that subtask[1] of the map() operator will see the same elements in the same order as they were produced by subtask[1] of the Source" }, { "data": "Redistributing streams (as between map() and keyBy/window above, as well as between keyBy/window and Sink) change the partitioning of streams. 
Each operator subtask sends data to different target subtasks, depending on the selected transformation. Examples are keyBy() (which re-partitions by hashing the key), broadcast(), or rebalance() (which re-partitions randomly). In a redistributing exchange the ordering among the elements is only preserved within each pair of sending and receiving subtasks (for example, subtask[1] of map() and subtask[2] of keyBy/window). So, for example, the redistribution between the keyBy/window and the Sink operators shown above introduces non-determinism regarding the order in which the aggregated results for different keys arrive at the Sink. For most streaming applications it is very valuable to be able re-process historic data with the same code that is used to process live data and to produce deterministic, consistent results, regardless. It can also be crucial to pay attention to the order in which events occurred, rather than the order in which they are delivered for processing, and to be able to reason about when a set of events is (or should be) complete. For example, consider the set of events involved in an e-commerce transaction, or financial trade. These requirements for timely stream processing can be met by using event time timestamps that are recorded in the data stream, rather than using the clocks of the machines processing the data. Flinks operations can be stateful. This means that how one event is handled can depend on the accumulated effect of all the events that came before it. State may be used for something simple, such as counting events per minute to display on a dashboard, or for something more complex, such as computing features for a fraud detection model. A Flink application is run in parallel on a distributed cluster. The various parallel instances of a given operator will execute independently, in separate threads, and in general will be running on different machines. The set of parallel instances of a stateful operator is effectively a sharded key-value store. Each parallel instance is responsible for handling events for a specific group of keys, and the state for those keys is kept locally. The diagram below shows a job running with a parallelism of two across the first three operators in the job graph, terminating in a sink that has a parallelism of one. The third operator is stateful, and you can see that a fully-connected network shuffle is occurring between the second and third operators. This is being done to partition the stream by some key, so that all of the events that need to be processed together, will be. State is always accessed locally, which helps Flink applications achieve high throughput and low-latency. You can choose to keep state on the JVM heap, or if it is too large, in efficiently organized on-disk data structures. Flink is able to provide fault-tolerant, exactly-once semantics through a combination of state snapshots and stream replay. These snapshots capture the entire state of the distributed pipeline, recording offsets into the input queues as well as the state throughout the job graph that has resulted from having ingested the data up to that point. When a failure occurs, the sources are rewound, the state is restored, and processing is resumed. As depicted above, these state snapshots are captured asynchronously, without impeding the ongoing processing." } ]
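As the text notes, keyBy() hash-partitions the key space so that each parallel subtask of a stateful operator owns the state for a disjoint set of keys. The plain-Python sketch below only illustrates that routing idea; Flink's actual key-group hashing and state backends work differently.

```
# Illustrative sketch of how a keyBy-style exchange routes records to
# parallel stateful subtasks; Flink's real key-group hashing differs.
from collections import defaultdict

PARALLELISM = 2

# One local state dict per parallel subtask of the stateful operator.
subtask_state = [defaultdict(int) for _ in range(PARALLELISM)]

def route(key):
    # Hash-partition the key space across subtasks (stand-in for keyBy).
    return hash(key) % PARALLELISM

events = [("user-a", 1), ("user-b", 1), ("user-a", 1), ("user-c", 1)]

for key, value in events:
    i = route(key)
    # State for a given key lives on exactly one subtask, so it can be
    # read and updated locally without coordination.
    subtask_state[i][key] += value

for i, state in enumerate(subtask_state):
    print(f"subtask[{i}] state: {dict(state)}")
```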
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Dataflow is a Google Cloud service that provides unified stream and batch data processing at scale. Use Dataflow to create data pipelines that read from one or more sources, transform the data, and write the data to a destination. Typical use cases for Dataflow include the following: Dataflow uses the same programming model for both batch and stream analytics. Streaming pipelines can achieve very low latency. You can ingest, process, and analyze fluctuating volumes of real-time data. By default, Dataflow guarantees exactly-once processing of every record. For streaming pipelines that can tolerate duplicates, you can often reduce cost and improve latency by enabling at-least-once mode. This section describes some of the advantages of using Dataflow. Dataflow is a fully managed service. That means Google manages all of the resources needed to run Dataflow. When you run a Dataflow job, the Dataflow service allocates a pool of worker VMs to execute the pipeline. You don't need to provision or manage these VMs. When the job completes or is cancelled, Dataflow automatically deletes the VMs. You're billed for the compute resources that your job uses. For more information about costs, see Dataflow pricing. Dataflow is designed to support batch and streaming pipelines at large scale. Data is processed in parallel, so the work is distributed across multiple VMs. Dataflow can autoscale by provisioning extra worker VMs, or by shutting down some worker VMs if fewer are needed. It also optimizes the work, based on the characteristics of the pipeline. For example, Dataflow can dynamically rebalance work among the VMs, so that parallel work completes more efficiently. Dataflow is built on the open source Apache Beam project. Apache Beam lets you write pipelines using a language-specific SDK. Apache Beam supports Java, Python, and Go SDKs, as well as multi-language pipelines. Dataflow executes Apache Beam pipelines. If you decide later to run your pipeline on a different platform, such as Apache Flink or Apache Spark, you can do so without rewriting the pipeline code. You can use Dataflow for relatively simple pipelines, such as moving data. However, it's also suitable for more advanced applications, such as real-time streaming analytics. A solution built on Dataflow can grow with your needs as you move from batch to streaming or encounter more advanced use" }, { "data": "Dataflow supports several different ways to create and execute pipelines, depending on your needs: Write code using the Apache Beam SDKs. Deploy a Dataflow template. Templates let you run predefined pipelines. For example, a developer can create a template, and then a data scientist can deploy it on demand. Google also provides a library of templates for common scenarios. You can deploy these templates without knowing any Apache Beam programming concepts. Use JupyterLab notebooks to develop and run pipelines iteratively. You can monitor the status of your Dataflow jobs through the Dataflow monitoring interface in the Google Cloud console. The monitoring interface includes a graphical representation of your pipeline, showing the progress and execution details of each pipeline stage. The monitoring interface makes it easier to spot problems such as bottlenecks or high latency. You can also profile your Dataflow jobs to monitor CPU usage and memory allocation. Dataflow uses a data pipeline model, where data moves through a series of stages. 
Stages can include reading data from a source, transforming and aggregating the data, and writing the results to a destination. Pipelines can range from very simple to more complex processing. For example, a pipeline might do the following: A pipeline that is defined in Apache Beam does not specify how the pipeline is executed. Running the pipeline is the job of a runner. The purpose of a runner is to run an Apache Beam pipeline on a specific platform. Apache Beam supports multiple runners, including a Dataflow runner. To use Dataflow with your Apache Beam pipelines, specify the Dataflow runner. The runner uploads your executable code and dependencies to a Cloud Storage bucket and creates a Dataflow job. Dataflow then allocates a pool of VMs to execute the pipeline. The following diagram shows a typical ETL and BI solution using Dataflow and other Google Cloud services: This diagram shows the following stages: For basic data movement scenarios, you might run a Google-provided template. Some templates support user-defined functions (UDFs) written in JavaScript. UDFs let you add custom processing logic to a template. For more complex pipelines, start with the Apache Beam SDK. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
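To make the runner relationship concrete, the sketch below builds a small Apache Beam pipeline in Python and selects the Dataflow runner through pipeline options. The project, region, bucket, and file paths are placeholders, and the apache-beam[gcp] package is assumed to be installed.

```
# Minimal Apache Beam pipeline submitted to the Dataflow runner.
# Project, region, bucket and file paths are placeholders; assumes the
# apache-beam[gcp] package is installed and credentials are configured.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project-id",             # placeholder
    region="us-central1",                # placeholder
    temp_location="gs://my-bucket/tmp",  # placeholder staging/temp bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.txt")
        | "Tokenize" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda word, n: f"{word},{n}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/counts")
    )
```

The same pipeline code runs unchanged on other Beam runners; only the options select Dataflow as the execution platform.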
{ "category": "App Definition and Development", "file_name": "how-to.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "``` with pipeline as p: predictions = ( p | beam.ReadFromSource('a_source') | RunInference(MODEL_HANDLER)) ``` ``` with beam.Pipeline() as p: transformed_data = ( p | beam.Create(data) | MLTransform(...) | beam.Map(print)) ``` Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-05-23 UTC." } ]
{ "category": "App Definition and Development", "file_name": "gpu.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Dataflow templates allow you to package a Dataflow pipeline for deployment. Anyone with the correct permissions can then use the template to deploy the packaged pipeline. You can create your own custom Dataflow templates, and Google provides pre-built templates for common scenarios. Templates have several advantages over directly deploying a pipeline to Dataflow: Google provides a variety of pre-built, open source Dataflow templates that you can use for common scenarios. For more information about the available templates, see Google-provided templates. Dataflow supports two types of template: Flex templates, which are newer, and classic templates. If you are creating a new Dataflow template, we recommend creating it as a Flex template. With a Flex template, the pipeline is packaged as a Docker image in Artifact Registry, along with a template specification file in Cloud Storage. The template specification contains a pointer to the Docker image. When you run the template, the Dataflow service starts a launcher VM, pulls the Docker image, and runs the pipeline. The execution graph is dynamically built based on runtime parameters provided by the user. To use the API to launch a job that uses a Flex template, use the projects.locations.flexTemplates.launch method. A classic template contains the JSON serialization of a Dataflow job graph. The code for the pipeline must wrap any runtime parameters in the ValueProvider interface. This interface allows users to specify parameter values when they deploy the template. To use the API to work with classic templates, see the projects.locations.templates API reference documentation. Flex templates have the following advantages over classic templates: Using Dataflow templates involves the following high-level steps: Dataflow jobs, including jobs run from templates, use two IAM service accounts: Ensure that these two service accounts have appropriate roles. For more information, see Dataflow security and permissions. To create your own templates, make sure your Apache Beam SDK version supports template creation. To create templates with the Apache Beam SDK 2.x for Java, you must have version 2.0.0-beta3 or higher. To create templates with the Apache Beam SDK 2.x for Python, you must have version 2.0.0 or higher. To run templates with Google Cloud CLI, you must have Google Cloud CLI version 138.0.0 or higher. You can build your own templates by extending the open source Dataflow templates. For example, for a template that uses a fixed window duration, data that arrives outside of the window might be discarded. To avoid this behavior, use the template code as a base, and modify the code to invoke the .withAllowedLateness operation. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
{ "category": "App Definition and Development", "file_name": "dataflow-sql-ui-walkthrough.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Dataflow is a Google Cloud service that provides unified stream and batch data processing at scale. Use Dataflow to create data pipelines that read from one or more sources, transform the data, and write the data to a destination. Typical use cases for Dataflow include the following: Dataflow uses the same programming model for both batch and stream analytics. Streaming pipelines can achieve very low latency. You can ingest, process, and analyze fluctuating volumes of real-time data. By default, Dataflow guarantees exactly-once processing of every record. For streaming pipelines that can tolerate duplicates, you can often reduce cost and improve latency by enabling at-least-once mode. This section describes some of the advantages of using Dataflow. Dataflow is a fully managed service. That means Google manages all of the resources needed to run Dataflow. When you run a Dataflow job, the Dataflow service allocates a pool of worker VMs to execute the pipeline. You don't need to provision or manage these VMs. When the job completes or is cancelled, Dataflow automatically deletes the VMs. You're billed for the compute resources that your job uses. For more information about costs, see Dataflow pricing. Dataflow is designed to support batch and streaming pipelines at large scale. Data is processed in parallel, so the work is distributed across multiple VMs. Dataflow can autoscale by provisioning extra worker VMs, or by shutting down some worker VMs if fewer are needed. It also optimizes the work, based on the characteristics of the pipeline. For example, Dataflow can dynamically rebalance work among the VMs, so that parallel work completes more efficiently. Dataflow is built on the open source Apache Beam project. Apache Beam lets you write pipelines using a language-specific SDK. Apache Beam supports Java, Python, and Go SDKs, as well as multi-language pipelines. Dataflow executes Apache Beam pipelines. If you decide later to run your pipeline on a different platform, such as Apache Flink or Apache Spark, you can do so without rewriting the pipeline code. You can use Dataflow for relatively simple pipelines, such as moving data. However, it's also suitable for more advanced applications, such as real-time streaming analytics. A solution built on Dataflow can grow with your needs as you move from batch to streaming or encounter more advanced use" }, { "data": "Dataflow supports several different ways to create and execute pipelines, depending on your needs: Write code using the Apache Beam SDKs. Deploy a Dataflow template. Templates let you run predefined pipelines. For example, a developer can create a template, and then a data scientist can deploy it on demand. Google also provides a library of templates for common scenarios. You can deploy these templates without knowing any Apache Beam programming concepts. Use JupyterLab notebooks to develop and run pipelines iteratively. You can monitor the status of your Dataflow jobs through the Dataflow monitoring interface in the Google Cloud console. The monitoring interface includes a graphical representation of your pipeline, showing the progress and execution details of each pipeline stage. The monitoring interface makes it easier to spot problems such as bottlenecks or high latency. You can also profile your Dataflow jobs to monitor CPU usage and memory allocation. Dataflow uses a data pipeline model, where data moves through a series of stages. 
Stages can include reading data from a source, transforming and aggregating the data, and writing the results to a destination. Pipelines can range from very simple to more complex processing. For example, a pipeline might do the following: A pipeline that is defined in Apache Beam does not specify how the pipeline is executed. Running the pipeline is the job of a runner. The purpose of a runner is to run an Apache Beam pipeline on a specific platform. Apache Beam supports multiple runners, including a Dataflow runner. To use Dataflow with your Apache Beam pipelines, specify the Dataflow runner. The runner uploads your executable code and dependencies to a Cloud Storage bucket and creates a Dataflow job. Dataflow then allocates a pool of VMs to execute the pipeline. The following diagram shows a typical ETL and BI solution using Dataflow and other Google Cloud services: This diagram shows the following stages: For basic data movement scenarios, you might run a Google-provided template. Some templates support user-defined functions (UDFs) written in JavaScript. UDFs let you add custom processing logic to a template. For more complex pipelines, start with the Apache Beam SDK. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
{ "category": "App Definition and Development", "file_name": "provided-streaming#datastream-to-bigquery.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Dataflow templates allow you to package a Dataflow pipeline for deployment. Anyone with the correct permissions can then use the template to deploy the packaged pipeline. You can create your own custom Dataflow templates, and Google provides pre-built templates for common scenarios. Templates have several advantages over directly deploying a pipeline to Dataflow: Google provides a variety of pre-built, open source Dataflow templates that you can use for common scenarios. For more information about the available templates, see Google-provided templates. Dataflow supports two types of template: Flex templates, which are newer, and classic templates. If you are creating a new Dataflow template, we recommend creating it as a Flex template. With a Flex template, the pipeline is packaged as a Docker image in Artifact Registry, along with a template specification file in Cloud Storage. The template specification contains a pointer to the Docker image. When you run the template, the Dataflow service starts a launcher VM, pulls the Docker image, and runs the pipeline. The execution graph is dynamically built based on runtime parameters provided by the user. To use the API to launch a job that uses a Flex template, use the projects.locations.flexTemplates.launch method. A classic template contains the JSON serialization of a Dataflow job graph. The code for the pipeline must wrap any runtime parameters in the ValueProvider interface. This interface allows users to specify parameter values when they deploy the template. To use the API to work with classic templates, see the projects.locations.templates API reference documentation. Flex templates have the following advantages over classic templates: Using Dataflow templates involves the following high-level steps: Dataflow jobs, including jobs run from templates, use two IAM service accounts: Ensure that these two service accounts have appropriate roles. For more information, see Dataflow security and permissions. To create your own templates, make sure your Apache Beam SDK version supports template creation. To create templates with the Apache Beam SDK 2.x for Java, you must have version 2.0.0-beta3 or higher. To create templates with the Apache Beam SDK 2.x for Python, you must have version 2.0.0 or higher. To run templates with Google Cloud CLI, you must have Google Cloud CLI version 138.0.0 or higher. You can build your own templates by extending the open source Dataflow templates. For example, for a template that uses a fixed window duration, data that arrives outside of the window might be discarded. To avoid this behavior, use the template code as a base, and modify the code to invoke the .withAllowedLateness operation. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
{ "category": "App Definition and Development", "file_name": "installing-beam-sdk.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Apache Beam SDK 2.x for Java API reference on the Apache Beam documentation site. Apache Beam SDK 2.x for Python API reference on the Apache Beam documentation site. Apache Beam SDK 2.x for Go API reference on the Go site. Dataflow REST API reference documentation. Dataflow RPC reference documentation. Data Pipelines reference documentation. The runtime environments supported by Apache Beam. Docker base image reference for Flex Templates. The Docker base image versions that support Flex Templates. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates." } ]
{ "category": "App Definition and Development", "file_name": "quotas.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "This tutorialshows you how to use Dataflow SQL to join a stream of data from Pub/Sub with data from a BigQuery table. In this tutorial, you: In this document, you use the following billable components of Google Cloud: To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Cloud Dataflow, Compute Engine, Logging, Cloud Storage, Cloud Storage JSON, BigQuery, Cloud Pub/Sub, Cloud Resource Manager and Data Catalog. APIs. Enable the APIs Create a service account: In the Google Cloud console, go to the Create service account page. In the Service account name field, enter a name. The Google Cloud console fills in the Service account ID field based on this name. In the Service account description field, enter a description. For example, Service account for quickstart. Grant the Project > Owner role to the service account. To grant the role, find the Select a role list, then select Project > Owner. Click Done to finish creating the service account. Do not close your browser window. You will use it in the next step. Create a service account key: Set the environment variable GOOGLEAPPLICATIONCREDENTIALS to the path of the JSON file that contains your credentials. This variable applies only to your current shell session, so if you open a new session, set the variable again. Example: Linux or macOS ``` export GOOGLEAPPLICATIONCREDENTIALS=\"KEY_PATH\"``` Replace KEY_PATH with the path of the JSON file that contains your credentials. For example: ``` export GOOGLEAPPLICATIONCREDENTIALS=\"/home/user/Downloads/service-account-file.json\"``` Example: Windows For PowerShell: ``` $env:GOOGLEAPPLICATIONCREDENTIALS=\"KEY_PATH\"``` Replace KEY_PATH with the path of the JSON file that contains your credentials. For example: ``` $env:GOOGLEAPPLICATIONCREDENTIALS=\"C:\\Users\\username\\Downloads\\service-account-file.json\"``` For command prompt: ``` set GOOGLEAPPLICATIONCREDENTIALS=KEY_PATH``` Replace KEY_PATH with the path of the JSON file that contains your credentials. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Cloud Dataflow, Compute Engine, Logging, Cloud Storage, Cloud Storage JSON, BigQuery, Cloud Pub/Sub, Cloud Resource Manager and Data Catalog. APIs. Enable the APIs Create a service account: In the Google Cloud console, go to the Create service account page. In the Service account name field, enter a name. The Google Cloud console fills in the Service account ID field based on this name. In the Service account description field, enter a description. For example, Service account for quickstart. Grant the Project > Owner role to the service account. To grant the role, find the Select a role list, then select Project > Owner. Click Done to finish creating the service account. Do not close your browser window. You will use it in the next step. Create a service account key: Set the environment variable GOOGLEAPPLICATIONCREDENTIALS to the path of the JSON file that contains your credentials. 
This variable applies only to your current shell session, so if you open a new session, set the variable" }, { "data": "Example: Linux or macOS ``` export GOOGLEAPPLICATIONCREDENTIALS=\"KEY_PATH\"``` Replace KEY_PATH with the path of the JSON file that contains your credentials. For example: ``` export GOOGLEAPPLICATIONCREDENTIALS=\"/home/user/Downloads/service-account-file.json\"``` Example: Windows For PowerShell: ``` $env:GOOGLEAPPLICATIONCREDENTIALS=\"KEY_PATH\"``` Replace KEY_PATH with the path of the JSON file that contains your credentials. For example: ``` $env:GOOGLEAPPLICATIONCREDENTIALS=\"C:\\Users\\username\\Downloads\\service-account-file.json\"``` For command prompt: ``` set GOOGLEAPPLICATIONCREDENTIALS=KEY_PATH``` Replace KEY_PATH with the path of the JSON file that contains your credentials. If you would like to follow the example provided in this tutorial, create the following sources and use them in the steps of the tutorial. ``` gcloud pubsub topics create transactions``` ``` import datetime, json, os, random, time project = 'project-id' FIRST_NAMES = ['Monet', 'Julia', 'Angelique', 'Stephane', 'Allan', 'Ulrike', 'Vella', 'Melia', 'Noel', 'Terrence', 'Leigh', 'Rubin', 'Tanja', 'Shirlene', 'Deidre', 'Dorthy', 'Leighann', 'Mamie', 'Gabriella', 'Tanika', 'Kennith', 'Merilyn', 'Tonda', 'Adolfo', 'Von', 'Agnus', 'Kieth', 'Lisette', 'Hui', 'Lilliana',] CITIES = ['Washington', 'Springfield', 'Franklin', 'Greenville', 'Bristol', 'Fairview', 'Salem', 'Madison', 'Georgetown', 'Arlington', 'Ashland',] STATES = ['MO','SC','IN','CA','IA','DE','ID','AK','NE','VA','PR','IL','ND','OK','VT','DC','CO','MS', 'CT','ME','MN','NV','HI','MT','PA','SD','WA','NJ','NC','WV','AL','AR','FL','NM','KY','GA','MA', 'KS','VI','MI','UT','AZ','WI','RI','NY','TN','OH','TX','AS','MD','OR','MP','LA','WY','GU','NH'] PRODUCTS = ['Product 2', 'Product 2 XL', 'Product 3', 'Product 3 XL', 'Product 4', 'Product 4 XL', 'Product 5', 'Product 5 XL',] while True: firstname, lastname = random.sample(FIRST_NAMES, 2) data = { 'trtimestr': datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\"), 'firstname': firstname, 'lastname': lastname, 'city': random.choice(CITIES), 'state':random.choice(STATES), 'product': random.choice(PRODUCTS), 'amount': float(random.randrange(50000, 70000)) / 100, } message = json.dumps(data) command = \"gcloud --project={} pubsub topics publish transactions --message='{}'\".format(project, message) print(command) os.system(command) time.sleep(random.randrange(1, 5)) ``` ``` stateid,statecode,statename,salesregion 1,MO,Missouri,Region_1 2,SC,South Carolina,Region_1 3,IN,Indiana,Region_1 6,DE,Delaware,Region_2 15,VT,Vermont,Region_2 16,DC,District of Columbia,Region_2 19,CT,Connecticut,Region_2 20,ME,Maine,Region_2 35,PA,Pennsylvania,Region_2 38,NJ,New Jersey,Region_2 47,MA,Massachusetts,Region_2 54,RI,Rhode Island,Region_2 55,NY,New York,Region_2 60,MD,Maryland,Region_2 66,NH,New Hampshire,Region_2 4,CA,California,Region_3 8,AK,Alaska,Region_3 37,WA,Washington,Region_3 61,OR,Oregon,Region_3 33,HI,Hawaii,Region_4 59,AS,American Samoa,Region_4 65,GU,Guam,Region_4 5,IA,Iowa,Region_5 32,NV,Nevada,Region_5 11,PR,Puerto Rico,Region_6 17,CO,Colorado,Region_6 18,MS,Mississippi,Region_6 41,AL,Alabama,Region_6 42,AR,Arkansas,Region_6 43,FL,Florida,Region_6 44,NM,New Mexico,Region_6 46,GA,Georgia,Region_6 48,KS,Kansas,Region_6 52,AZ,Arizona,Region_6 56,TN,Tennessee,Region_6 58,TX,Texas,Region_6 63,LA,Louisiana,Region_6 7,ID,Idaho,Region_7 12,IL,Illinois,Region_7 13,ND,North Dakota,Region_7 
31,MN,Minnesota,Region_7 34,MT,Montana,Region_7 36,SD,South Dakota,Region_7 50,MI,Michigan,Region_7 51,UT,Utah,Region_7 64,WY,Wyoming,Region_7 9,NE,Nebraska,Region_8 10,VA,Virginia,Region_8 14,OK,Oklahoma,Region_8 39,NC,North Carolina,Region_8 40,WV,West Virginia,Region_8 45,KY,Kentucky,Region_8 53,WI,Wisconsin,Region_8 57,OH,Ohio,Region_8 49,VI,United States Virgin Islands,Region_9 62,MP,Commonwealth of the Northern Mariana Islands,Region_9 ``` Assigning a schema lets you run SQL queries on your Pub/Sub topic data. Currently, Dataflow SQL expects messages in Pub/Sub topics to be serialized in JSON format. To assign a schema to the example Pub/Sub topic transactions: Create a text file and name it transactions_schema.yaml. Copy and paste the following schema text into transactions_schema.yaml. ``` column: event_timestamp description: Pub/Sub event timestamp mode: REQUIRED type: TIMESTAMP column: trtimestr description: Transaction time string mode: NULLABLE type: STRING column: first_name description: First name mode: NULLABLE type: STRING column: last_name description: Last name mode: NULLABLE type: STRING column: city description: City mode: NULLABLE type: STRING column: state description: State mode: NULLABLE type: STRING column: product description: Product mode: NULLABLE type: STRING column: amount description: Amount of transaction mode: NULLABLE type: FLOAT ``` Assign the schema using the Google Cloud CLI. a. Update the gcloud CLI with the following command. Ensure that the gcloud CLI version is 242.0.0 or higher. ``` gcloud components update ``` b. Run the following command in a command-line window. Replace project-id with your project ID, and path-to-file with the path to your transactions_schema.yaml file. ``` gcloud data-catalog entries update \\ --lookup-entry='pubsub.topic.`project-id`.transactions'\\ --schema-from-file=path-to-file/transactions_schema.yaml ``` For more information about the command's parameters and allowed schema file formats, see the documentation page for gcloud data-catalog entries update. c. Confirm that your schema was successfully assigned to the transactions Pub/Sub" }, { "data": "Replace project-id with your project ID. ``` gcloud data-catalog entries lookup 'pubsub.topic.`project-id`.transactions' ``` The Dataflow SQL UI provides a way to find Pub/Sub data source objects for any project you have access to, so you don't have to remember their full names. For the example in this tutorial, navigate to the Dataflow SQL editor and search for the transactions Pub/Sub topic that you created: Navigate to the SQL Workspace. In the Dataflow SQL Editor panel, in the search bar, search for projectid=project-id transactions. Replace project-id with your project ID. Under Schema, you can view the schema that you assigned to the Pub/Sub topic. The Dataflow SQL UI lets you create SQL queries to run your Dataflow jobs. The following SQL query is a data enrichment query. It adds an additional field, sales_region, to the Pub/Sub stream of events (transactions), using a BigQuery table (usstatesalesregions) that maps states to sales regions. Copy and paste the following SQL query into the Query editor. Replace project-id with your project ID. ``` SELECT tr.*, sr.sales_region FROM pubsub.topic.`project-id`.transactions as tr INNER JOIN bigquery.table.`project-id`.dataflowsqltutorial.usstatesalesregions AS sr ON tr.state = sr.state_code ``` When you enter a query in the Dataflow SQL UI, the query validator verifies the query syntax. 
A green check mark icon is displayed if the query is valid. If the query is invalid, a red exclamation point icon is displayed. If your query syntax is invalid, clicking on the validator icon provides information about what you need to fix. The following screenshot shows the valid query in the Query editor. The validator displays a green check mark. If you created example sources: Before you execute your SQL query, run the transactions_injector.py Python script in a command-line window. The script periodically publishes messages to your Pub/Sub topic, transactions, and the script continues to publish messages to your topic until you stop the script. ``` python transactions_injector.py``` To run your SQL query, create a Dataflow job from the Dataflow SQL UI. In the Query editor, click Create job. In the Create Dataflow job panel that opens: Optional: Dataflow automatically chooses the settings that are optimal for your Dataflow SQL job, but you can expand the Optional parameters menu to manually specify the following pipeline options: Click Create. Your Dataflow job takes a few minutes to start running. Dataflow turns your SQL query into an Apache Beam pipeline. Click View job to open the Dataflow web UI, where you can see a graphical representation of your pipeline. To see a breakdown of the transformations occurring in the pipeline, click the boxes. For example, if you click the first box in the graphical representation, labeled Run SQL Query, a graphic appears that shows the operations taking place behind the scenes. The first two boxes represent the two inputs you joined: the Pub/Sub topic, transactions, and the BigQuery table, usstatesalesregions. To view the output table that contains the job results, go to the BigQuery UI. In the Explorer panel, in your project, click the dataflowsqltutorial dataset that you created. Then, click the output table, sales. The Preview tab displays the contents of the output table. The Dataflow UI stores past jobs and queries in the Dataflow Jobs" }, { "data": "You can use the job history list to see previous SQL queries. For example, you want to modify your query to aggregate sales by sales region every 15 seconds. Use the Jobs page to access the running job that you started earlier in the tutorial, copy the SQL query, and run another job with a modified query. From the Dataflow Jobs page, click the job you want to edit. On the Job details page, in the Job info panel, under Pipeline options, locate the SQL query. Find the row for queryString. Copy and paste the following SQL query into the Dataflow SQL Editor in the SQL Workspace to add tumbling windows. Replace project-id with your project ID. ``` SELECT sr.sales_region, TUMBLESTART(\"INTERVAL 15 SECOND\") AS periodstart, SUM(tr.amount) as amount FROM pubsub.topic.`project-id`.transactions AS tr INNER JOIN bigquery.table.`project-id`.dataflowsqltutorial.usstatesalesregions AS sr ON tr.state = sr.state_code GROUP BY sr.sales_region, TUMBLE(tr.event_timestamp, \"INTERVAL 15 SECOND\") ``` Click Create job to create a new job with the modified query. To avoid incurring charges to your Cloud Billing account for the resources used in this tutorial: Stop your transactions_injector.py publishing script if it is still running. Stop your running Dataflow jobs. Go to the Dataflow web UI in the Google Cloud console. Go to the Dataflow web UI For each job you created from following this walkthrough, do the following steps: Click the name of the job. On the Job details page, click Stop. 
The Stop Job dialog appears with your options for how to stop your job. Select Cancel. Click Stop job. The service halts all data ingestion and processing as soon as possible. Because Cancel immediately halts processing, you might lose any \"in-flight\" data. Stopping a job might take a few minutes. Delete your BigQuery dataset. Go to the BigQuery web UI in the Google Cloud console. Go to the BigQuery web UI In the Explorer panel, in the Resources section, click the dataflowsqltutorial dataset you created. In the details panel, click Delete. A confirmation dialog opens. In the Delete dataset dialog box, confirm the delete command by typing delete, and then click Delete. Delete your Pub/Sub topic. Go to the Pub/Sub topics page in the Google Cloud console. Go to the Pub/Sub topics page Select the transactions topic. Click Delete to permanently delete the topic. A confirmation dialog opens. In the Delete topic dialog box, confirm the delete command by typing delete, and then click Delete. Go to the Pub/Sub subscriptions page. Select any remaining subscriptions to transactions. If your jobs are not running anymore, there might not be any subscriptions. Click Delete to permanently delete the subscriptions. In the confirmation dialog, click Delete. Delete the Dataflow staging bucket in Cloud Storage. Go to the Cloud Storage Buckets page in the Google Cloud console. Go to Buckets Select the Dataflow staging bucket. Click Delete to permanently delete the bucket. A confirmation dialog opens. In the Delete bucket dialog box, confirm the delete command by typing DELETE, and then click Delete. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
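The transactions_injector.py script above shells out to gcloud for every message. An alternative, not part of the original tutorial, is to publish the same JSON transactions with the google-cloud-pubsub client library; the project ID below is a placeholder and the field names follow the transaction schema.

```
# Alternative to the gcloud-based injector above: publish the same JSON
# transaction messages with the google-cloud-pubsub client library.
# Not part of the original tutorial; project ID is a placeholder.
import datetime
import json
import random
import time

from google.cloud import pubsub_v1

project_id = "project-id"  # placeholder
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, "transactions")

while True:
    message = {
        "tr_time_str": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        "first_name": random.choice(["Monet", "Julia", "Allan"]),
        "last_name": random.choice(["Leigh", "Rubin", "Vella"]),
        "city": random.choice(["Washington", "Springfield", "Franklin"]),
        "state": random.choice(["MO", "SC", "IN"]),
        "product": random.choice(["Product 2", "Product 3 XL"]),
        "amount": round(random.uniform(500.0, 700.0), 2),
    }
    future = publisher.publish(topic_path, json.dumps(message).encode("utf-8"))
    future.result()  # block until the message is accepted by Pub/Sub
    time.sleep(random.randrange(1, 5))
```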
{ "category": "App Definition and Development", "file_name": "tutorials_doctype=quickstart.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Get started using Google Cloud by trying one of our product quickstarts, tutorials, or interactive walkthroughs. Get started with Google Cloud quickstarts Whether you're looking to deploy a web app, set up a database, or run big data workloads, it can be challenging to get started. Luckily, Google Cloud quickstarts offer step-by-step tutorials that cover basic use cases, operating the Google Cloud console, and how to use the Google command-line tools." } ]
{ "category": "App Definition and Development", "file_name": "tutorials#%22dataflow%22.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "This page contains media articles, videos and podcasts related to Dataflow. This page contains pricing information for using Dataflow. This page provides information on the resource quotas associated with using Dataflow. Release notes that pertain to Dataflow. Links to the Apache Beam release notes. Link to the Dataflow templates release notes. The Service Level Agreement for Dataflow. Regions and zones where you can use Dataflow. Where you can get support for Dataflow and the Apache Beam SDK. Resources to get help with billing questions. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates." } ]
{ "category": "App Definition and Development", "file_name": "molecules-walkthrough.md", "project_name": "Google Cloud Dataflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Config Connector is an open source Kubernetes add-on that lets you manage Google Cloud resources through Kubernetes. Many cloud-native development teams work with a mix of configuration systems, APIs, and tools to manage their infrastructure. This mix is often difficult to understand, leading to reduced velocity and expensive mistakes. Config Connector provides a method to configure many Google Cloud services and resources using Kubernetes tooling and APIs. With Config Connector, your environments can use Kubernetes-managed Resources including: You can manage your Google Cloud infrastructure the same way you manage your Kubernetes applications, reducing the complexity and cognitive load for developers. Config Connector provides a collection of Kubernetes Custom Resource Definitions (CRDs) and controllers. The Config Connector CRDs allow Kubernetes to create and manage Google Cloud resources when you configure and apply Objects to your cluster. For Config Connector CRDs to function correctly, Config Connector deploys Pods to your nodes that have elevated RBAC permissions, such as the ability to create, delete, get, and list CustomResourceDefinitions (CRDs). These permissions are required for Config Connector to create and reconcile Kubernetes resources. To get started, install Config Connector and create your first resource. Config Connector's controllers eventually reconcile your environment with your desired state. Config Connector provides additional features beyond creating resources. For example, you can manage existing Google Cloud resources, and use Kubernetes Secrets to provide sensitive data, such as passwords, to your resources. For more information, see the list of how-to guides. In addition, you can learn more about how Config Connector uses Kubernetes constructs to manage Resources and see the Google Cloud resources Config Connector can manage. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-07 UTC." } ]
{ "category": "App Definition and Development", "file_name": "github-terms-of-service.md", "project_name": "Koperator", "subcategory": "Streaming & Messaging" }
[ { "data": "As highlighted in the features section, Koperator removed the reliance on StatefulSet,and supports several different usecases. Note: This is not a complete list, if you have a specific requirement or question, see our support options. You may have encountered situations where the horizontal scaling of a cluster is impossible. When only one Broker is throttling and needs more CPU or requires additional disks (because it handles the most partitions), a StatefulSet-based solution is useless, since it does not distinguish between replicas specifications. The handling of such a case requires unique Broker configurations. If there is a need to add a new disk to a unique Broker, there can be a waste of disk space (and money) with a StatefulSet-based solution, since it cant add a disk to a specific Broker, the StatefulSet adds one to each replica. With the Koperator, adding a new disk to any Broker is as easy as changing a CR configuration. Similarly, any Broker-specific configuration can be done on a Broker by Broker basis. In the event of an error with Broker #1, it is ideal to handle it without disrupting the other Brokers. To handle the error you would like to temporarily remove this Broker from the cluster, and fix its state, reconciling the node that serves the node, or maybe reconfigure the Broker using a new configuration. Again, when using StatefulSet, you lose the ability to remove specific Brokers from the cluster. StatefulSet only supports a field name replica that determines how many replicas an application should use. If theres a downscale/removal, this number can be lowered, however, this means that Kubernetes will remove the most recently added Pod (Broker #3) from the cluster - which, in this case, happens to suit the above purposes quite well. To remove the #1 Broker from the cluster, you need to lower the number of brokers in the cluster from three to" }, { "data": "This will cause a state in which only one Broker is live, while you kill the brokers that handle traffic. Koperator supports removing specific brokers without disrupting traffic in the cluster. Apache Kafka is a stateful application, where Brokers create/form a cluster with other Brokers. Every Broker is uniquely configurable (Koperator supports heterogenous environments, in which no nodes are the same, act the same or have the same specifications - from the infrastructure up through the Brokers Envoy configuration). Kafka has lots of Broker configs, which can be used to fine tune specific brokers, and Koperator did not want to limit these to ALL Brokers in a StatefulSet. Koperator supports unique Broker configs. In each of the three scenarios listed above, Koperator does not use StatefulSet, relying, instead, on Pods, PVCs and ConfigMaps. While using StatefulSet is a very convenient starting point, as it handles roughly 80% of scenarios, it also introduces huge limitations when running Kafka on Kubernetes in production. Use of monitoring is essential for any application, and all relevant information about Kafka should be published to a monitoring solution. When using Kubernetes, the de facto solution is Prometheus, which supports configuring alerts based on previously consumed metrics. Koperator was built as a standards-based solution (Prometheus and Alert Manager) that could handle and react to alerts automatically, so human operators wouldnt have to. Koperator supports alert-based Kafka cluster management. LinkedIn knows how to operate Kafka in a better way. 
They built a tool, called Cruise Control, to operate their Kafka infrastructure. And Koperator is built to handle the infrastructure, but not to reinvent the wheel in so far as operating Kafka. Koperator was built to leverage the Kubernetes operator pattern and our Kubernetes expertise by handling all Kafka infrastructure related issues in the best possible way. Managing Kafka can be a separate issue, for which there already exist some unique tools and solutions that are standard across the industry, so LinkedIns Cruise Control is integrated with the Koperator." } ]
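To make the "remove one specific broker" scenario concrete, the following sketch edits the KafkaCluster custom resource through the official Kubernetes Python client. The group/version/plural values, the cluster name and namespace, and the assumption that spec.brokers is a list of per-broker entries keyed by id reflect a typical Koperator installation rather than this text; verify them against the CRDs in your own cluster before use.

```python
# Conceptual sketch: remove one specific broker by id from a Koperator-managed cluster
# by editing the KafkaCluster custom resource.
# Assumptions: the KafkaCluster CRD is kafka.banzaicloud.io/v1beta1 (plural "kafkaclusters"),
# the cluster is named "kafka" in the "kafka" namespace, and spec.brokers is a list of
# per-broker entries with an "id" field -- check these against your installed CRD version.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "kafka.banzaicloud.io", "v1beta1", "kafkaclusters"
NAMESPACE, NAME = "kafka", "kafka"
BROKER_ID_TO_REMOVE = 1

kafka_cluster = api.get_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, NAME)

# Keep every broker except the one being dropped; the operator reconciles the change
# and decommissions only that broker, leaving the rest of the cluster serving traffic.
kafka_cluster["spec"]["brokers"] = [
    b for b in kafka_cluster["spec"]["brokers"] if b.get("id") != BROKER_ID_TO_REMOVE
]

api.replace_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, NAME, kafka_cluster)
```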
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Koperator", "subcategory": "Streaming & Messaging" }
[ { "data": "As highlighted in the features section, Koperator removed the reliance on StatefulSet,and supports several different usecases. Note: This is not a complete list, if you have a specific requirement or question, see our support options. You may have encountered situations where the horizontal scaling of a cluster is impossible. When only one Broker is throttling and needs more CPU or requires additional disks (because it handles the most partitions), a StatefulSet-based solution is useless, since it does not distinguish between replicas specifications. The handling of such a case requires unique Broker configurations. If there is a need to add a new disk to a unique Broker, there can be a waste of disk space (and money) with a StatefulSet-based solution, since it cant add a disk to a specific Broker, the StatefulSet adds one to each replica. With the Koperator, adding a new disk to any Broker is as easy as changing a CR configuration. Similarly, any Broker-specific configuration can be done on a Broker by Broker basis. In the event of an error with Broker #1, it is ideal to handle it without disrupting the other Brokers. To handle the error you would like to temporarily remove this Broker from the cluster, and fix its state, reconciling the node that serves the node, or maybe reconfigure the Broker using a new configuration. Again, when using StatefulSet, you lose the ability to remove specific Brokers from the cluster. StatefulSet only supports a field name replica that determines how many replicas an application should use. If theres a downscale/removal, this number can be lowered, however, this means that Kubernetes will remove the most recently added Pod (Broker #3) from the cluster - which, in this case, happens to suit the above purposes quite well. To remove the #1 Broker from the cluster, you need to lower the number of brokers in the cluster from three to" }, { "data": "This will cause a state in which only one Broker is live, while you kill the brokers that handle traffic. Koperator supports removing specific brokers without disrupting traffic in the cluster. Apache Kafka is a stateful application, where Brokers create/form a cluster with other Brokers. Every Broker is uniquely configurable (Koperator supports heterogenous environments, in which no nodes are the same, act the same or have the same specifications - from the infrastructure up through the Brokers Envoy configuration). Kafka has lots of Broker configs, which can be used to fine tune specific brokers, and Koperator did not want to limit these to ALL Brokers in a StatefulSet. Koperator supports unique Broker configs. In each of the three scenarios listed above, Koperator does not use StatefulSet, relying, instead, on Pods, PVCs and ConfigMaps. While using StatefulSet is a very convenient starting point, as it handles roughly 80% of scenarios, it also introduces huge limitations when running Kafka on Kubernetes in production. Use of monitoring is essential for any application, and all relevant information about Kafka should be published to a monitoring solution. When using Kubernetes, the de facto solution is Prometheus, which supports configuring alerts based on previously consumed metrics. Koperator was built as a standards-based solution (Prometheus and Alert Manager) that could handle and react to alerts automatically, so human operators wouldnt have to. Koperator supports alert-based Kafka cluster management. LinkedIn knows how to operate Kafka in a better way. 
They built a tool, called Cruise Control, to operate their Kafka infrastructure. And Koperator is built to handle the infrastructure, but not to reinvent the wheel in so far as operating Kafka. Koperator was built to leverage the Kubernetes operator pattern and our Kubernetes expertise by handling all Kafka infrastructure related issues in the best possible way. Managing Kafka can be a separate issue, for which there already exist some unique tools and solutions that are standard across the industry, so LinkedIns Cruise Control is integrated with the Koperator." } ]
{ "category": "App Definition and Development", "file_name": "docs.github.com.md", "project_name": "Koperator", "subcategory": "Streaming & Messaging" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "KubeMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "KubeMQ is a Kubernetes Message Queue Broker. Enterprise-grade message broker and message queue, scalable, high available and secured. A Kubernetes native solution in a lightweight container, deployed in just one minute. All-batteries included Messaging Queue Broker for Kubernetes environment Deployed with Operator for full life cycle operation Blazing fast (written in Go), small and lightweight Docker container Asynchronous and Synchronous messaging with support for At Most Once Delivery and At Least Once Delivery models Supports durable FIFO based Queue, Publish-Subscribe Events, Publish-Subscribe with Persistence (Events Store), RPC Command and Query messaging patterns Supports gRPC, Rest and WebSocket Transport protocols with TLS support (both RPC and Stream modes) Supports Access control Authorization and Authentication Supports message masticating and smart routing No Message broker configuration needed (i.e., queues, exchanges) .Net, Java, Python, Go and NodeJS SDKs Monitoring Dashboard Kubernetes - KubeMQ can be deployed on any Kubernetes cluster as stateful set. MicroK8s - Canonical's MicroK8s K3s - Rancher's KubeMQ supports distributed durable FIFO based queues with the following core features: Guaranteed Delivery - At-least-once delivery and most messages are delivered exactly once. Single and Batch Messages Send and Receive - Single and multiple messages in one call RPC and Stream Flows - RPC flow allows an insert and pull messages in one call. Stream flow allows single message consuming in transactional way Message Policy - Each message can be configured with expiration and delay" }, { "data": "In addition, each message can specify a dead-letter queue for unprocessed messages attempts Long Polling - Consumers can wait until a message available in the queue to consume Peak Messages - Consumers can peek into a queue without removing them from the queue Ack All Queue Messages - Any client can mark all the messages in a queue as discarded and will not be available anymore to consume Visibility timers - Consumers can pull a message from the queue and set a timer which will cause the message not be visible to other consumers. This timer can be extended as needed. Resend Messages - Consumers can send back a message they pulled to a new queue or send a modified message to the same queue for further processing. KubeMQ supports Publish-Subscribe (a.k.a Pub/Sub) messages patterns with the following core features: Events - An asynchronous real-time Pub/Sub pattern. Events Store -An asynchronous Pub/Sub pattern with persistence. Grouping - Load balancing of events between subscribers KubeMQ supports CQRS based RPC flows with the following core features: Commands - A synchronous two ways Command pattern for CQRS types of system architecture. Query - A synchronous two ways Query pattern for CQRS types of system architecture. Response - An answer for a Query type RPC call Timeout - Timeout interval is set for each RPC call. Once no response is received within the Timeout interval, RPC call return an error Grouping - Load balancing of RPC calls between receivers Caching - RPC response can be cached for future requests without the need to process again by a receiver gRPC - High performance RPC and streaming framework that can run in any environment, Open source and Cloud Native. Rest - Restful Api with WebSocket support for bi-directional streaming. 
C# - C# SDK based on gRPC Java - Java SDK based on gRPC Go - Go SDK based on gRPC Python - Python SDK based on gRPC cURL - cURL SDK based on Rest Node - Node SDK based on gRPC and Rest PHP - PHP SDK based on Rest Ruby - Ruby SDK based on Rest jQuery jQuery SDK based Rest Start with OpenShift Contact us See KubeMQ in Action Last updated 3 years ago Was this helpful?" } ]
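The queue guarantees listed above (visibility timers, acknowledgements, dead-letter queues) can be hard to picture from feature names alone. The sketch below is a toy, in-process Python model of those semantics; it is not the KubeMQ SDK or its API, and the attempt limit and timeout values are illustrative assumptions.

```python
# Conceptual model only -- NOT the KubeMQ SDK. It sketches the queue semantics described
# above: FIFO delivery, a visibility timeout while a consumer works on a message,
# re-queueing on timeout, and a dead-letter queue after too many failed attempts.
import time
from collections import deque

MAX_ATTEMPTS = 3          # assumption: after this many failures the message is dead-lettered
VISIBILITY_SECONDS = 30   # assumption: how long a pulled message stays invisible

class Queue:
    def __init__(self):
        self.ready = deque()    # FIFO of (message, attempts)
        self.in_flight = {}     # message id -> (message, attempts, invisible_until)
        self.dead_letter = []

    def send(self, message):
        self.ready.append((message, 0))

    def receive(self):
        """Pull one message; it stays invisible until acked or the timer expires."""
        self._requeue_expired()
        if not self.ready:
            return None
        message, attempts = self.ready.popleft()
        self.in_flight[id(message)] = (message, attempts + 1, time.time() + VISIBILITY_SECONDS)
        return message

    def ack(self, message):
        """Acknowledge successful processing; the message is gone for good."""
        self.in_flight.pop(id(message), None)

    def _requeue_expired(self):
        now = time.time()
        for key, (message, attempts, invisible_until) in list(self.in_flight.items()):
            if invisible_until <= now:
                del self.in_flight[key]
                if attempts >= MAX_ATTEMPTS:
                    self.dead_letter.append(message)   # unprocessable after repeated attempts
                else:
                    self.ready.append((message, attempts))
```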
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Koperator", "subcategory": "Streaming & Messaging" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:-|:-|:-|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | benchmarks | benchmarks | benchmarks | nan | nan | | examples/springboot-kafka-avro | examples/springboot-kafka-avro | examples/springboot-kafka-avro | nan | nan | | img | img | img | nan | nan | | developer.md | developer.md | developer.md | nan | nan | | monitoring.md | monitoring.md | monitoring.md | nan | nan | | roadmap.md | roadmap.md | roadmap.md | nan | nan | | ssl.md | ssl.md | ssl.md | nan | nan | | test.md | test.md | test.md | nan | nan | | topics.md | topics.md | topics.md | nan | nan | | View all files | View all files | View all files | nan | nan |" } ]