<!DOCTYPE html>
<!--
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:
- https://github.com/tajmone/pandoc-goodies
The CSS in this template reuses source code taken from the following projects:
- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css
- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 
Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.
"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:
(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  <title>GridDB Features Reference</title>
  <style type="text/css">
.markdown-body{-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%;color:#24292e;font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";font-size:16px;line-height:1.5;word-wrap:break-word;box-sizing:border-box;min-width:200px;max-width:980px;margin:0 auto;padding:45px}.markdown-body a{color:#0366d6;background-color:transparent;text-decoration:none;-webkit-text-decoration-skip:objects}.markdown-body a:active,.markdown-body a:hover{outline-width:0}.markdown-body a:hover{text-decoration:underline}.markdown-body a:not([href]){color:inherit;text-decoration:none}.markdown-body strong{font-weight:600}.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{margin-top:24px;margin-bottom:16px;font-weight:600;line-height:1.25}.markdown-body h1{font-size:2em;margin:.67em 0;padding-bottom:.3em;border-bottom:1px solid #eaecef}.markdown-body h2{padding-bottom:.3em;font-size:1.5em;border-bottom:1px solid #eaecef}.markdown-body h3{font-size:1.25em}.markdown-body h4{font-size:1em}.markdown-body h5{font-size:.875em}.markdown-body h6{font-size:.85em;color:#6a737d}.markdown-body img{border-style:none}.markdown-body svg:not(:root){overflow:hidden}.markdown-body hr{box-sizing:content-box;height:.25em;margin:24px 0;padding:0;overflow:hidden;background-color:#e1e4e8;border:0}.markdown-body hr::before{display:table;content:""}.markdown-body hr::after{display:table;clear:both;content:""}.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}.markdown-body ol,.markdown-body ul{padding-left:2em}.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul 
ul{margin-top:0;margin-bottom:0}.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}.markdown-body li>p{margin-top:16px}.markdown-body li+li{margin-top:.25em}.markdown-body dd{margin-left:0}.markdown-body dl{padding:0}.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}.markdown-body dl dd{padding:0 16px;margin-bottom:16px}.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}.markdown-body blockquote>:first-child{margin-top:0}.markdown-body blockquote>:last-child{margin-bottom:0}.markdown-body table{display:block;overflow:auto;border-spacing:0;border-collapse:collapse}.markdown-body table th{font-weight:600}.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}.markdown-body figure{text-align:center;margin:1em 0;}.markdown-body img{box-sizing:content-box;background-color:#fff}.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}.markdown-body .highlight{margin-bottom:16px}.markdown-body .highlight pre{margin-bottom:0;word-break:normal}.markdown-body .highlight pre,.markdown-body 
pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}.markdown-body pre code::after,.markdown-body pre code::before{content:normal}.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}.markdown-body .task-list-item{list-style-type:none}.markdown-body .task-list-item+.task-list-item{margin-top:3px}.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}.markdown-body::before{display:table;content:""}.markdown-body::after{display:table;clear:both;content:""}.markdown-body>:first-child{margin-top:0!important}.markdown-body>:last-child{margin-bottom:0!important}.Alert,.Error,.Note,.Success,.Warning{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}.Alert p,.Error p,.Note p,.Success p,.Warning p{margin-top:0}.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child{margin-bottom:0}.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert 
h6{color:#246;margin-bottom:0}.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}h1.title,p.subtitle{text-align:center}h1.title.followed-by-subtitle{margin-bottom:0}p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">
#TOC{ width: 23%; height: 100%; top: 0px; left: 0px; font-size: 70%; position: fixed; overflow: auto; }
#TOC ul { margin: 1pt 0 1pt 1.5em; padding: 0; list-style-type: none; }
#TOC li { margin: 1pt 0; }
#main{ width: 76%; float: right; }
#postamble { display: none; }
.revision { text-align: right; font-size: 8pt; }
@media print { 
#TOC { width:100%; font-size: 100%; position: static; overflow: visible; }
#main { padding: 0px; width:100%; float: none; }
}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->
</head>
<body>
<div id="main">
<article class="markdown-body">
<header>
<h1 class="title">GridDB Features Reference</h1>
</header>
<hr>
<nav id="TOC">
<h1 class="toc-title">Table of Contents</h1>
<ul>
<li><a href="#1-introduction"><span class="header-section-number">1</span> Introduction</a>
<ul>
<li><a href="#11-aim--composition-of-this-manual"><span class="header-section-number">1.1</span> Aim &amp; composition of this manual</a></li>
</ul></li>
<li><a href="#2-what-is-griddb"><span class="header-section-number">2</span> What is GridDB?</a>
<ul>
<li><a href="#21-features-of-griddb"><span class="header-section-number">2.1</span> Features of GridDB</a>
<ul>
<li><a href="#211-big-data-volume"><span class="header-section-number">2.1.1</span> Big data (volume)</a></li>
<li><a href="#212-various-data-types-variety"><span class="header-section-number">2.1.2</span> Various data types (variety)</a></li>
<li><a href="#213-high-speed-processing-velocity"><span class="header-section-number">2.1.3</span> High-speed processing (velocity)</a>
<ul>
<li><a href="#2131-processing-is-carried-out-in-the-memory-space-as-much-as-possible"><span class="header-section-number">2.1.3.1</span> Processing is carried out in the memory space as much as possible</a></li>
<li><a href="#2132-reduces-the-overhead"><span class="header-section-number">2.1.3.2</span> Reduces the overhead</a></li>
<li><a href="#2133-processing-in-parallel"><span class="header-section-number">2.1.3.3</span> Processing in parallel</a></li>
</ul></li>
<li><a href="#214-reliabilityavailability"><span class="header-section-number">2.1.4</span> Reliability/availability</a></li>
</ul></li>
</ul></li>
<li><a href="#3-terminology"><span class="header-section-number">3</span> Terminology</a></li>
<li><a href="#4-structure-of-griddb"><span class="header-section-number">4</span> Structure of GridDB</a>
<ul>
<li><a href="#41-composition-of-a-cluster"><span class="header-section-number">4.1</span> Composition of a cluster</a>
<ul>
<li><a href="#411-status-of-node"><span class="header-section-number">4.1.1</span> Status of node</a></li>
<li><a href="#412-status-of-cluster"><span class="header-section-number">4.1.2</span> Status of cluster</a></li>
<li><a href="#413-status-of-partition"><span class="header-section-number">4.1.3</span> Status of partition</a></li>
</ul></li>
<li><a href="#42-cluster-configuration-methods"><span class="header-section-number">4.2</span> Cluster configuration methods</a>
<ul>
<li><a href="#421-setting-up-cluster-configuration-files"><span class="header-section-number">4.2.1</span> Setting up cluster configuration files</a>
<ul>
<li><a href="#4211-fixed_list-fixed-list-method"><span class="header-section-number">4.2.1.1</span> FIXED_LIST: fixed list method</a></li>
<li><a href="#4212-provider-provider-method"><span class="header-section-number">4.2.1.2</span> PROVIDER: provider method</a></li>
</ul></li>
</ul></li>
</ul></li>
<li><a href="#5-data-model"><span class="header-section-number">5</span> Data model</a>
<ul>
<li><a href="#51-container"><span class="header-section-number">5.1</span> Container</a>
<ul>
<li><a href="#511-type"><span class="header-section-number">5.1.1</span> Type</a></li>
<li><a href="#512-data-type"><span class="header-section-number">5.1.2</span> Data type</a>
<ul>
<li><a href="#5121-basic-data-types"><span class="header-section-number">5.1.2.1</span> Basic data types</a></li>
<li><a href="#5122-hybrid-types"><span class="header-section-number">5.1.2.2</span> Hybrid types</a></li>
</ul></li>
<li><a href="#513-primary-key"><span class="header-section-number">5.1.3</span> Primary key</a></li>
</ul></li>
</ul></li>
<li><a href="#6-database-function"><span class="header-section-number">6</span> Database function</a>
<ul>
<li><a href="#61-resource-management"><span class="header-section-number">6.1</span> Resource management</a></li>
<li><a href="#62-data-access-function"><span class="header-section-number">6.2</span> Data access function</a>
<ul>
<li><a href="#621-tql"><span class="header-section-number">6.2.1</span> TQL</a></li>
<li><a href="#622-batch-processing-function-to-multiple-containers"><span class="header-section-number">6.2.2</span> Batch-processing function to multiple containers</a></li>
</ul></li>
<li><a href="#63-index-function"><span class="header-section-number">6.3</span> Index function</a></li>
<li><a href="#64-function-specific-to-time-series-data"><span class="header-section-number">6.4</span> Function specific to time series data</a>
<ul>
<li><a href="#641-compression-function"><span class="header-section-number">6.4.1</span> Compression function</a>
<ul>
<li><a href="#6411-thinning-out-method-with-error-value-hi"><span class="header-section-number">6.4.1.1</span> Thinning out method with error value (HI)</a></li>
<li><a href="#6412-thinning-out-method-without-error-value-ss"><span class="header-section-number">6.4.1.2</span> Thinning out method without error value (SS)</a></li>
</ul></li>
<li><a href="#642-operation-function-of-tql"><span class="header-section-number">6.4.2</span> Operation function of TQL</a>
<ul>
<li><a href="#6421-aggregate-operations"><span class="header-section-number">6.4.2.1</span> Aggregate operations</a></li>
<li><a href="#6422-selectioninterpolation-operation"><span class="header-section-number">6.4.2.2</span> Selection/interpolation operation</a></li>
</ul></li>
<li><a href="#643-expiry-release-function"><span class="header-section-number">6.4.3</span> Expiry release function</a>
<ul>
<li><a href="#6431-expiry-release-types"><span class="header-section-number">6.4.3.1</span> Expiry release types</a></li>
</ul></li>
</ul></li>
<li><a href="#66-transaction-function"><span class="header-section-number">6.6</span> Transaction function</a>
<ul>
<li><a href="#661-starting-and-ending-a-transaction"><span class="header-section-number">6.6.1</span> Starting and ending a transaction</a></li>
<li><a href="#662-transaction-consistency-level"><span class="header-section-number">6.6.2</span> Transaction consistency level</a></li>
<li><a href="#663-transaction-isolation-level"><span class="header-section-number">6.6.3</span> Transaction isolation level</a></li>
<li><a href="#664-mvcc"><span class="header-section-number">6.6.4</span> MVCC</a></li>
<li><a href="#665-lock"><span class="header-section-number">6.6.5</span> Lock</a>
<ul>
<li><a href="#6651-lock-granularity"><span class="header-section-number">6.6.5.1</span> Lock granularity</a></li>
<li><a href="#6652-lock-range-by-database-operations"><span class="header-section-number">6.6.5.2</span> Lock range by database operations</a></li>
</ul></li>
<li><a href="#666-data-perpetuation"><span class="header-section-number">6.6.6</span> Data perpetuation</a></li>
<li><a href="#667-timeout-process"><span class="header-section-number">6.6.7</span> Timeout process</a>
<ul>
<li><a href="#6671-nosql-if-timeout-process"><span class="header-section-number">6.6.7.1</span> NoSQL I/F timeout process</a></li>
</ul></li>
</ul></li>
<li><a href="#67-replication-function"><span class="header-section-number">6.7</span> Replication function</a></li>
<li><a href="#68-affinity-function"><span class="header-section-number">6.8</span> Affinity function</a>
<ul>
<li><a href="#681-data-affinity-function"><span class="header-section-number">6.8.1</span> Data affinity function</a></li>
<li><a href="#682-node-affinity-function"><span class="header-section-number">6.8.2</span> Node affinity function</a></li>
</ul></li>
<li><a href="#69-trigger-function"><span class="header-section-number">6.9</span> Trigger function</a></li>
<li><a href="#610-change-the-definition-of-a-container-table"><span class="header-section-number">6.10</span> Change the definition of a container (table)</a>
<ul>
<li><a href="#6101-add-column"><span class="header-section-number">6.10.1</span> Add column</a></li>
<li><a href="#6102-delete-column"><span class="header-section-number">6.10.2</span> Delete column</a></li>
</ul></li>
<li><a href="#611-database-compressionrelease-function"><span class="header-section-number">6.11</span> Database compression/release function</a>
<ul>
<li><a href="#6111-block-data-compression"><span class="header-section-number">6.11.1</span> Block data compression</a></li>
<li><a href="#6112-deallocation-of-unused-data-blocks"><span class="header-section-number">6.11.2</span> Deallocation of unused data blocks</a></li>
</ul></li>
</ul></li>
<li><a href="#8-parameter"><span class="header-section-number">8</span> Parameter</a>
<ul>
<li><a href="#81-cluster-definition-file-gs_clusterjson"><span class="header-section-number">8.1</span> Cluster definition file (gs_cluster.json)</a></li>
<li><a href="#82-node-definition-file-gs_nodejson"><span class="header-section-number">8.2</span> Node definition file (gs_node.json)</a></li>
</ul></li>
<li><a href="#9-system-limiting-values"><span class="header-section-number">9</span> System limiting values</a>
<ul>
<li><a href="#91-limitations-on-numerical-value"><span class="header-section-number">9.1</span> Limitations on numerical value</a></li>
<li><a href="#92-limitations-on-naming"><span class="header-section-number">9.2</span> Limitations on naming</a></li>
</ul></li>
</ul>
</nav>
<hr>
<p>Revision: CE-20200130</p>
<hr />
<h1 id="1-introduction"><span class="header-section-number">1</span> Introduction</h1>
<h2 id="11-aim--composition-of-this-manual"><span class="header-section-number">1.1</span> Aim &amp; composition of this manual</h2>
<p><strong>This manual explains the functions of GridDB.</strong></p>
<p>The contents of this manual are as follows.</p>
<ul>
<li>What is GridDB?
<ul>
<li>Describes the features and application examples of GridDB.</li>
</ul></li>
<li>Structure of GridDB
<ul>
<li>Describes the cluster operating structure in GridDB.</li>
</ul></li>
<li>The data model of GridDB
<ul>
<li>Describes the data model of GridDB.</li>
</ul></li>
<li>Functions provided by GridDB
<ul>
<li>Describes the data management functions provided by GridDB.</li>
</ul></li>
<li>Parameter
<ul>
<li>Describes the parameters to control the operations in GridDB.</li>
</ul></li>
</ul>
<h1 id="2-what-is-griddb"><span class="header-section-number">2</span> What is GridDB?</h1>
<p>GridDB is a distributed NoSQL database that manages groups of data (known as rows), each made up of a key and multiple values. Besides operating as an in-memory database that arranges all data in memory, it can also adopt a hybrid composition that combines memory with disk (including SSD). Thanks to this hybrid composition, GridDB can also be used in small-scale systems with little memory.</p>
<p>In addition to the 3 Vs (volume, variety, velocity) required in big data solutions, GridDB also assures data reliability and availability. Its autonomous node monitoring and load balancing functions reduce the labor of cluster operation.</p>
<p><span id="griddb_features"></span></p>
<h2 id="21-features-of-griddb"><span class="header-section-number">2.1</span> Features of GridDB</h2>
<h3 id="211-big-data-volume"><span class="header-section-number">2.1.1</span> Big data (volume)</h3>
<p>As the scale of a system expands, the volume of data handled increases, and the system needs to be expanded to process the growing data quickly.</p>
<p>System expansion can be broadly divided into 2 approaches - scale-up (vertical scalability) and scale-out (horizontal scalability).</p>
<ul>
<li><p>What is scale-up (vertical scalability)?</p>
<p>This approach reinforces the system by adding memory to the operating machines, using SSDs for the disks, adding processors, and so on. Generally, this approach shortens individual processing time and thus increases the system's processing speed. On the other hand, the nodes must be stopped temporarily for a scale-up operation, and because the system is not a cluster of multiple machines, recovery is time-consuming once a failure occurs.</p></li>
<li><p>What is scale-out (horizontal scalability)?</p>
<p>This approach increases the number of nodes constituting the system to improve its processing capability. Since multiple nodes generally operate in coordination, the service does not need to be stopped completely during maintenance or even when a failure occurs. However, management time and effort increase as the number of nodes grows. This architecture is suitable for highly parallel processing.</p></li>
</ul>
<p>In GridDB, in addition to the scale-up approach of reinforcing individual operating nodes, the system can be expanded with a scale-out approach: new nodes are added and incorporated into the operating cluster.</p>
<p>As an in-memory processing database, GridDB can handle a large volume of data with its scale-out model. Data is distributed across the nodes inside a cluster composed of multiple nodes. That is, GridDB provides a large-scale in-memory database by handling the memory of multiple nodes as one big memory space.</p>
<p>Moreover, since GridDB manages data both in memory and on disk, even a single operating node can maintain and access data larger than its memory, realizing a capacity that is not limited by memory size.</p>
<p><img src="img/feature_disk_and_memory.png" alt="Combined use of in-memory/disk" /></p>
<p>System expansion can be carried out online with a scale-out approach. That is, without stopping the system in operation, the system can be expanded when the volume of data increases.</p>
<p>In the scale-out approach, data is relocated into the new nodes added to the system in accordance with the load of each existing node in the system. As GridDB will optimize the load balance, the application administrator does not need to worry about the data arrangement. Operation is also easy because a structure to automate such operations has been built into the system.</p>
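<p>The relocation described above can be pictured with a toy simulation. This is an illustrative model only, assuming a simple even spread of partitions over nodes; it is not GridDB's actual rebalancing algorithm, and all names are hypothetical.</p>

```python
# Hypothetical sketch of scale-out rebalancing: partitions are
# redistributed so that each node owns roughly the same number.
# This is NOT the real GridDB placement logic.

def rebalance(partitions, nodes):
    """Assign each partition to a node, spreading them evenly."""
    assignment = {}
    for i, pid in enumerate(sorted(partitions)):
        assignment[pid] = nodes[i % len(nodes)]
    return assignment

# A cluster of 2 nodes holding 8 partitions:
before = rebalance(range(8), ["node0", "node1"])

# Adding a third node online causes some partitions to relocate,
# without stopping the system:
after = rebalance(range(8), ["node0", "node1", "node2"])
moved = [p for p in range(8) if before[p] != after[p]]
```

<p>In this toy model only the partitions whose assignment changed need to move; the administrator never chooses where data goes, which mirrors the automated arrangement described above.</p>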
<p><img src="img/feature_scale_up.png" alt="Scale-out model" /></p>
<h3 id="212-various-data-types-variety"><span class="header-section-number">2.1.2</span> Various data types (variety)</h3>
<p>GridDB adopts a Key-Container data model that extends the Key-Value model. Data is stored in a unit known as a container, which is equivalent to an RDB table. (A container can be considered an RDB table for easier understanding.)</p>
<p>When accessing data in GridDB, the Key-Value structure allows the data to be narrowed down by key, so processing can be carried out at high speed. The data should be designed so that a container, serving as the key, is prepared for each entity under management.</p>
<p><img src="img/feature_data_model.png" alt="Data model" /></p>
<p>A TimeSeries container is suitable for handling a large volume of time-series data, such as values generated by sensors paired with their time of occurrence. Spatial data such as position information can also be registered in a container, and space-specific operations (such as intersection) can be carried out. A wide variety of data can be handled, as non-standard types such as arrays and BLOBs are supported as well.</p>
<p>A TimeSeries container also provides a unique compression function and a function to release data whose expiry date has passed, making it suitable for managing data generated in large volumes.</p>
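<p>The Key-Container model can be pictured as a two-level key-value store: a container name selects a container, and a row key selects a row inside it. The sketch below is purely illustrative (plain dictionaries, hypothetical container names), not the GridDB API.</p>

```python
# Illustrative model of the Key-Container structure: a database maps a
# container name to a container, and each container maps a row key to
# a row. Names ("sensors", "s01_log") are hypothetical.
from datetime import datetime, timedelta

database = {}  # container name -> {row key -> row values}

# A collection keyed by a general value (e.g. a sensor id):
database["sensors"] = {
    "s01": {"type": "temperature", "location": "room-a"},
}

# A time-series container keyed by a timestamp:
t0 = datetime(2020, 1, 30, 12, 0, 0)
database["s01_log"] = {
    t0 + timedelta(seconds=i): {"value": 20.0 + i} for i in range(3)
}

# Access narrows down by container name first, then by row key:
row = database["s01_log"][t0 + timedelta(seconds=2)]
```

<p>Narrowing by container name and then by key is what allows the Key-Value style lookup described above to stay fast even as the number of entities grows.</p>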
<h3 id="213-high-speed-processing-velocity"><span class="header-section-number">2.1.3</span> High-speed processing (velocity)</h3>
<p>A variety of architectural features are embedded in GridDB to achieve high-speed processing.</p>
<h4 id="2131-processing-is-carried-out-in-the-memory-space-as-much-as-possible"><span class="header-section-number">2.1.3.1</span> Processing is carried out in the memory space as much as possible</h4>
<p>When a system operates fully in-memory, with all data arranged in memory, there is little need to worry about disk access overhead. However, to process a volume of data too large to fit in memory, the data accessed by the application must be localized and access to data arranged on disk reduced as much as possible.</p>
<p>In order to localize data access from an application, GridDB provides a function to arrange related data in the same block as far as possible. Since data in the data block can be consolidated according to the hints provided in the data, the memory hit rate is raised during data access, thereby increasing the processing speed for data access. By setting hints for memory consolidation according to the access frequency and access pattern in the application, limited memory space can be used effectively for operation (Affinity function).</p>
<h4 id="2132-reduces-the-overhead"><span class="header-section-number">2.1.3.2</span> Reduces the overhead</h4>
<p>In order to minimize waiting time caused by locks or latches in a simultaneous access to the database, GridDB allocates exclusive memory and DB files to each CPU core and thread, so as to eliminate waiting time for exclusive and synchronization processing.</p>
<p><img src="img/feature_architecture.png" alt="Architecture" /></p>
<p>In addition, GridDB allows direct access between client and node: the client library caches the data arrangement when it accesses the database for the first time. Since the target data can then be accessed directly without going through the master node that manages the cluster's operating status and data arrangement, concentration of access on the master node is avoided and communication cost is reduced substantially.</p>
<p><img src="img/feature_client_access.png" alt="Access from a client" /></p>
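<p>The client-side caching described above can be sketched as follows. The class and catalog names are hypothetical, and the real client library is more involved (for example, it refreshes the cached arrangement on failover).</p>

```python
# Hedged sketch of client-side caching of the data arrangement: the
# client asks the master only on first access to a container, caches
# which node owns it, and then talks to that node directly.

master_catalog = {"sensors": "node1", "s01_log": "node2"}

class Client:
    def __init__(self, catalog):
        self._catalog = catalog   # stands in for the master node
        self._cache = {}          # container name -> owner node
        self.master_lookups = 0   # how often the master was consulted

    def locate(self, container):
        if container not in self._cache:      # first access only
            self.master_lookups += 1
            self._cache[container] = self._catalog[container]
        return self._cache[container]         # direct access afterwards

client = Client(master_catalog)
nodes = [client.locate("s01_log") for _ in range(5)]
```

<p>Even after five accesses the master is consulted only once in this model, which is the effect the paragraph above describes: repeated accesses go straight to the owning node.</p>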
<h4 id="2133-processing-in-parallel"><span class="header-section-number">2.1.3.3</span> Processing in parallel</h4>
<p>GridDB achieves high-speed processing through parallelism: in the drive engine, a request is divided into units that can be processed in parallel and executed by threads within a node and across nodes, and a single large data set is dispersed over multiple nodes (partitioning) so that it is processed in parallel between nodes.</p>
<h3 id="214-reliabilityavailability"><span class="header-section-number">2.1.4</span> Reliability/availability</h3>
<p>Data are duplicated in a cluster and the duplicated data, replicas, are located in multiple nodes. Replicas include master data, called an owner replica, and duplicated data called a backup. By using these replicas, processing can be continued in any of the nodes constituting a cluster even when a failure occurs. Special operating procedures are not necessary as the system will also automatically perform re-arrangement of the data after a node failure occurs (autonomous data arrangement). Data arranged in a failed node is restored from a replica and then the data is re-arranged so that the set number of replicas is reached automatically.</p>
<p>Replicas can be configured as duplex, triplex, or higher multiplicity according to the availability requirements.</p>
<p>Each node persists data update information to disk. Even if a failure occurs in the cluster system, all data registered and updated up to the failure can be restored without loss.</p>
<p>In addition, since the client also possesses cache information on the data arrangement and management, upon detecting a node failure, it will automatically perform a failover and data access can be continued using a replica.</p>
<p><img src="img/feature_durability.png" alt="High availability" /></p>
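<p>The owner/backup arrangement and failover can be pictured with a toy model. The placement below is hypothetical; GridDB's actual replica placement, promotion, and re-arrangement logic is more involved.</p>

```python
# Simplified model of owner/backup replicas: each partition keeps a
# list of nodes, with the first entry acting as the owner replica and
# the rest as backups. Node names and placement are illustrative.

replicas = {  # partition id -> [owner, backup, ...]
    0: ["node0", "node1"],
    1: ["node1", "node2"],
    2: ["node2", "node0"],
}

def fail_node(replicas, dead):
    """Drop a failed node; a surviving backup becomes the owner."""
    for pid, nodes in replicas.items():
        survivors = [n for n in nodes if n != dead]
        replicas[pid] = survivors   # first survivor is the new owner

fail_node(replicas, "node1")
owners = {pid: nodes[0] for pid, nodes in replicas.items()}
```

<p>After the failure of node1, every partition still has an owner among the surviving nodes, so processing can continue; in the real system the cluster would then restore the configured number of replicas automatically.</p>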
<h1 id="3-terminology"><span class="header-section-number">3</span> Terminology</h1>
<p>The table below describes the terms used in GridDB.</p>
<table>
<thead>
<tr class="header">
<th>Term</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Node</td>
<td>Refers to the individual server process to perform data management in GridDB.</td>
</tr>
<tr class="even">
<td>Cluster</td>
<td>Single or a set of nodes that perform data management together in an integrated manner.</td>
</tr>
<tr class="odd">
<td>Master node</td>
<td>Node to perform a cluster management process.</td>
</tr>
<tr class="even">
<td>Follower node</td>
<td>All other nodes in the cluster other than the master node.</td>
</tr>
<tr class="odd">
<td>number of nodes constituting a cluster</td>
<td>Refers to the number of nodes constituting a GridDB cluster. When starting GridDB for the first time, this number is used as a threshold for the cluster to become valid. (The cluster service starts when this number of nodes has joined the cluster.)</td>
</tr>
<tr class="even">
<td>number of nodes already participating in a cluster</td>
<td>Number of nodes currently in operation that have been incorporated into the cluster among the nodes constituting the GridDB cluster.</td>
</tr>
<tr class="odd">
<td>Block</td>
<td>A block is the data unit for persisting data to disk (in a process hereinafter referred to as a checkpoint) and is the smallest physical data management unit in GridDB. Data from multiple containers is arranged in a block. The block size is set in the cluster definition file before the initial startup of GridDB.</td>
</tr>
<tr class="even">
<td>Partition</td>
<td>Data management unit for arranging containers. The smallest unit of data arrangement in a cluster, and the unit of data movement and replication used to adjust the load balance between nodes (rebalancing) and to manage data replicas in case of failure.</td>
</tr>
<tr class="odd">
<td>Partition group</td>
<td>A group of multiple partitions; it corresponds to a data file in the file system when data is persisted to disk. One checkpoint file corresponds to one partition group. Partition groups are created according to the concurrency setting (/dataStore/concurrency) in the node definition file.</td>
</tr>
<tr class="even">
<td>Row</td>
<td>Refers to one row of data registered in a container or table. Multiple rows are registered in a container or table. A row consists of values of columns corresponding to the schema definition of the container (table).</td>
</tr>
<tr class="odd">
<td>Container (Table)</td>
<td>A container manages a set of rows. It is called a container when operated through the NoSQL I/F and a table when operated through the NewSQL I/F; both names refer to the same object. There are two types of containers: collections and timeseries containers.</td>
</tr>
<tr class="even">
<td>Collection (table)</td>
<td>One type of container (table) to manage rows having a general key.</td>
</tr>
<tr class="odd">
<td>Timeseries container (timeseries table)</td>
<td>One type of container (table) to manage rows having a timeseries key. Possesses a special function to handle timeseries data.</td>
</tr>
<tr class="even">
<td>Database file</td>
<td>A database file is a group of files, consisting of transaction log files and checkpoint files, persisted to an HDD or SSD. A transaction log file is updated every time the GridDB database is updated or a transaction occurs, whereas a checkpoint file is written at a specified interval.</td>
</tr>
<tr class="odd">
<td>Checkpoint file</td>
<td>A file into which the data of a partition group is written to disk. Updated information in memory is reflected to the file at the interval set in the node definition file (/checkpoint/checkpointInterval).</td>
</tr>
<tr class="even">
<td>Transaction log file</td>
<td>Update information of the transaction is saved sequentially as a log.</td>
</tr>
<tr class="odd">
<td>LSN (Log Sequence Number)</td>
<td>The update log sequence number assigned to each partition when it is updated in a transaction. The master node of a cluster configuration maintains the maximum LSN (MAXLSN) of each of the partitions held by each node.</td>
</tr>
<tr class="even">
<td>Replica</td>
<td>Replication is the process of creating an exact copy of the original data. One or more replicas of each partition are created and stored on multiple nodes. A replica takes one of two forms, master or backup: the former is the original (master) data, whereas the latter is used as a reference in case of failure.</td>
</tr>
<tr class="odd">
<td>Owner node</td>
<td>The node that can update the containers in a partition; among the replicated containers, it holds the container serving as the master.</td>
</tr>
<tr class="even">
<td>Backup node</td>
<td>A node that holds the containers for backup data among the replicated containers.</td>
</tr>
<tr class="odd">
<td>Definition file</td>
<td>The definition files comprise two types of parameter files: gs_cluster.json (hereinafter referred to as the cluster definition file), used when composing a cluster, and gs_node.json (hereinafter referred to as the node definition file), used to set the operations and resources of a node in a cluster. The term also covers the user definition file.</td>
</tr>
<tr class="even">
<td>Event log file</td>
<td>The file in which event logs of the GridDB server, including messages such as errors and warnings, are saved.</td>
</tr>
<tr class="odd">
<td>User definition file</td>
<td>The file in which users are registered. During initial installation, the admin user is registered.</td>
</tr>
<tr class="even">
<td>Cluster database</td>
<td>General term for all databases that can be accessed in a GridDB cluster system.</td>
</tr>
<tr class="odd">
<td>Database</td>
<td>Logical data management unit created in a cluster database. A public database is created in a cluster database by default.</td>
</tr>
<tr class="even">
<td>Failover</td>
<td>A mechanism that, when a failure occurs in an operating cluster, lets a backup node automatically take over the function and continue processing.</td>
</tr>
<tr class="odd">
<td>Client failover</td>
<td>A mechanism by which, when a failure occurs in an operating cluster, the client-side API automatically reconnects to a backup node as a retry process so that processing can continue.</td>
</tr>
<tr class="even">
<td>Data Affinity</td>
<td>A function to raise the memory hit rate by placing highly correlated data in a container in the same block and localizing data access.</td>
</tr>
<tr class="odd">
<td>Placement of container/table based on node affinity</td>
<td>A function to reduce the network load during data access by placing highly correlated containers in the same node.</td>
</tr>
</tbody>
</table>
<h1 id="4-structure-of-griddb"><span class="header-section-number">4</span> Structure of GridDB</h1>
<p>Describes the data model and cluster operating structure in GridDB.</p>
<h2 id="41-composition-of-a-cluster"><span class="header-section-number">4.1</span> Composition of a cluster</h2>
<p>GridDB is operated by clusters which are composed of multiple nodes. Before accessing the database from an application system, nodes must be started and the cluster must be constituted, that is, cluster service is executed.</p>
<p>A cluster is formed and cluster service is started when a number of nodes specified by the user joins the cluster. Cluster service will not be started and access from the application will not be possible until all nodes constituting a cluster have joined the cluster.</p>
<p>A cluster needs to be constituted even when operating GridDB with a single node. In this case, the number of nodes constituting a cluster is 1. A composition that operates a single node is known as a single composition.</p>
<p><img src="img/arc_clusterNameCount.png" alt="Cluster name and number of nodes constituting a cluster" /></p>
<p>A cluster name is used to distinguish a cluster from other clusters so as to compose a cluster using the right nodes selected from multiple GridDB nodes on a network. Using cluster names, multiple GridDB clusters can be composed in the same network. A cluster is composed of nodes with the following features in common: cluster name, the number of nodes constituting a cluster, and the connection method setting. A cluster name needs to be set in the cluster definition file for each node constituting a cluster, and needs to be specified as a parameter when composing a cluster as well.</p>
<p>The method of constituting a cluster using multicast is called multicast method. See <a href="#cluster_configuration_methods">Cluster configuration methods</a> for details.</p>
<p>The operation of a cluster composition is shown below.</p>
<p><img src="img/arc_clusterConfigration.png" alt="Operation of a cluster composition" /></p>
<p>To start up a node and compose a cluster, the operation commands gs_startnode/gs_joincluster are used. In addition, there is a service control function to start up the nodes at the same time as the OS and to compose the cluster.</p>
<p>To compose a cluster, the number of nodes joining a cluster (number of nodes constituting a cluster) and the cluster name must be the same for all the nodes joining the cluster.</p>
<p>Even if a node fails and is separated from the cluster after cluster operation has started, cluster service will continue as long as a majority of the nodes remain in the cluster.</p>
<p>Because cluster operation continues as long as a majority of the nodes are in operation, a node can be detached from the cluster for maintenance while keeping the cluster in operation, and returned to the cluster over the network after the maintenance. Nodes can also be added over the network to reinforce the system.</p>
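<p>The majority rule described above can be sketched as a simple check. The helper below is hypothetical, not a GridDB API; it only illustrates the quorum condition.</p>

```python
# Hypothetical helper (not a GridDB API) expressing the rule above:
# cluster service continues while a strict majority of the designated
# (constituting) nodes participates in the cluster.
def can_continue_service(designated_count: int, active_count: int) -> bool:
    return active_count > designated_count // 2

# A 5-node cluster survives the loss of up to 2 nodes.
print(can_continue_service(5, 3))  # True
# At half the nodes or fewer, service is interrupted to avoid a split brain.
print(can_continue_service(4, 2))  # False
```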
<p>The following two networks can be separated: the network that communicates within the cluster and the network dedicated to client communication.</p>
<h3 id="411-status-of-node"><span class="header-section-number">4.1.1</span> Status of node</h3>
<p>A node has several statuses that represent its state. The status changes upon user command execution or through internal processing of the node. The <a href="#status_of_cluster">status of a cluster</a> is determined by the statuses of the nodes in the cluster.</p>
<p>This section explains types of node status, status transition, and how to check the node status.</p>
<ul>
<li><p>Types of node status</p>
<table>
<thead>
<tr class="header">
<th>Node status</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>STOP</td>
<td>The GridDB server has not been started in the node.</td>
</tr>
<tr class="even">
<td>STARTING</td>
<td>The GridDB server is starting in the node. Depending on the previous operating state, start-up processes such as recovery processing of the database are carried out. The only possible access from a client is checking the status of the system with a gs_stat command. Access from the application is not possible.</td>
</tr>
<tr class="odd">
<td>STARTED</td>
<td>The GridDB server has been started in the node. However, access from the application is not possible because the node has not joined a cluster. To join the node to the cluster, execute a cluster operation command such as gs_joincluster.</td>
</tr>
<tr class="even">
<td>WAIT</td>
<td>The node is waiting for the cluster to be composed: it has been instructed to join a cluster, but the number of nodes constituting a cluster has not yet been reached. WAIT is also the node status when the number of nodes participating in the cluster drops below a majority and the cluster service stops.</td>
</tr>
<tr class="odd">
<td>SERVICING</td>
<td>A cluster has been constituted and access from the application is possible. However, access may be delayed while partitions are synchronized between nodes, for example after a restart following a failure or a node stop.</td>
</tr>
<tr class="even">
<td>STOPPING</td>
<td>Intermediate state in which a node has been instructed to stop but has not stopped yet.</td>
</tr>
<tr class="odd">
<td>ABNORMAL</td>
<td>The state in which an error is detected by the node in SERVICING state or during state transition. A node in the ABNORMAL state will be automatically separated from the cluster. After collecting system operation information, it is necessary to forcibly stop and restart the node in the ABNORMAL state. By re-starting the system, recovery processing will be automatically carried out.</td>
</tr>
</tbody>
</table></li>
<li><p>Transition in the node status</p>
<p><img src="img/arc_NodeStatus.png" alt="Node status" /></p>
<table>
<thead>
<tr class="header">
<th>State transition</th>
<th>State transition event</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>①</td>
<td>Command execution</td>
<td>Start a node by executing a command such as gs_startnode.</td>
</tr>
<tr class="even">
<td>②</td>
<td>System</td>
<td>Status changes automatically at the end of recovery processing or loading of database files.</td>
</tr>
<tr class="odd">
<td>③</td>
<td>Command execution</td>
<td>Join a node to a cluster by executing a command such as gs_joincluster or gs_appendcluster.</td>
</tr>
<tr class="even">
<td>④</td>
<td>System</td>
<td>Status changes automatically when the required number of component nodes join a cluster.</td>
</tr>
<tr class="odd">
<td>⑤</td>
<td>System</td>
<td>Status changes automatically when some of the nodes constituting the cluster are detached from service due to a failure or other reasons, and the nodes participating in the cluster no longer constitute a majority of the value set in the definition file.</td>
</tr>
<tr class="even">
<td>⑥</td>
<td>Command execution</td>
<td>Detach a node from a cluster by executing a command such as gs_leavecluster.</td>
</tr>
<tr class="odd">
<td>⑦</td>
<td>Command execution</td>
<td>Detach a node from a cluster by executing a command such as gs_leavecluster or gs_stopcluster.</td>
</tr>
<tr class="even">
<td>⑧</td>
<td>Command execution</td>
<td>Stop a node by executing a command such as gs_stopnode.</td>
</tr>
<tr class="odd">
<td>⑨</td>
<td>System</td>
<td>Stops the server process once the final processing ends.</td>
</tr>
<tr class="even">
<td>⑩</td>
<td>System</td>
<td>Detached state due to a system failure. In this state, the node needs to be stopped by force once.</td>
</tr>
</tbody>
</table></li>
<li><p>How to check the node status</p>
<p>The node status is determined by the combination of the operation status of the node and the role of the node.</p>
<p>The operation status and the role of a node can be checked from the result of the gs_stat command, which is in JSON format: for the operation status, check the value of /cluster/nodeStatus; for the role, check the value of /cluster/clusterStatus.</p>
<p>The table below shows the node status, determined by the combination of the operation status of a node and the role of a node.</p>
<table>
<thead>
<tr class="header">
<th>Node status</th>
<th>Operation status of a node<br />
(/cluster/nodeStatus)</th>
<th>Role of a node<br />
(/cluster/clusterStatus)</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>STOP</td>
<td>- (Connection error of gs_stat)</td>
<td>- (Connection error of gs_stat)</td>
</tr>
<tr class="even">
<td>STARTING</td>
<td>INACTIVE</td>
<td>SUB_CLUSTER</td>
</tr>
<tr class="odd">
<td>STARTED</td>
<td>INACTIVE</td>
<td>SUB_CLUSTER</td>
</tr>
<tr class="even">
<td>WAIT</td>
<td>ACTIVE</td>
<td>SUB_CLUSTER</td>
</tr>
<tr class="odd">
<td>SERVICING</td>
<td>ACTIVE</td>
<td>MASTER or FOLLOWER</td>
</tr>
<tr class="even">
<td>STOPPING</td>
<td>NORMAL_SHUTDOWN</td>
<td>SUB_CLUSTER</td>
</tr>
<tr class="odd">
<td>ABNORMAL</td>
<td>ABNORMAL</td>
<td>SUB_CLUSTER</td>
</tr>
</tbody>
</table>

<ul>
<li><p>Operation status of a node</p>
<p>The table below shows the operation status of a node. Each state is expressed as the value of /cluster/nodeStatus of the gs_stat command.</p>
<table>
<thead>
<tr class="header">
<th>Operation status of a node</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>ACTIVE</td>
<td>Active state</td>
</tr>
<tr class="even">
<td>ACTIVATING</td>
<td>In transition to an active state</td>
</tr>
<tr class="odd">
<td>INACTIVE</td>
<td>Non-active state</td>
</tr>
<tr class="even">
<td>DEACTIVATING</td>
<td>In transition to a non-active state.</td>
</tr>
<tr class="odd">
<td>NORMAL_SHUTDOWN</td>
<td>Under shutdown process</td>
</tr>
<tr class="even">
<td>ABNORMAL</td>
<td>Abnormal state</td>
</tr>
</tbody>
</table></li>
<li><p>Role of a node</p>
<p>The table below shows the role of a node. Each state is expressed as the value of /cluster/clusterStatus of the gs_stat command.</p>
<p>A node has two types of roles: "master" and "follower". To start a cluster, one of the nodes which constitute the cluster needs to be a "master." The master manages the whole cluster. All the nodes other than the master become "followers." A follower performs cluster processes, such as a synchronization, following the directions from the master.</p>
<table>
<thead>
<tr class="header">
<th>Role of a node</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>MASTER</td>
<td>Master</td>
</tr>
<tr class="even">
<td>FOLLOWER</td>
<td>Follower</td>
</tr>
<tr class="odd">
<td>SUB_CLUSTER/SUB_MASTER</td>
<td>Role undefined</td>
</tr>
</tbody>
</table></li>
</ul></li>
</ul>
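<p>The lookup table above can be expressed as a small helper that derives the node status from the two gs_stat fields. The code is illustrative only, not part of GridDB; gs_stat_json stands for the parsed JSON output of the command, and STOP cannot be observed this way because gs_stat fails to connect.</p>

```python
# Illustrative helper (not a GridDB API) reproducing the lookup table
# above; gs_stat_json is the parsed output of "gs_stat -u admin/admin".
def node_status(gs_stat_json: dict) -> str:
    op = gs_stat_json["cluster"]["nodeStatus"]       # operation status
    role = gs_stat_json["cluster"]["clusterStatus"]  # role
    if op == "ACTIVE":
        return "SERVICING" if role in ("MASTER", "FOLLOWER") else "WAIT"
    if op == "INACTIVE":
        # STARTING and STARTED share the same field values; in practice
        # they are distinguished by whether recovery is still running.
        return "STARTING/STARTED"
    if op == "NORMAL_SHUTDOWN":
        return "STOPPING"
    if op == "ABNORMAL":
        return "ABNORMAL"
    return "UNKNOWN"

stat = {"cluster": {"nodeStatus": "ACTIVE", "clusterStatus": "FOLLOWER"}}
print(node_status(stat))  # SERVICING
```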
<p><span id="status_of_cluster"></span></p>
<h3 id="412-status-of-cluster"><span class="header-section-number">4.1.2</span> Status of cluster</h3>
<p>The cluster operating status is determined by the state of each node, and the status may be one of 3 states - IN OPERATION/INTERRUPTED/STOPPED.</p>
<p>During the initial system construction, cluster service starts after all the nodes, the number of which was specified by the user as the number of nodes constituting a cluster, have joined the cluster.</p>
<p>During initial cluster construction, the state in which the cluster is waiting to be composed when all the nodes that make up the cluster have not been incorporated into the cluster is known as [INIT_WAIT]. When the number of nodes constituting a cluster has joined the cluster, the state will automatically change to the operating state.</p>
<p>Operation status includes two states, [STABLE] and [UNSTABLE].</p>
<ul>
<li>[STABLE] state
<ul>
<li>State in which a cluster has been formed by the number of nodes specified in the number of nodes constituting a cluster and service can be provided in a stable manner.</li>
</ul></li>
<li>[UNSTABLE] state
<ul>
<li>A cluster in this state is joined by fewer nodes than the number of nodes constituting the cluster, but more than half of the constituting nodes are in operation.</li>
<li>Cluster service will continue for as long as a majority of the number of nodes constituting a cluster is in operation.</li>
</ul></li>
</ul>
<p>A cluster can be operated in an [UNSTABLE] state as long as a majority of the nodes are in operation even if some nodes are detached from a cluster due to maintenance and for other reasons.</p>
<p>Cluster service is interrupted automatically in order to avoid a split brain when the number of nodes participating in the cluster falls to half or fewer of the number of nodes constituting the cluster. The status of the cluster will become [WAIT].</p>
<ul>
<li><p>What is split brain?</p>
<p>A split brain is a situation in which multiple cluster systems performing the same process provide service simultaneously because the system has been divided by a hardware or network failure in a tightly coupled system whose interconnected nodes work like a single server. If operation continues in this state, data saved as replicas in multiple clusters will each be treated as master data, causing data inconsistency.</p></li>
</ul>
<p>To resume the cluster service from a [WAIT] state, add the node that recovered from the abnormal state, or a new node, using a node addition operation. Once the cluster is joined by all the nodes specified in "the number of nodes constituting a cluster", the status becomes [STABLE] and the service resumes.</p>
<p>Even when the cluster service has been interrupted because the number of participating nodes fell below a majority due to node failures, the cluster service will restart automatically once a majority of the nodes join the cluster again, by adding new nodes and/or returning the nodes recovered from the errors to the cluster.</p>
<p><img src="img/arc_clusterStatus.png" alt="Cluster status" /></p>
<p>A STABLE state is a state in which the value of the json parameter shown in gs_stat, /cluster/activeCount, is equal to the value of /cluster/designatedCount.</p>
<pre class="example"><code>$ gs_stat -u admin/admin
{
    &quot;checkpoint&quot;: {
        &quot;archiveLog&quot;: 0,
        :
        :
    },
    &quot;cluster&quot;: {
        &quot;activeCount&quot;: 4,                      // Number of nodes in operation within the cluster
        &quot;clusterName&quot;: &quot;test-cluster&quot;,
        &quot;clusterStatus&quot;: &quot;MASTER&quot;,
        &quot;designatedCount&quot;: 4,                  // Number of nodes constituting the cluster
        &quot;loadBalancer&quot;: &quot;ACTIVE&quot;,
        &quot;master&quot;: {
            &quot;address&quot;: &quot;192.168.0.1&quot;,
            &quot;port&quot;: 10040
        },
        &quot;nodeList&quot;: [                          // List of the nodes constituting the cluster
            {
                &quot;address&quot;: &quot;192.168.0.1&quot;,
                &quot;port&quot;: 10040
            },
            {
                &quot;address&quot;: &quot;192.168.0.2&quot;,
                &quot;port&quot;: 10040
            },
            {
                &quot;address&quot;: &quot;192.168.0.3&quot;,
                &quot;port&quot;: 10040
            },
            {
                &quot;address&quot;: &quot;192.168.0.4&quot;,
                &quot;port&quot;: 10040
            }
        ],
        :
        :
</code></pre>
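<p>As a sketch, the STABLE condition can be checked programmatically from the parsed gs_stat output. The field names are as in the example above; the helper itself is illustrative, not a GridDB tool.</p>

```python
import json

# Illustrative check of the STABLE condition: the cluster is STABLE
# when /cluster/activeCount equals /cluster/designatedCount.
def is_stable(gs_stat_json: dict) -> bool:
    cluster = gs_stat_json["cluster"]
    return cluster["activeCount"] == cluster["designatedCount"]

stat = json.loads('{"cluster": {"activeCount": 4, "designatedCount": 4}}')
print(is_stable(stat))  # True
```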
<h3 id="413-status-of-partition"><span class="header-section-number">4.1.3</span> Status of partition</h3>
<p>The partition status represents the overall status of the partitions in a cluster, showing whether the partitions in an operating cluster are accessible and whether the partitions are balanced.</p>
<table>
<thead>
<tr class="header">
<th>Partition status</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>NORMAL</td>
<td>All the partitions are in normal states where all of them are placed as planned.</td>
</tr>
<tr class="even">
<td>NOT_BALANCE</td>
<td>No replica or owner is missing, but the partition placement is unbalanced.</td>
</tr>
<tr class="odd">
<td>REPLICA_LOSS</td>
<td>Replica data is missing in some partitions.<br />
(Availability of the partition is reduced, that is, the node cannot be detached from the cluster.)</td>
</tr>
<tr class="even">
<td>OWNER_LOSS</td>
<td>Owner data is missing in some partitions.<br />
(The data of the partition are not accessible.)</td>
</tr>
<tr class="odd">
<td>INITIAL</td>
<td>The initial state, in which no partition has joined the cluster.</td>
</tr>
</tbody>
</table>

<p>The partition status can be checked by executing the gs_stat command against a master node. (The state is expressed as the value of /cluster/partitionStatus.)</p>
<pre class="example"><code>$ gs_stat -u admin/admin
{
    :
    :
    &quot;cluster&quot;: {
        :
        &quot;nodeStatus&quot;: &quot;ACTIVE&quot;,
        &quot;notificationMode&quot;: &quot;MULTICAST&quot;,
        &quot;partitionStatus&quot;: &quot;NORMAL&quot;,
        :
</code></pre>
<p>[Notes]</p>
<ul>
<li>The value of /cluster/partitionStatus of the nodes other than a master node may not be correct. Be sure to check the value of a master node.</li>
</ul>
<p><span id="cluster_configuration_methods"></span></p>
<h2 id="42-cluster-configuration-methods"><span class="header-section-number">4.2</span> Cluster configuration methods</h2>
<p>A cluster consists of one or more nodes connected in a network. Each node maintains a list of the other nodes' addresses for communication purposes.</p>
<p>GridDB supports three cluster configuration methods for configuring the address list. Different methods can be used depending on the environment or use case. The connection method of a client or operational tool may also differ depending on the configuration method.</p>
<p>Three cluster configuration methods are available: Multicast method, Fixed list method and Provider method. Multicast method is recommended.</p>
<p>Fixed list or provider method can be used in the environment where multicast is not supported.</p>
<ul>
<li>Multicast method
<ul>
<li>Node discovery is performed via multicast to automatically configure the address list.</li>
</ul></li>
<li>Fixed list method
<ul>
<li>A fixed address list is saved in the cluster definition file.</li>
</ul></li>
<li>Provider method
<ul>
<li>The address list supplied by an address provider is used to configure the cluster.</li>
<li>The address provider can be configured as a Web service or as static content.</li>
</ul></li>
</ul>
<p>The table below compares the three cluster configuration methods.</p>
<table>
<thead>
<tr class="header">
<th>Property</th>
<th>Multicast method (recommended)</th>
<th>Fixed list method</th>
<th>Provider method</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Parameters</td>
<td>- Multicast address and port</td>
<td>- List of IP addresses and ports of all the nodes</td>
<td>- URL of the address provider</td>
</tr>
<tr class="even">
<td>Use case</td>
<td>- When multicast is supported</td>
<td>- When multicast is not supported<br />
- When the system scale can be estimated accurately</td>
<td>- When multicast is not supported<br />
- When the system scale cannot be estimated</td>
</tr>
<tr class="odd">
<td>Cluster operation</td>
<td>- Perform automatic discovery of nodes at a specified time interval</td>
<td>- Set a common address list for all nodes<br />
- Read that list only once at node startup</td>
<td>- Obtain the address list at a specified time interval from address provider</td>
</tr>
<tr class="even">
<td>Pros.</td>
<td>- No need to restart the cluster when adding nodes</td>
<td>- Configuration mistakes are prevented by a consistency check of the address list</td>
<td>- No need to restart the cluster when adding nodes</td>
</tr>
<tr class="odd">
<td>Cons.</td>
<td>- Multicast is required for client connection</td>
<td>- Need to restart cluster when adding nodes<br />
- Need to update the connection setting of the client</td>
<td>- Need to ensure the availability of the address provider</td>
</tr>
</tbody>
</table>

<h3 id="421-setting-up-cluster-configuration-files"><span class="header-section-number">4.2.1</span> Setting up cluster configuration files</h3>
<p>The fixed list method or the provider method can be used in environments where multicast is not supported. The network settings for these two methods are as follows.</p>
<h4 id="4211-fixed_list-fixed-list-method"><span class="header-section-number">4.2.1.1</span> FIXED_LIST: fixed list method</h4>
<p>When a fixed address list is given to start a node, the list is used to compose the cluster.</p>
<p>When composing a cluster using the fixed list method, configure the parameters in the cluster definition file.</p>
<p><strong>cluster definition file</strong></p>
<table>
<thead>
<tr class="header">
<th>Property</th>
<th>JSON Data type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>/cluster/notificationMember</td>
<td>string</td>
<td>Specify the address list when using the fixed list method as the cluster configuration method.</td>
</tr>
</tbody>
</table>
<p>A configuration example of a cluster definition file is shown below.</p>
<pre class="example"><code>{
                             :
                             :
    &quot;cluster&quot;:{
        &quot;clusterName&quot;:&quot;yourClusterName&quot;,
        &quot;replicationNum&quot;:2,
        &quot;heartbeatInterval&quot;:&quot;5s&quot;,
        &quot;loadbalanceCheckInterval&quot;:&quot;180s&quot;,
        &quot;notificationMember&quot;: [
            {
                &quot;cluster&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10010},
                &quot;sync&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10020},
                &quot;system&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10040},
                &quot;transaction&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10001},
                &quot;sql&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:20001}
            },
            {
                &quot;cluster&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10010},
                &quot;sync&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10020},
                &quot;system&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10040},
                &quot;transaction&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10001},
                &quot;sql&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:20001}
            },
            {
                &quot;cluster&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10010},
                &quot;sync&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10020},
                &quot;system&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10040},
                &quot;transaction&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10001},
                &quot;sql&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:20001}
            }
        ]
    },
                             :
                             :
}
</code></pre>
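<p>A sanity check over such a configuration can be sketched as follows. REQUIRED_MODULES and validate_member_list are hypothetical helpers, not GridDB tools, and the inclusion of the sql module assumes SQL is in use (it is needed only in GridDB Advanced Edition).</p>

```python
# Hypothetical sanity check (not a GridDB tool) for the fixed-list
# configuration above: every /cluster/notificationMember entry must
# define the same modules, each with an address and a port.
REQUIRED_MODULES = {"cluster", "sync", "system", "transaction", "sql"}

def validate_member_list(cluster_def: dict) -> list:
    errors = []
    for i, member in enumerate(cluster_def["cluster"]["notificationMember"]):
        missing = REQUIRED_MODULES - member.keys()
        if missing:
            errors.append(f"entry {i}: missing modules {sorted(missing)}")
        for name, endpoint in member.items():
            if "address" not in endpoint or "port" not in endpoint:
                errors.append(f"entry {i}: module {name} lacks address/port")
    return errors

conf = {"cluster": {"notificationMember": [
    {"cluster": {"address": "172.17.0.44", "port": 10010},
     "sync": {"address": "172.17.0.44", "port": 10020},
     "system": {"address": "172.17.0.44", "port": 10040},
     "transaction": {"address": "172.17.0.44", "port": 10001},
     "sql": {"address": "172.17.0.44", "port": 20001}}]}}
print(validate_member_list(conf))  # []
```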
<h4 id="4212-provider-provider-method"><span class="header-section-number">4.2.1.2</span> PROVIDER: provider method</h4>
<p>Get the address list supplied by the address provider to perform cluster configuration.</p>
<p>When composing a cluster using the provider method, configure the parameters in the cluster definition file.</p>
<p><strong>cluster definition file</strong></p>
<table>
<thead>
<tr class="header">
<th>Property</th>
<th>JSON Data type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>/cluster/notificationProvider/url</td>
<td>string</td>
<td>Specify the URL of the address provider when using the provider method as the cluster configuration method.</td>
</tr>
<tr class="even">
<td>/cluster/notificationProvider/updateInterval</td>
<td>string</td>
<td>Specify the interval for obtaining the list from the address provider. Specify a value of at least 1 second and less than 2<sup>31</sup> seconds.</td>
</tr>
</tbody>
</table>
<p>A configuration example of a cluster definition file is shown below.</p>
<pre class="example"><code>{
                             :
                             :
    &quot;cluster&quot;:{
        &quot;clusterName&quot;:&quot;yourClusterName&quot;,
        &quot;replicationNum&quot;:2,
        &quot;heartbeatInterval&quot;:&quot;5s&quot;,
        &quot;loadbalanceCheckInterval&quot;:&quot;180s&quot;,
        &quot;notificationProvider&quot;:{
            &quot;url&quot;:&quot;http://example.com/notification/provider&quot;,
            &quot;updateInterval&quot;:&quot;30s&quot;
        }
    },
                             :
                             :
}
</code></pre>
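<p>The updateInterval bounds can be checked with a small parser sketch. Restricting the unit to seconds (the "s" suffix) is an assumption based on the examples in the definition files; the helpers are illustrative, not GridDB tools.</p>

```python
import re

# Sketch: parse duration strings such as "30s" and check the documented
# bounds for /cluster/notificationProvider/updateInterval
# (at least 1 second and less than 2^31 seconds).
def parse_seconds(value: str) -> int:
    match = re.fullmatch(r"(\d+)s", value)
    if not match:
        raise ValueError(f"unsupported duration: {value!r}")
    return int(match.group(1))

def valid_update_interval(value: str) -> bool:
    return 1 <= parse_seconds(value) < 2 ** 31

print(valid_update_interval("30s"))  # True
```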
<p>The address provider can be configured as a Web service or as static content. It needs to meet the following specifications.</p>
<ul>
<li>Compatible with the GET method.</li>
<li>When the URL is accessed, the address list of the nodes of the cluster whose cluster definition file contains that URL is returned as a response.
<ul>
<li>Response body: Same JSON as the contents of the node list specified in the fixed list method</li>
<li>Response header: Including Content-Type:application/json</li>
</ul></li>
</ul>
<p>An example of a response sent from the address provider is as follows.</p>
<pre class="example"><code>$ curl http://example.com/notification/provider
[
    {
        &quot;cluster&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10010},
        &quot;sync&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10020},
        &quot;system&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10040},
        &quot;transaction&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:10001},
        &quot;sql&quot;: {&quot;address&quot;:&quot;172.17.0.44&quot;, &quot;port&quot;:20001}
    },
    {
        &quot;cluster&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10010},
        &quot;sync&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10020},
        &quot;system&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10040},
        &quot;transaction&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:10001},
        &quot;sql&quot;: {&quot;address&quot;:&quot;172.17.0.45&quot;, &quot;port&quot;:20001}
    },
    {
        &quot;cluster&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10010},
        &quot;sync&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10020},
        &quot;system&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10040},
        &quot;transaction&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:10001},
        &quot;sql&quot;: {&quot;address&quot;:&quot;172.17.0.46&quot;, &quot;port&quot;:20001}
    }
]
</code></pre>
<p>[Note]</p>
<ul>
<li>For each address and port, specify the serviceAddress and servicePort values of the corresponding module (cluster, sync, etc.) in the node definition file.</li>
<li>The sql module is needed only in GridDB Advanced Edition.</li>
<li>Set only one of /cluster/notificationAddress, /cluster/notificationMember, and /cluster/notificationProvider in the cluster definition file, to match the cluster configuration method used.</li>
</ul>
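<p>As a sketch of the checks above, the following Python fragment validates a parsed provider response. The function name and validation details are illustrative, not part of GridDB; the required module names follow the node list format shown in the example response.</p>

```python
# Hypothetical validator for an address-provider response: checks that every
# node entry carries an address/port pair for each required module.
# The "sql" module is optional here (needed only in GridDB Advanced Edition).
REQUIRED_MODULES = ("cluster", "sync", "system", "transaction")

def validate_provider_response(nodes):
    """Return True if nodes is a non-empty list of well-formed node entries."""
    if not isinstance(nodes, list) or not nodes:
        return False
    for node in nodes:
        for module in REQUIRED_MODULES:
            entry = node.get(module)
            if not isinstance(entry, dict):
                return False
            if not isinstance(entry.get("address"), str):
                return False
            if not isinstance(entry.get("port"), int):
                return False
    return True
```

<p>A static file served with Content-Type: application/json whose parsed body passes this check satisfies the response-body requirement above.</p>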
<p><span id="data_model"></span></p>
<h1 id="5-data-model"><span class="header-section-number">5</span> Data model</h1>
<p>GridDB adopts a unique Key-Container data model that resembles the Key-Value model. It has the following features.</p>
<ul>
<li>A container, a concept resembling an RDB table, has been introduced to group Key-Value data.</li>
<li>A schema to define the data type for the container can be set. An index can be set in a column.</li>
<li>Transactions can be carried out on a row basis within the container. In addition, ACID is guaranteed on a container basis.</li>
</ul>
<p><img src="img/arc_DataModel.png" alt="Data model" /></p>
<p>GridDB manages data on a block, container, table, row, partition, and partition group basis.</p>
<ul>
<li><p>Block</p>
<p>A block is the data unit for persisting data to disk (a process hereinafter referred to as a checkpoint) and is the smallest physical data management unit in GridDB. Data of multiple containers is arranged in a block. The block size is set in the cluster definition file before the initial startup of GridDB.</p>
<p>Because a database file is created during the initial startup of the system, the block size cannot be changed after GridDB is first started.</p></li>
<li><p>Container (Table)</p>
<p>A container is the data structure that serves as an interface with the user and manages a set of rows; its data is stored across multiple blocks. There are two container types: collection (table) and timeseries container (timeseries table).</p>
<p>Before registering data from an application, a container (table) must be created beforehand. Data is registered into a container (table).</p></li>
<li><p>Row</p>
<p>A row refers to a row of data to be registered in a container or table. Multiple rows can be registered in a container or table but this does not mean that data is arranged in the same block. Depending on the registration and update timing, data is arranged in suitable blocks within partitions.</p>
<p>A row includes columns of more than one data type.</p></li>
<li><p>Partition</p>
<p>A partition is a data management unit that contains one or more containers or tables.</p>
<p>A partition is the unit of data placement across the cluster, used to move data for load balancing between nodes and to multiplex data (replicas) against failures. Replicas are arranged on the nodes composing the cluster on a partition basis.</p>
<p>A node that can update a container in a partition is called an owner node and one owner node is allocated to one partition. A node that maintains replicas other than owner nodes is a backup node. Master data and multiple backup data exist in a partition, depending on the number of replicas set.</p>
<p>The relationship between a container and a partition is persistent and the partition which has a specific container is not changed. The relationship between a partition and a node is temporary and the autonomous data placement may cause partition migration to another node.</p></li>
<li><p>Partition group</p>
<p>A group of multiple partitions is known as a partition group.</p>
<p>Data maintained by a partition group is saved to the OS disk as physical database files. The number of partition groups depends on the degree of parallelism of the database processing threads executed by the node.</p></li>
</ul>
<p><img src="img/arc_DataPieces.png" alt="Data management unit" /></p>
<p><span id="label_container"></span></p>
<h2 id="51-container"><span class="header-section-number">5.1</span> Container</h2>
<p>To register and search for data in GridDB, a container (table) that stores the data needs to be created. A container is the data structure that serves as an interface with the user and manages a set of rows.</p>
<p>The naming rules for containers (tables) are the same as those for databases.</p>
<ul>
<li>A string consisting of alphanumeric characters, underscores, hyphens, dots, slashes and equal signs can be specified. The container name must not start with a number.</li>
<li>Names are case sensitive, but a container (table) cannot be created if its name matches that of an existing container when compared case-insensitively.</li>
</ul>
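<p>The naming rules can be sketched as a small Python check. The regular expression and function below are an illustration of the rules just listed, not the server's actual validation logic.</p>

```python
import re

# Allowed characters: alphanumerics, underscore, hyphen, dot, slash, equal;
# the name must not start with a digit (negative lookahead).
NAME_RE = re.compile(r'^(?![0-9])[0-9A-Za-z_\-./=]+$')

def is_valid_container_name(name, existing_names=()):
    """Check the character rules and case-insensitive uniqueness."""
    if not NAME_RE.match(name):
        return False
    # Names are case sensitive, but uniqueness is judged case-insensitively.
    return name.lower() not in {n.lower() for n in existing_names}
```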
<h3 id="511-type"><span class="header-section-number">5.1.1</span> Type</h3>
<p>There are two container (table) types. A timeseries container (timeseries table) is suited to managing data that occurs over time together with its occurrence time, while a collection (table) is suited to managing a variety of data.</p>
<h3 id="512-data-type"><span class="header-section-number">5.1.2</span> Data type</h3>
<p>A schema can be set in a container (table). The data types that can be registered in a container (table) are basic data types and the array data type.</p>
<h4 id="5121-basic-data-types"><span class="header-section-number">5.1.2.1</span> Basic data types</h4>
<p>The basic data types that can be registered in a container (table) are described below. A basic data type cannot be expressed by a combination of other data types.</p>
<table>
<thead>
<tr class="header">
<th>Data type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>BOOL</td>
<td>True or false</td>
</tr>
<tr class="even">
<td>STRING</td>
<td>Composed of an arbitrary number of characters using the unicode code point</td>
</tr>
<tr class="odd">
<td>BYTE</td>
<td>Integer value from -2<sup>7</sup> to 2<sup>7</sup>-1 (8 bits)</td>
</tr>
<tr class="even">
<td>SHORT</td>
<td>Integer value from -2<sup>15</sup> to 2<sup>15</sup>-1 (16 bits)</td>
</tr>
<tr class="odd">
<td>INTEGER</td>
<td>Integer value from -2<sup>31</sup> to 2<sup>31</sup>-1 (32 bits)</td>
</tr>
<tr class="even">
<td>LONG</td>
<td>Integer value from -2<sup>63</sup> to 2<sup>63</sup>-1 (64 bits)</td>
</tr>
<tr class="odd">
<td>FLOAT</td>
<td>Single precision (32 bits) floating point number defined in IEEE754</td>
</tr>
<tr class="even">
<td>DOUBLE</td>
<td>Double precision (64 bits) floating point number defined in IEEE754</td>
</tr>
<tr class="odd">
<td>TIMESTAMP</td>
<td>Data type expressing date and time. The format maintained in the database is UTC, with accuracy in milliseconds</td>
</tr>
<tr class="even">
<td>GEOMETRY</td>
<td>Data type to represent a space structure</td>
</tr>
<tr class="odd">
<td>BLOB</td>
<td>Data type for binary data such as images, audio, etc.</td>
</tr>
</tbody>
</table>
<p>The following restrictions apply to the size of data that can be managed for the STRING, GEOMETRY and BLOB types. The limit varies according to the block size, the input/output unit of the database, set in the GridDB definition file (gs_node.json).</p>
<table>
<thead>
<tr class="header">
<th>Data type</th>
<th>Block size (64KB)</th>
<th>Block size (1MB～32MB)</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>STRING</td>
<td>Maximum 31KB (in UTF-8 encoding)</td>
<td>Maximum 128KB (in UTF-8 encoding)</td>
</tr>
<tr class="even">
<td>GEOMETRY</td>
<td>Maximum 31KB (in the internal storage format)</td>
<td>Maximum 128KB (in the internal storage format)</td>
</tr>
<tr class="odd">
<td>BLOB</td>
<td>Maximum 1GB - 1Byte</td>
<td>Maximum 1GB - 1Byte</td>
</tr>
</tbody>
</table>
<p><strong>GEOMETRY-type (Spatial-type)</strong></p>
<p>GEOMETRY (spatial) data is often used in map information systems. It is available only through the NoSQL interface and is not supported by the NewSQL interface.</p>
<p>GEOMETRY data is described using WKT (Well-known text). WKT is formulated by the Open Geospatial Consortium (OGC), a nonprofit organization promoting standardization of geospatial information. In GridDB, spatial information described in WKT can be stored in a column by setting the column of a container to the GEOMETRY type.</p>
<p>GEOMETRY type supports the following WKT forms.</p>
<ul>
<li>POINT
<ul>
<li>A point represented by a two- or three-dimensional coordinate.</li>
<li>Example) POINT(0 10 10)</li>
</ul></li>
<li>LINESTRING
<ul>
<li>Set of straight lines in two or three-dimensional space represented by two or more points.</li>
<li>Example) LINESTRING(0 10 10, 10 10 10, 10 10 0)</li>
</ul></li>
<li>POLYGON
<ul>
<li>Closed area in two or three-dimensional space represented by a set of straight lines. Specify the corners of a POLYGON counterclockwise. When building an island in a POLYGON, specify internal points clockwise.</li>
<li>Example) POLYGON((0 0,10 0,10 10,0 10,0 0)), POLYGON((35 10, 45 45, 15 40, 10 20, 35 10),(20 30, 35 35, 30 20, 20 30))</li>
</ul></li>
<li>POLYHEDRALSURFACE
<ul>
<li>An area in three-dimensional space represented by a set of the specified areas.</li>
<li>Example) POLYHEDRALSURFACE(((0 0 0, 0 1 0, 1 1 0, 1 0 0, 0 0 0)), ((0 0 0, 0 1 0, 0 1 1, 0 0 1, 0 0 0)), ((0 0 0, 1 0 0, 1 0 1, 0 0 1, 0 0 0)), ((1 1 1, 1 0 1, 0 0 1, 0 1 1, 1 1 1)), ((1 1 1, 1 0 1, 1 0 0, 1 1 0, 1 1 1)), ((1 1 1, 1 1 0, 0 1 0, 0 1 1, 1 1 1)))</li>
</ul></li>
<li>QUADRATICSURFACE
<ul>
<li>Two-dimensional curved surface in a three-dimensional space represented by defining equation f(X) = &lt;AX, X&gt; + BX + c.</li>
</ul></li>
</ul>
<p>A spatial structure written as a QUADRATICSURFACE cannot be stored in a container; it can only be specified as a search condition.</p>
<p>Operations using GEOMETRY data can be executed with the API or TQL.</p>
<p>With TQL, two- or three-dimensional spatial structures can be managed. Generation and judgment functions are also provided.</p>
<pre class="example"><code> SELECT * WHERE ST_MBRIntersects(geom, ST_GeomFromText(&#39;POLYGON((0 0,10 0,10 10,0 10,0 0))&#39;))
</code></pre>
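<p>As a small illustration of the WKT text form used above, the following Python helpers format and parse a POINT string. This is a simplification for illustration only; real applications would use the geometry support of the GridDB client API.</p>

```python
# Minimal WKT POINT helpers, illustrating the text form only.
def point_to_wkt(*coords):
    """Format a 2D or 3D coordinate as a WKT POINT string."""
    return "POINT(%s)" % " ".join(str(c) for c in coords)

def wkt_to_point(text):
    """Parse 'POINT(x y [z])' back into a tuple of floats."""
    inner = text.strip()[len("POINT("):-1]   # strip the POINT( ... ) wrapper
    return tuple(float(c) for c in inner.split())
```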
<h4 id="5122-hybrid-types"><span class="header-section-number">5.1.2.2</span> Hybrid types</h4>
<p>A hybrid data type is composed of a combination of basic data types that can be registered in a container. The only hybrid data type in the current version is the array.</p>
<ul>
<li><p>Array</p>
<p>Expresses an array of values. Among the basic data types, only GEOMETRY and BLOB data cannot be maintained as an array. The restriction on the data volume that can be maintained in an array varies according to the block size of the database.</p>
<table>
<thead>
<tr class="header">
<th>Data type</th>
<th>Block size (64KB)</th>
<th>Block size (1MB～32MB)</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Maximum number of array elements</td>
<td>4000</td>
<td>65000</td>
</tr>
</tbody>
</table></li>
</ul>
<p>[Note]</p>
<p>The following restrictions apply to TQL operations in an array column.</p>
<ul>
<li><p>Although the i-th value in the array column can be compared, calculations (aggregation) cannot be performed on all the elements.</p></li>
<li><p>(Example) When columnA was defined as an array</p>
<ul>
<li><p>An element in the array can be specified and compared, as in select * where ELEMENT(0, columnA) &gt; 0. However, a variable cannot be specified in place of &quot;0&quot; in ELEMENT.</p></li>
<li><p>Aggregation such as select SUM(columnA) cannot be carried out.</p></li>
</ul></li>
</ul>
<p><span id="primary_key"></span></p>
<h3 id="513-primary-key"><span class="header-section-number">5.1.3</span> Primary key</h3>
<p>A ROWKEY is data that is set in a row of a container and guarantees the uniqueness of the row. NULL is not allowed in a column in which ROWKEY is set.</p>
<p>In the NewSQL I/F, ROWKEY is called PRIMARY KEY.</p>
<ul>
<li>For a timeseries container (timeseries table)
<ul>
<li>A ROWKEY can be set in the first column of the row. (This is set in Column No. 0 since columns start from 0 in GridDB.)</li>
<li>The ROWKEY (PRIMARY KEY) is a TIMESTAMP column</li>
<li>It must be specified.</li>
</ul></li>
<li>For a collection (table)
<ul>
<li>A ROWKEY (PRIMARY KEY) can be set on multiple columns that are contiguous from the first column. A ROWKEY set on multiple columns is called a composite ROWKEY, which can include up to 16 columns.</li>
<li>A ROWKEY (PRIMARY KEY) is either a STRING, INTEGER, LONG or TIMESTAMP column.</li>
<li>It need not be specified. A default index prescribed in advance according to the column data type can be set on a column set as ROWKEY (PRIMARY KEY).</li>
</ul></li>
</ul>
<p>In the current version of GridDB, the default index for all STRING, INTEGER, LONG and TIMESTAMP columns that can be specified as a ROWKEY (PRIMARY KEY) is the TREE index.</p>
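<p>The uniqueness guarantee of a composite ROWKEY can be modelled as a map keyed by a tuple of the leading columns. The following Python sketch is a conceptual stand-in, not GridDB client code; the class and method names are hypothetical.</p>

```python
# Conceptual model: a collection with a composite ROWKEY behaves like a map
# keyed by a tuple of its leading columns, which guarantees row uniqueness.
class Collection:
    def __init__(self, rowkey_columns):
        if not 1 <= len(rowkey_columns) <= 16:   # composite ROWKEY limit
            raise ValueError("1 to 16 ROWKEY columns")
        self.rowkey_columns = rowkey_columns
        self.rows = {}

    def put(self, row):
        key = tuple(row[c] for c in self.rowkey_columns)
        if any(v is None for v in key):          # NULL not allowed in ROWKEY
            raise ValueError("NULL in ROWKEY column")
        self.rows[key] = row                     # same key replaces the row
```

<p>Registering a second row with the same ROWKEY values updates the existing row rather than creating a duplicate.</p>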
<h1 id="6-database-function"><span class="header-section-number">6</span> Database function</h1>
<h2 id="61-resource-management"><span class="header-section-number">6.1</span> Resource management</h2>
<p>Besides the database residing in memory, the other resources constituting a GridDB cluster are persisted to disk. The persisted resources are listed below.</p>
<ul>
<li><p>Database file</p>
<p>A database file is a file group consisting of the transaction log files and checkpoint files persisted to HDD or SSD. A transaction log file is updated every time the GridDB database is updated or a transaction occurs, whereas a checkpoint file is written at a specified interval.</p></li>
<li><p>Checkpoint file</p>
<p>A checkpoint file persists the data of a partition group from memory to disk at a specified interval. Updated information in memory is reflected to the file at the cycle set in the node definition file (/checkpoint/checkpointInterval). The size of a checkpoint file increases with the size of the data; once the file has expanded, its size does not decrease even if data such as containers or rows are deleted. Instead, GridDB reuses the freed space. Checkpoint files can be split so as to be stored on multiple disks.</p></li>
<li><p>Transaction log file</p>
<p>Transaction data written to the database in memory is persisted to the transaction log file by writing the data sequentially in a log format.</p></li>
<li><p>Definition file</p>
<p>The definition files include two types of parameter file: gs_cluster.json (hereinafter referred to as the cluster definition file), used when composing a cluster, and gs_node.json (hereinafter referred to as the node definition file), used to set the operations and resources of a node in the cluster. A user definition file is also included.</p></li>
<li><p>Event log file</p>
<p>The event log of the GridDB server is saved in this file, including messages such as errors, warnings and so on.</p></li>
</ul>
<p><img src="img/arc_DatabaseFile.png" alt="Database file" /></p>
<h2 id="62-data-access-function"><span class="header-section-number">6.2</span> Data access function</h2>
<p>To access GridDB data, an application needs to be developed using the NoSQL I/F or the NewSQL I/F (GridDB AE only). Data can be accessed simply by connecting to the GridDB cluster database, without having to take into account the position information of where the container or table is located in the cluster database. The application does not need to consider which node of the cluster the container is placed on.</p>
<p>In the GridDB API, when connecting to a cluster database initially, placement hint information of the container is retained (cached) on the client end together with the node information (partition).</p>
<p>Communication overhead is kept to a minimum because the node maintaining the container is connected and processed directly, without the cluster having to be searched for the node's location every time the container used by the application changes.</p>
<p>Although container placement changes dynamically due to the rebalancing process in GridDB, the position of a container is tracked because the client cache is updated regularly. Even when a client misses the node during access, for example due to a failure or a discrepancy between the regular update timing and the rebalancing timing, the relocated information is automatically acquired and processing continues.</p>
<p><span id="tql"></span></p>
<h3 id="621-tql"><span class="header-section-number">6.2.1</span> TQL</h3>
<p>TQL is supported as a database access language.</p>
<ul>
<li><p>What is TQL?</p>
<p>A simplified SQL prepared for GridDB SE. The support range is limited to functions such as search, aggregation, etc., using a container as a unit. TQL is employed by using the client API (Java, C language) of GridDB SE.</p>
<p>TQL is adequate for searches on a small container with a small number of hits; in that case, its response is faster than SQL. The number of hits can be limited with the LIMIT clause of TQL.</p></li>
</ul>
<p><span id="batch_functions"></span></p>
<h3 id="622-batch-processing-function-to-multiple-containers"><span class="header-section-number">6.2.2</span> Batch-processing function to multiple containers</h3>
<p>The NoSQL I/F provides an interface to quickly process event information as it occurs.</p>
<p>When a large volume of events is sent to the database server each time an event occurs, the network load increases and system throughput does not rise. The impact is especially significant when the communication bandwidth is narrow. The NoSQL I/F offers multi-processing to handle row registrations to multiple containers and multiple queries (TQL) to multiple containers in a single request. Overall system throughput rises because the database server is accessed less frequently.</p>
<p>An example is given below.</p>
<ul>
<li><p>Multi-put</p>
<ul>
<li><p>A container is prepared for each sensor name as a process to register event information from multiple sensors in the database. The sensor name and row array of the timeseries event of the sensor are created and a list (map) summarizing the data for multiple sensors is created. This list data is registered in the GridDB database each time the API is invoked.</p></li>
<li><p>The multi-put API optimizes the communication process by combining data-registration requests to containers placed on the nodes of a GridDB cluster, which is formed by multiple nodes. In addition, multi-registrations are processed quickly without performing MVCC when executing a transaction.</p></li>
<li><p>In multi-put processing, transactions are committed automatically; data is confirmed on a case-by-case basis.</p></li>
</ul></li>
</ul>
<p><img src="img/func_multiput.png" alt="Multi-put" /></p>
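<p>The list (map) summarizing data for multiple sensors, as described above, can be sketched as follows. The <code>build_multiput_map</code> function is a hypothetical helper, not part of any GridDB client; the resulting map would be handed to the client's multi-put call in a single request.</p>

```python
from collections import defaultdict

# Group incoming sensor events into one map of container name -> row list,
# so that a single multi-put request replaces many individual put requests.
def build_multiput_map(events):
    """events: iterable of (sensor_name, timestamp, value) tuples."""
    rows_by_container = defaultdict(list)
    for sensor, ts, value in events:
        rows_by_container[sensor].append({"timestamp": ts, "value": value})
    return dict(rows_by_container)
```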
<ul>
<li><p>Multi-query (fetchAll)</p>
<ul>
<li>Instead of executing multiple queries to a container, these can be executed in a single query by aggregating event information of the sensor. For example, this is most suitable for acquiring aggregate results such as the daily maximum, minimum and average values of data acquired from a sensor, or data of a row set having the maximum or minimum value, or data of a row set meeting the specified condition.</li>
</ul></li>
</ul>
<p><img src="img/func_multiquery.png" alt="fetchAll" /></p>
<ul>
<li><p>Multi-get</p>
<ul>
<li><p>Instead of executing multiple row acquisitions against each container, rows can be acquired from multiple containers in a single request by consolidating the acquisition conditions for the event information of the sensors.</p></li>
<li><p>In a RowKeyPredicate object, the acquisition condition is set in either one of the 2 formats below.</p>
<ul>
<li>Specify the acquisition range</li>
<li>Specify individual values</li>
</ul></li>
</ul></li>
</ul>
<p><img src="img/func_multiget.png" alt="multi-get" /></p>
<p><span id="index_function"></span></p>
<h2 id="63-index-function"><span class="header-section-number">6.3</span> Index function</h2>
<p>A condition-based search can be processed quickly by creating an index for the columns of a container (table).</p>
<p>There are three types of index: hash index (HASH), tree index (TREE) and spatial index (SPATIAL). A hash index is used for equivalent-value searches when querying a container. A tree index is used for equivalent-value searches as well as range comparisons (greater than or equal to, less than or equal to, etc.).</p>
<p>The index that can be set differs depending on the container (table) type and column data type.</p>
<ul>
<li>HASH INDEX
<ul>
<li>An equivalent value search can be conducted quickly but this is not suitable for searches that read the rows sequentially.</li>
<li>A hash index can be set on columns of the following data types in a collection. It cannot be set in a timeseries container, a table, or a timeseries table.
<ul>
<li>STRING</li>
<li>BOOL</li>
<li>BYTE</li>
<li>SHORT</li>
<li>INTEGER</li>
<li>LONG</li>
<li>FLOAT</li>
<li>DOUBLE</li>
<li>TIMESTAMP</li>
</ul></li>
</ul></li>
<li>TREE INDEX
<ul>
<li>Besides equivalent-value searches, a tree index is used for range comparisons (greater than or equal to, less than or equal to, etc.).</li>
<li>This can be set for columns of the following data type in any type of container (table), except for columns corresponding to a rowkey in a timeseries container (timeseries table).
<ul>
<li>STRING</li>
<li>BOOL</li>
<li>BYTE</li>
<li>SHORT</li>
<li>INTEGER</li>
<li>LONG</li>
<li>FLOAT</li>
<li>DOUBLE</li>
<li>TIMESTAMP</li>
</ul></li>
<li>Only a tree index allows an index over multiple columns, known as a composite index. A composite index can include up to 16 columns, and the same column cannot be specified more than once.</li>
</ul></li>
<li>SPATIAL INDEX
<ul>
<li>A spatial index can be set only on GEOMETRY columns in a collection. Specify it to conduct spatial searches at high speed.</li>
</ul></li>
</ul>
<p>Although there is no restriction on the number of indexes that can be created in a container, index creation needs careful design. An index is updated whenever rows of the container it is set on are inserted, updated or deleted. Therefore, creating many indexes on columns of frequently updated rows degrades insertion, update and deletion performance.</p>
<p>Create an index on columns such as the following.</p>
<ul>
<li>A column that is frequently searched and sorted</li>
<li>A column that is frequently used in the condition of the WHERE clause of TQL</li>
<li>High cardinality column (containing few duplicated values)</li>
</ul>
<p>[Note]</p>
<ul>
<li>Only a tree index can be set on the columns of a table (timeseries table).</li>
</ul>
<p><span id="ts_data_functions"></span></p>
<h2 id="64-function-specific-to-time-series-data"><span class="header-section-number">6.4</span> Function specific to time series data</h2>
<p>To manage data frequently produced from sensors, data is placed in accordance with the Time Series Data Placement Algorithm (TDPA), which makes the best use of memory. In a timeseries container (timeseries table), memory is allocated while classifying internal data by its periodicity; when hint information is given via the affinity function, placement efficiency rises further. Expired data in a timeseries container is released at almost zero cost, being flushed to disk where necessary.</p>
<p>A timeseries container (timeseries table) has a TIMESTAMP ROWKEY (PRIMARY KEY).</p>
<h3 id="641-compression-function"><span class="header-section-number">6.4.1</span> Compression function</h3>
<p>In timeseries container (timeseries table), data can be compressed and held. Data compression can improve memory usage efficiency. Compression options can be specified when creating a timeseries container (timeseries table).</p>
<p>However, the following row operations cannot be performed on a timeseries container (timeseries table) for which compression options are specified.</p>
<ul>
<li>Updating a specified row.</li>
<li>Deleting a specified row.</li>
<li>Inserting a new row when there is a row at a later time than the specified time.</li>
</ul>
<p>The following compression types are supported:</p>
<ul>
<li>HI: thinning out method with error value</li>
<li>NO: no compression.</li>
<li>SS: thinning out method without error value</li>
</ul>
<p>The explanation of each option is as follows.</p>
<h4 id="6411-thinning-out-method-with-error-value-hi"><span class="header-section-number">6.4.1.1</span> Thinning out method with error value (HI).</h4>
<p>When the current data, represented by a row, lies on the same slope as the data registered before and after it, the row is omitted. The slope condition can be specified by the user.</p>
<p>The row data is omitted only when the specified column satisfies the condition and the values of the other columns are the same as in the previous data. The condition is specified as an error width (Width).</p>
<p><img src="img/func_TimeseriseCompression.png" alt="Compression of timeseries container (timeseries table)" /></p>
<p>Compression can be enabled for the following data types:</p>
<ul>
<li>LONG</li>
<li>INTEGER</li>
<li>SHORT</li>
<li>BYTE</li>
<li>FLOAT</li>
<li>DOUBLE</li>
</ul>
<p>Since lossy compression is used, data omitted by the compression cannot be restored to its exact original value.</p>
<p>Omitted data is restored, within the error width, during interpolation and sampling processing.</p>
<h4 id="6412-thinning-out-method-without-error-value-ss"><span class="header-section-number">6.4.1.2</span> Thinning out method without error value (SS)</h4>
<p>With the SS type, a row whose data is the same as the rows registered immediately before and after it is omitted. Omitted data is restored without error during interpolation and sampling processing.</p>
<h3 id="642-operation-function-of-tql"><span class="header-section-number">6.4.2</span> Operation function of TQL</h3>
<h4 id="6421-aggregate-operations"><span class="header-section-number">6.4.2.1</span> Aggregate operations</h4>
<p>In a timeseries container (timeseries table), aggregation is performed with the data weighted by the time interval of the sampled data. In other words, if the time interval is long, the calculation assumes the value continued for that extended time.</p>
<p>The functions of the aggregate operation are as follows:</p>
<ul>
<li><p>TIME_AVG</p>
<ul>
<li>Returns the average of the values in the specified column, weighted by the time-type key.</li>
<li>The weighted average is calculated by dividing the sum of products of sample values and their respective weights by the sum of the weights.</li>
<li>The details of the calculation method are shown in the figure below:</li>
</ul></li>
</ul>
<p><img src="img/func_TIME_AVG.png" alt="Aggregation of weighted values (TIME_AVG)" /></p>
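<p>A plain-Python sketch of the weighting idea follows. Weighting each sample by half the time gap to its neighbours is an assumption made for this illustration; the exact rule used by TIME_AVG is the one in the figure above.</p>

```python
# Time-weighted average sketch: each sample's weight is half the gap to its
# previous neighbour plus half the gap to its next neighbour (an assumption
# for illustration; boundary samples carry only one half-gap).
def time_avg(samples):
    """samples: list of (timestamp_ms, value) sorted by time."""
    if len(samples) == 1:
        return samples[0][1]
    weighted_sum = total_weight = 0.0
    for i, (t, v) in enumerate(samples):
        w = 0.0
        if i > 0:
            w += (t - samples[i - 1][0]) / 2.0   # half gap to previous sample
        if i < len(samples) - 1:
            w += (samples[i + 1][0] - t) / 2.0   # half gap to next sample
        weighted_sum += v * w
        total_weight += w
    return weighted_sum / total_weight
```

<p>With irregularly spaced samples, a value that persisted for a long interval contributes proportionally more to the average, matching the behaviour described above.</p>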
<h4 id="6422-selectioninterpolation-operation"><span class="header-section-number">6.4.2.2</span> Selection/interpolation operation</h4>
<p>Time data may deviate slightly from the expected time due to the timing of the collection and the contents of the data to be collected. Therefore when conducting a search using time data as a key, a function that allows data around the specified time to be acquired is also required.</p>
<p>The functions for searching the timeseries container (timeseries table) and acquiring the specified row are as follows:</p>
<ul>
<li><p>TIME_NEXT(*, timestamp)</p>
<p>Selects a time-series row whose timestamp is identical with or just after the specified timestamp.</p></li>
<li><p>TIME_NEXT_ONLY(*, timestamp)</p>
<p>Select a time-series row whose timestamp is just after the specified timestamp.</p></li>
<li><p>TIME_PREV(*, timestamp)</p>
<p>Selects a time-series row whose timestamp is identical with or just before the specified timestamp.</p></li>
<li><p>TIME_PREV_ONLY(*, timestamp)</p>
<p>Selects a time-series row whose timestamp is just before the specified timestamp.</p></li>
</ul>
<p>In addition, the functions for interpolating the values of the columns are as follows:</p>
<ul>
<li><p>TIME_INTERPOLATED(column, timestamp)</p>
<p>Returns a specified column value of the time-series row whose timestamp is identical with the specified timestamp, or a value obtained by linearly interpolating specified column values of adjacent rows whose timestamps are just before and after the specified timestamp, respectively.</p></li>
<li><p>TIME_SAMPLING(*|column, timestamp_start, timestamp_end, interval, DAY|HOUR|MINUTE|SECOND|MILLISECOND)</p>
<p>Takes a sampling of Rows in a specific range from a given start time to a given end time.</p>
<p>Each sampling time point is defined by adding a sampling interval multiplied by a non-negative integer to the start time, excluding the time points later than the end time.</p>
<p>If there is a Row whose timestamp is identical with each sampling time point, the values of the Row are used. Otherwise, interpolated values are used.</p></li>
</ul>
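<p>The linear interpolation performed by TIME_INTERPOLATED can be sketched as follows. The function below is an illustrative stand-in operating on in-memory rows, not the TQL function itself.</p>

```python
# Linear interpolation at a target timestamp, mirroring TIME_INTERPOLATED:
# return the row value on an exact match, otherwise interpolate between the
# adjacent rows just before and just after the target.
def time_interpolated(rows, ts):
    """rows: [(timestamp, value)] sorted by time; returns None outside range."""
    for i, (t, v) in enumerate(rows):
        if t == ts:
            return v                      # exact match: the row's own value
        if t > ts:
            if i == 0:
                return None               # target precedes the first row
            t0, v0 = rows[i - 1]
            return v0 + (v - v0) * (ts - t0) / (t - t0)
    return None                           # target follows the last row
```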
<h3 id="643-expiry-release-function"><span class="header-section-number">6.4.3</span> Expiry release function</h3>
<p>Expiry release is a function that physically deletes expired row data from GridDB. Before being deleted, expired data is removed from the targets of search and delete operations and becomes unavailable. Deleting old, unused data keeps the database size down and prevents the performance degradation caused by database bloat.</p>
<p><img src="img/func_expiration.png" alt="Expiry release settings" /></p>
<p>The retention period is set on a container basis. A row outside the retention period is called "expired data." Because expired data becomes unavailable, the APIs can no longer operate on it and applications cannot access those rows. After a certain period, expired data becomes the target of physical deletion from GridDB; this target is called "cold data." Cold data can be deleted automatically from GridDB at that time, or after being saved to an external file.</p>
<h4 id="6431-expiry-release-types"><span class="header-section-number">6.4.3.1</span> Expiry release types</h4>
<ul>
<li><p>Row expiry release</p>
<ul>
<li>It can be set for a time series container.</li>
<li>Setting items consist of a retention period, a retention period unit, and a division count.</li>
<li>The retention period unit can be set in day/hour/minute/sec/millisecond units. The year unit and month unit cannot be specified.</li>
<li>The expiration date of a row is calculated by adding the retention period to the date and time stored in the row key (the retention period start date). It is calculated for every row.</li>
<li>The unit in which rows become cold data is the rows in a period of (retention period / division count). For example, if the retention period is 720 days and the division count is 36, the rows in each 20-day (720/36) period become cold data, and the physical deletion of each 20-day unit of rows is executed all at once.</li>
</ul></li>
</ul>
<p><img src="img/func_expiration_row.png" alt="Row expiry release" /></p>
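<p>The expiration and cold-data arithmetic above can be sketched as follows; the function names are illustrative, and the 720-day/36-division figures come from the example in the list.</p>

```python
DAY_MS = 24 * 60 * 60 * 1000  # milliseconds per day

def is_expired(row_ts_ms, retention_days, now_ms):
    """A row expires once the retention period has elapsed since its row-key time."""
    return now_ms >= row_ts_ms + retention_days * DAY_MS

def cold_chunk_days(retention_days, division_count):
    """Rows become cold data in units of (retention period / division count) days."""
    return retention_days / division_count
```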
<p>[Note]</p>
<ul>
<li><p>Expiry release settings are made at container creation. They cannot be changed after creation.</p></li>
<li><p>The current time used to judge expiration depends on the environment of each GridDB node. Therefore, because of network latency or time differences between execution environments, you may not be able to access rows on a GridDB node whose clock is ahead of the client's; conversely, you may still be able to access rows if the client's clock is ahead of GridDB's. To avoid unintentional loss of rows, it is advisable to set the period to a larger value than strictly needed.</p></li>
<li><p>The expired rows are separated from the object of search and updating, being treated as not to exist in the GridDB. Operations to the expired row do not cause errors.</p>
<ul>
<li>For example, when you register a row with a timestamp of 31 days ago to a container with an expiration of 30 days, the registration processing does not cause an error, but the row is not saved in the container.</li>
</ul></li>
<li><p>When you set up expiry release, be sure to synchronize the environment time of all the nodes of a cluster. If the time is different among the nodes, the expired data may not be released at the same time among the nodes.</p></li>
<li><p>The period until expired data becomes cold data depends on the retention period set for expiry release.</p>
<table>
<thead>
<tr class="header">
<th>Retention period</th>
<th>Maximum period until expired data becomes cold data</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Shorter than 3 days</td>
<td>about 1.2 hours</td>
</tr>
<tr class="even">
<td>3 days to 12 days</td>
<td>about 5 hours</td>
</tr>
<tr class="odd">
<td>12 days to 48 days</td>
<td>about 19 hours</td>
</tr>
<tr class="even">
<td>48 days to 192 days</td>
<td>about 3 days</td>
</tr>
<tr class="odd">
<td>192 days to 768 days</td>
<td>about 13 days</td>
</tr>
<tr class="even">
<td>768 days or longer</td>
<td>about 38 days</td>
</tr>
</tbody>
</table></li>
</ul>
<h2 id="66-transaction-function"><span class="header-section-number">6.6</span> Transaction function</h2>
<p>GridDB supports transaction processing on a container basis, as well as the ACID properties generally known as transaction characteristics. The functions supported in transaction processing are explained in detail below.</p>
<h3 id="661-starting-and-ending-a-transaction"><span class="header-section-number">6.6.1</span> Starting and ending a transaction</h3>
<p>When a row search or update etc. is carried out on a container, a new transaction is started and this transaction ends when the update results of the data are committed or aborted.</p>
<p>[Note]</p>
<ul>
<li>A commit is a process that confirms the transaction information under processing and perpetuates the data.
<ul>
<li>In GridDB, updated data of a transaction is stored as a transaction log by a commit process, and the lock that had been maintained will be released.</li>
</ul></li>
<li>An abort is a process to rollback (delete) all transaction data under processing.
<ul>
<li>In GridDB, all data under processing are discarded and retained locks will also be released.</li>
</ul></li>
</ul>
<p>The initial action of a transaction is set in autocommit.</p>
<p>In autocommit, a new transaction is started every time a container is updated (data addition, deletion or revision) by the application, and this is automatically committed at the end of the operation. A transaction can be committed or aborted at the requested timing by the application by turning off autocommit.</p>
<p>A transaction may also terminate in an error due to a timeout, in addition to being completed through a commit or abort. If a transaction terminates in an error due to a timeout, the transaction is aborted. The transaction timeout is the elapsed time from the start of the transaction. Although the initial value of the transaction timeout is set in the definition file (gs_node.json), it can also be specified as a parameter on an application basis when connecting to GridDB.</p>
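<p>The following is a minimal sketch of turning off autocommit and committing or aborting explicitly with the Java API (the container and rows are hypothetical):</p>
<pre class="sourceCode"><code>// Turn off autocommit; subsequent updates belong to one transaction
container.setAutoCommit(false);
try {
    container.put(row1);
    container.put(row2);
    container.commit();   // confirm both updates at once
} catch ( GSException e ) {
    container.abort();    // roll back all uncommitted updates
}
</code></pre>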
<h3 id="662-transaction-consistency-level"><span class="header-section-number">6.6.2</span> Transaction consistency level</h3>
<p>There are 2 types of transaction consistency levels, immediate consistency and eventual consistency. This can also be specified as a parameter when connecting to GridDB for each application. The default setting is immediate consistency.</p>
<ul>
<li><p>immediate consistency</p>
<ul>
<li>Container update results from other clients are reflected immediately at the end of the transaction concerned. As a result, the latest details can be referenced all the time.</li>
</ul></li>
<li><p>eventual consistency</p>
<ul>
<li>Container update results from other clients may not be reflected immediately at the end of the transaction concerned. As a result, there is a possibility that old details may be referred to.</li>
</ul></li>
</ul>
<p>Immediate consistency is valid in update operations and read operations. Eventual consistency is valid in read operations only. For applications which do not require the latest results to be read all the time, the reading performance improves when eventual consistency is specified.</p>
<h3 id="663-transaction-isolation-level"><span class="header-section-number">6.6.3</span> Transaction isolation level</h3>
<p>Consistency of the database contents needs to be maintained at all times. When multiple transactions are executed simultaneously, the following events generally surface as issues.</p>
<ul>
<li><p>Dirty read</p>
<p>An event in which uncommitted data written by one transaction is read by another transaction.</p></li>
<li><p>Non-repeatable read</p>
<p>An event in which data previously read by a transaction can no longer be read. Even if the transaction tries to read the same data again, the previous data can no longer be read because it has already been updated and committed by another transaction (the new data after the update is read instead).</p></li>
<li><p>Phantom read</p>
<p>An event in which the inquiry results previously obtained by a transaction can no longer be acquired. Even if the transaction executes the same inquiry under the same condition again, the previous results can no longer be acquired because the data satisfying the inquiry condition has already been changed, added, and committed by another transaction (the new data after the update is acquired instead).</p></li>
</ul>
<p>In GridDB, "READ_COMMITTED" is supported as the transaction isolation level. In READ_COMMITTED, the latest committed data is always read.</p>
<p>When executing a transaction, this needs to be taken into consideration so that the results are not affected by other transactions. The isolation level is an indicator, in four levels, of how isolated the executed transaction is from other transactions (the extent to which consistency can be maintained).</p>
<p>The 4 isolation levels and the corresponding possibility of an event raised as an issue occurring during simultaneous execution are as follows.</p>
<table>
<thead>
<tr class="header">
<th>Isolation level</th>
<th>Dirty read</th>
<th>Non-repeatable read</th>
<th>Phantom read</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>READ_UNCOMMITTED</td>
<td>Possibility of occurrence</td>
<td>Possibility of occurrence</td>
<td>Possibility of occurrence</td>
</tr>
<tr class="even">
<td>READ_COMMITTED</td>
<td>Safe</td>
<td>Possibility of occurrence</td>
<td>Possibility of occurrence</td>
</tr>
<tr class="odd">
<td>REPEATABLE_READ</td>
<td>Safe</td>
<td>Safe</td>
<td>Possibility of occurrence</td>
</tr>
<tr class="even">
<td>SERIALIZABLE</td>
<td>Safe</td>
<td>Safe</td>
<td>Safe</td>
</tr>
</tbody>
</table>
<p>In READ_COMMITTED, if previously read data is read again, data different from the previous data may be acquired, and if an inquiry is executed again, different results may be acquired even with the same search condition. This is because the data has already been updated and committed by another transaction after the previous read.</p>
<p>In GridDB, data that is being updated is isolated by MVCC.</p>
<h3 id="664-mvcc"><span class="header-section-number">6.6.4</span> MVCC</h3>
<p>In order to realize READ_COMMITTED, GridDB has adopted "MVCC (Multi-Version Concurrency Control)".</p>
<p>MVCC is a processing method that refers to the data prior to being updated instead of the latest data that is being updated by another transaction when a transaction sends an inquiry to the database. System throughput improves as the transaction can be executed concurrently by referring to the data prior to the update.</p>
<p>When the transaction process under execution is committed, other transactions can also refer to the latest data.</p>
<p><img src="img/func_MVCC.png" alt="MVCC" /></p>
<h3 id="665-lock"><span class="header-section-number">6.6.5</span> Lock</h3>
<p>There is a data lock mechanism to maintain the consistency when there are competing container update requests from multiple transactions.</p>
<p>The lock granularity differs depending on the type of container. In addition, the lock range changes depending on the type of operation in the database.</p>
<h4 id="6651-lock-granularity"><span class="header-section-number">6.6.5.1</span> Lock granularity</h4>
<p>The lock granularity for each container type is as follows.</p>
<ul>
<li>Collection: Lock by ROW unit.</li>
<li>Timeseries container: Lock by ROW collection unit.
<ul>
<li>In a timeseries container, a block is divided into several data processing units, each holding multiple rows; this data processing unit is known as a row set. Although its granularity is coarser than the lock granularity in a collection, it is a data management unit for processing a large volume of timeseries data at high speed.</li>
</ul></li>
</ul>
<p>These lock granularities were determined based on a use-case analysis of each container type.</p>
<ul>
<li>Collection data often includes cases in which existing row data is updated, as a collection manages data much like an RDB table.</li>
<li>A timeseries container is a data structure for holding data generated with each passing moment, and it rarely includes cases in which the data at a specific time is updated.</li>
</ul>
<h4 id="6652-lock-range-by-database-operations"><span class="header-section-number">6.6.5.2</span> Lock range by database operations</h4>
<p>Container operations are not limited to data registration and deletion; they also include schema changes accompanying a change in data structure, index creation to improve access speed, and other operations. The lock range differs depending on whether the operation targets the entire container or specific rows in the container.</p>
<ul>
<li><p>Lock the entire container</p>
<ul>
<li>Index operations (createIndex/dropIndex)</li>
<li>Deleting container</li>
<li>Schema change</li>
</ul></li>
<li><p>Lock in accordance with the lock granularity</p>
<ul>
<li>put/update/remove</li>
<li>get(forUpdate)</li>
</ul>
<p>In a data operation on a row, a lock following the lock granularity is ensured.</p></li>
</ul>
<p>If there is competition in securing a lock, the subsequent transaction is placed on standby for the lock until the earlier transaction has been completed by a commit or rollback process and the lock is released.</p>
<p>Standby for securing a lock can also be cancelled by a timeout, besides completion of the transaction.</p>
<h3 id="666-data-perpetuation"><span class="header-section-number">6.6.6</span> Data perpetuation</h3>
<p>Data registered or updated in a container or table is perpetuated to the disk or SSD and protected from data loss when a node failure occurs. There are two persistence processes: a transaction log process, which synchronizes with each data update and writes the updated data sequentially to a transaction log file, and a checkpoint process, which regularly stores the updated data in memory to the database file on a block basis.</p>
<p>To write to a transaction log, either one of the following settings can be made in the node definition file.</p>
<ul>
<li>0: SYNC</li>
<li>An integer value of 1 or higher: DELAYED_SYNC</li>
</ul>
<p>In the "SYNC" mode, log writing is carried out synchronously every time an update transaction is committed or aborted. In the "DELAYED_SYNC" mode, log writing during an update is carried out at a specified delay of several seconds regardless of the update timing. Default value is "1 (DELAYED_SYNC 1 sec)".</p>
<p>When "SYNC" is specified, although the possibility of losing the latest update details when a node failure occurs is lower, the performance is affected in systems that are updated frequently.</p>
<p>On the other hand, if "DELAYED_SYNC" is specified, although the update performance improves, any update details that have not been written in the disk when a node failure occurs will be lost.</p>
<p>If there are two or more replicas in a cluster configuration, the possibility of losing the latest update details when a node failure occurs is lower even in "DELAYED_SYNC" mode, as the other nodes hold replicas. Consider setting the mode to "DELAYED_SYNC" if the update frequency is high and performance is required.</p>
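<p>For example, the transaction log write mode is set in the node definition file as shown in the excerpt below (the parameter path is assumed to be /dataStore/logWriteMode; 0 means SYNC, and a value of 1 or higher means DELAYED_SYNC with that delay in seconds):</p>
<pre class="sourceCode"><code>{
    &quot;dataStore&quot;: {
        &quot;logWriteMode&quot;: 1
    }
}
</code></pre>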
<p>In a checkpoint, updated blocks are written to the database file. A checkpoint process operates at the cycle set on a node basis. The checkpoint cycle is set by a parameter in the node definition file. The initial value is 60 sec (1 minute).</p>
<p>By raising the checkpoint execution cycle figure, data perpetuation can be carried out in a time band when there is relatively more time to do so, e.g., by perpetuating data to disk at night. On the other hand, lengthening the cycle has the disadvantage that the number of transaction log files that must be rolled forward when a node is restarted after an abnormal termination increases, thereby increasing the recovery time.</p>
<p>The data updated during a checkpoint is collected and maintained in memory separately from the blocks to which the data is written at the checkpoint. Set up concurrent execution of checkpoints for faster checkpoint processing. When concurrent execution is set up, checkpoints are processed concurrently, up to the number of concurrent transaction executions.</p>
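<p>The checkpoint cycle is set in the node definition file; a sketch of the relevant excerpt, assuming the parameter path /checkpoint/checkpointInterval with the 60-second initial value:</p>
<pre class="sourceCode"><code>{
    &quot;checkpoint&quot;: {
        &quot;checkpointInterval&quot;: &quot;60s&quot;
    }
}
</code></pre>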
<p><img src="img/func_checkpnt.png" alt="Checkpoint" /></p>
<h3 id="667-timeout-process"><span class="header-section-number">6.6.7</span> Timeout process</h3>
<h4 id="6671-nosql-if-timeout-process"><span class="header-section-number">6.6.7.1</span> NoSQL I/F timeout process</h4>
<p>In the NoSQL I/F, two types of timeout can be reported to the application developer: transaction timeout and failover timeout. The former concerns the processing time limit of a transaction, and the latter concerns the retry time of a recovery process when a failure occurs.</p>
<ul>
<li><p>TransactionTimeout</p>
<p>The timer is started when access to the container subject to the process begins, and a timeout occurs when the specified time is exceeded.</p>
<p>The transaction timeout is intended to release the locks and memory held by a long-duration update lock (an application that searches for data in update mode and keeps holding the lock) or by a transaction that maintains a large amount of results. When a transaction timeout is triggered, the transaction under processing is aborted.</p>
<p>A transaction timeout time can be specified in the application with a parameter during cluster connection. The upper limit of this can be specified in the node definition file. The default value of upper limit is 300 seconds. To monitor transactions that take a long time to process, enable the timeout setting and set a maximum time in accordance with the system requirements.</p></li>
<li><p>FailoverTimeout</p>
<p>The timeout for error retries when a client that was connected to a failed node of the cluster connects to a replacement node. If a new connection point is discovered during the retry process, the client application is not notified of the error. The failover timeout is also used as the timeout for the initial connection.</p>
<p>A failover timeout time can be specified in the application by a parameter during cluster connection. Set the timeout time to meet the system requirements.</p></li>
</ul>
<p>Both the transaction timeout and failover timeout can be set when connecting to a cluster using a GridDB object in the Java API or C API.</p>
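<p>A hedged sketch of specifying both timeouts as connection properties with the Java API (the addresses, cluster name, and credentials are example values; the timeout property values are in seconds):</p>
<pre class="sourceCode"><code>Properties props = new Properties();
props.setProperty(&quot;notificationAddress&quot;, &quot;239.0.0.1&quot;);
props.setProperty(&quot;notificationPort&quot;, &quot;31999&quot;);
props.setProperty(&quot;clusterName&quot;, &quot;myCluster&quot;);
props.setProperty(&quot;user&quot;, &quot;admin&quot;);
props.setProperty(&quot;password&quot;, &quot;admin&quot;);
props.setProperty(&quot;transactionTimeout&quot;, &quot;30&quot;);  // seconds
props.setProperty(&quot;failoverTimeout&quot;, &quot;60&quot;);     // seconds
GridStore store = GridStoreFactory.getInstance().getGridStore(props);
</code></pre>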
<h2 id="67-replication-function"><span class="header-section-number">6.7</span> Replication function</h2>
<p>Data replicas are created on a partition basis in accordance with the number of replications set by the user among multiple nodes constituting a cluster.</p>
<p>A process can be continued non-stop even when a node failure occurs by maintaining replicas of the data among scattered nodes. In the client API, when a node failure is detected, the client automatically switches access to another node where the replica is maintained.</p>
<p>The default number of replicas is 2, so data is held in duplicate when operating in a cluster configuration with multiple nodes.</p>
<p>When there is an update in a container, the owner node (the node having the master replica) among the replicated partitions is updated.</p>
<p>There are 2 ways of subsequently reflecting the updated details from the owner node in the backup node.</p>
<ul>
<li><p>Asynchronous replication</p>
<p>Replication is carried out asynchronously, without synchronizing with the timing of the update process. Update performance is better than with quasi-synchronous replication, but availability is worse.</p></li>
<li><p>Quasi-synchronous replication</p>
<p>Replication is carried out synchronously at the timing of the update process, but completion of the replication is not waited for. Availability is excellent, but performance is inferior.</p></li>
</ul>
<p>If performance is more important than availability, set the mode to asynchronous replication and if availability is more important, set it to quasi-synchronous replication.</p>
<p>[Note]</p>
<ul>
<li>The number of replications is set in the cluster definition file (gs_cluster.json) /cluster/replicationNum. Synchronous settings of the replication are set in the cluster definition file (gs_cluster.json) /transaction/replicationMode.</li>
</ul>
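<p>A sketch of the corresponding excerpt of the cluster definition file (gs_cluster.json); here a replicationMode of 0 is assumed to mean asynchronous replication and 1 quasi-synchronous replication:</p>
<pre class="sourceCode"><code>{
    &quot;cluster&quot;: {
        &quot;replicationNum&quot;: 2
    },
    &quot;transaction&quot;: {
        &quot;replicationMode&quot;: 0
    }
}
</code></pre>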
<h2 id="68-affinity-function"><span class="header-section-number">6.8</span> Affinity function</h2>
<p>Affinity is a function that groups related data together. There are two types of affinity functions in GridDB: data affinity and node affinity.</p>
<h3 id="681-data-affinity-function"><span class="header-section-number">6.8.1</span> Data affinity function</h3>
<p>Data affinity is a function that raises the memory hit rate by arranging highly correlated data in the same block and localizing data access. By raising the memory hit rate, the number of memory misses during data access can be reduced and throughput improved. Through data affinity, even machines with a small memory can be operated effectively.</p>
<p>The data affinity setting gives hint information as a container property when creating a container (table). The characters that can be specified for the hint information are restricted by naming rules similar to those for the container (table) name. Data with the same hint information is placed in the same blocks as much as possible.</p>
<p>Data affinity hints are set separately by the data update frequency and reference frequency. For example, consider the data structure when system data is registered, referenced or updated by the following operating method in a system that samples and refers to the data on a daily, monthly or annual basis in a monitoring system.</p>
<ol>
<li>Data in minutes is sent from the monitoring device and saved in the container created on a monitoring device basis.</li>
<li>Since data reports are created daily, one day's worth of data is aggregated from the data in minutes and saved in the daily container.</li>
<li>Since data reports are created monthly, daily container (table) data is aggregated and saved in the monthly container.</li>
<li>Since data reports are created annually, monthly container (table) data is aggregated and saved in the annual container.</li>
<li>The current space used (in minutes and days) is constantly updated and displayed in the display panel.</li>
</ol>
<p>In GridDB, a block is not occupied by a single container; data registered at close points in time is placed in the same block. Therefore, the monthly data in 3., which is created by referring to the daily container (table) in 2., performing monthly aggregation, and using the aggregation time as a ROWKEY (PRIMARY KEY), may be saved in the same block as the per-minute data in 1.</p>
<p>When performing the yearly aggregation in 4. on a large amount of data, the data that needs constant monitoring (1.) may be swapped out. This is caused by reading the data for 4., which is stored in different blocks, into a memory that is not large enough to hold all the monitoring data.</p>
<p>In this case, by providing hints to containers (tables) according to their access frequency using data affinity, e.g., on a minute, daily, or monthly basis, data with a low access frequency and data with a high access frequency are separated into different blocks when the data is placed.</p>
<p>In this way, data can be placed to suit the usage scene of the application by the data affinity function.</p>
<p><img src="img/feature_data_afinity.png" alt="Data Affinity" /></p>
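<p>With the Java API, the data affinity hint is given through ContainerInfo when creating the container; a minimal sketch (the hint string and container name are hypothetical):</p>
<pre class="sourceCode"><code>ContainerInfo conInfo = new ContainerInfo();
List&lt;ColumnInfo&gt; columnList = new ArrayList&lt;ColumnInfo&gt;();
columnList.add(new ColumnInfo(&quot;date&quot;, GSType.TIMESTAMP));
columnList.add(new ColumnInfo(&quot;value&quot;, GSType.DOUBLE));
conInfo.setColumnInfoList(columnList);
conInfo.setRowKeyAssigned(true);
conInfo.setType(ContainerType.TIME_SERIES);

// Data of containers created with the same hint is placed
// in the same blocks as much as possible
conInfo.setDataAffinity(&quot;MINUTE&quot;);

store.putContainer(&quot;monitor01_minutes&quot;, conInfo, false);
</code></pre>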
<h3 id="682-node-affinity-function"><span class="header-section-number">6.8.2</span> Node affinity function</h3>
<p>Node affinity is a function to reduce the network load when accessing data by arranging highly correlated containers and tables in the same node.</p>
<p><img src="img/func_Node_Affinity.png" alt="Placement of container/table based on node affinity" /></p>
<p>To use the node affinity function, hint information is given in the container (table) name when the container (table) is created. A container (table) with the same hint information is placed in the same partition. Specify the container name as shown below.</p>
<ul>
<li>Container (table) name@node affinity hint information</li>
</ul>
<p>The naming rules for node affinity hint information are the same as the naming rules for the container (table) name.</p>
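<p>For example (the container name and hint are hypothetical, reusing a ContainerInfo prepared as in the earlier examples), a container created as follows is placed in the same partition as other containers that carry the same hint:</p>
<pre class="sourceCode"><code>// The string after &#39;@&#39; is the node affinity hint;
// containers sharing the same hint are placed in the same partition
store.putContainer(&quot;sensorData@group1&quot;, conInfo, false);
</code></pre>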
<h2 id="69-trigger-function"><span class="header-section-number">6.9</span> Trigger function</h2>
<p>A trigger function is a function that automatically notifies the application, using Java Message Service (JMS) or REST, when an operation (add/update or delete) is carried out on the row data of a container. Event notifications can be received without polling and monitoring database updates in the application system.</p>
<p><img src="img/func_trigger.png" alt="Action of a trigger function" /></p>
<ul>
<li><p>Notification method</p>
<ul>
<li>There are 2 ways of notifying the application system.
<ul>
<li>Java Message Service (JMS)</li>
<li>REST</li>
</ul></li>
</ul></li>
<li><p>When the operating target is a single node</p>
<ul>
<li>The following three operations are available: setting a trigger, unsetting the trigger, and acquiring the settings of the trigger.</li>
</ul></li>
<li><p>Timing of notice</p>
<ul>
<li>Notification is sent when a row is created, updated, or deleted.</li>
<li>Notification is sent before replication is completed. When autocommit mode is off, notification is sent while the transaction is uncommitted.</li>
</ul></li>
<li><p>Contents of notice</p>
<ul>
<li>The notification contains the container name and the type of operation: creating, updating, or deleting a row.</li>
<li>When columns to be notified are specified, the values of those columns in the operated row are also included in the notification.</li>
</ul></li>
<li><p>Processing when an error occurs</p>
<ul>
<li>When an error occurs at notification time, error information is recorded in an event log. The notification is not re-sent after recovery from the failure.</li>
</ul></li>
<li><p>Others</p>
<ul>
<li>When more than one row is created and/or updated, a notification is issued for each row. In the Java API, this applies to calls such as Container#put(java.util.Collection) or GridStore#multiPut(Map).</li>
<li>When the schema of a container with a trigger setting is changed, the trigger setting remains effective in the changed container. Columns that no longer exist in the changed schema are automatically removed from the columns to be notified.</li>
<li>Both JMS and REST notifications can be set on one container, but they must be set under different trigger names.</li>
</ul></li>
</ul>
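<p>With the Java API, a trigger of this kind is created through the TriggerInfo class; a hedged sketch (the trigger name, destination URI, and column name are hypothetical):</p>
<pre class="sourceCode"><code>TriggerInfo trigger = new TriggerInfo();
trigger.setName(&quot;myRestTrigger&quot;);
trigger.setType(TriggerInfo.Type.REST);
trigger.setURI(new URI(&quot;http://example.com/notify&quot;));
// Notify on row creation/update and deletion
trigger.setTargetEvents(EnumSet.of(
    TriggerInfo.EventType.PUT, TriggerInfo.EventType.DELETE));
// Include the value of this column in the notification
trigger.setTargetColumns(new HashSet&lt;String&gt;(Arrays.asList(&quot;value&quot;)));

container.createTrigger(trigger);
</code></pre>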
<p>[Note]</p>
<ul>
<li>Caution about the number of triggers and updating performance
<ul>
<li>Update performance decreases as the number of containers with active triggers and the number of active triggers increase. Set only the minimum necessary triggers.</li>
</ul></li>
<li>Caution about the processing performance of the destination server of the notice
<ul>
<li>When the throughput of the destination server is much lower than that of GridDB's update processing, the trigger process may fail and an error message may be recorded in an event log. When you frequently update a container with a trigger, consider the performance of the destination server.</li>
</ul></li>
</ul>
<h2 id="610-change-the-definition-of-a-container-table"><span class="header-section-number">6.10</span> Change the definition of a container (table)</h2>
<p>It is possible to change the definition of a container after it has been created, for example by adding columns.</p>
<h3 id="6101-add-column"><span class="header-section-number">6.10.1</span> Add column</h3>
<p>Add a new column to a container.</p>
<ul>
<li><p>NoSQL API</p>
<ul>
<li><p>Add a column with GridStore#putContainer.</p></li>
<li><p>Get container information "ContainerInfo" from an existing container. Execute putContainer after setting a new column to container information.</p></li>
<li><p>[Example program]</p>
<pre class="sourceCode"><code>// Get container information
ContainerInfo conInfo = store.getContainerInfo(&quot;table1&quot;);
List&lt;ColumnInfo&gt; newColumnList = new ArrayList&lt;ColumnInfo&gt;();
for ( int i = 0; i &lt; conInfo.getColumnCount(); i++ ){
    newColumnList.add(conInfo.getColumnInfo(i));
}
// Set a new column to the tail
newColumnList.add(new ColumnInfo(&quot;NewColumn&quot;, GSType.INTEGER));
conInfo.setColumnInfoList(newColumnList);

// Add a column
store.putContainer(&quot;table1&quot;, conInfo, true);
</code></pre></li>
</ul></li>
</ul>
<p>If you obtain existing rows after adding columns, the "empty value" defined for the data type of each added column is returned as the value of the added column.</p>
<p><img src="img/add_column.png" alt="Example of adding a column" /></p>
<h3 id="6102-delete-column"><span class="header-section-number">6.10.2</span> Delete column</h3>
<p>Delete a column. This operation is available only with the NoSQL API.</p>
<ul>
<li>NoSQL API
<ul>
<li>Delete a column with GridStore#putContainer. Get container information "ContainerInfo" from an existing container at first. Then, execute putContainer after excluding column information of a deletion target.</li>
</ul></li>
</ul>
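<p>A sketch of deleting a column, mirroring the addition example above (here the last column is assumed to be the deletion target):</p>
<pre class="sourceCode"><code>// Get container information
ContainerInfo conInfo = store.getContainerInfo(&quot;table1&quot;);
List&lt;ColumnInfo&gt; newColumnList = new ArrayList&lt;ColumnInfo&gt;();
// Copy all columns except the deletion target
for ( int i = 0; i &lt; conInfo.getColumnCount() - 1; i++ ){
    newColumnList.add(conInfo.getColumnInfo(i));
}
conInfo.setColumnInfoList(newColumnList);

// Delete the column
store.putContainer(&quot;table1&quot;, conInfo, true);
</code></pre>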
<h2 id="611-database-compressionrelease-function"><span class="header-section-number">6.11</span> Database compression/release function</h2>
<p><span id="block_data_compression"></span></p>
<h3 id="6111-block-data-compression"><span class="header-section-number">6.11.1</span> Block data compression</h3>
<p>When GridDB writes in-memory data to the database file residing on the disk, a database with a capacity larger than, and independent of, the memory size can be obtained. However, as the size increases, so does the cost of storage. To reduce this cost, the database file (checkpoint file) can be compressed effectively using GridDB's block data compression. In this case, flash memory, whose price per unit of capacity is higher than HDD's, can be utilized much more efficiently.</p>
<p><strong>Compression method</strong></p>
<p>When exporting in-memory data to the database file (checkpoint file), compression is performed on each block, GridDB's unit of writing. The vacant area of the Linux file space created by compression is deallocated, thereby reducing disk usage.</p>
<p><strong>Supported environment</strong></p>
<p>Since block data compression uses the Linux function, it depends on the Linux kernel version and file system. Block data compression is supported in the following environment.</p>
<ul>
<li>OS: RHEL / CentOS 7.2 and later</li>
<li>File system: XFS</li>
<li>File system block size: 4 KB</li>
</ul>
<p>If block data compression is enabled in other environments, the GridDB node will fail to start.</p>
<p><strong>Configuration method</strong></p>
<p>The compression function needs to be configured on every node.</p>
<ul>
<li>Set the following string in the node definition file (gs_node.json) /dataStore/storeCompressionMode.
<ul>
<li>To disable compression functionality: NO_COMPRESSION (default)</li>
<li>To enable compression functionality: COMPRESSION</li>
</ul></li>
<li>The settings are applied after the GridDB node is restarted.</li>
<li>Enabling/disabling the compression function can be switched by restarting the GridDB node.</li>
</ul>
<p>[Note]</p>
<ul>
<li>Block data compression can only be applied to checkpoint files. Transaction log files, backup files, and GridDB's in-memory data are not subject to compression.</li>
<li>Due to block data compression, checkpoint files become sparse files.</li>
<li>Even if the compression function is newly enabled, data already written to the checkpoint file is not compressed.</li>
</ul>
<h3 id="6112-deallocation-of-unused-data-blocks"><span class="header-section-number">6.11.2</span> Deallocation of unused data blocks</h3>
<p>The deallocation of unused data blocks is the function that reduces the size (disk space) of database files by the Linux file block deallocation processing on unused block areas of database files (checkpoint files).</p>
<p>Use this function in the following cases.</p>
<ul>
<li>A large amount of data has been deleted</li>
<li>There is no plan to update data and it is necessary to keep the DB for a long term.</li>
<li>The disk becomes full when updating data, and the DB size needs to be reduced temporarily.</li>
</ul>
<p>The processing for the deallocation of unused blocks, the support environment and the execution method are explained below.</p>
<p><strong>Processing for deallocation</strong></p>
<p>The unused blocks of database files (checkpoint files) are deallocated in a GridDB node at the time of starting the node. Those remain deallocated until data is updated on them.</p>
<p><strong>Supported environment</strong></p>
<p>The support environment is the same as the <a href="#block_data_compression">block data compression</a>.</p>
<p><strong>Execution method</strong></p>
<p>Specify the deallocation option, --releaseUnusedFileBlocks, of the gs_startnode command at the time of starting GridDB nodes.</p>
<p>Check the size of unused blocks and allocated blocks by the following command.</p>
<ul>
<li>Items shown by the gs_stat command
<ul>
<li><p>storeTotalUse</p>
<p>The total size of used blocks in the checkpoint files (bytes)</p></li>
<li><p>checkpointFileAllocateSize</p>
<p>The total size of allocated blocks in the checkpoint files (bytes)</p></li>
</ul></li>
</ul>
<p>Performing this function is recommended when the size of allocated but unused blocks is large (storeTotalUse &lt;&lt; checkpointFileAllocateSize).</p>
<p>[Note]</p>
<ul>
<li>This function is available only for the checkpoint files. It is not available for the transaction log files and backup files.</li>
<li>The checkpoint files become sparse files by performing this function.</li>
<li>Although this function reduces disk usage, fragmentation of the sparse files may cause a performance disadvantage.</li>
<li>Starting GridDB with this function may take more time than a normal start-up.</li>
</ul>
<p><span id="label_parameters"></span></p>
<h1 id="8-parameter"><span class="header-section-number">8</span> Parameter</h1>
<p>This chapter describes the parameters that control operations in GridDB. GridDB parameters include the node definition file, which configures settings such as setting information and usable resources, and the cluster definition file, which configures the operational settings of a cluster. It explains the item names in the definition files, their meanings, and their values in the initial state.</p>
<p>The unit of the setting is set as shown below.</p>
<ul>
<li><p>A byte size can be specified in the following units: TB, GB, MB, KB, B, T, G, M, K, or the lowercase notations of these units. The unit cannot be omitted unless otherwise stated.</p></li>
<li><p>A time can be specified in the following units: h, min, s, ms. The unit cannot be omitted unless otherwise stated.</p></li>
</ul>
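<p>For example, these units appear in the definition files as follows. This fragment is illustrative only (in the style of gs_cluster.json); the values shown are the documented defaults.</p>

```json
{
    "dataStore": {
        "storeBlockSize": "64KB"
    },
    "cluster": {
        "heartbeatInterval": "5s",
        "notificationInterval": "5s",
        "loadbalanceCheckInterval": "180s"
    }
}
```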
<p>　</p>
<h2 id="81-cluster-definition-file-gs_clusterjson"><span class="header-section-number">8.1</span> Cluster definition file (gs_cluster.json)</h2>
<p>The same settings in the cluster definition file need to be made in all the nodes constituting the cluster. The partitionNum and storeBlockSize parameters determine the database structure and therefore cannot be changed after GridDB is started for the first time.</p>
<p>The meanings of the various settings in the cluster definition file are explained below.</p>
<p>Items that are not included in the initial state can be recognized by the system when their item names are added. The change field indicates whether a parameter can be changed and when a change takes effect.</p>
<ul>
<li>Disallowed: The value cannot be changed once the node has been started. The database needs to be initialized to change the setting.</li>
<li>Restart: The parameter can be changed by restarting all the nodes constituting the cluster.</li>
<li>Online: The parameter can be changed online during operation. However, the definition file needs to be amended manually, as the change is not persisted to it.</li>
</ul>
<p>　</p>
<table>
<thead>
<tr class="header">
<th>Configuration of GridDB</th>
<th>Default</th>
<th>Meaning of parameters and limitation values</th>
<th>Change</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>/notificationAddress</td>
<td>239.0.0.1</td>
<td>Standard setting of the multicast address. This setting applies when the cluster or transaction parameter of the same name is omitted; if a different value is set there, that individual setting takes precedence.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/dataStore/partitionNum</td>
<td>128</td>
<td>Specify the number of partitions. Use a common multiple of the expected numbers of nodes constituting the cluster, so that partitions can be divided evenly among the nodes. Specify an integer from 1 to 10000.</td>
<td>Disallowed</td>
</tr>
<tr class="odd">
<td>/dataStore/storeBlockSize</td>
<td>64KB</td>
<td>Specify the disk I/O size from 64KB,1MB,4MB,8MB,16MB,32MB. Larger block size enables more records to be stored in one block, suitable for full scans of large tables, but also increases the possibility of conflict. Select the size suitable for the system. Cannot be changed after server is started.</td>
<td>Disallowed</td>
</tr>
<tr class="even">
<td>/cluster/clusterName</td>
<td>-</td>
<td>Specify the name for identifying a cluster. Mandatory input parameter.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/cluster/replicationNum</td>
<td>2</td>
<td>Specify the number of replicas. Each partition is duplicated if the number of replicas is 2.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/cluster/notificationAddress</td>
<td>239.0.0.1</td>
<td>Specify the multicast address for cluster configuration</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/cluster/notificationPort</td>
<td>20000</td>
<td>Specify the multicast port for cluster configuration. Specify a value within the specifiable multicast port range.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/cluster/notificationInterval</td>
<td>5s</td>
<td>Multicast period for cluster configuration. Specify a value of 1 second or more and less than 2<sup>31</sup> seconds.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/cluster/heartbeatInterval</td>
<td>5s</td>
<td>Specify the check period (heartbeat period) used to check node survival within the cluster. Specify a value of 1 second or more and less than 2<sup>31</sup> seconds.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/cluster/loadbalanceCheckInterval</td>
<td>180s</td>
<td>To adjust the load balance among the nodes constituting the cluster, specify the data sampling period used as a criterion for whether to perform the balancing process. Specify a value of 1 second or more and less than 2<sup>31</sup> seconds.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/cluster/notificationMember</td>
<td>-</td>
<td>Specify the address list when using the fixed list method as the cluster configuration method.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/cluster/notificationProvider/url</td>
<td>-</td>
<td>Specify the URL of the address provider when using the provider method as the cluster configuration method.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/cluster/notificationProvider/updateInterval</td>
<td>5s</td>
<td>Specify the interval at which the list is fetched from the address provider. Specify a value of 1 second or more and less than 2<sup>31</sup> seconds.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/sync/timeoutInterval</td>
<td>30s</td>
<td>Specify the timeout time for data synchronization within the cluster. If a timeout occurs, the system load may be high or a failure may have occurred. Specify a value of 1 second or more and less than 2<sup>31</sup> seconds.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/transaction/notificationAddress</td>
<td>239.0.0.1</td>
<td>Multicast address to which a client connects initially. The client is notified of the master node through this address.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/transaction/notificationPort</td>
<td>31999</td>
<td>Multicast port to which a client connects initially. Specify a value within the specifiable multicast port range.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/transaction/notificationInterval</td>
<td>5s</td>
<td>Multicast period for a master to notify its clients. Specify a value of 1 second or more and less than 2<sup>31</sup> seconds.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/transaction/replicationMode</td>
<td>0</td>
<td>Specify the data synchronization (replication) method used when updating data in a transaction. Specify a string or integer: "ASYNC" or 0 (asynchronous), "SEMISYNC" or 1 (semi-synchronous).</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/transaction/replicationTimeoutInterval</td>
<td>10s</td>
<td>Specify the timeout time for communication among nodes when synchronizing data in semi-synchronous replication. Specify a value of 1 second or more and less than 2<sup>31</sup> seconds.</td>
<td>Restart</td>
</tr>
</tbody>
</table>
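<p>Putting the table above together, a minimal gs_cluster.json for the multicast configuration method might look as follows. This is an illustrative sketch: "myCluster" is an example name (clusterName is the only mandatory value), and all other values are the documented defaults.</p>

```json
{
    "dataStore": {
        "partitionNum": 128,
        "storeBlockSize": "64KB"
    },
    "cluster": {
        "clusterName": "myCluster",
        "replicationNum": 2,
        "notificationAddress": "239.0.0.1",
        "notificationPort": 20000,
        "notificationInterval": "5s",
        "heartbeatInterval": "5s",
        "loadbalanceCheckInterval": "180s"
    },
    "sync": {
        "timeoutInterval": "30s"
    },
    "transaction": {
        "notificationAddress": "239.0.0.1",
        "notificationPort": 31999,
        "notificationInterval": "5s",
        "replicationMode": 0,
        "replicationTimeoutInterval": "10s"
    }
}
```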
<p>　</p>
<h2 id="82-node-definition-file-gs_nodejson"><span class="header-section-number">8.2</span> Node definition file (gs_node.json)</h2>
<p>A node definition file defines the default settings of the resources of the nodes constituting a cluster. Some parameter values can be changed online to match the resources laid out, access frequency, and so on. Conversely, note that some values (such as concurrency) cannot be changed once set.</p>
<p>The meanings of the various settings in the node definition file are explained below.</p>
<p>Items that are not included in the initial state can be recognized by the system when their item names are added. The change field indicates whether a parameter can be changed and when a change takes effect.</p>
<ul>
<li>Disallowed: The value cannot be changed once the node has been started. The database needs to be initialized to change the setting.</li>
<li>Restart: The parameter can be changed by restarting all the nodes constituting the cluster.</li>
<li>Online: The parameter can be changed online during operation. However, the definition file needs to be amended manually, as the change is not persisted to it.</li>
</ul>
<p>Specify directories by a full path or by a path relative to the GS_HOME environment variable. For a relative path, the directory indicated by GS_HOME serves as the reference point. The initial value of GS_HOME is /var/lib/gridstore.</p>
<table>
<thead>
<tr class="header">
<th>Configuration of GridDB</th>
<th>Default</th>
<th>Meaning of parameters and limitation values</th>
<th>Change</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>/serviceAddress</td>
<td>-</td>
<td>Set the initial value of the cluster, transaction, and sync service addresses. Setting this address alone sets the initial value of all three service addresses, without having to set each of the three items individually.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/dataStore/dbPath</td>
<td>data</td>
<td>Specify the deployment directory of the database files by a full path or a relative path</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/dataStore/dbFilePathList</td>
<td>Empty list</td>
<td>The list of directories where the split checkpoint files are placed when the checkpoint file is to be split. More than one can be specified (example: ["/stg01", "/stg02"]).</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/dataStore/dbFileSplitCount</td>
<td>0 (no splitting)</td>
<td>Number of checkpoint file splits</td>
<td>Disallowed</td>
</tr>
<tr class="odd">
<td>/dataStore/syncTempPath</td>
<td>sync</td>
<td>Specify the path of the temporary file directory for data synchronization.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/dataStore/storeMemoryLimit</td>
<td>1024MB</td>
<td>Upper memory limit for data management</td>
<td>Online</td>
</tr>
<tr class="odd">
<td>/dataStore/concurrency</td>
<td>4</td>
<td>Specify the concurrency of processing.</td>
<td>Disallowed</td>
</tr>
<tr class="even">
<td>/dataStore/logWriteMode</td>
<td>1</td>
<td>Specify the log write mode and cycle. If the value is -1 or 0, logs are written at the end of each transaction. If it is 1 or more and less than 2<sup>31</sup>, logs are written at that period, specified in seconds.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/dataStore/persistencyMode</td>
<td>1 (NORMAL)</td>
<td>Persistency mode: specify how long transaction log files are retained during data updates. Specify either 1 (NORMAL) or 2 (RETAINING_ALL_LOGS). For NORMAL, transaction log files that are no longer required are deleted by the checkpoint. For RETAINING_ALL_LOGS, all transaction log files are retained.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/dataStore/storeWarmStart</td>
<td>false (disabled)</td>
<td>Specify whether to load data into memory, up to the upper limit of the chunk memory, during a restart (warm start).</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/dataStore/affinityGroupSize</td>
<td>4</td>
<td>Number of affinity groups</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/dataStore/storeCompressionMode</td>
<td>NO_COMPRESSION</td>
<td>Data block compression mode</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/dataStore/autoExpire</td>
<td>false</td>
<td>Specify whether rows of a container with an expiry release setting are deleted automatically after they become cold data. false: do not delete automatically (the rows need to be deleted by executing a long-term archive); true: delete automatically.</td>
<td>Online</td>
</tr>
<tr class="even">
<td>/checkpoint/checkpointInterval</td>
<td>60s</td>
<td>Execution period of the checkpoint process, which persists the blocks updated in memory</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/checkpoint/checkpointMemoryLimit</td>
<td>1024MB</td>
<td>Upper limit of the checkpoint-dedicated write memory. The required memory space is pooled up to this limit when there are update transactions during a checkpoint.</td>
<td>Online</td>
</tr>
<tr class="even">
<td>/checkpoint/useParallelMode</td>
<td>false (disabled)</td>
<td>Specify whether to execute the checkpoint concurrently. The number of concurrent threads is the same as the concurrency setting.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/checkpoint/checkpointCopyInterval</td>
<td>100ms</td>
<td>Output process interval when outputting a block with added or updated data to a disk in a checkpoint process.</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/cluster/serviceAddress</td>
<td>Follow the "/serviceAddress"</td>
<td>Standby address for cluster configuration</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/cluster/servicePort</td>
<td>10010</td>
<td>Standby port for cluster configuration</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/cluster/notificationInterfaceAddress</td>
<td>""</td>
<td>Specify the address of the interface which sends multicasting packets.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/sync/serviceAddress</td>
<td>Follow the "/serviceAddress"</td>
<td>Reception address for data synchronization among the clusters</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/sync/servicePort</td>
<td>10020</td>
<td>Standby port for data synchronization</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/system/serviceAddress</td>
<td>Follow the "/serviceAddress"</td>
<td>Standby address for operation commands</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/system/servicePort</td>
<td>10040</td>
<td>Standby port for operation commands</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/system/eventLogPath</td>
<td>log</td>
<td>Event log file deployment directory path</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/transaction/serviceAddress</td>
<td>Follow the "/serviceAddress"</td>
<td>Standby address for transaction processing for client communication. Also used for cluster-internal communication when /transaction/localServiceAddress is not specified.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/transaction/localServiceAddress</td>
<td>Follow the "/serviceAddress"</td>
<td>Standby address for transaction processing for cluster internal communication</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/transaction/servicePort</td>
<td>10001</td>
<td>Standby port for transaction process</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/transaction/connectionLimit</td>
<td>5000</td>
<td>Upper limit of the no. of transaction process connections</td>
<td>Restart</td>
</tr>
<tr class="even">
<td>/transaction/transactionTimeoutLimit</td>
<td>300s</td>
<td>Transaction timeout upper limit</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/transaction/workMemoryLimit</td>
<td>128MB</td>
<td>Maximum memory size for data reference (get, TQL) in transaction processing (for each concurrent processing)</td>
<td>Online</td>
</tr>
<tr class="even">
<td>/transaction/notificationInterfaceAddress</td>
<td>""</td>
<td>Specify the address of the interface which sends multicasting packets.</td>
<td>Restart</td>
</tr>
<tr class="odd">
<td>/trace/fileCount</td>
<td>30</td>
<td>Upper file count limit for event log files.</td>
<td>Restart</td>
</tr>
</tbody>
</table>
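<p>For reference, a gs_node.json fragment holding the documented defaults of some of the parameters above might look as follows. This is an illustrative sketch only, not the complete file shipped with GridDB.</p>

```json
{
    "dataStore": {
        "dbPath": "data",
        "syncTempPath": "sync",
        "storeMemoryLimit": "1024MB",
        "concurrency": 4,
        "affinityGroupSize": 4,
        "storeCompressionMode": "NO_COMPRESSION"
    },
    "checkpoint": {
        "checkpointInterval": "60s",
        "checkpointMemoryLimit": "1024MB"
    },
    "cluster": { "servicePort": 10010 },
    "sync": { "servicePort": 10020 },
    "system": {
        "servicePort": 10040,
        "eventLogPath": "log"
    },
    "transaction": {
        "servicePort": 10001,
        "connectionLimit": 5000,
        "transactionTimeoutLimit": "300s",
        "workMemoryLimit": "128MB"
    },
    "trace": { "fileCount": 30 }
}
```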
<h1 id="9-system-limiting-values"><span class="header-section-number">9</span> System limiting values</h1>
<h2 id="91-limitations-on-numerical-value"><span class="header-section-number">9.1</span> Limitations on numerical value</h2>
<table>
<thead>
<tr class="header">
<th>Block size</th>
<th>64KB</th>
<th>1MB - 32MB</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>STRING/GEOMETRY data size</td>
<td>31KB</td>
<td>128KB</td>
</tr>
<tr class="even">
<td>BLOB data size</td>
<td>1GB - 1Byte</td>
<td>1GB - 1Byte</td>
</tr>
<tr class="odd">
<td>Array length</td>
<td>4000</td>
<td>65000</td>
</tr>
<tr class="even">
<td>No. of columns</td>
<td>1024</td>
<td>Approx. 7K - 32000 (*1)</td>
</tr>
<tr class="odd">
<td>No. of indexes (Per container)</td>
<td>1024</td>
<td>16000</td>
</tr>
<tr class="even">
<td>No. of columns subject to linear interpolation compression</td>
<td>100</td>
<td>100</td>
</tr>
<tr class="odd">
<td>URL of trigger</td>
<td>4KB</td>
<td>4KB</td>
</tr>
<tr class="even">
<td>Number of affinity groups</td>
<td>10000</td>
<td>10000</td>
</tr>
<tr class="odd">
<td>No. of divisions of a timeseries container with an expiry release setting</td>
<td>160</td>
<td>160</td>
</tr>
<tr class="even">
<td>Size of communication buffer managed by a GridDB node</td>
<td>Approx. 2GB</td>
<td>Approx. 2GB</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr class="header">
<th>Block size</th>
<th>64KB</th>
<th>1MB</th>
<th>4MB</th>
<th>8MB</th>
<th>16MB</th>
<th>32MB</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Partition size</td>
<td>Approx. 4TB</td>
<td>Approx. 64TB</td>
<td>Approx. 256TB</td>
<td>Approx. 512TB</td>
<td>Approx. 1PB</td>
<td>Approx. 2PB</td>
</tr>
</tbody>
</table>
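<p>As an observation (not stated explicitly in the table), each partition-size limit above equals the block size multiplied by 2<sup>26</sup>, i.e. roughly 67 million blocks per partition. The check below simply verifies that arithmetic:</p>

```python
# Observation: each approximate partition-size limit in the table equals
# the block size multiplied by 2**26 blocks.
KB, MB = 1 << 10, 1 << 20
TB, PB = 1 << 40, 1 << 50

limits = {                      # block size -> approx. partition size
    64 * KB: 4 * TB,
    1 * MB: 64 * TB,
    4 * MB: 256 * TB,
    8 * MB: 512 * TB,
    16 * MB: 1 * PB,
    32 * MB: 2 * PB,
}
for block_size, partition_limit in limits.items():
    assert partition_limit == block_size * (1 << 26)
```
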
<ul>
<li>STRING, URL of trigger
<ul>
<li>The limiting value applies to the UTF-8 encoding</li>
</ul></li>
<li>Spatial-type
<ul>
<li>The limiting value applies to the internal storage format</li>
</ul></li>
<li>(*1) The number of columns
<ul>
<li>There is a restriction on the upper limit of the number of columns. The total size of the fixed-length columns (BOOL, INTEGER, FLOAT, DOUBLE, and TIMESTAMP types) must be 59 KB or less. Variable-length columns are not subject to this size restriction; the overall upper limit of the number of columns is 32000.
<ul>
<li>Example) A container consisting of LONG type columns: the upper limit is 7552 columns (total fixed-length column size: 8 B * 7552 = 59 KB)</li>
<li>Example) A container consisting of BYTE type columns: the upper limit is 32000 columns (total fixed-length column size: 1 B * 32000 = approx. 30 KB, which is within the 59 KB restriction, so up to 32000 columns can be created)</li>
<li>Example) A container consisting of STRING type columns: the upper limit is 32000 columns (the fixed-length size restriction does not apply, so up to 32000 columns can be created)</li>
</ul></li>
</ul></li>
</ul>
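<p>The column-count arithmetic above can be checked directly. This is a sketch: the 59 KB fixed-length limit, the 32000-column cap, and the per-type sizes are taken from the text, while the function name is illustrative.</p>

```python
# Upper limit on the number of columns per container:
# the total size of fixed-length columns must be <= 59 KB,
# and no container may exceed 32000 columns overall.
FIXED_LIMIT = 59 * 1024          # 59 KB in bytes
ABSOLUTE_LIMIT = 32000           # hard cap on the number of columns

def max_columns(fixed_size_bytes: int) -> int:
    """Max columns for a container whose columns all have the given
    fixed size; 0 means a variable-length type (STRING, BLOB, ...)."""
    if fixed_size_bytes == 0:
        return ABSOLUTE_LIMIT
    return min(FIXED_LIMIT // fixed_size_bytes, ABSOLUTE_LIMIT)

print(max_columns(8))  # LONG: 7552 columns (8 B * 7552 = 59 KB)
print(max_columns(1))  # BYTE: 32000 (30 KB total, within the 59 KB limit)
print(max_columns(0))  # STRING: 32000
```
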
<h2 id="92-limitations-on-naming"><span class="header-section-number">9.2</span> Limitations on naming</h2>
<table>
<thead>
<tr class="header">
<th>Field</th>
<th>Allowed characters</th>
<th>Maximum length</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>User</td>
<td>The head of the name is "gs#", and the subsequent characters are alphanumeric or '_'</td>
<td>64 characters</td>
</tr>
<tr class="odd">
<td>Password</td>
<td>Composed of an arbitrary number of Unicode code points</td>
<td>64 bytes (in UTF-8 encoding)</td>
</tr>
<tr class="even">
<td>cluster name</td>
<td>Alphanumeric, '_', '-', '.', '/', and '='</td>
<td>64 characters</td>
</tr>
<tr class="even">
<td>Container name<br />
Table name</td>
<td>Alphanumeric, '_', '-', '.', '/', and '='<br />
(and '@' only for specifying a node affinity)</td>
<td>16384 characters (for 64KB block)<br />
131072 characters (for 1MB - 32MB block)</td>
</tr>
<tr class="odd">
<td>Column name</td>
<td>Alphanumeric, '_', '-', '.', '/', and '='</td>
<td>256 characters</td>
</tr>
<tr class="even">
<td>Index name</td>
<td>Alphanumeric, '_', '-', '.', '/', and '='</td>
<td>16384 characters (for 64KB block)<br />
131072 characters (for 1MB - 32MB block)</td>
</tr>
<tr class="odd">
<td>Trigger name</td>
<td>Alphanumeric, '_', '-', '.', '/', and '='</td>
<td>256 characters</td>
</tr>
<tr class="odd">
<td>Data Affinity</td>
<td>Alphanumeric, '_', '-', '.', '/', and '='</td>
<td>8 characters</td>
</tr>
</tbody>
</table>

<ul>
<li><p>Case sensitivity</p>
<ul>
<li><p>Cluster names, trigger names, and passwords are case-sensitive, so the names in the following example are handled as different names.</p>
<pre class="example"><code>Example) trigger, TRIGGER
</code></pre></li>
</ul></li>
<li><p>Other names are not case-sensitive; uppercase and lowercase characters are treated as the same.</p></li>
<li><p>The case of the characters in a name specified at creation is retained as data.</p></li>
<li><p>Names enclosed with '"' in TQL are case-sensitive; in that case, uppercase and lowercase characters are not treated as the same.</p>
<pre class="example"><code>Example) Search on the container &quot;SensorData&quot; and the column &quot;Column1&quot;
    select &quot;Column1&quot; from &quot;SensorData&quot;   Success
    select &quot;COLUMN1&quot; from &quot;SENSORDATA&quot; Fail (Because &quot;SENSORDATA&quot; container does not exist)
</code></pre></li>
<li><p>Specifying names by TQL</p>
<ul>
<li><p>When a name is not enclosed with '"', it can contain only alphanumeric characters and '_'. To use other characters, the name must be enclosed with '"'.</p>
<pre class="example"><code>Example) select &quot;012column&quot;, data_15 from &quot;container.2017-09&quot;
</code></pre></li>
</ul></li>
</ul>
</div>
</article>
</body>
</html>
